This brilliant article about the history of moderation was shared with me by Renée Furner via Twitter. It gives a glimpse of how devastatingly difficult it is to choose what people should and shouldn’t see, and the far-reaching implications for our online communities.
Broadbent’s research focuses on the ‘democratization of intimacy’. The video above highlights how interactions between individuals and their loved ones have changed as connectivity through phones and the Internet has improved. She notes that, historically, institutions have prevented people from connecting with one another because communication channels have been locked down. She says we are conditioned to focus our attention on the tasks we need to complete for work or school, and this perpetuates isolation.
Kozinets (2010) makes reference to how Correll (1995) suggests that ‘community experience is mediated by impressions of real-world locations’. I wondered, while watching Broadbent, whether the reason there seems to be a lack of meaningful interaction on MOOCs is that people have been conditioned to focus on the task dictated to them by institutions, thereby denying the development of intimate interaction between participants. If we assume that the MOOC model could represent the academic institution, people are focused on the tasks they should complete and not on building the intimate relationships that would help develop learning.
Another idea is that we restrict with whom we choose to become intimate. The people we choose to connect with on a regular basis are important because we have a long-standing emotional connection to them. A MOOC is such a vast space, full of people, that it is difficult to discern which of the many will provide meaningful learning opportunities.
How do we express what we really mean, especially when there is much depth to the topic we are discussing? Expressing depth and meaning was quite challenging when making our visual artefact, as is evident from the conversations that have subsequently transpired. The intention of the creator is not always the same as the interpretation of the reader or viewer. Getting meaning across is no small feat! I struggle with this when writing academically, and it was exacerbated further when I tried to portray my critical thinking in a picture.
It was while I was grappling with ‘online interaction’, ‘initial assumptions’ and ‘developing nuanced understandings of the online social world’ (Kozinets 2010) of participating in the online community that is Education and Digital Cultures that I had a discussion with two other participants that left me utterly perplexed. Perhaps this is what Kozinets (2010) meant about ‘interpretive social cues’ (p. 24) developing within communities. The discussion is below:
What ultimately left me perplexed is how a conversation that started by discussing MOOCs ended up with ‘[t]he sex industry’ being ‘an early adopter of new tech’. Did I miss something? Some earlier conversation where this would make sense? Is this part of the hierarchy of our own online community of which I am not a part? Perhaps I’m looking too hard for meaning and this is simply an effort to build rapport in our online community. It’s led me to question: how do we construe meaning from online exchanges that are less than 140 characters long? Is what we are trying to express being accurately conveyed? Do our readers/viewers understand what we mean? How do we record and interpret qualitative data objectively if a) the meaning is not clear, or b) we are part of that community ourselves?
I suppose what I’m wondering, as we head off to do our own ethnographic studies in our MOOCs, is how to construct meaning out of comments and behaviour online when it is clear that we cannot take all information we see at face value. I look forward to finding out.
I thought this was interesting in relation to digital cultures because the photograph was taken in 1909 and demonstrates how long our fascination with machines made in our image has existed.
This article really helped focus my mind on how racism, misogyny and homophobia are embedded within the technologies we use. It states very clearly how data can be used to marginalize different groups, particularly with regard to education. Will the data that universities collect ultimately become a tool to discipline students and academics alike? Is evidence of freedom of thought a risk to our future education or professions?