Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which is able to reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture in which the algorithm could mean the same money for cab drivers, but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers, and good for society – but like Watters I fear that within capitalism there is little incentive for commercial entities to make the choice to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work, but instead, it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them.
I was alerted to this excellent talk by Audrey Watters by peers who were tweeting while watching the live stream (thanks Colin, and others). Of course, it is also part of James Lamb’s ‘Lifestream’, in that he featured on screen 😉
As I listened to/watched this talk, my focus was still on ‘imaginaries’:
the way machine learning is used as a metaphor for human learning (Watters asks, ‘Do humans even learn in this way’?), and the consequences that holding such an understanding will have on education;
the giving of agency to robots (‘Robots are coming for our jobs’) when, as Watters says, robots do not have agency, and the decision to replace humans is one of owners opting for automation, choosing profit over people – how does the supposedly ‘technological proclamation’ naturalise the loss of human labour?
the ‘Uber’ model, and ‘uberization’: how is the ‘driverless’ story of algorithmic governance sold, so that surveillance, the removal of human decision makers with human values, and personalization all become naturalised… and even seen as goals?
There is so much in this talk which I valued. I won’t go through it all – the basic premise is that we need to resist the imaginaries, the reliance on data, and we need to recognise the motivations driving the imaginaries while valuing the human in education. It links to my idea for the final assignment about unpacking ‘imaginaries’ more, but also to my ideas about making a site based around developing ‘algorithmic literacy’:
“You’ve got to be the driver. You’re not in charge when the algorithm is driving” [38:30]
In this video from September 2016, Mike Rugnetta responds to concerns about Facebook which arose in 2016:
May 2016: reports of Facebook suppressing conservative views
August 2016: editorial/news staff replaced with algorithm
He asks, primarily, why we expect Facebook to be unbiased, given that any news source is subject to editorial partiality. He then connects Facebook’s move to distance itself from its editorial role through the employment of algorithms to ‘mathwashing’ (Fred Benenson): the use of math terms such as ‘algorithm’ to imply objectivity and impartiality, resting on the assumption that computers do not have bias, despite being programmed by humans with bias, and being reliant on data… with bias.
Facebook’s sacking of their human team and movement to reliance on algorithms is demonstrative of one of Gillespie’s assertions, except that in Facebook’s case a reputation of neutrality was sought through the reputation of algorithms in general:
The careful articulation of an algorithm as impartial (even when that characterization is more obfuscation than explanation) certifies it as a reliable sociotechnical actor, lends its results relevance and credibility, and maintains the provider’s apparent neutrality in the face of the millions of evaluations it makes.
In the video, Rugnetta suggests there’s a need to abandon the myth of algorithmic neutrality. True – but we also need greater transparency. With so much information available we need some kind of sorting mechanism, and we also need to know (and be able to tweak) the criteria if we are to be in control of our civic participation.
Through this post I make a novel start to the algorithmic cultures block, with Julian Palacz’s interactive installation from 2010.
“Algorithmic search for love is a found footage film generator. It works like a search engine, where spoken language is searched using text input. Every word or sentence in a movie can be a possible search result.”
The installation broadcasts all video sequences found for each text match end-to-end, producing a new audiovisual story. It’s fascinating to see the varied contexts in which a single phrase can find meaning.
Here’s Julian Palacz talking about how the algorithm works with film scripts, and utilises translation tools. Palacz comments that it is interesting to see what people enter, through a private process, and then see the search terms quite publicly realised through the broadcast. For me, this gives rise to questions about agency, and connects with Knox’s article, Active Algorithms: Sociomaterial Spaces in the E-learning and Digital Cultures MOOC (2014). In the same way that Knox suggests the EDCMOOC space was co-created by participants and the underlying algorithms of the sites that hosted resources, the broadcast produced in Palacz’s installation is co-produced by the algorithm, the human choice of search terms and by what is included in and excluded from the film database. No doubt the performative nature of the search return also influences the choice of search queries. I would be interested to see the results of an algorithm which similarly worked with spoken language and film, but also incorporated personal data such as genre preferences/viewing history/etc. of the user.
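As a thought experiment only – this is not Palacz’s actual implementation, and the clip data and function name are invented for illustration – the core matching step of such an installation might be sketched like this, assuming each clip is stored alongside its transcribed dialogue:

```python
# Hypothetical sketch of a found-footage search: given film clips with
# transcribed dialogue, return every clip whose transcript contains the
# search phrase, ready to be spliced end-to-end into a new sequence.

def algorithmic_search(clips, query):
    """Return (title, line) pairs whose spoken line contains the query."""
    query = query.lower()
    return [(title, line)
            for title, line in clips
            if query in line.lower()]

# Toy database of (film title, spoken line) pairs – purely illustrative.
clips = [
    ("Film A", "I love you more than words can say"),
    ("Film B", "Do you love me?"),
    ("Film C", "The weather is terrible today"),
]

matches = algorithmic_search(clips, "love")
# matches holds the Film A and Film B clips; the installation would then
# play the matching sequences back-to-back as one audiovisual story.
```

Even this crude sketch makes the co-production point visible: the output depends jointly on the matching rule, the searcher’s query, and what is (and isn’t) in the clip database.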
After a diversion to Collier’s post 1 of the same title (very ‘old skool’ of me, but after the first paragraph I knew I wanted to know what had gone before, to contextualise what I was reading) I read post 2, and followed a link to
You might have noticed, though, that the Lifestream feed comes from YouTube, not the Berkman Klein Centre: I chased it down there to ‘feed’ my lifestream, to ensure ‘varied’ media content (the last step felt contrived – living the life fantastic through the lens of an assignment;).
Enough about where it came from (thank you though, Twitter, for your surveillance on this occasion). Let’s talk content.
In her blog posts, Amy Collier provides a really solid introduction to what embodiment is, and why it matters both on and offline. She writes,
Embodiment does not just mean having a body. Embodiment involves the loads of meaning attached to our bodies and the ways in which our bodies are at the center of our experiences and therefore our existence. It’s the social, political, and cultural attachments, so to speak, of the body and how those are experienced by a bodied human. Some of this involves our identity and how we perform it; some of it is what identity and performances people attach to our bodies. Embodiment is also about how we know things, believe things, and feel things through the “lens” of our bodies.
Collier goes on to talk about how
there is a misconception that, when we “go digital,” the body becomes irrelevant,
and provides examples of how embodiment online can be both explicit (through kinaesthetic gaming, haptic feedback, tangible user interfaces and virtual reality) and implicit (wherever the body is mediated or represented or reconfigured online). Then, in post 2, Collier takes up why embodiment matters online. Poignantly, she cites bell hooks (1994):
The person who is most powerful has the privilege of denying their body.
Collier’s point is that, in suggesting our bodies are erased when we go online, or in accepting mind-body dualism, we ignore the social and political experiences of embodiment online. Further, in failing to acknowledge that only white male bodies are given the privilege of neutrality, we perpetuate existing power structures and political and social inequities.
Tressie McMillan Cottom adds a further dimension to this discussion by bringing in sociomaterial factors. She says that, within a techno-determinist framing, inequality is seen to be magically erased through access to information/technology and that this perception is perpetuated in part because we build tools that do not see or measure inequality. As McMillan Cottom comments, clearly this is not the same as inequality not existing. Further, the data collected is used to reproduce inequalities, in that it is used to create new tools, or, ‘tool to the norm’. McMillan Cottom refers to this norm, for whom supposedly ‘disruptive’ HE innovations such as the MOOC are designed, as the ‘autodidact’:
the self-motivated, able learner that is simultaneously embedded in technocratic futures and disembedded from place, culture, history, markets and inequality regimes (McMillan Cottom, 2013)
According to McMillan Cottom, when education is designed for this ‘ideal’ learner, mechanisms which allow learners to connect are often left out. Anonymity is held to be democratising and privileged, but this can stop people from ‘finding their people’, from forming supportive groups, and from measuring their success against others.
An important point that McMillan Cottom raises with regard to the supportive communities that she has researched is the importance of trust. I’ve previously looked at trust dynamics (Wenger, 2010) from the perspective of knowledge, where knowledge is perceived to be located, and the role of trust or belief in being able to learn from others in the establishment of community. McMillan Cottom, however, broadens my understanding of trust within community dynamics. In the groups she studied, students share non-academic advice, and in this case need to trust where the information is coming from; i.e., they need a sense of shared (non-academic) experience to validate the advice they receive. This cannot be achieved without identity signalling.
There’s a lot to take away from all three authors in this Tweet inspired journey – Amy Collier, Tressie McMillan Cottom and Audrey Watters – on factors affecting community cultures, and the experience of them. All three write/speak with greater depth and richness (and on wider themes) than I can do justice to here. I’ll end with a quote from Watters:
Bodies matter when we learn; communities and affinity and situatedness matter; digital learning, even though some of it is “virtual,” does not – or should not – change that.
This video caught my interest because in it Poole talks of the community of 4chan, in which users are anonymous and there is no site ‘memory’ or searchability. He suggests that with the proliferation of social networking sites (remembering he is speaking in 2010), and the persistent identities and lack of privacy that comes with them, the Internet is losing something valuable.
It’s at this point that I have to admit to not really understanding 4chan. Admittedly, I have never spent time there, but it just seems so alien and discombobulating to me. From the site’s FAQ I gather: “content is usually available for only a few hours or days before it is removed”. In some ways, it sounds like an event you don’t really want to go to but can’t miss, in case ‘something goes down’ there. In this sense, could the site be said to have ‘eventedness’? Where would it sit in a diagram charting co-presence and eventedness?
Lister et al. suggest that anonymity allows us to “experiment with other parts of ourselves, take risks or express aspects of self that we find impossible to live out in day-to-day meatspace” (2009, p. 210), but equally Baym (1998, as cited in Lister et al., 2009, p. 215) reminds us that many online community participants seek to integrate their on and offline lives. 4chan users do not seem to be exceptions despite their anonymity, as demonstrated by the Anonymous Scientology protests, among other events.
Poole’s point about the move towards SNSs, persistent identities and privacy concerns correlates with a point made by Lister et al. (2009, p. 216), about the potential for data tracing to enable network mapping by both researchers and (in what they refer to as a “dark” side of Internet use) corporations. In this vein, it’s interesting to note that 4chan has been reported to be in financial trouble (October 2016 – The Daily Dot, The Guardian, etc.), and is now accepting donations, which it apparently hasn’t done since 2005 (see images below). Its economic woes, despite an apparent 27 million unique visitors per month and a million new posts per day (Hathaway, 2016), demonstrate the role of commerce in sustainability online, and just one of the ways the virtual is grounded in the material.
The reason 4chan can’t make any money, of course, is that it is the dark, disgusting underbelly of the Internet. For every LOLcat, there’s a dead cat. For every photo of a cute girl in punky clothes, there’s seven of people with no clothes. It’s content no advertiser would ever put its brand near.
While not wishing to make any comment on the value or lack thereof of 4chan (the heated comments on YouTube were enough for me), I felt Chris Poole (and audience) raised some interesting ideas about community formation online, and potential differences between that formation now (or rather, back in 2012) as compared to ‘the good ol’ days’ of the Interwebs.
In this first segment, Poole suggests that the interest-based web has given way to the friendship-based web, with social networking identity-based communities leading to the demise of ‘old’ communities. Rather than just nostalgia, Poole’s claim seems to be based on the investment that people had in ‘old’ communities, because it took them so long (‘weeks, months’ [7:40]) to find it. Poole raises this point again in the second segment [10:00], when he notes (not verbatim),
in order to become a member of a real community, you accumulate social capital just by being there. It’s a long process of you lurking, seeing how things work, dipping your toes in the water, making a post.. people yelling at you and you thinking, oh shit, I’ve got to lurk more. Finally you post a thread, and you get replies, and you think, I’ve won.. Can I do it again? Then you try again, and no, it’s going to take another year…
He uses the metaphor of pitching one’s tent: one used to have to look for the right village to pitch it in, but nowadays people can pitch their tent in the desert and then import their contacts. By morning, their tent is surrounded, the village has come to them – but it lacks the qualities of being a ‘particular’ village; it is ‘the’ village, the one that follows them through their contact list.
I wasn’t about on the Internet during the 90s, or even much in the 2000s, so I don’t have any direct experience of the earlier communities to which Poole refers, but it makes sense to me that the level of investment has a significant impact on sense of community, and commitment to it. It is, I feel, related to Walther’s 1997 assertion about the significance of ‘anticipated future interaction’, which Kozinets (2010) refers to:
If participants believe that their interaction is going to be limited and will not result in future interactions, then their relations tend to be more task-oriented. If, however, a future interaction is anticipated, participants will act in a friendlier way, be more cooperative, self-disclose, and generally engage in socially positive communications.
What I think Poole’s comment adds, however, is the importance of ‘reading’ the community, and coming to understand its norms, and how if you are invested/anticipate future interaction, this is part of the process. While Kozinets does write of this pathway to group membership (pp. 27-28), the richness of the communities Poole talks about is also found in these communities being makers or producers of cultural artefacts, rather than just social networks.
Lifestream, Liked on YouTube: Chris Poole part 2/3- ROFLCON 2012 – Solo Panel
In this segment, Poole suggests that ‘net culture doesn’t exist anymore’, and that while we speak of ‘mainstreaming Internet culture’ what is really happening is ‘internetting of mainstream culture’. As a result, Poole suggests it has (or, its communities have) lost some of its (their) ‘richness’.
A member of the audience disputes this, because
some young people will always find sub-culture
the evolution/location of culture is cyclical:
A second audience member comments that while the Internet has more users, it seems to be shrinking because people visit fewer sites regularly. As such, she continues, it has become rewarding to find your place away from the mainstream. For me, this is probably connected to having a voice, and the impact of scale on community. What is the maximum scale of a community at which participants still feel they have a voice?
I’m drawn back to Dave Cormier’s video on success in MOOCs, and the need to ‘cluster’ with people with similar interests / who are focused on what you are interested in. Managing community participation seems to be about connections, but also about filtering so as to manage scale and maintain a voice.
Lifestream, Liked on YouTube: Chris Poole part 3/3- ROFLCON 2012 – Solo Panel
In this final segment, Poole suggests an ecosystem for content:
Poole suggests that, with increased use of the Internet, most ‘people joining the web fall into the bucket of consuming’.
It occurs to me that what Poole counts as ‘real’ community is that of the ‘creators’. When I look at the strongest communities I know online, this – being creators – is frequently (though not always) a characteristic, whether they are creators of visual artefacts, or of scholarly ideas or social movements.
While I don’t agree with every point Mark Wills raises, it is interesting to hear his perspectives on community, and especially on the role of moderation within community.
The parts of Wills’ TED talk that I find dubious include:
he makes generational distinctions about learning to ‘be’ in online communities which I don’t think hold up to examination. His position is suggestive of a ‘digital natives’ (Prensky, 2001) construct, which ignores the fact that experience with technology isn’t generalizable by age and further, seems to equate ‘being’ with ‘doing’ (i.e. it suggests that because young people are all supposedly immersed in the world of technology through daily use, it is who they are – a negation of the complexities of identity).
Wills suggests that online communities flatten hierarchies to the extent that gender, geographic boundaries, age and ethnicity ‘don’t exist anymore’. For me, his assertion is too absolute: #Gamergate is evidence enough that this simply isn’t true.
However, other aspects of Wills’ talk were in accordance with my own experience and/or my reading:
the need to encourage longevity in user participation – this aligns with Walther’s 1997 assertion about the role of ‘anticipated future interaction’ in modifying participant behaviours (Kozinets, 2010, pp. 23-24);
Wills’ assertions about the need to reinforce cultural norms or shared values, as well as the role of participant voting as a way to empower askers and give status to those who respond to their questions, support Kozinets’ model of developmental progression of participation in online communities:
Finally, I found the idea of people being able to use their credibility within an online technology help community externally, for example, when applying for jobs, interesting. In this sense, the platform can give users (those who answer questions) something genuinely valuable in exchange for their volunteer digital labour. Is it enough, though? Or is it insignificant in comparison to what they contribute (which drives the profits of the host)?
I’ll blog about Kozinets’ chapter at a later point, and link to it here. Whereas the chapter focuses on developing social understandings of online spaces, in the video Kozinets provides a brief introduction to netnography, and describes a case study in which netnography was used as a marketing tool. Key (pragmatic) points for netnographic process (from the video):
-establish research question
-identify potential fields and choose which you will focus on
-observe (swimming in the data)
-analyse for key themes
-(when using netnography for marketing) respond to collected data with marketing strategy.