
The first week of our algorithmic cultures block seemed ‘noisy’ – perhaps because there is so much recent news on the impact of algorithms, and research into it, for peers to share through Twitter. Certainly, it has felt like our investigations are timely.
My lifestream has also been busy, with 18 posts. Some of these (1, 2, 3) were focused on managing IFTTT, and in week 8 I introduced Pocket as an additional lifestream feed. Two posts were related to digital art, which I suggested worked to provide an alternative discourse on algorithms to that of impartiality and objectivity, and which explored sociomateriality by engaging humans and non-humans in the joint construction of artefacts.
Another theme which arose was the potential for algorithms to reinforce existing inequalities. I examined this in several contexts: education, in response to Matt Reed’s post on stereotype threat within predictive analytics; bail hearings in New Jersey, where it is hoped algorithms will help overcome human bias; and the mathematical impossibility of creating an algorithm that is both equally predictive of future crime and equally wrong about defendants regardless of race. I also interrogated a proposal that a more diverse tech industry could prevent discriminatory algorithms.
The role of algorithms in prioritising information also received attention. I responded to a post by Mike Caulfield (2017) on the failure of Google’s algorithms to surface reliable information and to a video on Facebook’s turn to algorithms in place of human editorial judgement, and I included a Washington Post graphic illustrating how liberal and conservative supporters are shown very different Facebook feeds.
Finally, in a post illustrating my own algorithmic play, I showed that Google is selective in what it records from search history; that the topics in Google’s ad profile are hit and miss because the system does not understand the meaning attached to online actions (demonstrating Eynon’s [2013, p. 239] assertion about the need to understand meaning, rather than just track actions); and that having the self presented back through data prompts a desire for validation (following Gillespie’s [2012, p. 21] suggestion). For me, the findings of my play seemed trivial – but such a stance belies the potential for algorithms to have real (and negative) impact on people’s lives through profiling.
‘Certainly, it has felt like our investigations are timely.’
Yes, I hope so! It certainly feels like algorithms have had mainstream attention recently, particularly in the context of recent elections. One can also think of the three blocks of this course as (kind of) chronological – ‘algorithmic culture’ perhaps being where we are ‘now’.
Having said that, highlighting artistic responses is a fantastic way of demonstrating the different ways we can view these issues – as in one of your comments this week. Creative responses to the algorithm are one way to counter instrumentalism.
You’ve pulled together lots of interesting work on bias and discrimination too – a really important aspect of algorithmic decision-making, particularly in its ‘real world’ effects. I think some of these issues serve as a good counter to the apparent triviality of our own algorithmic play – the same principles (perhaps Gillespie’s) can be applied in situations that involve significant social issues.
Lots of very relevant things happening in your blog this week Renée, nice work!