The first week of our algorithmic cultures block seemed ‘noisy’ – perhaps because there is so much recent news on the impact of algorithms, and studies of that impact, for peers to share through Twitter. Certainly, our investigations have felt timely.
My lifestream has also been busy, with 18 posts. Some of these (1, 2, 3) focused on managing IFTTT, with week 8 seeing me introduce Pocket as an additional lifestream feed. Two posts related to digital art, which I suggested provides an alternative discourse on algorithms to that of impartiality and objectivity, and explores sociomateriality by engaging humans and non-humans in the joint construction of artefacts.
Another theme which arose was the potential for algorithms to reinforce existing inequalities. I examined this in several contexts: education, in response to Matt Reed’s post on stereotype threat within predictive analytics; bail hearings in New Jersey, where it is hoped algorithms will help overcome human bias; and the mathematical impossibility of creating an algorithm that is both equally predictive of future crime and equally wrong about defendants regardless of race. I also interrogated a proposal that a more diverse tech industry could prevent discriminatory algorithms.
I also attended to the role of algorithms in information prioritisation: I responded to a post by Mike Caulfield (2017) on the failure of Google algorithms to identify reliable information, and to a video on Facebook’s turn to algorithms rather than human editorial, and included a Washington Post graphic illustrating the different Facebook feeds presented to liberal and conservative users.
Finally, in a post illustrating my own algorithmic play, I showed that Google is selective in what it records from search history, that Google’s ad topics are hit and miss because the algorithm does not understand the meaning attached to online actions (demonstrating Eynon’s [2013, p. 239] assertion about the need to understand meaning, rather than merely track actions), and that I felt a desire for validation when the self was presented back to me through data (following Gillespie’s [2012, p. 21] suggestion). To me, the findings of my play seemed trivial – but such a stance belies the potential for algorithms to have real (and negative) impacts on people’s lives through profiling.