It’s a wonderful lifestream (or is it?) – Week 8 summary

Value is the main theme of the lifestream this week, in two senses: a principle that governs our behaviour, and something regarded as important or useful. Both definitions intersect in the development of algorithms, as well as in the ways in which their usefulness is communicated to us.

In a quite brilliant article about algorithms and personalising education, Watters asks the pertinent question:

What values and interests are reflected in its algorithm?

It’s a big and important question, but this TED talk suggests to me that it would be propitious to change it to:

Whose values and interests are reflected in its algorithm?

Joy Buolamwini explores how human biases and inequalities might be translated into, and thus perpetuated in, algorithms, a phenomenon she has called the ‘coded gaze’. Similar considerations are taken up in this article too, as well as in this week’s reading by Eynon on big data, summarised here. I also ran a mini-experiment on Goodreads, in which I found results that could potentially be construed as evidence of bias (though more evidence would definitely be required).

It isn’t just a question of whether values are hidden or transparent, or how we might uncover them, though this is crucial too. My write-up of Bucher’s excellent article on EdgeRank and power, discipline and visibility touches on this, and I explored it briefly in the second half of this post on Goodreads. Rather, hiddenness and transparency are also negotiated through the ways in which these values are communicated, and how they are marketed as adding value to the user’s experience of a site. The intersection of these issues convinces me further of the benefit of taking a socio-material approach to the expression of values in algorithms.

What I’m reading

Three challenges of big data according to Eynon:

  1. Ethics – privacy, informed consent, protection from harm. Example of student registration: the social implications of telling students whether they are likely to drop out (according to learning analytics). Does it become a self-fulfilling prophecy?
  2. Kinds of research – the availability of data biases the types of research we carry out and the questions we can ask. Can advances in open data help with this?
  3. Inequality – how big data reinforces and exacerbates social and educational inequalities, e.g. tracking only those in a specific socio-economic bracket. The digital divide, yes, but doesn’t it also work the other way round: social inequalities mean that some people are better equipped to avoid surveillance via big data?


Eynon, R. (2013). The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3), 237–240. doi:10.1080/17439884.2013.771783

March 12, 2017 at 12:16PM