Value is the main theme of the lifestream this week, both in the sense of a principle that governs our behaviour and in the sense of something regarded as important or useful. The two definitions intersect in the development of algorithms, and in the ways their usefulness is communicated to us.
In a quite brilliant article about algorithms and personalising education, Watters asks the pertinent question:
What values and interests are reflected in its algorithm?
It’s a big and important question, but this TED talk suggests to me that it would be more productive to rephrase it as:
Whose values and interests are reflected in its algorithm?
Joy Buolamwini explores how human biases and inequalities can be translated into, and thus perpetuated in, algorithms, a phenomenon she has called the ‘coded gaze’. Similar considerations are taken up in this article, as well as in this week’s reading by Eynon on big data, summarised here. I also ran a mini-experiment on Goodreads, which produced results that could be construed as bias, though more evidence would certainly be required.
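To make the mechanics of this a little more concrete, here is a toy sketch of one way bias can creep in. It is entirely my own illustration, not Buolamwini’s method or a model of any real system, and every group, number and the ‘calibration shift’ in it is invented: a classifier that picks its decision threshold to minimise overall error on data dominated by one group will quietly optimise for that group.

```python
import random

random.seed(42)

def sample(group, label, n):
    # Invented assumption: the measured feature is shifted for group B,
    # as if the measuring instrument had been calibrated on group A.
    shift = -1.0 if group == "B" else 0.0
    mean = (2.0 if label == 1 else 0.0) + shift
    return [(random.gauss(mean, 1.0), label, group) for _ in range(n)]

# Imbalanced training data: group A outnumbers group B nine to one.
train = (sample("A", 1, 450) + sample("A", 0, 450)
         + sample("B", 1, 50) + sample("B", 0, 50))

# The value baked into this algorithm: minimise *aggregate* error,
# with no regard for how the errors are distributed across groups.
def overall_error(t):
    return sum((x >= t) != (y == 1) for x, y, _ in train) / len(train)

threshold = min((x for x, _, _ in train), key=overall_error)

# Evaluate on fresh, balanced data: the minority group pays the price.
test = (sample("A", 1, 500) + sample("A", 0, 500)
        + sample("B", 1, 500) + sample("B", 0, 500))
for g in ("A", "B"):
    pts = [(x, y) for x, y, grp in test if grp == g]
    err = sum((x >= threshold) != (y == 1) for x, y in pts) / len(pts)
    print(f"group {g}: error rate {err:.2%}")
```

Nothing in this code mentions group B unfavourably; the disparity falls out of the training data and the choice of objective, which is exactly the point.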
It isn’t just a question of the ways in which values are hidden or made transparent, or of how we might uncover them, though this is crucial too. My write-up of Bucher’s excellent article on EdgeRank and power, discipline and visibility touches on this, and I explored it briefly in the second half of this post on Goodreads. Rather, hiddenness and transparency are also negotiated through how these values are communicated, and how they are marketed as offering ‘added value’ to the user’s experience of a site. The intersection of these issues convinces me further of the benefit of taking a socio-material approach to the expression of values in algorithms.
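Bucher’s argument is easier to hold onto with the publicly described shape of EdgeRank in front of you: a post’s visibility score is a sum over ‘edges’ (interactions) of affinity × weight × time decay. A minimal sketch follows; the weight table and decay function here are invented for illustration, since Facebook’s actual values were never made public.

```python
from dataclasses import dataclass

# Deciding that a photo 'matters' more than a link is a value judgement
# baked into the ranking. These weights are invented, not Facebook's.
TYPE_WEIGHT = {"photo": 1.5, "status": 1.0, "link": 0.8}

@dataclass
class Edge:
    affinity: float   # u_e: how connected the viewer is to the creator
    post_type: str    # determines the edge weight w_e
    age_hours: float  # feeds the time-decay term d_e

def edgerank(edges):
    # Publicly described structure: sum of u_e * w_e * d_e over edges,
    # with an invented decay of 1 / (1 + age in hours).
    return sum(e.affinity * TYPE_WEIGHT[e.post_type] / (1 + e.age_hours)
               for e in edges)

# A close friend's day-old photo versus an acquaintance's fresh link:
old_photo = [Edge(affinity=0.9, post_type="photo", age_hours=30)]
fresh_link = [Edge(affinity=0.3, post_type="link", age_hours=1)]
print(f"old photo:  {edgerank(old_photo):.3f}")   # ~0.044
print(f"fresh link: {edgerank(fresh_link):.3f}")  # 0.120
```

Under these (invented) parameters recency trumps affinity, and the acquaintance’s link becomes more visible than the friend’s photo: a small, silent decision about what users should value, presented back to them as relevance.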
‘Whose values and interests are reflected in its algorithm?’
Well put. I really enjoyed seeing and hearing Joy Buolamwini’s work this week – it definitely reflects some important critical perspectives on algorithms. Given that the tech industry is concentrated in particular parts of the world, there would seem to be plenty of voices left out.
Your Goodreads experiment (and play!) was really super and thorough. I think you’re right that a longer study would yield more fruitful results.
Great to see your blogs on Taina Bucher’s work too. I think your points here about algorithms working against the purported ahierarchical and ‘democratic’ nature of social media are a super link back to the theme of community, and to some of the claims we find there.