It’s a wonderful lifestream (or is it?) – Week 8 summary

Value is the main theme of the lifestream this week, both in the sense of a principle that governs our behaviour and in the sense of something regarded as important or useful. The two definitions intersect in the development of algorithms, as well as in the ways in which their usefulness is communicated to us.

In a quite brilliant article about algorithms and personalising education, Watters asks the pertinent question:

What values and interests are reflected in its algorithm?

It’s a big and important question, but this TED talk suggests to me that it would be apt to change it to:

Whose values and interests are reflected in its algorithm?

Joy Buolamwini explores how human biases and inequalities might be translated into, and thus perpetuated in, algorithms, a phenomenon she has called the ‘coded gaze’. Similar considerations are taken up in this article, as well as in this week’s reading by Eynon on big data, summarised here. I also ran a mini-experiment on Goodreads, which produced results that could potentially be construed as bias (though more evidence would definitely be required).

It isn’t just a question of the ways in which values are hidden or made transparent, or of how we might uncover them, though this is crucial too. My write-up of Bucher’s excellent article on EdgeRank and power, discipline and visibility touches on this, and I explored it briefly in the second half of this post on Goodreads. Hiddenness and transparency are also negotiated through the ways in which these values are communicated, and marketed as ‘adding value’ to the user’s experience of a site. The intersection of these issues convinces me further of the benefit of taking a socio-material approach to the expression of values in algorithms.

Goodreads and algorithms, part the fourth

In this (probably) final instalment of my experiments with the Goodreads algorithm, I’m playing with specific biases. Joy Buolamwini, in the TED talk I just watched (and posted), says this:

Algorithmic bias, like human bias, results in unfairness.

It would be hard, I think, to test the biases in Goodreads properly, and it would certainly be insufficient to draw conclusions from just one experiment, but let’s see what happens. I’ve removed from my ‘to-read’ shelf all books written by men. I’ve added, instead, 70 new books, mostly but not exclusively from Goodreads lists of ‘feminist’ books or ‘glbt’ books [their version of the acronym, not mine]. Every single book now on my ‘to-read’ shelf is written by someone who self-identifies as female.

And after a little while (processing time again), my recommendations were updated:

Of the top five recommendations, one is by a man (20%); of the 50 recommendations in total, 13 are by men (26%).

I then reversed the experiment. I cleared out the whole of the ‘to-read’ shelf and instead added 70 books, almost exclusively fiction, and all written by people who identify as male.

And again, a slight pause for processing, and the recommendations update. Here are my top five:

Two of the top five recommended books are by women (40%), and of the 50 in total, seven are by women (14%).

So, with the parameters roughly the same, and with the very big caveat that this may be a one-off, it seems that Goodreads recommends more books by men than by women. Is this bias, or just coincidence? It’s difficult to tell from a single experiment, but it may be worth repeating to find out more.
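For what it’s worth, here’s a minimal sketch of the kind of sanity check one could run on those counts, assuming Python with scipy (the test and the framing are mine, not part of the original experiment):

```python
# Counts reported above: out of 50 recommendations each time,
# 13 were by men when the shelf held only women-authored books,
# and 7 were by women when the shelf held only men-authored books.
from scipy.stats import fisher_exact

women_seeded = [13, 37]  # [opposite-gender recs, same-gender recs]
men_seeded = [7, 43]

odds_ratio, p_value = fisher_exact([women_seeded, men_seeded])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With samples this small, a p-value well above 0.05 would mean the
# 26% vs 14% gap is consistent with coincidence - another reason to
# repeat the experiment before calling it bias.
```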

Finally, one weird thing. Two books appeared on the full recommendations list in both experiments. One is Anthony Powell’s A Dance to the Music of Time, which, given the general gravitas of the books I added in both experiments, is fairly understandable. The other, though, is this:

[image: the cover of a Bill Cosby children’s book]

Bill Cosby’s ‘easy-to-read’ story, aimed at children, is included because I added John Steinbeck’s East of Eden? Unfortunately I have no idea why it was in the women-only list, because I didn’t check at the time, but that feels like a really, really peculiar addition.

Identity, Power, and Education’s Algorithms

Late Friday night, Buzzfeed published a story indicating that Twitter is poised this week to switch from a reverse-chronological timeline for Tweets to an algorithmically organized one.

from Pocket http://ift.tt/1TZlWIj
via IFTTT
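To make the switch the story describes concrete, here’s a toy sketch of the two orderings; the `predicted_engagement` score is my own invented stand-in for whatever Twitter actually computes:

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    timestamp: float             # seconds since epoch
    predicted_engagement: float  # hypothetical relevance score

def reverse_chronological(tweets):
    # The old timeline: newest first, no editorial judgement required.
    return sorted(tweets, key=lambda t: t.timestamp, reverse=True)

def algorithmic(tweets):
    # The new timeline: someone has to define this score, and that
    # definition is exactly where values and interests enter.
    return sorted(tweets, key=lambda t: t.predicted_engagement, reverse=True)
```

Whose values the score encodes is, of course, the question this whole post keeps circling.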

The Watters piece is an oldie, but one packed with interesting discussion of the relationships and intersections between algorithmic culture, freedom, marketing, values, and discrimination. This is the crucial quotation:

[quotation image missing]

MOOC assumptions

In the small amount of UX and ethnographic work I’ve already done, I’ve learned the value of admitting and questioning the biases and assumptions we might naturally hold. So I’ve spent a little time this week, in preparation for the micro-ethnography, thinking about what I might assume about the participants on the course, and whether those assumptions are fair.

I’ve come up with three things:

a) they’re human

But maybe they’re not. I’ve seen The New Adventures of Superman, and I cannot therefore discount the concept of robotic investigative journalism.

b) they know what a MOOC is

Leaving aside any epistemological dilemmas about the nature of knowledge, I’m not sure this is true. How much determinism can we assume? Heaven knows what I’ve signed up to without knowing it. So, erm, let’s switch this to…

b, again) they’ve heard of MOOCs, or they’ve been told about them, or they’ve stumbled across them randomly on the internet

Which feels like a spectacularly unhelpful statement. Finally, I ended up with:

c) they have an email address

This is probably all I can assume with any certainty. They have access to a computer, and to the internet – but we can’t be sure about the level of that access. What they do have is an email address and – crucially – the skills needed to set one up, to enrol, and to participate.