This week your task is to play with some algorithms and document the results.
(edc site instructions)
My initial investigations into how algorithms affect what is advertised to me, and how my search results are shaped by them, haven’t been terribly revealing thus far – which could, of course, be said to be revealing in itself.
I tried the ‘puppy dog’ search suggested in Christian Sandvig’s blog post Show and Tell: Algorithmic Culture:
I wondered (accepting that different languages collocate differently) what a similar search would look like in Spanish. I went for just ‘puppy’, as the collocations would complicate things: it’s ojos de cachorro (literally ‘eyes of puppy’) rather than ‘puppy dog eyes’, for example.
None of these options reflect my browsing history – I’ve looked for rescue Spanish water dog puppies (in both languages), and I’ve researched how long you can leave puppies alone when they are small (too long for the number of hours I’m at work). During the week I have also sought lyrics for tracks which had nothing to do with puppies – so perhaps this influenced the first results.
Google Ads information about me is a little hit and miss:
I do like sci-fi, documentaries, and action & adventure films… dogs… world news… travel… but make-up and cosmetics? Business productivity software? Shopping? Pop music? Quite a bit is ‘off’, and with such broad categories it’s hard to even know what some of them refer to.
In thinking about where the information might ‘come from’, it’s occurred to me that the algorithms have no idea why I go to sites – whether it is to get white noise to block out the noisy neighbours or to find articles that students I teach might relate to. This point is taken up by Eynon (2013, p. 239):
whilst being able to measure what people actually do is of great value, this is not sufficient for all kinds of social science research. We also need to understand the meanings of that behaviour which cannot be inferred simply from tracking specific patterns.
It’s also occurred to me that what I might actually be interested in is possibly getting drowned in the flow of the things I’m not. These algorithms are certainly not ‘exact’ in their calculations.
Still, I decided to change Google’s understanding of me… do I seek validation, for the algorithm(s) collecting my data to confirm my sense of self through a presentation of myself back to me, as Gillespie suggests we might (2012, p. 21 of the linked PDF)?
‘Ads’. What does it matter anyway?
Lots of the ads I ‘receive’ seem to be targeted on where I live rather than my browsing history:
There are exceptions, however:
The advertising at The Guardian knows I’ve been looking for accommodation in Barcelona (and presumably that I was searching in Spanish):
Facebook has also flagged this, as well as cottoning on to my need for long skirts in this region (not sure ‘Bohemian’ would go down at work, mind):
Amazon gave me mixed results.
Amazon.co.uk is basing its recommendations on books I bought in 2009:
Meanwhile, Amazon.com has linked my search for EU/US shoe size equivalency to my husband’s account, and is recommending some pretty ugly shoes to him when he logs in:
To be honest, the impact of all this on me seems minimal. I’ve already booked the accommodation I want in Barcelona (you’re too late, FB), and I’m not looking for anything else offered. Will I consider Fly Dubai next time I’m going there? Sure, but I did before anyway. However, as Turow (2012) highlighted, in treating the impact of algorithms (‘just a few ads’) as trivial, we ignore the scale of algorithms’ potential for prejudice:
In broader and broader ways, computer-generated conclusions about who we are affect the media content – the streams of commercial messages, discount offers, information, news, and entertainment – each of us confronts. Over the next few decades the business logic that drives these tailored activities will transform the ways we see ourselves, those around us, and the world at large. Governments too may be able to use marketers’ technology and data to influence what we see and hear.
From this vantage point, the rhetoric of consumer power begins to lose credibility. In its place is a rhetoric of esoteric technological and statistical knowledge that supports the practice of social discrimination through profiling. We may note its outcomes only once in a while, and we may shrug when we do because it seems trivial – just a few ads, after all. But unless we try to understand how this profiling or reputation-making process works and what it means for the long term, our children and grandchildren will bear the full brunt of its prejudicial force.
For me, my larger concern is with Google’s sorting/prioritising of the search results I get. I can consciously choose to ignore advertising, but how do I know what information is available if Google selects for me, based on its ideas about me?