Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which is able to reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture in which the algorithm could mean the same money for cab drivers but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers and good for society – but, like Watters, I fear that within capitalism there is little incentive for commercial entities to choose to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work; instead, it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them.
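(As a thought experiment, the trip-merging idea that sits behind ride-pooling can be sketched in a few lines. The snippet below is entirely my own toy illustration, with made-up coordinates and thresholds – not the MIT algorithm Rus describes – but it shows the basic logic of pairing near-duplicate journeys so that fewer cars are needed.)

```python
# Toy illustration of ride-pooling: greedily pair trips whose pickups and
# drop-offs are close, so two passengers share one car instead of two.
# Hypothetical data and thresholds; nothing like the real NYC-scale system.

from math import dist  # Euclidean distance (Python 3.8+)

# Imagined trips: (pickup_xy, dropoff_xy)
trips = [
    ((0.0, 0.0), (5.0, 5.0)),
    ((0.2, 0.1), (5.1, 4.8)),   # near-duplicate of the first trip
    ((9.0, 9.0), (1.0, 1.0)),
    ((8.8, 9.2), (1.2, 0.9)),   # near-duplicate of the third trip
    ((4.0, 7.0), (6.0, 2.0)),
]

MAX_DETOUR = 1.0  # how close both ends must be for two trips to share a car

def can_share(t1, t2):
    """Two trips can share a car if both pickup and drop-off are close."""
    return dist(t1[0], t2[0]) <= MAX_DETOUR and dist(t1[1], t2[1]) <= MAX_DETOUR

cars = []                      # each car serves one or two trips
unassigned = list(range(len(trips)))
while unassigned:
    i = unassigned.pop(0)
    partner = next((j for j in unassigned if can_share(trips[i], trips[j])), None)
    if partner is not None:
        unassigned.remove(partner)
        cars.append((i, partner))
    else:
        cars.append((i,))

print(f"{len(trips)} trips served by {len(cars)} cars")  # here: 5 trips, 3 cars
```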
I posted a link to the complete Pew Research Report (Code-Dependent: Pros and Cons of the Algorithm Age) a few weeks back (March 11). This week, while thinking about my final assignment for Education and Digital Cultures, I returned to Theme 7: The need grows for algorithmic literacy, transparency and oversight.
While the respondents make a number of interesting and important points about concerns that need to be addressed at a societal level – for example, managing the accountability (or the dissolution thereof) and transparency of algorithms, and avoiding the centralized execution of bureaucratic reason (or at least including checks and balances within the centralization that algorithms enable) – there were also points raised that need to be addressed at an educational level. Specifically, Justin Reich from the MIT Teaching Systems Lab suggests that ‘those who design algorithms should be trained in ethics’, and Glen Ricart argues that people need to understand how algorithms affect them and need to be able to personalize the algorithms they use. In the longer term, Reich’s point doesn’t seem limited to those studying computer science subjects: if, as predicted elsewhere in the same report (theme 1), algorithms continue to spread, more individuals will presumably be involved in their creation as a routine part of their profession, rather than their creation being reserved for computer scientists and programmers. Also, as computer science is ‘rolled out’ in primary and secondary schools, it makes sense that the study of the related ethics ought to be part of the curriculum at those levels too. Further, Ricart implies, in the first instance, that algorithmic literacy needs to be integrated into more general literacy/digital literacy instruction, and in the second, that all students will need to develop computational thinking and the ability to modify algorithms through code (unless black-boxed toolkits are provided to enable people to do this without coding per se, in the same way that Weebly enables people to build websites without writing code).
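To make Ricart’s second point a little more concrete for myself, I sketched what a ‘personalisable’ algorithm might look like at its most basic. Everything here – the weights, the item attributes, the scoring rule – is hypothetical; the point is only that the ranking logic is exposed for the user to read and change, rather than black-boxed.

```python
# A deliberately simple, hypothetical "personalisable" ranking algorithm of the
# kind Ricart's argument points towards: the scoring rule is plain weights
# that a user (not just a programmer) could inspect and edit.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    recency: float      # 0..1, newer = higher
    popularity: float   # 0..1, more clicks = higher
    from_friend: bool

# The "algorithm" is just a weighted sum; the weights are the user's to change.
my_weights = {"recency": 0.2, "popularity": 0.1, "from_friend": 0.7}

def score(item: Item, w: dict) -> float:
    return (w["recency"] * item.recency
            + w["popularity"] * item.popularity
            + w["from_friend"] * (1.0 if item.from_friend else 0.0))

feed = [
    Item("Viral listicle", recency=0.9, popularity=0.95, from_friend=False),
    Item("Friend's photo", recency=0.4, popularity=0.05, from_friend=True),
    Item("Breaking news", recency=1.0, popularity=0.6, from_friend=False),
]

for item in sorted(feed, key=lambda i: score(i, my_weights), reverse=True):
    print(f"{score(item, my_weights):.2f}  {item.title}")
```

With these weights the friend’s photo outranks the viral listicle; turn the ‘from_friend’ weight down and the order flips – which is exactly the kind of choice a user could be given, but usually isn’t.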
A little tool I hoped to use to help with my summarising this week – still programming its algorithm, though; it’s not quite ready to select what is relevant.
The first week of our algorithmic cultures block seemed ‘noisy’ – perhaps because there is so much recent news on the impact of algorithms, and studies into the same, for peers to share through Twitter. Certainly, it has felt like our investigations are timely.
Finally, in a post illustrating my own algorithmic play, I showed that Google is selective in what it records from search history, that Google Ads topics are hit and miss because the algorithms do not understand the meaning attached to online actions (demonstrating Eynon’s [2013, p. 239] assertion about the need to understand meaning, rather than just track actions), and that there is a desire for validation when the self is presented back through data (following Gillespie’s [2012, p. 21] suggestion). For me, the findings of my play seemed trivial – but such a stance belies the potential for algorithms to have real (and negative) impacts on people’s lives through profiling.
My initial investigations into how algorithms affect what is advertised to me, and how my search results are affected by algorithms, haven’t been terribly revealing thus far – which could, of course, be said to be revealing in itself.
Sandvig’s ‘puppy dog’ + space search results
My ‘puppy dog’ + space results
I wondered (accepting that different languages collocate differently) what a similar search would look like in Spanish. I went for just ‘puppy’, as the collocations would complicate things: it’s ojos de cachorro (literally ‘eyes of puppy’) rather than ‘puppy dog eyes’, for example.
Search for ‘puppy’
Search for ‘cachorro’
None of these options reflects my browsing history – I’ve looked for rescue Spanish water dog puppies (in both languages), and I’ve researched how long you can leave puppies alone when they are small (too long for the number of hours I’m at work). Also, during the week I sought lyrics for tracks which had nothing to do with puppies – so perhaps this influenced the first results.
Google Ads information about me is a little hit and miss:
What Google thinks I like
I do like sci-fi, documentaries, and action & adventure films… dogs… world news… travel… but make-up and cosmetics? Business productivity software? Shopping? Pop music? There’s quite a bit that is ‘off’, and with such broad categories it’s hard to even know what some of the categories refer to.
In thinking about where the information might ‘come from’, it has occurred to me that the algorithms have no idea why I go to sites – whether it is to get white noise to block out the noisy neighbours or to find articles that the students I teach might relate to. This point is taken up by Eynon (2013, p. 239):
whilst being able to measure what people actually do is of great value, this is not sufficient for all kinds of social science research. We also need to understand the meanings of that behaviour which cannot be inferred simply from tracking specific patterns.
It’s also occurred to me that what I might actually be interested in is possibly getting drowned in the flow of the things I’m not. These algorithms are certainly not ‘exact’ in their calculations.
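A crude way to picture why the calculations go wrong: if interest inference works anything like simple keyword matching on what I visit (a deliberately naive assumption on my part – the real systems are far more sophisticated), then every ‘interest’ below is technically grounded in my behaviour, and every one of them misreads it.

```python
# A crude, hypothetical sketch of keyword-based interest inference, to
# illustrate Eynon's point: tracking what I visit says nothing about WHY.

# Imagined browsing history (page titles only)
history = [
    "10 hours of white noise for sleep",       # drowning out the neighbours
    "Reading texts for teenage EFL students",  # lesson planning, not leisure
    "Bohemian maxi skirts",                    # climate-appropriate workwear
    "Song lyrics: Who Let the Dogs Out",       # nothing to do with puppies
]

# Hypothetical keyword -> "interest" rules of the kind an ad profile might use
rules = {
    "noise": "Music & Audio",
    "students": "Education Careers",
    "skirts": "Fashion & Style",
    "dogs": "Pets",
}

inferred = {interest
            for title in history
            for keyword, interest in rules.items()
            if keyword in title.lower()}

print(inferred)
# e.g. {'Music & Audio', 'Education Careers', 'Fashion & Style', 'Pets'}
# (set order varies) - each label is derived from my behaviour, and each
# one misses the meaning of that behaviour.
```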
Still, I decided to change Google’s understanding of me… do I seek validation, for the algorithm(s) collecting my data to confirm my sense of self through a presentation of myself back to me, as Gillespie suggests we might (2012, p. 21 of the linked PDF)?
Update: after deleting some YouTube videos from my viewing history, my ‘interests’ initially reverted to zero, but a day later they updated to this.
‘Ads’. What does it matter anyway?
Lots of the ads I ‘receive’ seem to be targeted on where I live rather than my browsing history:
Ad for? The bank? The Electricity and Water Authority? Unclear, since I don’t read Arabic. Appeared on a DIY website.
Fly Dubai ads appear frequently. Never used them, mind.
Ad from bbc.co.uk: I have no interest in buying a car. I am not in Saudi Arabia – though I can see Saudi from my compound. I do like palindromic numbers, though ;)
Ad from bbc.co.uk: The nearest restaurant advertising near me? In another country (UAE)? Is there no good food to be had closer to home?
There are exceptions, however:
The advertising at The Guardian knows I’ve been looking for accommodation in Barcelona (and presumably that I was searching in Spanish):
Facebook has also flagged this, as well as cottoning on to my need for long skirts in this region (not sure ‘Bohemian’ would go down well at work, mind):
Amazon gave me mixed results.
Amazon.co.uk is basing its recommendations on books I bought in 2009:
Meanwhile, Amazon.com has linked my search for EU/US shoe size equivalency to my husband’s account, and is recommending some pretty ugly shoes to him when he is logged in:
How are these ‘related’?
To be honest, the impact of all this on me seems minimal. I’ve already booked the accommodation I want in Barcelona (you’re too late, FB), and I’m not looking for anything else offered. Will I consider Fly Dubai next time I’m going there? Sure, but I did before anyway. However, as Turow (2012) highlighted, in treating the impact of algorithms (‘just a few ads’) as trivial, we ignore the scale of algorithms’ potential for prejudice:
In broader and broader ways, computer-generated conclusions about who we are affect the media content – the streams of commercial messages, discount offers, information, news, and entertainment – each of us confronts. Over the next few decades the business logic that drives these tailored activities will transform the ways we see ourselves, those around us, and the world at large. Governments too may be able to use marketers’ technology and data to influence what we see and hear.
From this vantage point, the rhetoric of consumer power begins to lose credibility. In its place is a rhetoric of esoteric technological and statistical knowledge that supports the practice of social discrimination through profiling. We may note its outcomes only once in a while, and we may shrug when we do because it seems trivial — just a few ads, after all. But unless we try to understand how this profiling or reputation-making process works and what it means for the long term, our children and grandchildren will bear the full brunt of its prejudicial force.
For me, my larger concern is with Google’s sorting/prioritising of the search results I get. I can consciously choose to ignore advertising, but how do I know what information is available if Google selects for me, based on its ideas about me?
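To illustrate the worry to myself (and only as a hypothetical sketch – I have no visibility into how Google actually ranks anything), here is what happens when the same set of results is reordered by a crude profile of ‘ideas about me’: the item my profile doesn’t predict simply sinks, and I never know it was there.

```python
# Hypothetical sketch of personalised re-ranking: the same results, reordered
# by a crude "profile" of me. Not Google's algorithm - just an illustration of
# how selecting on ideas about me can bury what I never get to see.

results = [
    ("Critique of ride-sharing economics", {"economics", "critique"}),
    ("Best tapas near your Barcelona flat", {"travel", "food"}),
    ("Cute puppy compilation", {"dogs", "video"}),
    ("Local news I might actually need", {"news", "local"}),
]

# An imagined profile inferred from my history
profile = {"dogs": 0.9, "travel": 0.8, "food": 0.6, "news": 0.2}

def personalised_score(tags):
    """Sum the profile weights of a result's tags; unknown tags count for 0."""
    return sum(profile.get(tag, 0.0) for tag in tags)

reranked = sorted(results, key=lambda r: personalised_score(r[1]), reverse=True)

for title, tags in reranked:
    print(f"{personalised_score(tags):.1f}  {title}")
# The critique scores 0.0 and drops to the bottom: I can choose to ignore an
# ad, but I cannot choose to ignore a result I was never shown.
```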