— Renée Hann (@rennhann) April 2, 2017
Culture Digitally the Podcast Episode 3: Conversations on Algorithms and Cultural Production – Culture Digitally
I’m a little annoyed with myself that I’ve lost track of my path to finding ‘Culture Digitally’. It first appears in my work computer’s Chrome history on 24 March (but not this link – it was to The Relevance of Algorithms). Nothing was posted to the Lifestream on that day… and reference to Gillespie’s work on algorithms came at least two weeks earlier. I like being able to picture my trails through the internet – little strings connecting and tying together some wanderings, starting new trails for others – but I’m lost on this one. It seems remarkably remiss in retrospect, as I really enjoyed the podcast, and it’s likely that the source I got the link from (though it was probably to a different page in Culture Digitally) has links to other things I might like. This could be becoming an obsession, though – and perhaps the volume of information has just exceeded my mental capacity – so I shall let it lie for now.
Right – the podcast. It is the type of transmission that causes several pages of sprawling notes, dotted with food if one happens to be cooking dinner, and multiple moments of vocalised agreement despite no chance the ‘author’ might engage. It is from 2012, though, which means I’m only recently engaging in serious thought about topics Tarleton Gillespie was articulating with great finesse a good 5 years ago. Not to inflate my own insightfulness, but how was I not concerned about the influence of algorithms 5 years ago? How is it that such concern is not more widely felt today? I’ve digressed: the podcast.
The ‘conversation’ features Ted Striphas and Tarleton Gillespie, interviewed by Hector Postigo. Key things which came out of the discussion for me were:
- the role of the algorithm in knowledge generation
- the role of the algorithm in the creation of publics
- the cultural life of the algorithm
- materiality of the algorithm
- ethics of the algorithm
In terms of 1, in the recording Gillespie noted how dependent we are on algorithms for ‘knowing general things’, such as which ideas are trending, and how this informs our understanding of the publics we exist in (2). Yet this reliance is afforded despite algorithms being ‘malleable’ (4): how an algorithm decides what is relevant is based on the operator’s business interests, and can be changed to accommodate them, therefore casting doubt on the supposed legitimacy of the ‘knowledge’ produced. This leads into ‘the cultural life of the algorithm’: what we expect it to do, what it does, and what its role is in business (3).
With such obvious conflicts of interest, concern about the ethics of algorithms (5) rises to the fore. Gillespie makes a really interesting comparison with journalism and the ethics of journalism, which he separates (based on a reference given in the recording) into the promise of journalistic objectivity and journalistic practices. To the former Gillespie links the view of the algorithm as cold and objective. Of the latter – which he notes are messy in both fields – Gillespie asks, “What are the practices beneath the algorithm?” and “Where is the ombudsman of algorithms?” If we have no central authority, to which the culture of Silicon Valley is resistant, how can there be assurances that the tech companies employing algorithms are committed to their end users? Why would there be? And what are the consequences of the reification of informationalised worlds? What new dispositions are created? What new habits?
Very thought-provoking questions, and ones that ought to be more widely considered now.
I was alerted to this excellent talk by Audrey Watters by peers who were tweeting while watching the live stream (thanks Colin, and others). Of course, it is also part of James Lamb’s ‘Lifestream’, in that he featured on screen 😉
As I listened to/watched this talk, my focus was still on ‘imaginaries’:
- the way machine learning is used as a metaphor for human learning (Watters asks, ‘Do humans even learn in this way’?), and the consequences that holding such an understanding will have on education;
- the giving of agency to robots (‘Robots are coming for our jobs’) when, as Watters says, robots do not have agency, and the decision to replace humans is one of owners opting for automation, choosing profit over people – how does the supposedly ‘technological’ proclamation naturalise the loss of human labour?
- the ‘Uber’ model, and ‘uberization’: how is the ‘driverless’ story of algorithmic governance sold, so that surveillance, the removal of human decision-makers with human values, and personalisation all become naturalised… and even seen as goals?
There is so much in this talk which I valued. I won’t go through it all – the basic premise is that we need to resist the imaginaries, the reliance on data, and we need to recognise the motivations driving the imaginaries while valuing the human in education. It links to my idea for the final assignment about unpacking ‘imaginaries’ more, but also to my ideas about making a site based around developing ‘algorithmic literacy’:
“You’ve got to be the driver. You’re not in charge when the algorithm is driving” [38:30]
Really worth the watch.
Much of our work on Code Acts in Education over the past few years has focused on the work that algorithms do (and what they are made to do and by who) in relation to learning, policy and practice. But the work of algorithms extends far beyond education of course.
via Pocket http://ift.tt/2hLkgE5
Ben Williamson, while acknowledging the influence of algorithms on his own search results, performed inurl: searches for algorithms within major UK news websites. The short form results (all quoted):
- The Guardian‘s editorial line is to treat the algorithm as a governor;
- The Telegraph treats the algorithm as a useful scientist whose expertise is helping society;
- The Sun is largely disinterested in algorithms in terms of newsworthiness;
- the editorial line of The Mirror is to treat algorithms in terms of brainy expertise;
- Algorithms as problem-solvers might be one way of categorizing its [The Daily Mail‘s] editorial line*
*Based on an initial search. An hour later Williamson repeated the search, and received different results. “The Daily Mail is certainly not disinterested in algorithms–the result returns are pretty high compared to the tabloids, and the Mail does frequently re-post scientific content from sources like The Conversation–but by no means does it adopt the kind of critical line found in The Guardian.”
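Williamson’s method can be approximated with Google’s `inurl:` and `site:` operators. A minimal sketch of constructing such site-restricted query URLs, assuming the standard Google search endpoint (the choice of domains here is my own illustration, not Williamson’s list):

```python
from urllib.parse import urlencode

# Hypothetical selection of UK news domains to probe (my assumption).
SITES = {
    "The Guardian": "theguardian.com",
    "The Telegraph": "telegraph.co.uk",
    "The Sun": "thesun.co.uk",
}

def build_query_url(domain, term="algorithm"):
    """Build a Google search URL restricted to pages within one news
    site whose URLs contain `term` (mirroring an inurl: search)."""
    query = f"inurl:{term} site:{domain}"
    return "https://www.google.com/search?" + urlencode({"q": query})

for name, domain in SITES.items():
    print(name, "->", build_query_url(domain))
```

Of course, as Williamson notes, the results such queries return are themselves algorithmically shaped – and unstable, as his repeat search an hour later showed.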
My concerns about algorithms are related to governance, and I read The Guardian… Do I read The Guardian because it (more than the other publications given) matches my worldview, or do I think the way I do because of the publications (like The Guardian) that I read? Or, was I initially attracted to The Guardian because of its similarity to my worldview, but now my worldview is influenced by the fact that I read The Guardian, and its initial similarity to my worldview perhaps allows some things to slip beneath the questioning of my ‘truth’ radar?
Fascinating work – makes me wonder, is there a website that presents diverse viewpoints on topics and events using inurl: searches? i.e. monitors news sites, feeding content from diverse sources, organised by topic or event, using humans to add new topics/events as events occur? And with an editorial team to summarise the editorial positions of the publications represented on specific topics? Would such a site help combat political polarisation and divisiveness?
Also… how can we teach ‘algorithmic literacy’? Can we? When do we start? Would a site which unpacked what this could look like, and offered teaching ideas and a place for discussion, be of use? [Assignment ideas…]
This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.
Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.
via Pocket http://ift.tt/2n8CT5W
I found this post after reading Knox’s (2014) post on interpreting analytics in the same blog space. What would we call that? Searching laterally? Something which was, at the time, really frustrating in DEGC was that we were always given links to journal home pages rather than to the specific article we were reading. While I seem to recall this being connected to copyright and appropriate practice, it was frustrating because none of the links were set to open in a new window/tab by default, so unless one right-clicked and opened a new window/tab, one had to go back to the original page to find out which issue one was looking for… but I’ve subsequently reflected (repeatedly!) on how it made me much more aware of the types of ‘publications’ and their respective content, and perhaps as a result my ‘lateral searching’ has increased. It’s not a new practice, of course, but an addictive one nonetheless, and it’s always good to find a ‘treasure trove’ of good reads.
I’m getting tangential, though – what caught my eye about this post, in particular, was the focus on ‘imaginaries’, and the ways in which such ‘imaginaries’, or fictions, play a role in the creation of future reality. Williamson writes,
..what I’m trying to suggest here is that new ways of imagining education through big data appear to mean that such practices of algorithmic governance could emerge, with various actions of schools, teachers and students all subjected to data-based forms of surveillance acted upon via computer systems.
Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.
The post has a lot more in it, focusing on how the imaginaries of ‘education data science’ combined with affective computing and cognitive computing are leading to a new kind of ‘algorithmic governance’ within education. Frightening stuff, to be frank.
What I’m really interested in is the role of these ‘imaginaries’ though: how do fictions, and, frequently, corporate fictions, work their influence? Which previous imaginaries, captured in science fiction, can we trace – along with their reception over time – to present day materialities?
And, why are ‘the people’ so passive? Why isn’t there shouting about imaginaries being presented as inevitable? Why isn’t there protest? A rant: “Uh – you want to put a camera on my kid’s head, to tell me how she’s feeling? Have you thought about asking her? You want to produce data for parents? How about as a society ‘just’ recognising the value of non-working lives and giving people enough time to spend with their kids while they’re trying to pay rent or a mortgage?”
It would make an interesting study – perhaps too large for our EDC final assignment, but I’m wondering how it could be scaled back.
Renee, thanks for this – and for the alert to the very-well-hidden hyperlink. I wouldn’t have found it without your second comment!
The graphs risk masking something acknowledged in the accompanying text, namely that “the annual number of single-author, non-review papers themselves, as tracked since 1981, has remained largely consistent in the course of the three decades”. The declining percentage share reflects the increase in multi-author pieces, not so much a decline in the single-authored pieces per se. Clearly a complex picture is in view.
Also, I’m curious that there is no category for ‘humanities’: presumably it’s incorporated within ‘social sciences’. I’d imagine, within that category, there are lots of sub-sectors, each with their own practices, circulations and markets. Different assemblages, reacting to and with digital cultures in differing ways. Great to have some data-led insight on it, and inviting of more. Many thanks!
from Comments for Matthew’s EDC blog http://ift.tt/2obZwrK
— Renée Hann (@rennhann) March 28, 2017
Stephen Downes’ summary:
When I spoke at the London School of Economics a couple of years ago, part of my talk was an extended criticism of the use of models in learning design and analysis. “The real issue isn’t algorithms, it’s models. Models are what you get when you feed data to an algorithm and ask it to make predictions. As (Cathy) O’Neil puts it, ‘Models are opinions embedded in mathematics.'” This article is an extended discussion of the problem stated much more cogently than my presentation. “It’s E Pluribus Unum reversed: models make many out of one, pigeonholing each of us as members of groups about whom generalizations — often punitive ones (such as variable pricing) — can be made.”
My additions (i.e. from my reading of the article):
What are ‘weapons of math destruction’?
Statistical models that:
- are opaque to their subjects
- are harmful to subjects’ interests
- grow exponentially to run at large scale
What’s wrong with these models that leads to them being so destructive?
1. lack of feedback and tuning
2. the training data is biased. For example,
The picture of a future successful Ivy League student or loan repayer is painted using data-points from the admittedly biased history of the institutions
3. “the bias gets the credibility of seeming objectivity”
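O’Neil’s point that ‘models are opinions embedded in mathematics’ can be made concrete with a toy sketch (entirely my own illustration, not from the article): a ‘predictor’ trained on a skewed admissions history simply learns the historical skew and reports it back with the credibility of a number.

```python
from collections import Counter

# Hypothetical historical admissions data: (school_type, admitted).
# The history is skewed: private-school applicants were admitted more often.
history = [
    ("private", True), ("private", True), ("private", True), ("private", False),
    ("state", True), ("state", False), ("state", False), ("state", False),
]

def train(records):
    """'Train' by recording the historical admit rate per group --
    the model is nothing more than the bias in the data."""
    totals, admits = Counter(), Counter()
    for group, admitted in records:
        totals[group] += 1
        admits[group] += admitted  # True counts as 1
    return {g: admits[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """The 'objective' prediction just replays the historical skew."""
    return model[group] >= threshold

model = train(history)
print(model)                      # {'private': 0.75, 'state': 0.25}
print(predict(model, "private"))  # True
print(predict(model, "state"))    # False
```

With no feedback loop to check its predictions against reality (point 1 above), the model’s pigeonholing of ‘state’ applicants simply persists.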
Why does it matter?
It’s a grim picture of the future: WMD makers and SEO experts locked in an endless arms-race to tweak their models to game one another, and all the rest of us being subjected to automated caprice or paying ransom to escape it (for now). In that future, we’re all the product, not the customer (much less the citizen).
Inside this picture, the cost of ‘cleaning up’ the negative externalities that result from sloppy statistical models is more expensive than the savings that companies make through maintaining the models. Yet, we pay for the cleaning up (individually, collectively), while those pushing the weak statistical models save.
The other loss is, of course, the potential: algorithms could, with good statistical modelling, serve societal needs, and those in need within society.
The line of argument is hard to argue with – but one does have to ask, is ‘sloppy’ the right term? Is it just sloppiness? At what point does such ‘sloppiness’ become culpable? Or, malicious disregard?
In the final teaching week of Education and Digital Cultures, my Lifestream blog seems to have divided into several tributaries, winding towards the final assignment. The first tributary meandered through some of the work of peers, with visits to Daniel’s blog, which houses an impressive attempt to coordinate peer notetaking; Stuart’s blog, where I found an excellent, coordinated algorithmic analysis with Chenée; Matthew’s blog, to talk about algorithmic impacts on notions of singular authorship (pending moderation); and Dirk’s blog, in search of answers about how data was created and missing meaningfulness. This tributary then diverted to relatively social engagement on Twitter, with discussion of breaking black boxes, which I did symbolically by cracking open my computer shell for a repair, and spam.
The second tributary was concerned with the role of ‘data’ and algorithms in research process and products. My exploration began with discussion of a blog post from Knox (2014) regarding inverting notions of abstraction. A reading of Vis (2013) continued these explorations, with a focus on how data are selected, how visibility is instrumentalised, and the unreliability which is induced by monetisation. Next, through Markham (2013), I questioned the neutrality of ‘data’ as a research frame, and was wooed by her calls to embrace complexity and acknowledge research as a ‘continual, dialogic, messy, entangled, and inventive’ process. This tributary culminated in a damming of sorts (or, damning), with my analysis of our Tweetorial’s algorithmic interpretation by Tweet Archivist.
In the final tributary, I investigated the entanglement of human and technical agency, driven by wider concerns about the governance of society and how ‘citizens’ can maintain a voice in that governance when so much influence is exerted through commercial and technical agency. Divisions in (and the co-evolution of) agency were explored through discussion of Matias’ (2017) research into algorithmic nudges with /r/worldnews (and in these notes on a Tweet), and developed based on a blog post in which Rahwan (2016) writes of “embedding judgement of society, as a whole, in the algorithmic governance of outcomes.” A peer (Cathy) helped me to connect this with predictive analysis ‘nudges’ in education, where I similarly see a need for collective agency to be used to integrate human values and ensure accountability. This line of thinking also links to ethical concerns about new technologies raised in our cybercultures block.