Power in the Digital Age

[Image: Talking Politics logo]

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

[Image: synopsis of the podcast episode]

This particular episode looks at the ways in which politics and technology intersect: socio-critical and socio-technical issues around power and surveillance, the dominance of large technology companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, while machine learning is the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, argues Runciman, the host, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as being different from the data-as-oil one, largely because of the perspective taken. It’s not really framed from a corporate perspective, although I think the data-as-land metaphor assumes that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective on technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adapting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approach’ because I think that while the academics in this podcast recognise the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe this control needs to be wrested back from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than posthumanism.

Confessions of a distance learning refusenik – linear courses

An occasional blog, pulled together from my research diary for the Teaching and Learning Online Module for the MA: Digital Technologies, Communication and Education at the University of Manchester.

from Pocket http://ift.tt/2oLX7HE
via IFTTT

The post above is written by a colleague and friend of mine, Ange Fitzpatrick. Ange is a student on the Digital Technologies course at the University of Manchester. It is a brutally honest post about the ways in which she engages with the course she is taking: with its structure, and with the technology through which it is enacted.

The post resonated with me for several reasons. I’m interested in the way that Ange is taught in comparison with the way that I am, in the similarities and differences between the two offerings. Empathy is a big thing too – like Ange, I’ve juggled this course with a family (occasionally in crisis, like most families) and a demanding job. I can snatch time here and there during the week, and am usually able to carve out more time at weekends, but it means I’m not always available (or awake enough) for much of the scheduled ‘teaching’.

Like Ange, I’ve been an independent learner for a long time; I fear it’s turned me into a really bad student. I like finding my own stuff to read rather than going with what is suggested. I feel as though I don’t need much support (though others may disagree!). I’m neither proud nor ashamed of this, but it does put me at odds – and it makes me feel at odds – with what has been an extremely supportive cohort of students and teachers. I have a laissez-faire attitude to assessment: I’ll do my best, and I do care a little about the marks. But more than anything I’m here to be ‘contaminated’ (to borrow Lewis and Kahn’s term) by ideas that are new to me. I’d rather things got more complicated than simpler.

The reason I really wanted to share this, though, was that I feel that Ange’s post highlights and exemplifies the entanglements of digital and distance education. It reveals the complex assemblages and networks at play in how we engage with course materials, in how we define ‘engagement’. It uncovers the dispersal of activity, the instability, the times when instrumentalist approaches feel like the only option. It epitomises our attempts to stay in control, to centre and recentre ourselves at the nexus of our studying. It underlines the networks: the multi-institutional, political, cultural, familial, social, soteriological networks that combine and collide and co-constitute. It exposes the totalising sociomateriality of experience, “the delicate material and cultural ecologies within which life is situated” (Bayne, 2015, p. 15). And it does so from the perspective of the student.

But it also, I think, emphasises the – I say this tentatively – relative redundancy of these ideas and critical assessments. Recognition of the networks and rhizomes does not provide Ange with a more navigable path through her course. This doesn’t mean that these considerations are not important, but it does – for me at least – point to a disjunction between theory and practice.

References

Bayne, S. (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. https://doi.org/10.1080/17439884.2014.915851

With many, many thanks to Ange for letting me share her post.

The Top Ed-Tech Trends (Aren’t ‘Tech’)

Every year since 2010, I’ve undertaken a fairly massive project in which I’ve reviewed the previous twelve months’ education and technology news in order to write ten articles covering “the top ed-tech trends”.

from Pocket http://ift.tt/2nwX9OP
via IFTTT

This is a really interesting post from one of my favourite blogs, Hack Education. It’s the rough transcript of a talk given by Audrey Watters, about her work developing the ‘top ed-tech trends’. She talks about the ways in which this cannot be predictive, but is a ‘history’ of technology, and one which is immersed in claims made about technology by the people who are trying to sell it to us. Technology, she says wryly, is always amazing.

I want us to think more critically about all these claims, about the politics, not just the products (perhaps so the next time we’re faced with consultants or salespeople, we can do a better job challenging their claims or advice).

Her argument is a profound one, and one which coheres nicely with the principal themes in EDC. She conceptualises technologies as ideological practices rather than tools: not simply things you can go out and buy and, in doing so, render yourself ‘ed-tech’ – a form of technological solutionism. They have a narrative, and that narrative includes the $2.2 billion spent on technology development in 2016.

Personalization. Platforms. These aren’t simply technological innovations. They are political, social – shaping culture and politics and institutions and individuals in turn.

Watters ends with a plea to us all. When we first encounter new technologies, we should consider not just what they can do, or what our ownership or mastery of the product might say about us, but also their ideologies and their implications.

Really, definitely, absolutely worth reading.

The future is algorithms, not code

[Image: some code-like algorithms]

The current ‘big data’ era is not new. There have been other periods in human civilisation where we have been overwhelmed by data. By looking at these periods we can understand how a shift from discrete to abstract methods demonstrates why the emphasis should be on algorithms, not code.

from Pocket http://ift.tt/2nGVibD
via IFTTT
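The excerpt’s distinction between discrete and abstract methods is worth pinning down. Here’s a minimal sketch, with invented toy data, of one way to read ‘algorithms, not code’: in the discrete approach the rule is written directly in code, while in the abstract approach the ‘rule’ is learned from examples.

```python
# One reading of 'algorithms, not code', using invented toy data:
# a discrete, hand-written rule versus a rule abstracted from examples.

messages = [("win a free prize now", True), ("meeting moved to 3pm", False),
            ("free money, click now", True), ("lunch tomorrow?", False)]

# Discrete: the rule itself is code, written by a person.
def spam_by_rule(text):
    return "free" in text

# Abstract: the 'rule' is a set of word scores derived from the data.
scores = {}
for text, is_spam in messages:
    for word in text.split():
        scores[word] = scores.get(word, 0) + (1 if is_spam else -1)

def spam_by_model(text):
    return sum(scores.get(word, 0) for word in text.split()) > 0

print([spam_by_rule(t) for t, _ in messages])   # [True, False, True, False]
print([spam_by_model(t) for t, _ in messages])  # [True, False, True, False]
```

Both get the toy examples right, but only the second shifts its behaviour when the data shifts: the emphasis moves from the code itself to the algorithm that produces the rule.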

Trump is erasing gay people like me from American society by keeping us off the census

As a gay man, I literally don’t count in America. Despite previous reports that we would be counted for the first time in history, this week the Trump administration announced that LGBT Americans will not be included in the 2020 census.

from Pocket http://ift.tt/2njLWRP
via IFTTT

I read about this earlier in the week, and when I watched the TED talk on statistics I was reminded of it. There was talk recently of LGBT Americans being counted in the 2020 census. Obviously, being able to quantify the number of LGBT people would mean that policy has to take this information into account – if the census demonstrates conclusively that x% of Americans are LGBT, then that is a weapon for agitating for better rights, better provisions, better services, better everything, really. The plan to count LGBT Americans has been shelved this week, and this represents a major challenge for the LGBT community in the US.

I think it’s a really clear example of the socio-critical elements of data and algorithmic cultures. If you have an unequal structure to begin with, then the algorithms used to make sense of it may well replicate that inequality. And if you decide the data is not necessary to begin with, then there’s no accountability at all.
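To make that concrete, here’s a minimal sketch (all figures invented) of how an uncounted group simply vanishes from downstream decisions. No malicious algorithm is required; absent data is enough.

```python
# A minimal sketch of how missing census data propagates into policy.
# Resources are allocated in proportion to *counted* populations, so a
# group that is never counted is invisible to every downstream decision.
# All figures are invented for illustration.

counted = {
    "group_a": 600_000,
    "group_b": 350_000,
    # LGBT Americans: never collected, so they cannot appear here at all
}

budget = 1_000_000  # hypothetical service budget

total = sum(counted.values())
allocation = {group: budget * n / total for group, n in counted.items()}
print(allocation)  # {'group_a': 631578.9..., 'group_b': 368421.0...}

# The uncounted group receives nothing -- not because a model decided
# against them, but because the input data never represented them.
```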

Revealed: what people watch on Netflix where you live in the UK

[Image: screenshot from Gilmore Girls]
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits.

from Pocket http://ift.tt/2nTlClj
via IFTTT

Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits. By analysing statistics between October 2016 and this month, the streaming service was able to reveal which parts of the country are more inclined to watch a specific genre compared to others.

So quotes the article above. I know it’s only a bit of silliness – it’s one step away from a Buzzfeed-esque ‘Can we guess where you live based on your favourite Netflix show?’. The worst bit is that there’s a tiny amount of truth to it: I have watched Gilmore Girls AND I live in the South East. I reject the article’s proposal, however, that this implies that I am “pining for love”.

So yes, it’s overly simplistic and makes assumptions (such as the assumption that everyone watches Netflix, or that heterogeneity is the result of a postcode lottery); ultimately, it’s a bit of vapid fluff. But it’s also a bit of vapid fluff that exemplifies how deeply algorithmic cultures are embedded in the media we consume: the data collected about us is now itself entertainment output.
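For what it’s worth, the comparison behind the headlines is probably something as mundane as a relative-propensity calculation: a genre’s share of a region’s viewing divided by its share of national viewing. A toy sketch with invented figures (I don’t know Netflix’s actual method):

```python
# Toy relative-propensity calculation with invented viewing figures.
# A ratio above 1.0 marks a region as 'more inclined' towards a genre.

views = {
    ("south_east", "drama"): 120, ("south_east", "comedy"): 80,
    ("scotland", "drama"): 60,    ("scotland", "comedy"): 140,
}

# National viewing totals per genre.
national = {}
for (region, genre), n in views.items():
    national[genre] = national.get(genre, 0) + n
national_total = sum(national.values())

for region in sorted({r for r, _ in views}):
    region_total = sum(n for (r, _), n in views.items() if r == region)
    for genre in sorted(national):
        local_share = views.get((region, genre), 0) / region_total
        national_share = national[genre] / national_total
        print(region, genre, round(local_share / national_share, 2))
# scotland comedy 1.27, scotland drama 0.67,
# south_east comedy 0.73, south_east drama 1.33
```

That’s all the ‘pining for love’ headline amounts to: a ratio, dressed up as a personality.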


Elon Musk Isn’t the Only One Trying to Computerize Your Brain

Elon Musk wants to merge the computer with the human brain, build a “neural lace,” create a “direct cortical interface,” whatever that might look like.

from Pocket http://ift.tt/2nqnLSf
via IFTTT

This reminds me of the part about Moravec’s Mind Children in N. Katherine Hayles’ book, How We Became Posthuman (I just read ‘Theorizing Posthumanism’ by Badmington, which refers to it as well). There’s a scenario in Mind Children, writes Hayles, where Moravec argues that it will soon be possible to download human consciousness into a computer.

How, I asked myself, was it possible for someone of Moravec’s obvious intelligence to believe that mind could be separated from body? Even assuming that such a separation was possible, how could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment? Shocked into awareness, I began to notice he was far from alone. (1999, p. 1)

It appears that Moravec may not have been wrong about the technical possibility of ‘downloading’ human consciousness, but let’s hope the scientists all get round to reading Hayles’ work on this techno-utopia before the work really starts…

References

Badmington, N. (2003). Theorizing posthumanism. Cultural Critique, (53), 10–27.

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.

WhatsApp’s privacy protections questioned after terror attack

[Image: a padlock silhouette on the green WhatsApp telephone logo]

Chat apps that promise to prevent your messages being accessed by strangers are under scrutiny again following last week’s terror attack in London. On Sunday, the home secretary said the intelligence services must be able to access relevant information.

from Pocket http://ift.tt/2nXBsM5
via IFTTT

This is only tangentially related to our readings and the themes we’ve been exploring throughout the course, but I do think it’s worth including. Many ‘chat’ apps use end-to-end encryption, so the messages sent are private: unreadable even by the company itself. The government clearly believes that this shouldn’t be allowed, and is attempting to take steps to prevent it. Hopefully unsuccessfully, I should add.

There’s an assumption here that data about us ought to be at least potentially public – chat apps, says the Home Secretary, must not provide a ‘secret place’. It’s not far from this position to one that says that we don’t own the data we generate, along with the data generated about us: where we are, who we send messages to, and so on. There are questions around the intersection of civil liberties and technology, and whether there’s a digital divide in terms of the ability to protect yourself from surveillance online.
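For context, the ‘secret place’ is a mathematical property, not a policy choice by the apps. A minimal sketch of end-to-end encryption using the PyNaCl library (key exchange is simplified for illustration):

```python
# End-to-end encryption in miniature, using PyNaCl (pip install pynacl).
# Key distribution is simplified for illustration.
from nacl.public import PrivateKey, Box

alice, bob = PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts using her own private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# The service provider relays only this ciphertext. Without Bob's
# private key it cannot recover the message: there is no 'secret
# place' for the company to open, because it never held a key.

# Only Bob, holding his private key, can decrypt.
plaintext = Box(bob, alice.public_key).decrypt(ciphertext)
print(plaintext)  # b'meet at noon'
```

Which is why ‘allowing access’ would mean weakening the scheme itself, for everyone.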


Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

[Image: rainbow flag]
Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos? YouTube is having a bad week.

from Pocket http://ift.tt/2n6Zi5b
via IFTTT

YouTube has been in the news lately because of two connected battles: the positioning of certain ads around what might be considered ‘extremist’ content, and the inherent problems in the algorithms used to categorise content as extremist or restricted.

The article in the New Statesman attempts to bring the moral arguments to bear on these algorithmic moves, raising the problem of a false equivalence between extremist videos and LGBT content, and asking whether handing YouTube the responsibility for censoring certain voices ultimately represents a problematic transfer of power.
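To see how easily that false equivalence arises, here’s a deliberately naive keyword filter; the terms and titles are invented, and YouTube’s real system is machine-learned rather than keyword-based, but the failure mode is analogous:

```python
# A deliberately naive sketch of 'restricted mode' filtering, to show how
# false equivalence happens: terms that merely *mention* an identity end
# up treated as sensitive content. Terms and titles are invented.

RESTRICTED_TERMS = {"extremist", "attack", "gay", "transgender"}

videos = [
    "recruiting video praising an extremist attack",
    "two women announce their engagement (gay vlog)",
    "transgender teen talks about coming out at school",
    "cookery channel: perfect roast potatoes",
]

for title in videos:
    flagged = any(term in title for term in RESTRICTED_TERMS)
    print("RESTRICTED" if flagged else "ok", "-", title)

# The propaganda video and the two LGBT videos are all flagged alike:
# the filter cannot distinguish violent content from ordinary LGBT life.
```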