Power in the Digital Age

[Image: Talking Politics logo]

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

[Image: synopsis of the episode]

This particular episode looks at the ways in which politics and technology intersect: socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, and machine learning is the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, argues Runciman, the host, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as being different from the data-as-oil one, largely because of the perspective taken. It’s not really taken from a corporate perspective, although I think the data-as-land metaphor assumes that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective on technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adapting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approach’ because I think that while the academics in this podcast recognise the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe there’s a need to wrest control back from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than with posthumanism.

Trump is erasing gay people like me from American society by keeping us off the census

As a gay man, I literally don’t count in America. Despite previous reports that we would be counted for the first time in history, this week the Trump administration announced that LGBT Americans will not be included in the 2020 census.

from Pocket http://ift.tt/2njLWRP
via IFTTT

I read about this earlier in the week, and I was reminded of it when I watched the TED talk on statistics. There was talk recently of LGBT Americans being counted in the 2020 census. Obviously, being able to quantify the number of LGBT people means that policy has to take this information into account – if the census demonstrates conclusively that x% of Americans are LGBT, then that is a weapon for agitating for better rights, better provisions, better services, better everything, really. The plan to count LGBT Americans has been shelved this week, and this represents a major challenge for the LGBT community in the US.

I think it’s a really clear example of the socio-critical elements of data and algorithmic cultures. If you have an unequal structure to begin with, then the algorithms used to make sense of it may well replicate that inequality. And if you decide the data doesn’t need to be collected in the first place, then there’s no accountability at all.
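If an algorithm (or even just a formula) is fed data shaped by that unequal structure, its output inherits it. A minimal sketch, with entirely made-up numbers, of how a census-driven allocation simply cannot see a group that was never counted:

```python
# Made-up census counts: one group is never counted, so it cannot appear here at all.
census_counts = {
    "group_a": 120_000,
    "group_b": 45_000,
}

budget = 1_000_000
total = sum(census_counts.values())

# Allocate the budget in proportion to the counted populations.
allocation = {group: budget * count / total for group, count in census_counts.items()}

print(allocation)                              # the counted groups share the whole budget
print(allocation.get("uncounted_group", 0))    # 0 - absence from the data means absence from the outcome
```

Nothing in the output even hints that anyone was left out, which is exactly the accountability problem.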

How to spot a bad statistic

This is a great talk on the use of statistics. Mona Chalabi is a data journalist, and here she outlines three ways of questioning statistics, based on her assessment that “the one way to make numbers more accurate is to have as many people as possible be able to question them.”

The three questions she provides were, I thought, generally quite obvious; as a teacher of information literacy, I find they echo quite substantially the sorts of questions I encourage my students to ask. But there were two points she made that I thought were really important and relevant to EDC, especially algorithmic cultures.

The first was about overstating certainty, and how statistics can be used in a way that makes them describe situations as either black or white, with little middle ground. Sometimes this is a result of how they’re collected in the first place, sometimes it’s how they’re communicated, and sometimes it’s how they’re interpreted. I think this is one of the reasons I’m hesitant about learning analytics; its innate tendency towards what can be quantified might lead to an overestimation of certainty, whether in the way data about students is collected, communicated or interpreted. And, as we’ve seen, that data can become predictive, or a self-fulfilling prophecy.
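As a small illustration of that overstated certainty (the figures below are entirely invented), the same survey responses can be reported as a single crisp percentage or as a percentage with its margin of error; the first reads as black and white, the second admits the middle ground:

```python
import math
import statistics

# Invented survey responses: 1 = agree, 0 = disagree.
responses = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0]

p = statistics.mean(responses)               # the headline figure
n = len(responses)
margin = 1.96 * math.sqrt(p * (1 - p) / n)   # rough 95% margin of error

print(f"Headline: {p:.0%} agree")
print(f"With uncertainty: {p:.0%} ± {margin:.0%} (n = {n})")
```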

The second point that I thought was really interesting was how Mona responds to this situation of certainty. She takes real data sets and turns them into hand-drawn visualisations so that the imprecision, the uncertainty, can be revealed. She says, “so that people can see that a human did this, a human found the data and visualised it”. A human did this, and so we anticipate uncertainty. Inherent here is a mistrust in the ability of technology to replicate nuance and complexity, which I think is misguided. But there’s also an underlying assumption about statistics: that a computer is able to hide imprecision in a way that humans cannot, that computer data visualisations are sleek while human data visualisations are shaky. This is a fascinating conceptualisation of the relationship between humans and technology, of the ways in which each can be used instrumentally to make up for the weaknesses of the other.
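The same move can even be simulated in software: matplotlib ships an ‘xkcd’ mode that renders charts in a deliberately wobbly, hand-drawn style. A minimal sketch with invented data:

```python
import matplotlib.pyplot as plt

years = [2013, 2014, 2015, 2016, 2017]
values = [52, 55, 61, 58, 64]        # invented figures, purely for illustration

with plt.xkcd():                      # everything drawn in this block looks hand-sketched
    fig, ax = plt.subplots()
    ax.plot(years, values, marker="o")
    ax.set_title("A deliberately imprecise-looking chart")
    ax.set_ylabel("Some made-up measure")
    fig.savefig("sketchy.png")
```

Which rather complicates the neat division between sleek computer visualisations and shaky human ones.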

WhatsApp’s privacy protections questioned after terror attack

[Image: a padlock silhouette on the green WhatsApp logo]

Chat apps that promise to prevent your messages being accessed by strangers are under scrutiny again following last week’s terror attack in London. On Sunday, the home secretary said the intelligence services must be able to access relevant information.

from Pocket http://ift.tt/2nXBsM5
via IFTTT

This is only tangentially related to our readings and the themes we’ve been exploring throughout the course, but I do think it’s worth including. Many ‘chat’ apps use end-to-end encryption, so messages cannot be read by anyone other than the sender and recipient – not even by the company itself. The government clearly believes that this shouldn’t be allowed, and is attempting to take steps to prevent it. Hopefully unsuccessfully, I should add.

There’s an assumption here that data about us ought to be at least potentially public – chat apps, says the Home Secretary, must not provide a ‘secret place’. It’s not far from this position to one that says that we don’t own the data we generate, along with the data generated about us: where we are, who we send messages to, and so on. There are questions around the intersection of civil liberties and technology, and whether there’s a digital divide in terms of the ability to protect yourself from surveillance online.
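The technical detail matters here. With end-to-end encryption the keys live only on the users’ devices, so whoever relays the message (the company, or anyone compelling it) sees only ciphertext, although the metadata mentioned above (who messaged whom, and when) remains visible. A minimal sketch of the idea using the PyNaCl library, with invented names and an invented message:

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; the private keys never leave the devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"See you at the usual place")

# A relaying server only ever handles this ciphertext, not the content.
print(ciphertext.hex()[:60] + "...")

# Only Bob, holding his private key, can decrypt it.
print(Box(bob_key, alice_key.public_key).decrypt(ciphertext))
```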

 

Always look on the bright side of life(stream) – Week 10 summary

Interpretation is the theme this week, wedded strongly to recognition of the need to make space for cognitive dissonance, for the pluralism of truth, for the concurrent existence of multiple and conflicting interpretations.

It emerges, for example, in considerations of what does, or should, constitute restricted content on YouTube. It’s there in questions around whether learning analytics might help or hinder the development of critical reflective skills on learning gain. And of course, it’s readily apparent in responses to the analytics of the tweetorial last week. In my padlet, my point wasn’t to indicate that some conclusions are better than others, though clearly sometimes they are. It was to demonstrate the potential co-existence of varying, contradictory interpretations. In my blog post analysing the data, I argue that it is the stability of data which gives pause, rather than its scope for misinterpretation. The data remains fixed while its meanings change, an ongoing annulment of data and meaning.

In many ways, this seems to conflict rather than cohere with EDC themes. In cybercultures, I questioned whose voices we hear and the ‘black boxing’ of the powerless or unprivileged. In community cultures, I discussed how singularity of voice or shared experience might engender community development. Here, though, I’m finding that interpretation is ceaselessly multifaceted.

Knox (2014) discusses the ways in which learning analytics might be a means of ‘making the invisible visible’. Perhaps this is happening here. The data is visible where it once might have been hidden; this permits a multitude of interpretations to be visible too, where once only the dominant interpretation would have been. Perhaps learning analytics elicits a shift in power.

Or, perhaps, the dominant interpretation has become this multitude of voices. The dissonance is destabilising, and so in the end only the data is rendered visible, stable, victorious.

Or, perhaps, both.

References

Knox, J. (2014). Abstracting Learning Analytics. Retrieved from https://codeactsineducation.wordpress.com/2014/09/26/abstracting-learning-analytics/

Analytics padlet

I had a go at making a padlet as a way of commenting on the tweetorial analytics. I’ve taken five of the separate ‘analytics’, and offered sometimes conflicting and sometimes totally contradictory interpretations. Most of them are reasonable, though, if a little tongue-in-cheek. Some of them are complimentary, some less so and some potentially rather damaging.

This is born of my absence during the tweetorial, and the subsequent and fundamental decontextualisation, for me, of the data provided. But I also don’t want to suggest that the analytics are objective and that it is only interpretation which is subjective – I take this argument up later in my blog post.

So click the image above to see it, or go here.

I missed the tweetstorm!

But…I always knew I would. It coincided with the last two days of our academic term, so my calendar was full, and long days meant I had no time even to read the tweets, let alone contribute to any of the discussions.

I’ve read it this morning, though, and it looks to have been hugely successful. Lots of tweets, lots of content, and I now find myself more than ever impressed and a little daunted by the talents and skills of my coursemates. But when Jeremy and James release the analytics of the tweetstorm that they’ve been collecting, I won’t be in them.

So here’s my question: how do you analyse absence?

a student who logs into an LMS leaves thousands of data points, including navigation patterns, pauses, reading habits and writing habits (Siemens, 2013, p. 1381)

Well, leaving aside the fact that Siemens has clearly never seen the dreadful analytics potential of the VLE we use, the crucial point is this: “who logs into”. Similar ideas are critically raised in the lecture we heard this week – the idea of capturing learning data from cradle to university and using it to provide customised experiences. Learning analytics requires ‘logging’, both in the sense of ‘logging in’ and in the sense of the data trails left behind. You have to put your name to your activity.

This has significant implications. Thinking about assessment, it invites considerations around assessors’ bias (unconscious or otherwise). There are implications for openness and scale too – it’s probably pretty easy for the EDC website to track our locations, even our IP addresses, but you can never know for sure who is reading the website and who isn’t. You can probably come up with an average ‘time’ it might take to read a page. You can probably track clicks on the links for the readings, but you can’t be sure the readings have actually been read. So there are potentially knock-on effects for the sorts of platforms and media by which teaching can be performed. This relates back to something Jeremy and I discussed a while ago – a sort of tyranny around providing constant evidence for the things that we do, for our engagement with course ideas and course materials. It also smacks of behaviourism, which – as Siemens and Long (2011) argue – is not appropriate in HE.

But it also has implications for the ‘lurkers’ among us: students who may not engage in the ‘prescribed’ way, whether through choice, a poor internet connection, lack of time, or a change in circumstances. How might these people have a personalised learning experience? What data might be collected about them, and how can it incorporate the richness and subjectivity of experience, of happenstance, of humanness?

My question, then, is this: can learning analytics track engagement without framing it entirely within the context of participation, or logging in? Because while these are indicators of engagement, they are not the same thing.
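To make that concrete, here’s a minimal sketch (hypothetical event names, made-up figures) of the proxies described above: a click gets logged and an ‘expected reading time’ can be estimated, but whether anything was actually read, or whether someone engaged without ever logging in, simply never enters the data:

```python
from datetime import datetime

# A hypothetical VLE event log: only actions performed while logged in are recorded.
event_log = [
    {"user": "student_a", "action": "click_reading_link", "time": datetime(2017, 3, 20, 9, 1)},
    {"user": "student_a", "action": "page_view",          "time": datetime(2017, 3, 20, 9, 3)},
    {"user": "student_b", "action": "click_reading_link", "time": datetime(2017, 3, 20, 22, 45)},
]

WORDS_IN_READING = 6000
READING_SPEED_WPM = 230               # a rough average adult reading speed

expected_minutes = WORDS_IN_READING / READING_SPEED_WPM
clicked = {e["user"] for e in event_log if e["action"] == "click_reading_link"}

print(f"Expected reading time: ~{expected_minutes:.0f} minutes")
print(f"Users who clicked the reading link: {sorted(clicked)}")
# A student who read a printed copy, or simply never logged in, is absent from the log
# and therefore invisible to any 'engagement' metric built on it.
```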

References

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review. Retrieved 19 March 2017, from http://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education

The Surprising Things Algorithms Can Glean About You From Photos

More than three-quarters of American adults own a smartphone, and on average, they spend about two hours each day on it. In fact, it’s estimated that we touch our phones between 200 and 300 times a day—for many of us, far more often than we touch our partners.

from Pocket http://ift.tt/2malWIv
via IFTTT

I’ve included this in the lifestream because it reminded me of one of the challenges presented by big data that Eynon outlines – inequality. Big data, she argues, may reinforce and even exacerbate existing social and educational inequalities – Eynon points particularly to those who are online more frequently (people who, as far as she is concerned, fall within a particular socio-economic bracket).

But a few lines in the article gave me pause for thought.

I’d argue, contra Eynon, that in fact the socio-economic bracket to which she refers is (very much) potentially indicative of one’s ability to protect oneself online – having the time, resources and know-how to install the Tor browser, for example. So it’s not that more data will be collected about people of a specific socio-economic status; it’s that those of a different socio-economic status may be less able to control what data is collected about them.

 

No one reads terms of service, studies confirm

Apparently losing rights to data and legal recourse is not enough of a reason to inspect online contracts. So how can websites get users to read the fine print? The words on the screen, in small type, were as innocent and familiar as a house key.

from Pocket http://ift.tt/2lCnKt1
via IFTTT

An interesting article about how we don’t read the T&Cs, featuring a research study by two Canadian professors who managed to get a load of students to agree to promise a(n imaginary) company their firstborn children.

This has, I think, many important implications for the way we use technology. From a UX perspective, knowing that the T&Cs aren’t being read would mean that websites and companies ought to rethink the way they give information to potential customers, so they’re fully informed when they sign up. Somehow I can’t imagine this happening. The author of the article, however, suggests a sort of unspoken digital ethics contract (similar to the Hippocratic Oath), but how that might work is another matter.

There’s also the question of how far we’re able to do anything at all about terms and conditions we disagree with. If our use of a particular site is entirely optional then we can choose not to use it; if it isn’t – if our employer insists on it, or if it’s something expected of us – then we can hardly demand that Google or Facebook comes up with an alternative set of T&Cs just for us.

This is on my mind, particularly, as a result of an action I took in responding to the mid-term feedback from Jeremy. One of the points made – and a completely valid one – was that I might look to broaden my horizons in terms of the feeds coming into the lifestream. I added a couple of feeds and then looked to link up YouTube to the WordPress blog. And I was then faced with this:

Manage? I clicked on the ‘i’ to see what it meant, and was faced with this:

At this point, I was turned completely off the idea of linking the two – any videos will just have to be – as Cathy brilliantly put it – glued on to the lifestream. I’m sure their intention is not particularly insidious, and I’ve probably already inadvertently given up lots of my data, but this seemed just a step too far.

But, on the other hand, at least it was clear.