WhatsApp’s privacy protections questioned after terror attack

[Image: a silhouette of a padlock on the green WhatsApp logo]

Chat apps that promise to prevent your messages being accessed by strangers are under scrutiny again following last week’s terror attack in London. On Sunday, the home secretary said the intelligence services must be able to access relevant information.

from Pocket http://ift.tt/2nXBsM5
via IFTTT

This is only tangentially related to our readings and the themes we’ve been exploring throughout the course, but I do think it’s worth including. Many chat apps use end-to-end encryption, meaning messages are readable only by their sender and recipient – not even by the company running the service. The government clearly believes that this shouldn’t be allowed, and is taking steps to prevent it. Hopefully unsuccessfully, I should add.
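For the curious, the core idea of end-to-end encryption can be sketched in a few lines. This is a deliberately toy illustration (small Diffie–Hellman numbers and an XOR “cipher” – nothing like the real Signal protocol WhatsApp uses): the two endpoints derive a shared key from each other’s public values, so a server relaying the messages only ever sees public keys and ciphertext.

```python
import hashlib

# Toy sketch of the end-to-end principle (NOT real cryptography).
P = 0xFFFFFFFB  # public prime (toy-sized; real protocols use ~2048-bit groups)
G = 5           # public generator

def public_key(private: int) -> int:
    return pow(G, private, P)

def shared_secret(their_public: int, my_private: int) -> bytes:
    # Both ends compute the same value; an eavesdropper seeing only
    # the public keys cannot (that's the Diffie-Hellman assumption).
    s = pow(their_public, my_private, P)
    return hashlib.sha256(str(s).encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the message with the derived key.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

alice_priv, bob_priv = 123456789, 987654321
alice_pub, bob_pub = public_key(alice_priv), public_key(bob_priv)

key_a = shared_secret(bob_pub, alice_priv)
key_b = shared_secret(alice_pub, bob_priv)
assert key_a == key_b  # endpoints agree; the relay never learns this key

ciphertext = xor_cipher(b"meet at noon", key_a)  # what the server relays
plaintext = xor_cipher(ciphertext, key_b)        # what the recipient recovers
```

The point the home secretary’s position runs into is visible here: the service provider holds neither private key, so there is no message content for it to hand over.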

There’s an assumption here that data about us ought to be at least potentially public – chat apps, says the Home Secretary, must not provide a ‘secret place’. It’s not far from this position to one that says that we don’t own the data we generate, along with the data generated about us: where we are, who we send messages to, and so on. There are questions around the intersection of civil liberties and technology, and whether there’s a digital divide in terms of the ability to protect yourself from surveillance online.


Always look on the bright side of life(stream) – Week 10 summary

Interpretation is the theme this week, wedded strongly to recognition of the need to make space for cognitive dissonance, for the pluralism of truth, for the concurrent existence of multiple and conflicting interpretations.

It emerges, for example, in considerations of what does, or should, constitute restricted content on YouTube. It’s there in questions around whether learning analytics might help or hinder the development of critical reflective skills on learning gain. And of course, it’s readily apparent in responses to the analytics of the tweetorial last week. In my padlet, my point wasn’t to indicate that some conclusions are better than others, though clearly sometimes they are. It was to demonstrate the potential co-existence of varying, contradictory interpretations. In my blog post analysing the data, I argue that it is the stability of data which gives pause, rather than its scope for misinterpretation. The data remains fixed while its meanings change: an ongoing annulment of the tie between data and meaning.

In many ways, this seems to conflict rather than cohere with EDC themes. In cybercultures, I questioned whose voices we hear and the ‘black boxing’ of the powerless or unprivileged. In community cultures, I discussed how singularity of voice or shared experience might engender community development. Here, though, I’m finding that interpretation is ceaselessly multifaceted.

Knox (2014) discusses the ways in which learning analytics might be a means of ‘making the invisible visible’. Perhaps this is happening here. The data is visible, where it once might be hidden; this permits a multitude of interpretations to be visible too, where once only the dominant interpretation would have been. Perhaps learning analytics elicits a shift in power.

Or, perhaps, the dominant interpretation has become this multitude of voices. The dissonance is destabilising, and so in the end only the data is rendered visible, stable, victorious.

Or, perhaps, both.

References

Knox, J. (2014). Abstracting Learning Analytics. Retrieved from https://codeactsineducation.wordpress.com/2014/09/26/abstracting-learning-analytics/

Analysing the tweetorial, or why we shouldn’t focus on subjectivity

Two disclaimers before getting started:

  1. I mentioned in a blog post last week that regrettably I’d had to miss the tweetorial, and was only able to glance cursorily through some of the later tweets once it was all over. This absence, and my subsequent uncertainty about how it unfolded, strongly influenced this blog post as well as the padlet I created.
  2. I’ve noticed a tendency to lean a little too heavily on the literature, critiquing others rather than using my own critical voice. I know that’s normally fine, and it’s just a question of balance, but here I’m trying to counter it. No references!

OK. Here goes.

The literal answer to Jeremy and James’ first question: “how has the Twitter archive represented our tweetorial?” is reasonably simple. The archive has stored tweets which used a predetermined hashtag, and specific tweet metadata, in a way which is linear and yet unfinished. It has used the tweets – or, at least, specific elements of them – to quantify behaviour and activity. This might allow us (or a computer) to extrapolate and draw conclusions. In this sense, it all seems rather objective.
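That archiving process can be sketched in a few lines. The data and field names below are invented for illustration (the real archive records far richer metadata): the point is simply that the archive keeps tweets matching a predetermined hashtag and reduces them to countable elements.

```python
from collections import Counter
from datetime import datetime

# Invented sample data; a real archive would hold many more fields.
tweets = [
    {"user": "student_a", "text": "Kicking off the #mscedc tweetorial", "time": "2017-03-16T10:02"},
    {"user": "student_b", "text": "Replying to @student_a #mscedc", "time": "2017-03-16T10:05"},
    {"user": "student_a", "text": "Unrelated musing, no hashtag", "time": "2017-03-16T10:07"},
]

HASHTAG = "#mscedc"

# The archive keeps only tweets carrying the predetermined hashtag...
archived = [t for t in tweets if HASHTAG in t["text"].lower()]

# ...and reduces them to quantifiable behaviour: who tweeted, how often, when.
activity = Counter(t["user"] for t in archived)
first = min(datetime.fromisoformat(t["time"]) for t in archived)
```

Every choice here – the hashtag filter, which fields are kept, what gets counted – is exactly the kind of decision the next paragraph argues is subjective.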

And yet it isn’t objective. The choices made about the data collected and attached, and those which are not, were subjective. They were subjective regardless of who made them – human, computer or both. The visual representation of the data is also mediated and subjective – the clue is in the word ‘representation’. It’s necessarily, inescapably reductive. The key point is that this isn’t fundamentally bad. The data being subjective doesn’t make it meaningless or inaccurate or untrustworthy. Why privilege impartiality anyway?

And, moreover, the charge of subjectivity is easily dealt with. The quantified facts the archive presents are of course not the ‘whole picture’ (whatever that is). The conclusions we draw ought to be questioned. We should ensure that the non-quantifiable (tiredness, workload, scepticism) is considered too. There is scope for multiple interpretations, all at the same time (as I tried to show in the padlet). The ways in which the analytics are presented may or may not have educational value; we cannot be conclusive as this depends on the individual. It will motivate some while demotivating others. It will give some confidence while causing others to question themselves. There is space for all of these attitudes concurrently. The archive can’t tell us whether learning happened, or didn’t happen, or the quality of it: it was never intended to do so.

So, for me, the problem – the danger, even – with analytics like this isn’t that they’re subjective. It lies instead in their inescapable finality, even as the data collection is ongoing. The finality easily hardens into ‘authority’, and the platform doesn’t particularly lend itself to that authority being questioned. Given the sheer number of tweets, searching and retrieving them is not simple. You can’t retrospectively change the choices made about which data is collected, if you can change them at all. The platform does not allow it. That’s another choice, by the way. And again, it doesn’t really matter who made it.

Our ability to answer the questions set by Jeremy and James (or in my case, inability) is so fundamentally predicated on the fact that it happened last week. Our ability to identify where the data collected is subjective, and where or why this is problematic, is based on the same thing. We were there, we can remember it, so we can interpret it. And yet the fixedness, the finality, and the stability of the archive has to be compared with the fleetingness of the qualitative information and individual interpretation that we’re using to gloss it. Right at this moment, we can question the archive. Right at this moment, we know better. We have authority. But it’s temporary. After all, the data will last longer.


Analytics padlet

I had a go at making a padlet as a way of commenting on the tweetorial analytics. I’ve taken five of the separate ‘analytics’, and offered sometimes conflicting and sometimes totally contradictory interpretations. Most of them are reasonable, though, if a little tongue-in-cheek. Some of them are complimentary, some less so and some potentially rather damaging.

This is born of my absence during the tweetorial, and the consequent, fundamental decontextualisation, for me, of the data provided. But I also don’t want to suggest that the analytics themselves are objective and that only interpretation is subjective – I take this argument up later in my blog post.

So click the image above to see it, or go here.

Comment from Cathy’s blog

Cathy, this is great! Nice work, and an innovative way to question what data is captured – it’s important to balance our interpretation of the meaning of data with what is captured (and what is missing). Thank you!
-Helen

from Comments for Cathy’s Lifestream http://ift.tt/2ohg295
via IFTTT

What I’m reading

At a conference today! #cctl2017

March 23, 2017 at 11:53AM

I attended a Teaching Forum hosted by the Cambridge Centre for Teaching and Learning on Thursday, and this is a photo of some of the notes I took during a presentation by Dr Sonia Ilie on the LEGACY project. Dr Ilie discussed the results of qualitative research into students’ understanding of learning gain. One of her arguments put me in mind of learning analytics.

In case my handwriting isn’t clear, Dr Ilie reported that the research had demonstrated that students are variably equipped to reflect upon their own learning. I wondered – in the bottom comment of the photo – about the impact that learning analytics might have upon this. I’m interested in whether learning analytics might help students to develop critically reflective skills, or whether it might let them off the hook by effectively providing them with a shorthand version of that reflection.