Revealed: what people watch on Netflix where you live in the UK

Screenshot from Gilmore Girls
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits.

from Pocket

Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits. By analysing statistics between October 2016 and this month, the streaming service was able to reveal what parts of the country are more inclined to watch a specific genre compared to others.

So quotes the article above. I know it’s only a bit of silliness – it’s one step away from a Buzzfeed-esque ‘Can we guess where you live based on your favourite Netflix show?’. The worst bit is that there’s a tiny amount of truth to it: I have watched Gilmore Girls AND I live in the South East. I reject the article’s proposal, however, that this implies that I am “pining for love”.

So yes, it’s overly simplistic and makes assumptions (such as the one that everyone watches Netflix, or that heterogeneity is a result of a postcode lottery); ultimately, it’s a bit of vapid fluff. But it’s also a bit of vapid fluff that exemplifies how far algorithmic cultures are embedded in the media we consume: the data collected about us is now just entertainment output.


Elon Musk Isn’t the Only One Trying to Computerize Your Brain

Elon Musk wants to merge the computer with the human brain, build a “neural lace,” create a “direct cortical interface,” whatever that might look like.

from Pocket

This reminds me of the part about Moravec’s Mind Children in N. Katherine Hayles’ book, How We Became Posthuman (I just read ‘Theorizing Posthumanism’ by Badmington, which refers to it as well). There’s a scenario in Mind Children, writes Hayles, where Moravec argues that it will soon be possible to download human consciousness into a computer.

How, I asked myself, was it possible for someone of Moravec’s obvious intelligence to believe that mind could be separated from body? Even assuming that such a separation was possible, how could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment? Shocked into awareness, I began to notice he was far from alone. (1999, p. 1)

It appears that Moravec wasn’t wrong about the possibility of technology that could ‘download’ human consciousness, but let’s hope the scientists all get round to reading Hayles’ work on this techno-utopia before the work really starts…


Badmington, N. (2003). Theorizing Posthumanism. Cultural Critique, (53), 10–27.

Hayles, N. K. (1999). How we became posthuman: virtual bodies in cybernetics, literature, and informatics. Chicago, Ill: University of Chicago Press.

Pinned to Education and Digital Cultures on Pinterest

Just Pinned to Education and Digital Cultures:

This is a photo of my very tiny, very messy desk at home, taken last weekend, just hours after my computer keyboard and trackpad decided to pack in permanently.

It wasn’t a major problem – I already had a bluetooth mouse and keyboard, and I was able to get an appointment to get the computer fixed this week. But I included this image because this slight interruption in the way that I work felt unsettling. The computer’s failure to work as I expected affected the way I would normally study, and it affected (well, delayed) what I had planned to do over the weekend.

One of the themes of EDC is battling the supposed binary of technological instrumentalism and technological determinism, of proving that it’s all a little more complex and nuanced than that. This was, for me, a reminder (and a pretty annoying one) that my conceptualisations of how technology might be used and practised are not always followed through in my enactment of it.

WhatsApp’s privacy protections questioned after terror attack

a silhouette of a padlock on a green WhatsApp telephone logo

Chat apps that promise to prevent your messages being accessed by strangers are under scrutiny again following last week’s terror attack in London. On Sunday, the home secretary said the intelligence services must be able to access relevant information.

from Pocket

This is only tangentially related to our readings and the themes we’ve been exploring throughout the course, but I do think it’s worth including. Many ‘chat’ apps use end-to-end encryption, so messages sent are private, even to the company itself. The government clearly believes that this shouldn’t be allowed, and is attempting to take steps to prevent it. Hopefully unsuccessfully, I should add.

There’s an assumption here that data about us ought to be at least potentially public – chat apps, says the Home Secretary, must not provide a ‘secret place’. It’s not far from this position to one that says that we don’t own the data we generate, along with the data generated about us: where we are, who we send messages to, and so on. There are questions around the intersection of civil liberties and technology, and whether there’s a digital divide in terms of the ability to protect yourself from surveillance online.


Always look on the bright side of life(stream) – Week 10 summary

Interpretation is the theme this week, wedded strongly to recognition of the need to make space for cognitive dissonance, for the pluralism of truth, for the concurrent existence of multiple and conflicting interpretations.

It emerges, for example, in considerations of what does, or should, constitute restricted content on YouTube. It’s there in questions around whether learning analytics might help or hinder the development of critical reflective skills on learning gain. And of course, it’s readily apparent in responses to the analytics of the tweetorial last week. In my padlet, my point wasn’t to indicate that some conclusions are better than others, though clearly sometimes they are. It was to demonstrate the potential co-existence of varying, contradictory interpretations. In my blog post analysing the data, I argue that it is the stability of data which gives pause, rather than its scope for misinterpretation. The data remains fixed while its meanings change, an ongoing annulment of data and meaning.

In many ways, this seems to conflict rather than cohere with EDC themes. In cybercultures, I questioned whose voices we hear and the ‘black boxing’ of the powerless or unprivileged. In community cultures, I discussed how singularity of voice or shared experience might engender community development. Here, though, I’m finding that interpretation is ceaselessly multifaceted.

Knox (2014) discusses the ways in which learning analytics might be a means of ‘making the invisible visible’. Perhaps this is happening here. The data is visible, where it once might be hidden; this permits a multitude of interpretations to be visible too, where once only the dominant interpretation would have been. Perhaps learning analytics elicits a shift in power.

Or, perhaps, the dominant interpretation has become this multitude of voices. The dissonance is destabilising, and so in the end only the data is rendered visible, stable, victorious.

Or, perhaps, both.


Knox, J. (2014). Abstracting Learning Analytics.

Analysing the tweetorial, or why we shouldn’t focus on subjectivity

Two disclaimers before getting started:

  1. I mentioned in a blog post last week that regrettably I’d had to miss the tweetorial, and was only able to cursorily glance through some of the later tweets once it was all over. This absence, and my subsequent uncertainty about how it unfolded, strongly influenced this blog post as well as the padlet I created.
  2. I’ve noticed a tendency to lean a little too heavily on the literature, critiquing others rather than using my own critical voice. I know that’s normally OK, and it’s just a question of balance, but anyway, here I’m trying to counter that. No references!

OK. Here goes.

The literal answer to Jeremy and James’ first question: “how has the Twitter archive represented our tweetorial?” is reasonably simple. The archive has stored tweets which used a predetermined hashtag, and specific tweet metadata, in a way which is linear and yet unfinished. It has used the tweets – or, at least, specific elements of them – to quantify behaviour and activity. This might allow us (or a computer) to extrapolate and draw conclusions. In this sense, it all seems rather objective.

And yet it isn’t objective. The choices made about the data collected and attached, and those which are not, were subjective. They were subjective regardless of who made them – human, computer or both. The visual representation of the data is also mediated and subjective – the clue is in the word ‘representation’. It’s necessarily, inescapably reductive. The key point is that this isn’t fundamentally bad. The data being subjective doesn’t make it meaningless or inaccurate or untrustworthy. Why privilege impartiality anyway?

And, moreover, the charge of subjectivity is easily dealt with. The quantified facts the archive presents are of course not the ‘whole picture’ (whatever that is). The conclusions we draw ought to be questioned. We should ensure that the non-quantifiable (tiredness, workload, scepticism) is considered too. There is scope for multiple interpretations, all at the same time (as I tried to show in the padlet). The ways in which the analytics are presented may or may not have educational value; we cannot be conclusive as this depends on the individual. It will motivate some while demotivating others. It will give some confidence while causing others to question themselves. There is space for all of these attitudes concurrently. The archive can’t tell us whether learning happened, or didn’t happen, or the quality of it: it was never intended to do so.

So, for me, the problem – the danger, even – with analytics like this isn’t that they’re subjective. It lies instead in their inescapable finality, even as the data collection is ongoing. The finality easily gives way to become ‘authority’, and the platform doesn’t particularly lend itself to that authority being questioned. Given the sheer number of tweets, searching and retrieving them is not simple. You can’t retrospectively change the choices made about the data which is collected, if you can change them at all. The platform does not allow it. That’s another choice, by the way. And again, it doesn’t really matter who made it.

Our ability to answer the questions set by Jeremy and James (or in my case, inability) is so fundamentally predicated on the fact that it happened last week. Our ability to identify where the data collected is subjective, and where or why this is problematic, is based on the same thing. We were there, we can remember it, so we can interpret it. And yet the fixedness, the finality, and the stability of the archive has to be compared with the fleetingness of the qualitative information and individual interpretation that we’re using to gloss it. Right at this moment, we can question the archive. Right at this moment, we know better. We have authority. But it’s temporary. After all, the data will last longer.


Analytics padlet

I had a go at making a padlet as a way of commenting on the tweetorial analytics. I’ve taken five of the separate ‘analytics’, and offered sometimes conflicting and sometimes totally contradictory interpretations. Most of them are reasonable, though, if a little tongue-in-cheek. Some of them are complimentary, some less so and some potentially rather damaging.

This is born of my absence during the tweetorial, and the subsequent and fundamental decontextualisation, for me, of the data provided. But I also don’t want to suggest that the analytics are objective, and that it is only interpretation which is subjective – I take this argument up later in my blog post.

So click the image above to see it, or go here.