Analysing the tweetorial, or why we shouldn’t focus on subjectivity

Two disclaimers before getting started:

  1. I mentioned in a blog post last week that regrettably I’d had to miss the tweetorial, and was only able to cursorily glance through some of the later tweets once it was all over. This absence, and my subsequent uncertainty about how it unfolded, strongly influenced this blog post as well as the padlet I created.
  2. I’ve noticed a tendency to lean a little too heavily on the literature, critiquing others rather than trying to use my own critical voice. I know that’s normally fine, and it’s just a question of balance, but here I’m trying to counter it. No references!

OK. Here goes.

The literal answer to Jeremy and James’ first question: “how has the Twitter archive represented our tweetorial?” is reasonably simple. The archive has stored tweets which used a predetermined hashtag, and specific tweet metadata, in a way which is linear and yet unfinished. It has used the tweets – or, at least, specific elements of them – to quantify behaviour and activity. This might allow us (or a computer) to extrapolate and draw conclusions. In this sense, it all seems rather objective.

And yet it isn’t objective. The choices about which data were collected and attached, and which were not, were subjective. They were subjective regardless of who made them – human, computer or both. The visual representation of the data is also mediated and subjective – the clue is in the word ‘representation’. It’s necessarily, inescapably reductive. The key point is that this isn’t fundamentally bad. The data being subjective doesn’t make it meaningless or inaccurate or untrustworthy. Why privilege impartiality anyway?

And, moreover, the charge of subjectivity is easily dealt with. The quantified facts the archive presents are of course not the ‘whole picture’ (whatever that is). The conclusions we draw ought to be questioned. We should ensure that the non-quantifiable (tiredness, workload, scepticism) is considered too. There is scope for multiple interpretations, all at the same time (as I tried to show in the padlet). The ways in which the analytics are presented may or may not have educational value; we cannot be conclusive as this depends on the individual. It will motivate some while demotivating others. It will give some confidence while causing others to question themselves. There is space for all of these attitudes concurrently. The archive can’t tell us whether learning happened, or didn’t happen, or the quality of it: it was never intended to do so.

So, for me, the problem – the danger, even – with analytics like this isn’t that they’re subjective. It lies instead in their inescapable finality, even as the data collection is ongoing. That finality easily hardens into ‘authority’, and the platform doesn’t particularly lend itself to that authority being questioned. Given the sheer number of tweets, searching and retrieving them is not simple. You can’t retrospectively change the choices made about which data are collected, if you can change them at all. The platform does not allow it. That’s another choice, by the way. And again, it doesn’t really matter who made it.

Our ability to answer the questions set by Jeremy and James (or in my case, inability) is so fundamentally predicated on the fact that it happened last week. Our ability to identify where the data collected is subjective, and where or why this is problematic, is based on the same thing. We were there, we can remember it, so we can interpret it. And yet the fixedness, the finality, and the stability of the archive has to be compared with the fleetingness of the qualitative information and individual interpretation that we’re using to gloss it. Right at this moment, we can question the archive. Right at this moment, we know better. We have authority. But it’s temporary. After all, the data will last longer.
