I missed the tweetstorm!

But…I always knew I would. It coincided with the last two days of our academic term, so my calendar was full, and long days meant I had no time even to read the tweets, let alone contribute to any of the discussions.

I’ve read it this morning, though, and it looks to have been hugely successful. Lots of tweets, lots of content, and I now find myself more than ever impressed and a little daunted by the talents and skills of my coursemates. But when Jeremy and James release the analytics of the tweetstorm that they’ve been collecting, I won’t appear in them.

So here’s my question: how do you analyse absence?

a student who logs into an LMS leaves thousands of data points, including navigation patterns, pauses, reading habits and writing habits (Siemens, 2013, p. 1381)

Well, Siemens has clearly never seen the dreadful analytics potential of the VLE we use, but the crucial point is this: “who logs into”. Similar ideas were raised critically in the lecture we heard this week – the idea of capturing learning data from cradle to university, and using it to provide customised experiences. Learning analytics requires ‘logging’, both in the sense of ‘logging in’ and in the sense of the data trails left behind. You have to put your name to your activity.

This has significant implications. Thinking about assessment, it invites considerations of assessors’ bias (unconscious or otherwise). There are implications for openness and scale too – it’s probably fairly easy for the EDC website to track our locations, even our IP addresses, but you can never know for sure who is reading the website and who isn’t. You can probably come up with an average ‘time’ it might take to read a page. You can probably track clicks on the links to the readings, but you can’t be sure anything has actually been read. So there are potential knock-on effects for the sorts of platforms and media through which teaching can be performed. This relates back to something Jeremy and I discussed a while ago – a sort of tyranny of providing constant evidence for the things that we do, for our engagement with course ideas and course materials. It also smacks of behaviourism which, as Siemens and Long (2011) argue, is not appropriate in HE.

But it also has implications for the ‘lurkers’ among us: students who may not engage in the ‘prescribed’ way, whether through volition, a poor internet connection, lack of time, or changes in circumstances. How might these people have a personalised learning experience? What data might be collected about them, and how can it incorporate the richness and subjectivity of experience, of happenstance, of humanness?

My question, then, is this: can learning analytics track engagement without framing it entirely in terms of participation, or logging in? Because while these are indicators of engagement, they are not the same thing.

References

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review. Retrieved 19 March 2017, from http://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education