Lifestream analytics

When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’s normal in-built analytics and statistics. I have another WordPress blog, and from there I’ve got access to loads of stuff: number of visitors, where they’re from, and so on. I think at the time I assumed it must be a licence issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.

But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.

I started thinking about the analytics I could easily (i.e. in under 5 minutes) extract from the lifestream blog. I was able to (manually) tally up my posts per week, throw the numbers into Excel and create a chart:

My posts per week (so far)

I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:

Tags used (so far)

Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
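That said, the manual counting itself could probably be automated. Here’s a rough sketch of what that might look like in Python, assuming the blog exposes the standard WordPress REST API (which a locked-down university multisite may well not – which would rather prove my point). The blog URL is a placeholder, and the only API details relied on are the standard posts and tags endpoints:

```python
# A sketch of automating the counting above, rather than doing it by hand.
# Assumes the blog exposes the standard WordPress REST API (WordPress 4.7+,
# and not switched off by the site admin); the URL below is a placeholder.
import collections
import datetime

import requests

BLOG_URL = "https://example-lifestream.example"  # hypothetical, not my real blog


def fetch_all(endpoint):
    """Page through a WP REST collection ('posts', 'tags', ...)."""
    items, page = [], 1
    while True:
        resp = requests.get(
            f"{BLOG_URL}/wp-json/wp/v2/{endpoint}",
            params={"per_page": 100, "page": page},
        )
        if resp.status_code == 400:  # asked for a page past the end
            break
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        items.extend(batch)
        page += 1
    return items


posts = fetch_all("posts")

# Posts per ISO week: the numbers behind the first chart.
per_week = collections.Counter(
    datetime.datetime.fromisoformat(p["date"]).strftime("%G-W%V") for p in posts
)
for week, count in sorted(per_week.items()):
    print(week, count)

# Tag usage: the REST API reports a usage count per tag directly.
tags = fetch_all("tags")
print(f"{len(tags)} tags across {len(posts)} posts")
for tag in sorted(tags, key=lambda t: t["count"], reverse=True)[:10]:
    print(tag["name"], tag["count"])
```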

We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week:

This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things I and a lot of other people mentioned was how it is the quantifiable, not the qualitative, that gets measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?

The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:

From the course guide

Only one of these is quantifiable – Activity – and even that isn’t totally about the numbers. The frequency of posts and the range of sources are quantifiable, but the appropriateness of posts isn’t. The number of lifestream summary posts, under Reflection, can be quantified, and so can the activities mentioned under Knowledge and Understanding. But nothing else can. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).

So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.

My first instinct is to say that the learning analytics we seem to have access to at the moment really aren’t up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology on this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics can be performed won’t ever be able to capture the current state of educational feedback, does that mean that the state of educational feedback will be shaped, or co-constituted, by the technology available? And what does that look like? What are the points of resistance?

What I’m reading

Bayne, S. (2015). What’s the matter with technology-enhanced learning? Learning, Media and Technology, 40(1), 5–20. http://ift.tt/2kEs2zR

In this article, Bayne argues convincingly in favour of subjecting the term ‘technology-enhanced learning’ to a far more rigorous critique. This, she contends, would fruitfully draw on critical posthumanism, considerations of the boundaries of ‘the human’, and the inescapable politics of education – how we conceive of education’s function and purpose. In a deliciously meta move, she conducts such a critique, and concludes that – among other things – she was right to do so.

I was struck by the nuanced way in which Bayne describes the enmeshment of the term ‘TEL’ and the reality (if we can call it that) to which it shorthandedly refers. I liked how she used her critique of the term to inform and illuminate her critique of what it reflects, how it is used, and the political, social and educational situation which gave rise to it.

Bayne argues that technology and education are:

co-constitutive of each other, entangled in cultural, material, political and economic assemblages of great complexity

The term TEL, and what it implies, are flawed because they don’t take account of this complexity. But what’s the alternative? It’d be a hard case to push to a VC that their spangly new TEL department should be renamed the ‘Department of Co-Constitutive Assemblages of Technology and Education’. DECATE for short. But I don’t think this is really what Bayne is getting at.

Instead, I think the real message underlying Bayne’s argument is contra shorthandedness in general – the lazy binary of technological determinism vs technological instrumentalism, and the assumptions that we might make about education and technology and the relationship between the two. Bayne’s rallying call, ultimately, is for a heck of a lot of critical thinking.

 
