When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’ normal built-in analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff there: number of visitors, where they’re from, etc. At the time I assumed it was a licensing issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.
But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.
I started thinking about the analytics I could easily (i.e. in under 5 minutes) extract from the lifestream blog, and I was able to (manually) figure this out, throw the numbers into Excel and create a chart:
I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:
Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
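For anyone wanting to reproduce that kind of quick tally without doing it manually in Excel, here is a minimal sketch in Python. The `posts` data below is entirely made up for illustration; in practice you would substitute a list of the tags attached to each of your own posts (e.g. scraped or exported from the blog):

```python
from collections import Counter

# Hypothetical sample data: each inner list holds the tags on one post.
# Replace with your own exported tag lists.
posts = [
    ["analytics", "edc"],
    ["edc", "community"],
    ["analytics", "edc", "algorithms"],
]

# Count how often each tag appears across all posts
tag_counts = Counter(tag for tags in posts for tag in tags)

print(len(posts))                 # number of posts
print(len(tag_counts))            # number of distinct tags
print(tag_counts.most_common(2))  # the most popular tags
```

With real data, `len(posts)` would give the 129 posts and `len(tag_counts)` the 177 tags mentioned above, with `most_common()` producing the popularity ranking.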
We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week:
@HerrSchwindenh_ @philip_downey @helenwalker7 Would prefer it to reflect thinking/engagement at time rather than a revised version #mscedc
— James Lamb (@james858499) March 28, 2017
This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things I and a lot of people mentioned was how it was the quantifiable and not the qualifiable that is measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?
The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:
Only one of these is quantifiable – Activity – and even that isn’t entirely about the numbers. The frequency of posts and the range of sources are quantifiable, but the appropriateness of posts isn’t. The number of lifestream summary posts, in Reflection, can be quantified, and the activities mentioned in Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).
So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.
My first instinct is to say that the learning analytics we seem to have access to at the moment really don’t seem to be up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics can be performed is never able to capture the current state of educational feedback, does that mean that the state of educational feedback will be shaped, or co-constituted, by the technology available? And what does that look like? What are the points of resistance?