Lifestream analytics

When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’ normal in-built analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff from there: number of visitors, where they’re from, etc. I think at the time I thought it must be a license issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.

But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.

I started thinking about the analytics I could easily (i.e. in under 5 minutes) extract from the lifestream blog. I was able to count my posts per week (manually), throw the numbers into Excel and create a chart:

My posts per week (so far)

I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:

Tags used (so far)

Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
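For what it’s worth, the same counts could probably be scripted rather than done by hand. Here’s a minimal sketch, assuming the blog exposes the standard WordPress REST API (which, given the university’s install, it may well not) and using a made-up site address; the wp/v2 endpoints and fields are WordPress defaults, everything else is illustrative:

```python
# Rough sketch: posts-per-week and tag counts via the WordPress REST API.
# The site URL is invented; the wp/v2 endpoints and fields are WP defaults.
from collections import Counter
from datetime import datetime

import requests

BASE = "https://example-lifestream.example.ac.uk/wp-json/wp/v2"  # hypothetical


def fetch_all(endpoint):
    """Page through a REST collection and return every item."""
    items, page = [], 1
    while True:
        r = requests.get(f"{BASE}/{endpoint}", params={"per_page": 100, "page": page})
        r.raise_for_status()
        items.extend(r.json())
        if page >= int(r.headers.get("X-WP-TotalPages", 1)):
            break
        page += 1
    return items


posts = fetch_all("posts")
tag_names = {t["id"]: t["name"] for t in fetch_all("tags")}

# Posts per ISO week (the chart above was built from the same counts by hand).
per_week = Counter(datetime.fromisoformat(p["date"]).strftime("%G-W%V") for p in posts)

# Most-used tags across all posts.
tag_counts = Counter(tag_names[t] for p in posts for t in p["tags"] if t in tag_names)

print(f"{len(posts)} posts, {len(tag_counts)} distinct tags")
for week, n in sorted(per_week.items()):
    print(week, n)
print(tag_counts.most_common(10))
```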

We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week.

This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things I and a lot of other people mentioned was how it is the quantifiable, not the qualitative, that gets measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?

The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:

From the course guide

Only one of these is quantifiable – Activity – and even that isn’t totally about the numbers. The frequency of posts, and the range of sources, are, but the appropriateness of posts isn’t. The number of lifestream summary posts, in Reflection, can be quantified, and the activities mentioned in Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).

So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.

My first instinct is to say that the learning analytics we currently have access to really don’t seem up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics is performed can never fully capture the current state of educational feedback, does that mean that educational feedback will itself be shaped, or co-constituted, by the technology available? And what does that look like? What are the points of resistance?

Always look on the bright side of life(stream) – Week 10 summary

Interpretation is the theme this week, wedded strongly to recognition of the need to make space for cognitive dissonance, for the pluralism of truth, for the concurrent existence of multiple and conflicting interpretations.

It emerges, for example, in considerations of what does, or should, constitute restricted content on YouTube. It’s there in questions around whether learning analytics might help or hinder the development of critical reflective skills on learning gain. And of course, it’s readily apparent in responses to the analytics of the tweetorial last week. In my padlet, my point wasn’t to indicate that some conclusions are better than others, though clearly sometimes they are. It was to demonstrate the potential co-existence of varying, contradictory interpretations. In my blog post analysing the data, I argue that it is the stability of data which gives pause, rather than its scope for misinterpretation. The data remains fixed while its meanings change, an ongoing annulment of data and meaning.

In many ways, this seems to conflict with, rather than cohere with, EDC themes. In cybercultures, I questioned whose voices we hear and the ‘black boxing’ of the powerless or unprivileged. In community cultures, I discussed how singularity of voice or shared experience might engender community development. Here, though, I’m finding that interpretation is ceaselessly multifaceted.

Knox (2014) discusses the ways in which learning analytics might be a means of ‘making the invisible visible’. Perhaps this is happening here. The data is visible, where it once might have been hidden; this permits a multitude of interpretations to become visible too, where once only the dominant interpretation would have been. Perhaps learning analytics elicits a shift in power.

Or, perhaps, the dominant interpretation has become this multitude of voices. The dissonance is destabilising, and so in the end only the data is rendered visible, stable, victorious.

Or, perhaps, both.

References

Knox, J. (2014). Abstracting Learning Analytics. Retrieved from https://codeactsineducation.wordpress.com/2014/09/26/abstracting-learning-analytics/

What I’m reading

At a conference today! #cctl2017

March 23, 2017 at 11:53AM

I attended a Teaching Forum hosted by the Cambridge Centre for Teaching and Learning on Thursday, and this is a photo of some of the notes that I took during a presentation by Dr Sonia Ilie, on the LEGACY project. Dr Ilie discussed the results of a bit of qualitative research surrounding students’ understanding of learning gain.  One of her arguments put me in mind of learning analytics.

In case my handwriting isn’t clear, Dr Ilie reported that the research had demonstrated that students are variably equipped to reflect upon their own learning. I wondered – in the bottom comment of the photo – about the impact that learning analytics might have upon this. I’m interested in whether learning analytics might help students to develop critically reflective skills, or whether it might let them off the hook by effectively providing them with a shorthand version of that reflection.

Big Data, learning analytics, and posthumanism

I’ve now read a few articles assessing the pros and cons of learning analytics and, regardless of the methodologies employed, there are patterns and themes in what is being found. The benefits include institutional efficiency and institutional performance around financial planning and recruitment; for students, the benefits correspond to insights into learning and informed decision-making. These are balanced against the cons: self-fulfilling prophecies concerning at-risk students, the dangers of student profiling, risks to student privacy and questions around data ownership (Roberts et al., 2016; Lawson et al., 2016). This is often contextualised by socio-critical understandings which converge on notions of power and surveillance; some of the methodologies explicitly attempt to counter presumptions made as a result of this, for example, by bringing in the student voice (Roberts et al., 2016).

In reading these articles and studies, I was particularly interested in ideas around student profiling and student labelling, and how this is perceived (or sometimes spun) as a benefit for students. Arguments against student profiling focus on the oversimplification of student learning, students being labelled on the basis of past decisions, and student identity being in a necessary state of flux (Mayer-Schoenberger, 2011). One thing, though, that’s missing in all of this, and whose absence I am feeling keenly, is causation. It strikes me that big data and learning analytics can tell us what is, but not always why.

A similar observation leads Chandler to assert that Big Data is a kind of Bildungsroman of posthumanism (2015). He argues that Big Data is an epistemological revolution:

“displacing the modernist methodological hegemony of causal analysis and theory displacement” (2015, p. 833).

Chandler is not interested in the pros and cons of Big Data so much as the way in which it changes how knowledge is produced, and how we think about knowledge production. This is an extension of ideas espoused by Anderson, who argues that theoretical models are becoming redundant in a world of Big Data (2008). Similarly, Cukier and Mayer-Schoenberger argue that Big Data:

“represents a move away from trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena, and using that to get things done” (2013, p. 32).

Big Data aims not at instrumental knowledge or causal reasoning, but at revealing feedback loops. It’s reflexive. And for Chandler, this represents an entirely new epistemological approach to making sense of the world, gaining insights which are ‘born from the data’ rather than planned in advance.

Chandler is interested in the ways in which Big Data can intersect with ideas in international relations and political governance, and many of his ideas are extremely translatable and relevant to higher education institutions. For example, Chandler argues that Big Data reflects political reality (i.e. what is) but it also transforms it through enabling community self-awareness. It allows reflexive problem-solving on the basis of this self-awareness. Similarly, it may be seen that learning analytics allows students to gain understanding of their learning and their progress, possibly in comparison with their peers.

This sounds great, but Chandler contends that it necessarily comes with a warning, because Big Data isn’t particularly empowering for those who need social change:

Big Data can assist with the management of what exists […] but it cannot provide more than technical assistance based upon knowing more about what exists in the here and now. The problem is that without causal assumptions it is not possible to formulate effective strategies and responses to problems of social, economic and environmental threats. Big Data does not empower people to change their circumstances but merely to be more aware of them in order to adapt to them (p. 841-2).

The problem of a lack of understanding of causation is raised in consideration of ‘at risk’ students – a student being judged on a series of data points without any (potentially necessary) contextualisation. The focus is on reflexivity and relationality rather than on how or why a situation has come about, and what its impact might be. Roberts et al. (2016) found that students were concerned about this: that learning analytics might drive inequality by advantaging only some students. The demotivating nature of the EASI system for ‘at risk’ students is also raised by Lawson et al. (2016, p. 961). Too little consideration is given to the causality of ‘at risk’, and perhaps too much to essentialism.

His considerations of Big Data and international relations lead Chandler to assert, cogently, that:

Big Data articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social (2015, p. 845).

No one here is necessarily excluded, and all those on the periphery are brought in. Rather paradoxically, this appears to be both the culmination of the socio-material project and an indicator of its necessity. Adopting a posthumanist approach to learning analytics may be a helpful critical standpoint, and is definitely something worth exploring further.

References

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Retrieved 19 March 2017, from https://www.wired.com/2008/06/pb-theory/
Chandler, D. (2015). A World without Causation: Big Data and the Coming of Age of Posthumanism. Millennium, 43(3), 833–851. https://doi.org/10.1177/0305829815576817
Cukier, K., & Mayer-Schoenberger, V. (2013). The Rise of Big Data: How It’s Changing the Way We Think About the World. Foreign Affairs, 92(3), 28–40.
Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0
Mayer-Schoenberger, V. (2011). Delete: The virtue of forgetting in the digital age. Princeton, NJ: Princeton University Press.
Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student Attitudes toward Learning Analytics in Higher Education: ‘The Fitbit Version of the Learning World’. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01959

What I’m reading

Note

March 18, 2017 at 04:48PM

I included this because it so strongly chimed with what I was thinking about student profiling – in particular, the highlighted bit reflects my experience of working with young people in HE, and the (in my opinion) dangers of treating any people, but particularly young people, as linear, as models of themselves, or as unable to follow unpredictable paths.

It’s from here, by the way:

Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0

[Edit on Sunday 19th March: it’s also, I notice very much retrospectively, an attempt for me to use the lifestream to model how I study, how I make notes, how I identify comments and other thoughts. There’s another example here. I didn’t really realise I was doing this.]

‘Don’t call us lazy!’ Students call for activity tracking apps to be let loose on the education sector

85% of FE students think an activity tracking app for learning and teaching would be helpful, finds a new survey by Jisc, the digital services and solutions organisation for UK education and research.

from Pocket http://ift.tt/2mTCrdg
via IFTTT

I included this because it’s extremely recent, and contains a very different voice and attitude towards learning analytics compared with everything else I’ve been reading. The article documents how FE students are almost exclusively positive about learning analytics, in contrast to the HE students documented elsewhere.

I missed the tweetstorm!

But…I always knew I would. It coincided with the last two days of our academic term, so my calendar was full, and long days meant I had no time even to read the tweets, let alone contribute to any of the discussions.

I’ve read it this morning, though, and it looks to have been hugely successful. Lots of tweets, lots of content, and I now find myself more than ever impressed and a little daunted by the talents and skills of my coursemates. But when Jeremy and James release the analytics of the tweetstorm that they’ve been collecting, I won’t be in them.

So here’s my question: how do you analyse absence?

a student who logs into an LMS leaves thousands of data points, including navigation patterns, pauses, reading habits and writing habits (Siemens, 2013, p. 1381)

Well, not only has Siemens clearly never seen the dreadful analytics potential of the VLE we use; the crucial point is this: “who logs into”. Similar ideas were critically raised in the lecture we heard this week – the idea of capturing learning data from cradle to university, and using it to provide customised experiences. Learning analytics requires ‘logging’, both in the sense of ‘logging in’ and in terms of the data trails left behind. You have to put your name to your activity.

This has significant implications. Thinking about assessment, it invites considerations around assessors’ bias (unconscious or otherwise). There are implications for openness and scale too – it’s probably pretty easy for the EDC website to track our locations, even our IP addresses, but you can never know for sure who is reading the website and who isn’t. You can probably come up with an average ‘time’ it might take to read a page. You can probably track clicks on the links for the readings, but you can’t be sure anything has been read. So there are potentially knock-on effects for the sorts of platforms and media by which teaching can be performed. This relates back to something Jeremy and I discussed a while ago – a sort of tyranny around providing constant evidence for the things that we do, for our engagement with course ideas and course materials. It also smacks of behaviourism, which, as Siemens and Long (2011) argue, is not appropriate in HE.
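To make that concrete, here’s a small sketch of what the server side can actually see; the log format, file name and ‘/readings/’ path are illustrative assumptions rather than the EDC site’s real setup. Requests, and the gaps between them, are easy to count, but nothing in them confirms that anything was read:

```python
# Sketch: counting requests per visitor and the gaps between them from a
# standard (combined-format) web server access log. A gap is at best a crude
# proxy for "time on page"; a click on a reading proves a request, not a read.
import re
from collections import defaultdict
from datetime import datetime

LOG_LINE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "GET (\S+) HTTP')
TIME_FMT = "%d/%b/%Y:%H:%M:%S %z"  # e.g. 17/Mar/2017:08:47:01 +0000

visits = defaultdict(list)  # visitor IP -> [(timestamp, path), ...]
with open("access.log") as fh:  # hypothetical log file
    for line in fh:
        m = LOG_LINE.match(line)
        if m:
            ip, ts, path = m.groups()
            visits[ip].append((datetime.strptime(ts, TIME_FMT), path))

for ip, hits in visits.items():
    hits.sort()
    # Gap between consecutive requests: a crude proxy for time spent on a page.
    gaps = [(b[0] - a[0]).total_seconds() for a, b in zip(hits, hits[1:])]
    # A click on a reading shows up as a request; whether it was read does not.
    clicked_readings = [path for _, path in hits if "/readings/" in path]
    print(ip, f"{len(hits)} requests,", f"{len(clicked_readings)} reading clicks,",
          "gaps (s):", [round(g) for g in gaps])
```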

But it also has implications for the ‘lurkers’ among us: students who may not engage in the ‘prescribed’ way, whether through volition, a poor internet connection, lack of time, or changes in circumstances. How might these people have a personalised learning experience? What data might be collected about them, and how can it incorporate the richness and subjectivity of experience, of happenstance, of humanness?

My question then, is this: can learning analytics track engagement without framing it entirely within the context of participation, or logging in? Because while these are indicators of engagement, they are not the same thing.

References

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review. Retrieved 19 March 2017, from http://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education

What I’m reading

Initial thoughts on the JISC report on learning analytics

Sclater, N., Peasgood, A., & Mullan, J. (2016). Learning analytics in higher education. Retrieved 17 March 2017, from http://ift.tt/1SDGa6m.

Summary statement:

The executive summary identifies four areas in which learning analytics might be used.

1. “As a tool for quality assurance and quality improvement” – LA as diagnostic tool both at individual and systemic level; demonstrating compliance with new quality assurance arrangements.

2. “As a tool for boosting retention rates” – with institutions using analytics to identify at risk students, and intervening.

3. “As a tool for assessing and acting upon differential outcomes among the student population” – engagement and progress of e.g. BME students, students from low participation areas.

4. “As an enabler for the development and introduction of adaptive learning” – personalised learning delivered at scale.

Interested in the instrumentalist approach here: “as a tool”, “as an enabler” seem to make this inescapable. Needs – I would say – more recognition of the fact that the platforms, data sources and infrastructures are socio-technical: informed, at the very least, by the humans who created them. Who analyses the learning analytics? What would a posthuman analysis of learning analytics look like?

Some other interesting points made in the Introduction:

Imperative for universities to obtain value from the rich data sources they are building up about their learners. Information known about a person in advance of their application, data accumulated about educational progress, learners likely to withdraw can be identified (p. 12).

This is fair enough, but by heck there’s the potential for a lot of inequality and leveraging of privilege here. Students being judged by their past absolutely inhibits room for development, especially among young people for whom university may be a ‘fresh start’. Also issues around linearity of university experience, who (or what) defines what ‘progress’ looks like, and the fact that new university students are ‘starting’ from different places. Achievement at university level may be a line to reach (1st class, 2.i, 2.ii, etc.) but potential is not.
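To illustrate the mechanics the report gestures at (and why ‘judged by their past’ is baked in), here is a deliberately over-simplified sketch of the kind of withdrawal-risk model an institution might fit. The feature names and numbers are invented, and this is not the JISC report’s method or any named system; the point is simply that every input is historical, so the prediction can never be anything other than a function of a student’s past:

```python
# Over-simplified, hypothetical sketch of a "learners likely to withdraw" model.
# Features and data are invented; every input describes the student's past.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: prior attainment score, VLE logins/week, library visits/month,
# assignments submitted late (all hypothetical).
X_train = np.array([
    [62, 14, 3, 0],
    [48,  2, 0, 4],
    [71, 20, 5, 1],
    [55,  4, 1, 3],
    [67, 11, 2, 0],
    [50,  3, 0, 5],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = withdrew in a previous cohort

model = LogisticRegression().fit(X_train, y_train)

# A new student is scored on the same backward-looking features only.
new_student = np.array([[58, 5, 1, 2]])
risk = model.predict_proba(new_student)[0, 1]
print(f"Predicted withdrawal risk: {risk:.0%}")
# Nothing here knows *why* logins were low (caring duties? a broken laptop?
# a fresh start?), which is exactly the missing causation discussed above.
```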

Learning analytics can furnish teachers with information on the quality of educational content and activities they are providing, and on teaching and assessment processes.

Identifying a problem is great and useful, but solving that problem is even more important. Can learning analytics help here? This also suggests that the quality of educational content and activities is fundamentally down to the – what, ability? – of the teacher, rather than the institutional pressures that teacher is under: things like the TEF disrupting good practices, austerity, the socio-economic climate, funding, university overcrowding, lack of resources, etc. It seems perhaps a little too good to be true.

Benefits for learners include giving students better information on how they are progressing and what they need to do to meet their educational goals, which has the potential to transform learning and their understanding of how they learn by providing continual formative feedback.

That is, unless some of the things students are doing – and possibly not doing well – are not trackable. How well a student takes notes while reading, for example, is a pretty big deal for my students. How will formative feedback be provided there? Tyranny of assessment, tyranny of the evidence base. And, if automated, couldn’t this be demotivating?

 Adaptive learning systems are emerging to help students develop skills and knowledge in a more personalised way; “set to revolutionise the teaching of basic skills and the provision of educational content” (p. 13).

Basic skills? Such as… I don’t know enough about STEM to know whether it would work there, but within HASS I can’t think of many basic skills this could help with. Critical thinking, synthesising diverse opinions, forming an argument, developing a critical voice, clarity of expression – would these be ‘basic skills’? It feels like quite a narrow view of what HE learning might be; in my experience, both as a student and as a librarian, it isn’t just box-ticking.

March 17, 2017 at 08:47AM