Lifestream, Tweets

In the article Cathy tweeted, ‘Predictive Analytics: Nudging, Shoving, and Smacking Behaviors in Higher Education’, there is the suggestion that HE institutions could use the wide net of data points they collect to ‘nudge’ students and improve student outcomes. Regardless of the intentions, I find it all a bit sickening, to be honest: ‘Nudging, used wisely, offers a promising opportunity to redirect students’ decisions’; ‘With predictive analytics, colleges and universities are able to “nudge” individuals toward making better decisions and exercising rational behavior to enhance their probabilities of success.’ How efficient everything will be once those pesky irrational decisions are eradicated… and we all behave in the same, well-policed manner. Heck, we won’t even need robots then…

It made me think of this video:

A focus on ‘rational’ decision-making doesn’t just make me (so. very. utterly.) mad because of the restrictions on free will that it could imply. Throughout history, people (women, people of colour, less-abled people) have been denied access to the political process and to justice because they were not considered capable of ‘rational’ thought. Who decides what is rational? Based on what values? Any kind of system which encourages (or enforces) a particular way of thinking needs to be accountable, and we – society – need to be able to influence the values underpinning the system.

This, of course, comes full circle, linking back to Rahwan’s (2016) ideas about putting ‘society-in-the-loop’ of algorithmic governance, and to the ethical concerns associated with the technologies we explored in our cybercultures block (discussed, for example, in an earlier blog post on the political beliefs of various transhumanist positions).

Lifestream, Tweets

The paper referenced here is chapter 6 of Bruning’s Cognitive Psychology and Instruction (2004). It connects with the Durall and Gros (2014) article I wrote about earlier in the week, in that one of the chapter’s foci is self-regulated learning. The chapter also discusses attribution theory, which examines what individuals use to explain the causes of events in their lives. LA could play a part in the attribution cycle, as it could make data available to students about how their study habits differ from those of their peers. However, it’s an ethically messy area, since giving students access to comparative data risks revealing other students’ data to them unless that data is carefully aggregated and anonymised.
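Out of curiosity, here is a minimal sketch of how such a comparison might be surfaced without exposing individual records. This is my own illustration, not anything from the chapter: all field names and figures are invented. The student only ever sees their own figure beside aggregate peer statistics, and the comparison is suppressed entirely for small cohorts.

```python
# Hypothetical sketch of aggregate-only peer comparison.
# All names and numbers are invented for illustration.
from statistics import mean, stdev

def peer_comparison(student_id, weekly_hours):
    """Show a student their own study time against aggregate peer
    statistics, without exposing any individual peer's record."""
    own = weekly_hours[student_id]
    peers = [h for sid, h in weekly_hours.items() if sid != student_id]
    # Suppress the comparison for small cohorts, where an 'average'
    # could effectively re-identify individuals.
    if len(peers) < 10:
        return f"You logged {own:.1f}h this week. (Peer comparison unavailable.)"
    return (f"You logged {own:.1f}h this week; your peers averaged "
            f"{mean(peers):.1f}h (sd {stdev(peers):.1f}).")

# Fabricated example data:
cohort = {f"s{i:02d}": h for i, h in enumerate(
    [4.0, 7.5, 5.0, 6.0, 3.5, 8.0, 5.5, 6.5, 4.5, 7.0, 5.0, 6.0])}
print(peer_comparison("s00", cohort))
```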

Lifestream, Tweets

In ‘Learning Analytics as a Metacognitive Tool’, Durall & Gros (2014) argue for making data more transparent to students, and for providing visualisation tools which students can (selectively) use to view their data. In this way, students can make the data more meaningful by focusing only on those metrics they value. The authors align such an approach with Human-Data Interaction:

According to Haddadi et al. (2013) “The term Human-Data-Interaction (HDI) arises from the need, both ethical and practical, to engage users to a much greater degree with the collection, analysis, and trade of their personal data, in addition to providing them with an intuitive feedback mechanism” (p.3).

Durall & Gros, 2014, p. 382

Further, they suggest that giving students access to, and control of, their data in this way can support self-directed learning (SDL) and self-regulated learning (SRL). I agree that it can – but we need to support the development of this kind of learner so that such tools can be effective, and to establish support networks for students who prefer more communal learning approaches or seek greater direction/instruction. To me, it’s a much more exciting way (pedagogically) of looking at LA.
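To make the ‘selective’ idea concrete, here is a toy sketch of a dashboard in which the learner, not the institution, chooses which metrics get computed and shown. Again, this is only my own illustration of the principle; the metric names and activity log are invented, not drawn from the paper.

```python
# Toy sketch of learner-selected metrics (all names invented):
# the student opts in to the metrics they find meaningful.
AVAILABLE_METRICS = {
    "forum_posts":   lambda log: sum(1 for e in log if e["type"] == "post"),
    "videos_viewed": lambda log: sum(1 for e in log if e["type"] == "video"),
    "quiz_average":  lambda log: (
        sum(e["score"] for e in log if e["type"] == "quiz")
        / max(1, sum(1 for e in log if e["type"] == "quiz"))),
}

def dashboard(activity_log, chosen_metrics):
    """Compute only the metrics the student has opted in to."""
    return {name: AVAILABLE_METRICS[name](activity_log)
            for name in chosen_metrics}

# Fabricated example: a student who values quizzes but not video counts.
log = [{"type": "post"}, {"type": "quiz", "score": 72},
       {"type": "video"}, {"type": "quiz", "score": 85}]
print(dashboard(log, ["forum_posts", "quiz_average"]))
```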

Here are the slides to go with the paper: