— Cathy Hills (@fleurhills) March 23, 2017
— Cathy Hills (@fleurhills) March 22, 2017
Predictive analytics – will it be used for nudging?
Bradbury et al. declare that the project of behavioural economics is
to model the essential irrationality of choosers, and in so doing to render the flaws in their choosing predictable … then be used to make claims as to how social and economic systems might be designed to counteract individuals’ tendencies to make ‘bad’ decisions and to stimulate ‘good’ decisions.
(Bradbury, McGimpsey and Santori, 2012, p.250)
The Educause article similarly relates the concept of the nudge as a
theory which centers on prompting individuals to modify their behavior in a predictable way (usually to make wiser decisions) without coercing them, forbidding actions, or changing consequences.
These descriptions point to how ‘irrational’ student behaviour may emerge from learning analytics data to be met with helpful and gentle attempts at ‘correction’ in the students’ best interests.
It sounds plausible and benignly paternalistic, yet whilst making a point of neither forbidding nor coercing the individual, the ‘choice architect’ or ‘policy maker’ is concerned with constructing a situation in which the ‘correct’ course of action is not only implicit but foundational and pervasive. It is a dynamic bias-in-action under the guise of neutrality and the provision of choice. It is disingenuous too, because it advertises human irrationality as undesirable whilst sloping the ground towards the one choice it deems appropriate.
Bradbury et al. describe this ‘liberal paternalism’ as ‘the co-option of behavioural economics for the continuity of the neoliberal project’ (p.255), and the Educause article cites economic reasons for its adoption in education settings,
The combination of automation and nudges is alluring to higher education institutions because it requires minimal human intervention. This means that there are greater possibilities for more interventions and nudges, which are likely to be much more cost- and time-effective.
Nudging and its more coercive or punitive variations, ‘shoving’ and ‘smacking’, carry the risk of inappropriate application through, for example, misinterpreting data or disregarding contextual detail excluded from it. Worse, the attempt to correct or eliminate irrationality is dangerous when the long-term effects of doing so are unknown, when what counts as ‘irrational’ is itself open to question, and when a single option is substituted by a determinedly non-neutral party. An attempt to curb our freedom to choose what one political project regards as ‘incorrect’ is an incursion into human rights, and those rights, particularly as they belong to students already dominated by institutional or commercialised powers, should be protected. As the article concludes,
with new technologies, we need to know more about the intentions and remain vigilant so that the resulting practices don’t become abusive. The unintended consequences of automating, depersonalizing, and behavioral exploitation are real. We must think critically about what is most important: the means or the end.
Bradbury, A., McGimpsey, I. and Santori, D. (2012). Revising rationality: the use of ‘Nudge’ approaches in neoliberal education policy. Journal of Education Policy, 28(2), pp. 247–267.
Storify on #mscedc Algorithm Tweet chat
— Jeremy Knox (@j_k_knox) March 10, 2017
The video shared by Chenée exemplifies Gillespie’s Patterns of Inclusion,
Patterns of inclusion: the choices behind what makes it into an index in the first place, what is excluded, and how data is made algorithm ready
Gillespie, T. (2012). The Relevance of Algorithms. In Gillespie, T., Boczkowski, P. and Foot, K. (eds.), Media Technologies. Cambridge, MA: MIT Press.
— Matthew Sleeman (@Digeded) March 8, 2017
The tweet creates new words for this community of practice, suggesting a requirement for new language to plot and perform our changing digital and educational landscape.
— Eric Stoller (@EricStoller) March 8, 2017
I read this Jisc news item and it made me angry. It looks like a classic piece of technological determinism – applying Learning Analytics to education to ‘improve’ retention and reduce administrative costs. Why not, if, as Jisc asserts, there’s ‘trouble’ with humans,
how good do we actually think people are?
It also makes spurious comparisons,
How difficult is it to intervene with a student identified as at risk by a learning analytics processor? Is it harder than driving a car, which computers already do better than us?
At the moment I take issue with just about every sentence, but I want time to think and read about it. The article is certainly part of the discursive effort creating the conditions for Learning Analytics to become a reality.