Predictive analytics, nudging, shoving and smacking and the secret sauce of algorithms. #mscedc https://t.co/xGq78LBkpo
— Cathy Hills (@fleurhills) March 22, 2017
Predictive analytics – will it be used for nudging?
Bradbury et al. declare that the project of behavioural economics is
to model the essential irrationality of choosers, and in so doing to render the flaws in their choosing predictable … then be used to make claims as to how social and economic systems might be designed to counteract individuals’ tendencies to make ‘bad’ decisions and to stimulate ‘good’ decisions.
(Bradbury, McGimpsey and Santori, 2012, p.250)
The Educause article similarly describes the nudge as a
theory which centers on prompting individuals to modify their behavior in a predictable way (usually to make wiser decisions) without coercing them, forbidding actions, or changing consequences.
These descriptions point to how ‘irrational’ student behaviour may emerge from learning analytics data to be met with helpful and gentle attempts at ‘correction’ in the students’ best interests.
It sounds plausible, if paternalistic, yet whilst making a point of neither forbidding nor coercing the individual, the ‘choice architect’ or ‘policy maker’ is concerned with constructing a situation in which the ‘correct’ course of action is not only implicit but foundational and pervasive. It is a dynamic bias-in-action under the guise of neutrality and the provision of choice. It is disingenuous too, because it advertises human irrationality as undesirable whilst sloping the ground towards the one choice it deems appropriate.
Bradbury et al. describe this ‘libertarian paternalism’ as ‘the co-option of behavioural economics for the continuity of the neoliberal project’ (p.255), and the Educause article cites economic reasons for its adoption in education settings:
The combination of automation and nudges is alluring to higher education institutions because it requires minimal human intervention. This means that there are greater possibilities for more interventions and nudges, which are likely to be much more cost- and time-effective.
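To make concrete just how ‘minimal’ that human intervention can be, here is a deliberately crude and entirely hypothetical sketch, in Python, of the kind of analytics-driven nudge rule such a system might run. Every name, threshold and record in it is invented for illustration; it is not drawn from any real learning analytics product.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StudentRecord:
    # Invented fields standing in for typical VLE activity data
    name: str
    logins_last_week: int
    assignments_submitted: int
    assignments_due: int

def choose_nudge(record: StudentRecord) -> Optional[str]:
    """Return an automated message, or None if no intervention fires."""
    if record.logins_last_week < 2:
        return f"Hi {record.name}, we noticed you haven't logged in much this week."
    if record.assignments_submitted < record.assignments_due:
        return f"Hi {record.name}, a reminder that an assignment is due soon."
    return None  # the 'rational' student is left alone

# Run the rule over a toy cohort; no human reviews individual cases.
for student in (StudentRecord("A", 1, 3, 3), StudentRecord("B", 5, 2, 3)):
    message = choose_nudge(student)
    if message:
        print(message)

Everything that counts as ‘choice architecture’ here sits in two thresholds picked by whoever wrote the function, and nothing in the data records why a student’s activity looks the way it does.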
Nudging and its more coercive or punitive variations, ‘shoving’ and ‘smacking’, carry the risk of inappropriate application through, for example, misinterpreting data or disregarding contextual detail excluded from it. Worse, the attempt to correct or eliminate irrationality is dangerous when the long-term effects of doing so are unknown, when what counts as ‘irrational’ is open to question, and when ‘correction’ amounts to the substitution of a single option by a determinedly non-neutral party. An attempt to curb our freedom to choose what one political project regards as ‘incorrect’ is an incursion on human rights, and those rights, particularly as they belong to students already dominated by institutional or commercial powers, should be protected. As the article concludes,
with new technologies, we need to know more about the intentions and remain vigilant so that the resulting practices don’t become abusive. The unintended consequences of automating, depersonalizing, and behavioral exploitation are real. We must think critically about what is most important: the means or the end.
Bradbury, A., McGimpsey, I. and Santori, D. (2012). Revising rationality: the use of ‘Nudge’ approaches in neoliberal education policy. Journal of Education Policy, 28(2), pp. 247-267.
This is a really interesting topic, Cathy. I read about it last week and it really got me thinking about the recommendation algorithm we were all talking about in relation to recommending courses. Apart from the obvious questions about the authority in charge of the recommendations and their motives, I worried about how it could be used or abused by the person being given the recommendations.
I know from working at a uni that a lot of students choose their courses based on how easy they think the assignments sound and how the course fits into what they want to do (like no group work). So I wondered how we’d sell this recommendation: if we said ‘here are courses that people who took your course passed’, would that influence students to choose courses because they might be an easy pass?
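Just to make the worry concrete, here’s a rough and purely hypothetical sketch (made-up data, made-up course codes) of the kind of ‘courses that people who took your course passed’ recommender I mean. Ranking candidate courses by their peers’ pass rate is exactly what would float the easy passes to the top:

from collections import defaultdict

# (student, course, passed) records, invented for illustration
enrolments = [
    ("s1", "EDC101", True), ("s1", "STATS200", True),
    ("s2", "EDC101", True), ("s2", "STATS200", True),
    ("s3", "EDC101", True), ("s3", "HARD300", False),
]

def recommend(course, min_pass_rate=0.5):
    """Courses co-taken with `course`, ranked by how often those peers passed."""
    peers = {s for s, c, _ in enrolments if c == course}
    taken = defaultdict(int)
    passed = defaultdict(int)
    for s, c, ok in enrolments:
        if s in peers and c != course:
            taken[c] += 1
            passed[c] += int(ok)
    pass_rate = {c: passed[c] / taken[c] for c in taken}
    ranked = sorted(pass_rate, key=pass_rate.get, reverse=True)
    return [c for c in ranked if pass_rate[c] >= min_pass_rate]

print(recommend("EDC101"))  # ['STATS200']: the easy pass floats to the top

Nothing in it can tell whether STATS200 ranks highly because it’s genuinely a good next step or just because it’s hard to fail.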
Eli
Dear Eli
Thanks for your comment and for the insight on students choosing courses with easy-looking assignments – what a good idea 😉
One interesting point in the article is its description of choice architecture:
Choice architecture doesn’t look for individuals to act more rationally; instead, it seeks to create environments that accord with rational decision-making.
One way of looking at this is that it hopefully makes the architects (with all that the word implies in terms of power and influence) think about the ‘environments that accord with rational decision-making’. That might mean asking how we can nudge students to do what they don’t like but is ‘good for them’, but it can also mean asking why they don’t like, for example, group assignments. Does their dislike counteract any of the perceived benefits of doing them? Do they really prepare students for collaborating in the workplace …? I just thought that there might be a two-way aspect to nudging: analysing what is actually happening and choosing to change the environment, not always the individual.
Cathy