During snatched moments this week I have been thinking about algorithms and learning analytics, though in an uninformed and distracted way, as work has been busy. Yet this time was spent in a world semi-constituted and organised by algorithms without my really taking note, as Nigel’s tweet about the way emails get placed into Clutter folders reminded me, and as even my own lifestream should have underlined as it filled with tweets and posts left uncommented.
My default position on Learning Analytics I expressed early on, but I recognised the need to fight this instinct or at least to examine it more carefully. Siemens’ suggestion that
For some students, the sharing of personal data with an institution in exchange for better support and personalised learning will be seen as a fair value exchange.
(Siemens, 2013, p.1394)
had compounded my involuntary rejection of LA as it packed so many contentious statements in one short sentence.
I took issue with the bargaining trope of data exchange for assured personal gain. I questioned who decides what ‘better support’ is and whether such a promise would hold out after the relinquishing of data. I remained suspicious of the student and institution arriving at a fair outcome when the power balance of that relationship is characterised by inequality. I was wary of ‘personalised learning’ and wondered what it really means and whether it would divest the learner of any of their own thinking skills.
At the week’s end, when I could read more, I discovered Jisc’s counter to my worry,
Students maintain appropriate levels of autonomy in decision making relating to their learning, using learning analytics where appropriate to help inform their decisions.
I remained sceptical, however, because for some students reflection and meta-cognition are not easily achieved (nor always introduced and encouraged) and an effort to develop them may more simply be contracted out to graphs and graphics, leading to a misunderstanding of what counts in learning.
After reading Siemens (2013) my head was full of buzzwords such as actionable insights. I consoled myself by deciding actionable is not a word, but when I looked it up, I found its definition to be rooted in law and, seemingly, marketing, which was indeed insightful.
I had to keep reminding myself (and having to be reminded) that politics and power struggles happen with or without algorithms and not to fall into the trap of algorithms bad, no algorithms good. (What is the opposite of algorithm? Chaos? Proper choice? Manual?) I didn’t think their pervasive and deep penetration of our daily lives was a reason not to want to examine them and get a measure of their scope, dangers and failings, in accordance with Beer’s stated acknowledgement of
a sense that we need to understand what algorithms are and what they do in order to fully grasp their influence and consequences
(Beer, 2017, p.3)
Kitchin (2017) offers “six methodological approaches” (Abstract) to understanding them such as spending time with coders, conducting ethnographies, reverse engineering and witnessing others doing so.
I did, of course, get ensnared in thinking that algorithms are dissociable from the sociotechnical world they co-constitute, which was especially frustrating as I see exactly how coded IF statements are firmly rooted in context: IF … THEN … ELSE …, where the ellipses stand in for prescriptive descriptions of the very detail of our lives and can also comprise further nested IF statements, or containers into which variables are poured – by us, or by other algorithms. Such is their complexity, interrelation and recursiveness that these codes seem at once to be “neutral and trustworthy systems working beyond human capacity” (Beer, 2017, pp.9-10) and yet organic-seeming and mutable, requiring, from time to time, the hand of the putative “viewer of everything from nowhere” (the fictitious person alluded to in Ben Williamson’s lecture) to make the fine adjustments named tweaks. The hand that tweaks is firmly located, but hidden, often in financial, commercial, government or educational institutions, involved in a secret and protected remit to organise and present the knowledge that ensures their continued power.
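Those nested IF statements can be made concrete. The following is a minimal sketch, entirely my own invention – the rule, the thresholds and the variable names are hypothetical, not drawn from any real system – of how conditionals encode prescriptive descriptions of a student’s life:

```python
# A hypothetical "support flag" rule of the kind a learning analytics
# system might encode. Every branch is a human decision about what
# counts: the thresholds, the categories, even which variables exist.
def flag_for_support(logins_per_week, avg_quiz_score, forum_posts):
    if logins_per_week < 2:                # who decided 2 means "disengaged"?
        if avg_quiz_score < 50:            # nested IF, as in the text above
            return "urgent intervention"
        else:
            return "monitor"               # quiet but apparently coping
    else:
        if forum_posts == 0:
            return "nudge to participate"  # silence read as a deficit
        else:
            return "no action"
```

Each ELSE is as consequential as each IF: a student logging in once a week with a score of 51 is merely “monitored”, while her neighbour on 49 triggers an intervention – a boundary drawn by a hidden hand.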
As Beer, quoting Foucault, makes the point,
… the delicate mechanisms of power cannot function unless knowledge, or rather knowledge apparatuses, are formed, organised and put into circulation.
(Beer, 2017, p.10)
Manovich (1999, p.27) states that the point of the computer game is the gradual revealing of its hidden structure – the exact opposite of the algorithm, which operates by stealth to confound our mapping of it. Algorithms all too easily offer themselves as inscrutable and indecipherable, attributes which supply their perfect camouflage of objectivity and neutrality, as mechanisms for avoiding the bias and prejudice of messy human judgement. Commenting on the twofold “translation of a task or problem” into code, Kitchin states
The processes of translation are often portrayed as technical, benign and commonsensical
(Kitchin, 2017, p.17)
It is recognised that Learning Analytics needs to gather information from multiple data points across distributed systems to better map and model the learner in recursive processes. Inherent in this gathering are decisions about what to collect, from where and how, with each of these decisions dependent on the platforms and software that capture the information, which have encoded in them their own particular affordances, constraints and biases. Once the data are aggregated by another encoded fitment, decisions have to be made on how to interpret them, and comparisons drawn against typical and historical models, in order to arrive at what might be predicted or trigger action. Siemens (2013) himself outlines problems of data interoperability,
distributed and fragmented data present a significant challenge for analytics researchers
(Siemens, 2013, p.1393)
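That gather–aggregate–compare pipeline can be sketched in a few lines. This is a toy illustration under assumptions of my own – the data sources, the weights and the 75% threshold are invented for the example, not taken from any actual LA product – showing how each step embeds an editorial choice:

```python
# Fragmented data points from hypothetical platforms are aggregated,
# then compared against a historical cohort model to trigger a
# "prediction". Every step encodes a choice: which sources count,
# which weights apply, which baseline the student is judged against.
from statistics import mean

def aggregate(vle_minutes, library_visits, days_submitted_early):
    # arbitrary weighting -- an encoded decision, not a fact about learning
    return (0.5 * (vle_minutes / 60)
            + 0.3 * library_visits
            + 0.2 * days_submitted_early)

def predict_risk(student_score, historical_cohort_scores):
    baseline = mean(historical_cohort_scores)
    # the student is measured against past students, not on her own terms
    return "at risk" if student_score < 0.75 * baseline else "on track"
```

A student whose weighted score falls below three-quarters of the cohort average is labelled “at risk” – a verdict manufactured as much by the weights and the threshold as by anything she did.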
This complex sociotechnical construction is not in any way an objective, systematised analysis of authentic behaviour, but a range of encoded choices, afforded by particular softwares and programming languages and made by living, breathing individuals acting on a range of motivations, to construct a supposedly more, but probably less, reliable image of the student. The construction of LA will favour some but perhaps inhibit, repel, harm or exclude others.
In addition, learning analytics posits the educational project as reducible to numbers, as a discernible learning process which may be audited and in which
‘dataveillance’ functions to decrease the influence of ‘human’ experience and judgement, with it no longer seeming to matter what a teacher may personally know about a student in the face of his or her ‘dashboard’ profile and aggregated tally of positive and negative ‘events’
(Selwyn, 2014, p.59)
Learning Analytics attempts to seek out patterns, which naturally raises the question: what about the data which falls away from the pattern cutter?
Another danger of pattern searching is voiced by boyd,
Big Data enables the practice of apophenia: seeing patterns where none actually exist
(boyd, 2012, p.668)
Patterns are concerned with data that recur, and they fail to take account of the myriad minute, varied details in which crucial contextual information may lie,
Data are not generic. There is value to analysing data abstractions, yet retaining context remains critical, particularly for certain lines of inquiry. Context is hard to interpret at scale and even harder to maintain when data are reduced to fit a model.
(boyd, 2012, p.671)
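boyd’s warning about apophenia is easy to demonstrate. The sketch below, a contrived example of my own rather than anything from the LA literature, generates a purely random “outcome” and a thousand purely random “predictor” variables, and shows that some meaningless variable will always appear to track the outcome:

```python
# Apophenia in miniature: with enough unrelated variables, some will
# correlate with any outcome purely by chance. Nothing here carries
# any signal at all -- every sequence is a coin flip.
import random

random.seed(0)  # fixed seed so the demonstration is repeatable
outcome = [random.choice([0, 1]) for _ in range(30)]  # 30 "students"

best_match = 0.0
for _ in range(1000):  # 1000 meaningless candidate "predictors"
    noise = [random.choice([0, 1]) for _ in range(30)]
    agreement = sum(a == b for a, b in zip(noise, outcome)) / 30
    best_match = max(best_match, agreement)

# best_match will sit far above the 50% chance level, inviting a
# pattern-hunter to "discover" a predictor where none exists
print(f"best spurious agreement: {best_match:.0%}")
```

A dashboard fed enough data points will always find something that looks like a pattern; the question is whether it means anything.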
Siemens (2013), too, alludes to the difficulty of getting the measure of the individual,
recognizing unique traits, goals, and motivations of individuals remains an important activity in learning analytics
(Siemens, 2013, p.1383)
So much for my own objectivity and neutrality: I seem to have fallen back into that pit whose muddy walls are white and mostly black. Struggling back out, I voiced my concerns in the tweetorial, but attempted to remain open-minded,
If this state of affairs, which is learning analytics today, is surfaced and properly taken into account, the endeavour shouldn’t be rejected out of hand, but investigated, honed and trialled to see if it can usefully help understand the conditions for learning as well as support learners. It should be done in full partnership with students, enabling a more equal and transparent participatory experience, as the University of Edinburgh’s LARC project demonstrates.
The significant barriers to LA, ethics and privacy, can be foregrounded and regarded as “enablers rather than barriers” (Gašević, Dawson and Jovanović, 2016) as the editors of the Journal of Learning Analytics encourage,
We would [also] like to posit that learning analytics can be only widely used once these critical factors are addressed, and thus, these are indeed enablers rather than barriers for adoption (p.2)
Jisc has drawn up a Code of Practice for learning analytics (2015) which does attempt to address issues of privacy, transparency and consent. For example,
Options for granting consent must be clear and meaningful, and any potential adverse consequences of opting out must be explained. Students should be able easily to amend their decisions subsequently.
(Jisc, 2015, p.2)
Pardo and Siemens (2014) identify a set of principles
to narrow the scope of the discussion and point to pragmatic approaches to help design and research learning experiences where important ethical and privacy issues are considered. (Abstract)
Yet even if the challenges of ethics and privacy are overcome, there remains the danger that learning analytics reveals only a very pixelated image of the student, one which might place her at a judged disadvantage, an indelible, skewed blueprint existing in perpetuity and following her to future destinations. That this should be the case is not surprising if we consider that a sociomaterial account of learning analytics foregrounds its complex mix of the human, the technical and the material, performing both an analysis and an analysand through a partial apparatus of incomplete measurement. The encoded institution’s audit, meeting the absence of student context or nuance, means that LA will struggle to give anything other than general actionable insights.
Beer, D. (2017). The social power of algorithms. Information, Communication & Society, 20(1), pp.1-13.
boyd, d. and Crawford, K. (2012). Critical questions for Big Data. Information, Communication & Society, 15(5), pp.662-679.
Gašević, D., Dawson, S., Jovanović, J. (2016). Ethics and privacy as enablers of Learning Analytics. Journal of Learning Analytics, 3(1), pp.1-4.
Jisc, (2015). Code of practice for learning analytics. Available at: https://www.jisc.ac.uk/guides/code-of-practice-for-learning-analytics
Kitchin, R. (2017). Thinking critically about and researching algorithms. Information, Communication & Society, 20(1), pp.14-29.
Manovich, L. (1999). Database as a symbolic form. Millennium Film Journal (Archive), 34, Screen Studies Collection, pp.24-43.
Pardo, A., Siemens, G. (2014). Ethical and privacy principles for learning analytics. British Journal of Educational Technology, 45(3), pp.438-450.
Selwyn, N. (2014). Distrusting Educational Technology. Routledge, New York.
Siemens, G. (2013). Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10), pp.1380-1400.
Williamson, B. (2017). Computing brains: learning algorithms and neurocomputation in the smart city. Information, Communication & Society, 20(1), pp.81-99.