Big Data, learning analytics, and posthumanism

I’ve now read a few articles assessing the pros and cons of learning analytics and, regardless of the methodologies employed, patterns and themes recur in what is being found. For institutions, the benefits include efficiency and improved performance in financial planning and recruitment; for students, they include insights into their own learning and more informed decision-making. These are balanced against the cons: self-fulfilling prophecies concerning at-risk students, the dangers of student profiling, risks to student privacy, and questions around data ownership (Roberts et al., 2016; Lawson et al., 2016). This is often contextualised by socio-critical perspectives which converge on notions of power and surveillance; some of the methodologies explicitly attempt to counter the resulting presumptions, for example by bringing in the student voice (Roberts et al., 2016).

In reading these articles and studies, I was particularly interested in ideas around student profiling and student labelling, and how this is perceived (or sometimes spun) as a benefit for students. Arguments against student profiling focus on the oversimplification of student learning, on students being labelled on the basis of past decisions, and on student identity being necessarily in a state of flux (Mayer-Schönberger, 2011). One thing missing in all of this, though, and whose absence I feel keenly, is causation. It strikes me that big data and learning analytics can tell us what is, but not always why.

A similar observation leads Chandler (2015) to assert that Big Data is a kind of Bildungsroman of posthumanism. He argues that Big Data is an epistemological revolution:

“displacing the modernist methodological hegemony of causal analysis and theory displacement” (2015, p. 833).

Chandler is not interested in the pros and cons of Big Data so much as in the way it changes how knowledge is produced, and how we think about knowledge production. This extends ideas espoused by Anderson (2008), who argues that theoretical models are becoming redundant in a world of Big Data. Similarly, Cukier and Mayer-Schönberger argue that Big Data:

“represents a move away from trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena, and using that to get things done” (2013, p. 32).

Big Data aims not at instrumental knowledge or causal reasoning, but at revealing feedback loops. It’s reflexive. And for Chandler, this represents an entirely new epistemological approach to making sense of the world, one which gains insights that are ‘born from the data’ rather than planned in advance.

Chandler is interested in the ways in which Big Data intersects with international relations and political governance, and many of his ideas translate readily to higher education institutions. For example, Chandler argues that Big Data reflects political reality (i.e. what is) but also transforms it by enabling community self-awareness, allowing reflexive problem-solving on the basis of that self-awareness. Similarly, learning analytics might be seen as allowing students to gain an understanding of their learning and their progress, possibly in comparison with their peers.

This sounds great, but Chandler contends that it necessarily comes with a warning: it isn’t particularly empowering for those who need social change:

Big Data can assist with the management of what exists […] but it cannot provide more than technical assistance based upon knowing more about what exists in the here and now. The problem is that without causal assumptions it is not possible to formulate effective strategies and responses to problems of social, economic and environmental threats. Big Data does not empower people to change their circumstances but merely to be more aware of them in order to adapt to them (2015, pp. 841–842).

This lack of causal understanding becomes pressing in discussions of ‘at risk’ students: a student is judged on a series of data points without any (potentially necessary) contextualisation. The focus is on reflexivity and relationality rather than on how or why a situation has come about, or what its impact might be. Roberts et al. (2016) found that students were concerned about this, worrying that learning analytics might drive inequality by advantaging only some students. The demotivating nature of the EASI system for ‘at risk’ students is also raised by Lawson et al. (2016, p. 961). Too little consideration is given to the causality of ‘at risk’, and perhaps too much to essentialism.
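To make the ‘what, not why’ problem concrete, here is a deliberately toy sketch in Python. Everything in it – the engagement measures, the weights, the threshold – is invented for illustration, and it bears no relation to how EASI or any real learning analytics system works; the point is only that a purely correlational flag cannot distinguish between students whose data look alike for very different reasons.

```python
# Toy correlational 'at risk' flag over made-up weekly engagement data.
# The score reports WHAT correlates with withdrawal in past cohorts
# (low logins, few forum posts); it says nothing about WHY a particular
# student's engagement looks the way it does.

# Hypothetical records: (logins, forum_posts, vle_minutes)
students = {
    "A": (2, 0, 15),    # low engagement -- caring responsibilities? a second job?
    "B": (14, 6, 240),  # high engagement
    "C": (3, 1, 30),    # low engagement -- studying offline from downloaded notes?
}

# Weights of the kind a model might fit to historical data (invented here).
weights = {"logins": 0.5, "forum_posts": 1.0, "vle_minutes": 0.02}

def risk_score(logins: int, posts: int, minutes: int) -> float:
    """Higher engagement -> lower risk score. No causal claim is made."""
    engagement = (weights["logins"] * logins
                  + weights["forum_posts"] * posts
                  + weights["vle_minutes"] * minutes)
    return 1 / (1 + engagement)

for name, record in students.items():
    score = risk_score(*record)
    flag = "AT RISK" if score > 0.2 else "ok"
    # Students A and C receive identical labels despite (possibly) very
    # different circumstances: the data alone cannot supply the 'why'.
    print(f"student {name}: score={score:.2f} -> {flag}")
```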

His consideration of Big Data and international relations leads Chandler to assert, cogently, that:

Big Data articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social (2015, p. 845).

No one here is necessarily excluded, and all those on the periphery are brought in. Rather paradoxically, this appears to be both the culmination of the socio-material project and an indicator of its necessity. Adopting a posthumanist approach to learning analytics may offer a helpful critical standpoint, and is definitely something worth exploring further.

References

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Wired. Retrieved 19 March 2017, from https://www.wired.com/2008/06/pb-theory/
Chandler, D. (2015). A World without Causation: Big Data and the Coming of Age of Posthumanism. Millennium, 43(3), 833–851. https://doi.org/10.1177/0305829815576817
Cukier, K., & Mayer-Schönberger, V. (2013). The Rise of Big Data: How It’s Changing the Way We Think About the World. Foreign Affairs, 92(3), 28–40.
Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0
Mayer-Schönberger, V. (2011). Delete: The virtue of forgetting in the digital age. Princeton, NJ: Princeton University Press.
Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student Attitudes toward Learning Analytics in Higher Education: ‘The Fitbit Version of the Learning World’. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01959

Transcript of “Big data is better data”

Self-driving cars were just the start. What’s the future of big data-driven technology and design? In a thrilling science talk, Kenneth Cukier looks at what’s next for machine learning — and human knowledge.

from Pocket http://ift.tt/2nA1wwV
via IFTTT

This is a good (and pithy) talk, but there are two points he makes that I find particularly interesting:

  1. “We have to be the master of this technology, not its servant […] This is a tool, but this is a tool that, unless we’re careful, will burn us”. A patent warning against technological determinism here, but one which (in my opinion) is not couched carefully enough to help us understand how to avoid a fully instrumentalist approach.
  2. “Humanity can finally learn from the information that it can collect, as part of our timeless quest to understand the world and our place in it”. This accompanies a strong sense of why Big Data is important, but it’s also very essentialist: it’s about reflecting the here and now rather than attempting to understand the past. I wonder if there are some historiographical problems here, given that Big Data collection is so new, and still so patchy in places. The ‘timeless quest’, given this, seems to be one which will be answered from a position of privilege: by those who are fortunate enough, paradoxically, to have data collected about them.

What I’m reading

[Evernote clipping: a highlighted passage from Lawson et al. (2016) – the image hasn’t carried over.]

March 18, 2017 at 04:48PM

I included this because it so strongly chimed with what I was thinking about student profiling – in particular, the highlighted bit reflects my experience of working with young people in HE, and the (in my opinion) dangers of treating any people, but particularly young people, as linear, as models of themselves, or as unable to follow unpredictable paths.

It’s from here, by the way:

Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0

[Edit on Sunday 19th March: it’s also, I notice very much retrospectively, an attempt on my part to use the lifestream to model how I study, how I make notes, and how I identify comments and other thoughts. There’s another example here. I hadn’t really realised I was doing this.]

The End of Theory: The Data Deluge Makes the Scientific Method Obsolete

from Pocket http://ift.tt/2dG4Izc
via IFTTT

I saved this because it’s mentioned in this week’s lecture, and I wanted to remind myself to go back and read it further. It’s worth the read. In the article, Chris Anderson discusses how the advent of big data is superseding the need for scientific theory and modelling. It’s a fascinating argument, and definitely gives one pause for thought.

It’s a wonderful lifestream (or is it?) – Week 8 summary

Value is the main theme of the lifestream this week, both in the sense of a principle which governs our behaviour and in the sense of something regarded as important or useful. Both definitions intersect in the development of algorithms, as well as in the ways in which their usefulness is communicated to us.

In a quite brilliant article about algorithms and personalising education, Watters asks the pertinent question:

What values and interests are reflected in its algorithm?

It’s a big and important question, but this TED talk suggests to me that it would be propitious to change it to:

Whose values and interests are reflected in its algorithm?

Joy Buolamwini explores how human biases and inequalities might be translated into, and thus perpetuated in, algorithms, a phenomenon she has called the ‘coded gaze’. Similar considerations are taken up in this article too, as well as in this week’s reading by Eynon on big data, summarised here. I also ran a mini-experiment on Goodreads, in which I found results that could potentially be construed as evidence of bias (though more evidence would definitely be required).
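As an aside, the mechanism Buolamwini describes can be sketched very simply. The fragment below is a toy illustration with invented data, not her method: a trivial ‘detector’ whose training data under-represents one group ends up serving that group worse.

```python
# Toy illustration of a 'coded gaze': a trivial nearest-centroid detector
# trained mostly on one group performs worse on the under-represented group.
# All data are invented; label 1 means "face detected".

group_a_train = [0.8, 0.9, 0.85, 0.75, 0.95, 0.9, 0.8, 0.85]  # well represented
group_b_train = [0.4, 0.5]                                    # under-represented

# The 'model' is just the mean of ALL positive training examples.
train = group_a_train + group_b_train
centroid = sum(train) / len(train)
THRESHOLD = 0.3  # detect if the feature lies within this distance of the centroid

def detects(feature: float) -> bool:
    return abs(feature - centroid) <= THRESHOLD

# At test time, group B sits further from the centroid and is missed more often.
for group, samples in [("A", [0.85, 0.9]), ("B", [0.45, 0.5])]:
    hits = sum(detects(s) for s in samples)
    print(f"group {group}: detected {hits}/{len(samples)}")
```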

It isn’t just a question of the ways in which values are hidden or transparent, or how we might uncover them, though this is crucial too. My write-up of Bucher’s excellent article on EdgeRank and power, discipline and visibility touches on this, and I explored it briefly in the second half of this post on Goodreads. Rather, hiddenness and transparency are negotiated partly through how these values are communicated, and how they are marketed as ‘adding value’ to the user’s experience of a site. The intersection of these issues convinces me further of the benefit of taking a socio-material approach to the expression of values in algorithms.

What I’m reading

Three challenges of big data according to Eynon:

  1. Ethics – privacy, informed consent, protection from harm. Example of student registration: the social implications of telling students whether they are likely to drop out (according to learning analytics). Does this make it a self-fulfilling prophecy? (A toy sketch of this loop follows the list.)
  2. Kinds of research – the availability of data biases the types of research we carry out and the questions we can ask. Can advances in open data help with this?
  3. Inequality – how big data reinforces and exacerbates social and educational inequalities, e.g. tracking only those in a specific socio-economic bracket. Digital divide, yes, but doesn’t it also work the other way round – social inequalities mean that some people are better equipped to avoid surveillance via big data?
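To see why the self-fulfilling prophecy worry is structural rather than incidental, here is another deliberately toy sketch in Python. Every number is invented, and the central assumption – that being flagged ‘at risk’ itself depresses engagement, as the demotivation finding in Lawson et al. (2016) hints – is precisely the kind of causal claim the analytics themselves cannot test.

```python
# Toy simulation of the 'self-fulfilling prophecy' feedback loop.
# All numbers are invented for illustration. The contestable assumption:
# being labelled 'at risk' demotivates, lowering engagement, which the
# analytics then read back as confirmation of the original label.

def dropout_probability(engagement: float) -> float:
    """Map an engagement level (0..1) to a hypothetical dropout probability."""
    return max(0.0, min(1.0, 0.9 - 0.8 * engagement))

engagement = 0.45            # a student just below the (invented) average
DEMOTIVATION_PENALTY = 0.10  # assumed engagement cost of being flagged
FLAG_THRESHOLD = 0.5         # flag if predicted dropout probability exceeds this

for term in range(1, 5):
    p = dropout_probability(engagement)
    flagged = p > FLAG_THRESHOLD
    print(f"term {term}: engagement={engagement:.2f}, p(dropout)={p:.2f}, flagged={flagged}")
    if flagged:
        # The intervention itself erodes engagement (the assumption above),
        # so next term's prediction is worse: the label confirms itself.
        engagement = max(0.0, engagement - DEMOTIVATION_PENALTY)
```

Nothing in the printed trajectory distinguishes a student who was genuinely at risk from one harmed by the flag itself – which is exactly Eynon’s ethical point.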


Eynon, R. (2013). The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3), 237–240. https://doi.org/10.1080/17439884.2013.771783

March 12, 2017 at 12:16PM