Cathy, this is great! Nice work, and an innovative way to question what data is captured – it’s important to balance our interpretation of the meaning of data with what is captured (and what is missing). Thank you!
from Comments for Cathy’s Lifestream http://ift.tt/2ohg295
At a conference today! #cctl2017
March 23, 2017 at 11:53AM
I attended a Teaching Forum hosted by the Cambridge Centre for Teaching and Learning on Thursday, and this is a photo of some of the notes that I took during a presentation by Dr Sonia Ilie, on the LEGACY project. Dr Ilie discussed the results of a bit of qualitative research surrounding students’ understanding of learning gain. One of her arguments put me in mind of learning analytics.
In case my handwriting isn’t clear, Dr Ilie reported that the research had demonstrated that students are variably equipped to reflect upon their own learning. I wondered – in the bottom comment of the photo – about the impact that learning analytics might have upon this. I’m interested in whether learning analytics might help students to develop critically reflective skills, or whether it might let them off the hook by effectively providing them with a shorthand version of that reflection.
Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos? YouTube is having a bad week.
from Pocket http://ift.tt/2n6Zi5b
YouTube has been in the news lately because of two connected battles: the positioning of certain ads around what might be considered to be ‘extremist’ content, and the inherent problems in the algorithms used to categorise content as extremist or restricted.
The article in the New Statesman attempts to bring moral arguments to bear on these algorithmic moves, raising the point about the false equivalence of extremist videos and LGBT content, and asking whether delegating responsibility for censoring certain voices ultimately represents a problematic handing over of power.
There are twin themes to the lifestream this week: the first, a desire to engage critically with the content of the block, emerges through plenty of references to critical texts and articles. Learning analytics, and the omnipresent ethical debate which surrounds it, is a key component. For example, I included Anderson’s brilliant Wired article (mentioned in the lecture we heard this week), a short reading list on learning analytics and ethics, and a JISC report, which I found to be quite uncritical: I included some of the notes I made while reading it, and my responses.
There’s a keen sense, then, of ‘what I’m reading’. The phrase, chosen as a blog title over two months ago when I set up IFTTT to work with Evernote, encapsulates the second of the twin themes: an emerging issue of temporality. This, I think, is present in several ways. With regard to learning analytics and Big Data, temporality incorporates ideas of the currency of data collection: methods of data collection and analysis will improve, but this will not necessarily account for the patchiness of past data, upon which decisions will continue to be made. This, for me, problematises our historiographical understanding of how Big Data might help us to become more self-aware, especially with regard to the “timeless quest” to which Cukier refers in his TED talk. But temporality is present too in a personal sense: in my reasons for missing the tweetstorm, which was locked into a specific point in time, but also in a keenly felt fixing of attention on forthcoming assignments.
The themes converge with a slightly more substantive blog post on Big Data, learning analytics and posthumanism which, I think, may be a fruitful topic to consider further in the digital essay.
I’ve now read a few articles assessing the pros and cons of learning analytics and, regardless of the methodologies employed, there are patterns and themes in what is being found. The benefits include institutional efficiency and institutional performance around financial planning and recruitment; for students, the benefits correspond to insights into learning and informed decision-making. These are balanced against the cons: self-fulfilling prophecies concerning at-risk students, the dangers of student profiling, risks to student privacy and questions around data ownership (Roberts et al., 2016; Lawson et al., 2016). This is often contextualised by socio-critical understandings which converge on notions of power and surveillance; some of the methodologies explicitly attempt to counter presumptions made as a result of this, for example, by bringing in the student voice (Roberts et al., 2016).
In reading these articles and studies, I was particularly interested in ideas around student profiling and student labelling, and how this is perceived (or sometimes spun) as a benefit for students. Arguments against student profiling focus on the oversimplification of student learning, students being labelled on the basis of past decisions, and student identity being in a necessary state of flux (Mayer-Schoenberger, 2011). One thing, though, that is missing in all of this, and the absence of which I am feeling keenly, is causation. It strikes me that Big Data and learning analytics can tell us what is, but not always why.
A similar observation leads Chandler to assert that Big Data is a kind of Bildungsroman of posthumanism (2015). He argues that Big Data represents an epistemological revolution:
“displacing the modernist methodological hegemony of causal analysis and theory displacement” (2015, p. 833).
Chandler is not interested in the pros and cons of Big Data so much as in the way it changes how knowledge is produced, and how we think about knowledge production. This is an extension of ideas espoused by Anderson, who argues that theoretical models are becoming redundant in a world of Big Data (2008). Similarly, Cukier and Mayer-Schoenberger argue that Big Data:
“represents a move away from trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena, and using that to get things done” (2013, p. 32).
Big Data aims not at instrumental knowledge or causal reasoning, but at the revealing of feedback loops. It’s reflexive. And for Chandler, this represents an entirely new epistemological approach to making sense of the world: gaining insights which are ‘born from the data’, rather than planned in advance.
Chandler is interested in the ways in which Big Data can intersect with ideas in international relations and political governance, and many of his ideas are extremely translatable and relevant to higher education institutions. For example, Chandler argues that Big Data reflects political reality (i.e. what is) but it also transforms it through enabling community self-awareness. It allows reflexive problem-solving on the basis of this self-awareness. Similarly, it may be seen that learning analytics allows students to gain understanding of their learning and their progress, possibly in comparison with their peers.
This sounds great, but Chandler contends that it is necessarily accompanied by a warning: it isn’t particularly empowering for those who need social change:
Big Data can assist with the management of what exists […] but it cannot provide more than technical assistance based upon knowing more about what exists in the here and now. The problem is that without causal assumptions it is not possible to formulate effective strategies and responses to problems of social, economic and environmental threats. Big Data does not empower people to change their circumstances but merely to be more aware of them in order to adapt to them (p. 841-2).
The problem of a lack of understanding of causation is raised in consideration of ‘at risk’ students: a student being judged on a series of data points without any (potentially necessary) contextualisation. The focus is on reflexivity and relationality rather than on how or why a situation has come about, and what its impact might be. Roberts et al. found that students were concerned about this: that learning analytics might drive inequality by advantaging only some students (2016). The demotivating nature of the EASI system for ‘at risk’ students is also raised by Lawson et al. (2016, p. 961). Too little consideration is given to the causality of ‘at risk’, and perhaps too much to essentialism.
His considerations of Big Data and international relations lead Chandler to assert cogently that:
Big Data articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social (2015, p. 845).
No one here is necessarily excluded, and all those on the periphery are brought in. Rather paradoxically, this appears to be both the culmination of the socio-material project, as well as an indicator of its necessity. Adopting a posthumanist approach to learning analytics may be a helpful critical standpoint, and is definitely something worth exploring further.
Self-driving cars were just the start. What’s the future of big data-driven technology and design? In a thrilling science talk, Kenneth Cukier looks at what’s next for machine learning — and human knowledge.
from Pocket http://ift.tt/2nA1wwV
This is a good (and pithy) talk, but there are two points he makes that I find particularly interesting:
- “We have to be the master of this technology, not its servant […] This is a tool, but this is a tool that, unless we’re careful, will burn us”. A patent warning against technological determinism here, but one which (in my opinion) is not couched with enough care to help us understand how to avoid a fully instrumentalist approach.
- “Humanity can finally learn from the information that it can collect, as part of our timeless quest to understand the world and our place in it”. This accompanies a strong sense of why Big Data is important, but it’s also very essentialist: it’s about reflecting the here and now, rather than attempting to understand the past. I wonder if there are some historiographical problems here, given that Big Data collection is so recent and new, and still so patchy in places. The ‘timeless quest’, given this, seems to be one which will be answered from a position of privilege: by those who are fortunate enough, paradoxically, to have data collected about them.
March 18, 2017 at 04:48PM
I included this because it so strongly chimed with what I was thinking about student profiling – in particular, the highlighted bit reflects my experience of working with young people in HE, and the (in my opinion) dangers of treating any people, but particularly young people, as linear, as models of themselves, or as unable to follow unpredictable paths.
It’s from here, by the way:
Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0
[Edit on Sunday 19th March: it’s also, I notice very much retrospectively, an attempt for me to use the lifestream to model how I study, how I make notes, how I identify comments and other thoughts. There’s another example here. I didn’t really realise I was doing this.]
85% of FE students think an activity tracking app for learning and teaching would be helpful, finds a new survey by Jisc, the digital services and solutions organisation for UK education and research.
from Pocket http://ift.tt/2mTCrdg
I included this because it’s extremely recent, and because its voice and attitude towards learning analytics are so different from everything else I’ve been reading. The article documents how FE students are almost exclusively positive about learning analytics, in contrast to the HE students documented elsewhere.
Ethics and learning analytics: a short reading list
March 18, 2017 at 10:58AM