
Our Tweetorial

NB: this analysis focuses on Friday’s Tweetorial activity only. Tweet Archivist search terms don’t allow a date range to be entered, so this post covers just one of the two days on which the #MSCEDC students and tutors were engaged in the Analytics Tweetorial.

How has the Twitter archive represented our Tweetorial?

Necessarily, perhaps, the archive represents our Tweetorial in quantifiable terms and via a range of graphs, lists, charts and visualisations.

As Colin and Nigel both highlighted in our tutorial this morning, the archive uses rankings. The number of contributions made determines who made it to the top of the ‘Users’ list:

The URLs mentioned and ‘Influencer Index’ are both represented as graphs:

The most-used words, user mentions, and hashtags are represented as word clouds:

The tweet source is shown as a pie chart:

And, finally, images are presented in their entirety:

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

These visualisations, summaries and snapshots present us with quantifiable data about the Tweetorial. They are based on counts. From my subjective position, the two most interesting ‘pictures’ are the word cloud showing our ‘top words’ and the image of the Eynon quote. The former does, at least, provide some sense of what we discussed, including Nigel’s ‘cheese bomb’ which distracted us from our focus on analytics. The latter is the only piece of data which, I would suggest, provides some insight into the depth and quality of the discussion, hinting at the complexity of some of the ideas that were unfolding (even within the limiting constraints of 140 characters). As to how well the Tweet Archivist data represents my perceptions of the experience as a participant and a learner: it simply doesn’t. Many of my contributions to the Tweetorial were focused on the learner voice, on ensuring that the learner was not ‘done to’ by learning analytics (LA). Ironically, these representations serve to obscure the learner and the learner’s experience.
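To make the point about counting concrete, here is a minimal sketch in Python of what a ‘top words’ cloud actually computes. The example tweets are invented (Tweet Archivist’s real pipeline isn’t public): all that registers is token frequency, so a throwaway joke and a substantive contribution carry exactly the same kind of weight.

```python
from collections import Counter
import re

# Hypothetical sample tweets -- invented, not the real #mscedc archive.
tweets = [
    "Learning analytics risks doing things TO learners rather than WITH them #mscedc",
    "Who owns the data that learners produce? #mscedc",
    "cheese bomb! #mscedc",
    "Analytics can obscure the learner voice #mscedc",
]

# A small, illustrative stopword list; real tools use longer ones.
STOPWORDS = {"a", "an", "the", "to", "that", "than", "with", "them", "who", "can", "rather"}

def top_words(tweets, n=5):
    """A word cloud reduces to this: token frequencies, nothing more."""
    counts = Counter()
    for tweet in tweets:
        for token in re.findall(r"[#\w']+", tweet.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts.most_common(n)

print(top_words(tweets))
# Frequency is all that registers: a joke ('cheese') and a critique
# ('voice') are weighted the same way -- the context is discarded.
```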

As Knox proposes, many practitioners, researchers and big data developers claim that Learning Analytics ‘makes visible the invisible. In other words, there is stuff going on in education that is not immediately perceptible to us, largely due to scale, distribution and duration, and Learning Analytics provides the means to “see” this world.’ I would suggest that this presentation of the #mscedc discussion does the opposite: the qualitative is hidden behind crude quantitative representations. The complexity of the discussion, the pace of interactions, the quality of contributions and, ultimately, insights into what we actually learned from the exercise are all missing from these visualisations and lists. They provide no sense of what it is to be a learner, and no insight into my experience as a learner within the session.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

However, Knox goes on to suggest that ‘to critique Learning Analytics simply on the grounds that it makes certain worlds visible while hiding others remains within a representational logic that diverts attention from the contingent relations involved in the process of analysis itself.’ What is important is to recognise that these visual abstractions are not reality and that they don’t provide transparent insights into learning; nor should transparency itself be the aim. Knox again: ‘if we strive for Learning Analytics to be transparent, to depict with precise fidelity the real behaviours of our students, then we are working to hide the processes inherent to analysis itself.’ To focus on how accurately LA represents reality is to miss a sociomaterial trick: ‘my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we seem to remain locked-in to the idea that a “good” Learning Analytics is a transparent one.’ What is key, he posits, is to focus on ‘the processes that have gone into the analysis itself.’

So, in terms of what is presented to us here, for example, the number of contributions is the measure of who is a ‘Top User’. As Knox highlights in his critique of the ‘Course Signals’ traffic-light system used at Purdue University, it is worth asking why the number of contributions should indicate being ‘top’. It provides no sense of how meaningful or relevant the participants’ contributions were, nor does it capture subtler factors, such as whether a participant was engaged within conversations, moving ideas along, or simply ‘firing out’ their own tweets without reflecting on, or engaging with, others’ tweets. The considerations about what factors (technical, social, political, etc.) contribute to this indicator being used are, Knox posits, of real interest. We touched on this in the Tweetorial itself:
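Returning to the ‘Top Users’ ranking, here is a hedged sketch, with invented data and a hypothetical alternative metric (neither is Tweet Archivist’s actual method), of how two equally plausible analyses of the same archive can rank participants quite differently. This is exactly the kind of contingent analytical choice Knox asks us to attend to.

```python
from collections import Counter

# Hypothetical archive: (author, is_reply) pairs -- invented for illustration.
archive = [
    ("alice", False), ("alice", False), ("alice", False), ("alice", False),
    ("bob", True), ("bob", True), ("bob", False),
]

# Metric 1: a Tweet Archivist-style 'Top Users' list -- raw tweet volume.
by_volume = Counter(author for author, _ in archive).most_common()

# Metric 2: a hypothetical alternative that privileges replies, i.e.
# engaging with others' tweets rather than simply 'firing out' one's own.
by_engagement = Counter(
    author for author, is_reply in archive if is_reply
).most_common()

print("By volume:    ", by_volume)       # [('alice', 4), ('bob', 3)]
print("By engagement:", by_engagement)   # [('bob', 2)]
# Neither ranking is 'the truth'; each encodes a choice about what counts.
```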

What is interesting to consider is how this first experience of the Tweetorial, and the associated presentation of the analytics, might influence our future behaviours as learners if we were presented with a similar task. As Colin noted in our tutorial on Friday, learners can try to beat the machine, and this can have an adverse effect on both learning and outcomes. As a participant, I was aware that our conversation was going to be subject to analysis, but I didn’t know what form that analysis would take or what the ‘success criteria’ were. Now that I’ve seen them, and if I were being judged on these alone, I would be inclined to fire out as many tweets as possible (regardless of content) and to try to get more followers (to improve my ‘influencer’ ranking). Neither of these would have a positive or meaningful impact on my learning.

Liked on YouTube: The Coded Gaze: Unmasking Algorithmic Bias

The Coded Gaze: Unmasking Algorithmic Bias
Debuted at the Museum of Fine Arts Boston, The Coded Gaze mini documentary follows Poet of Code Joy Buolamwini’s personal frustrations with facial recognition software and the need for more inclusive code. Learn more at www.ajlunited.org
via YouTube https://youtu.be/162VzSzzoPs

[Quote image: Knox 2015]

[Quote image: Eynon 2013]

Knox

At the start of the course, I prepared a visualisation of the Knox reading which can be found here.

Knox highlights the shift away from notions of the ‘virtual’ and towards the ‘network’; the web is positioned as a positive force which can ‘support and enhance conventional social life’. Like Bayne on TEL, Knox counsels caution with regard to this positioning of digital technologies: it suggests that technology is just a ‘passive instrument’ in service to users. This stance fails to recognise the powerful economic and ideological forces which shape the digital tech industry; the web is not a neutral conduit for our interactions.

Networked, collaborative learning positioned as positive and beneficial

The move towards the ‘network’ aligns with a broader shift in education from teacher-centric to student-centred learning, and towards an understanding of learning as the social construction of knowledge.* Our own studies reflect this repositioning: what Garrison and Anderson (2003) refer to as ‘teaching presence’ does not dominate our interactions within our community.

*For a subjective discussion about the practical – and emotional – impact of some of these changes, you may want to read my exchange with James here.

Garrison, D. and Anderson, T. 2003. E-Learning in the 21st Century. London: RoutledgeFalmer.

Knox, J. 2015. Community Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1