
Final summary


My lifestream started frenetically, reflecting the confusion and disorientation that come with immersion in a new field and a new learning environment. I made 25 posts in the first week of the course and put an inordinate amount of time into crafting artefacts. In those first weeks I was grappling with challenging and fascinating content, and with producing content in an entirely new way: via multiple feeds and sources. On reviewing those early entries, many are focused on the practicalities of working together. My reflection – my metadata – is either missing or tentative (see here, here and here, for example); I was, as most new learners are, operating within Bloom’s domains of knowledge and comprehension.


As my lifestream progresses, the number of posts per week falls (mostly – the tweetorial skewed this) but the entries become more focused and reflective as I begin to move into the domains of application and analysis. This is particularly marked after the mid-point review by James, which offered targeted, specific advice on how to improve the blog. Feedback works.


A visual theme which runs as a thread throughout my lifestream is the Instagram shot of my laptop in various places. I reflected on the significance of this recurrent image here. I was in many different places when I took these photographs, but I was also only ever in one space: our MSCEDC learning space. The image also reflects my life as a cyborg, augmented by an array of mobile technologies which enable me to work, study and communicate anywhere.

In terms of stream feeds, Twitter dominates. On asking peers why they thought this was, ‘inertia’, ‘ease’, ‘connectedness’, ‘multimedia’ and a guaranteed lifestream feed were all cited as reasons. For me, it was immediacy. I could quickly feel connected to other learners on Twitter, whereas blog communications were asynchronous; this was highlighted particularly during the tweetorial.

In my experience, there was a lack of roaming between blogs (I reflect on this immobility here). I made comments on others’ blogs but, when visits weren’t reciprocated, I returned to Twitter and its sociable babble. My MOOC micro-ethnography artefact attracted by far the most comments of all of my blog posts. As I reflected here, there are a number of reasons why this might be: I posted fairly early in the week (so the arena wasn’t saturated with work to comment on); I included some personal information in the artefact; and, finally and most importantly, we were encouraged to do so. The movement between blogs was scaffolded by the learning task: teaching presence is key to building a successful online community of inquiry.

Overall, what the lifestream traces is a journey from the domains of knowledge and comprehension through to synthesis (via the various artefacts we have created) and evaluation (via the weekly summaries and this final blog post).



Lifestream summary: week 11


This week has been spent thinking about, and starting to collate content for, the final assignment*. At the start of the week, I was unsure about both its form and its content. However, following a very useful email exchange with James, the latter is now more defined: I’ve decided to create a photo diary of a typical day (or part of one) which highlights and reflects upon the various digital and technological entanglements which are part of my experiences. Deciding on the form is proving trickier due to a (perhaps unfair) PowerPoint aversion, but I’m getting there.

I’ve also spent some time reviewing this lifestream blog in preparation for writing the final entry next week. It’s interesting to observe the development of voice and form as it progresses. What becomes apparent as I assess the lifestream content is that we have become a learning community which uses Twitter extensively, more so (as James highlighted in last week’s hangout) than previous cohorts. It’s interesting to consider why this medium appeals; it is more ‘natural and immediate’ than commenting on others’ blogs and feels more akin to the sort of conversation we might have f2f. Reading and commenting on others’ blogs, however, offers the space and time for more considered and critical reflection. These are all interesting things to note as I think about how I can apply some of the techniques and approaches from the course in my own professional practice.

*as well as starting to move house…**

**…which in itself has brought with it a raft of reflections on the algorithmic results generated by the associated online activity…


Our Tweetorial


NB: this analysis focuses on Friday’s Tweetorial activity. The Tweet Archivist search terms don’t allow for a date range to be entered, so this post focuses on only one of the two days in which the #MSCEDC students and tutors were engaged in the Analytics Tweetorial. 

How has the Twitter archive represented our Tweetorial?

Necessarily, perhaps, the archive represents our Tweetorial in quantifiable terms and via a range of graphs, lists, charts and visualisations.

As Colin and Nigel both highlighted in our tutorial this morning, the archive uses rankings. The number of contributions made determines who made it to the top of the ‘Users’ list:
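Underneath, that ranking is just a count. A minimal sketch of the logic in Python, using invented usernames and tweets (not data from the actual archive):

```python
from collections import Counter

# Hypothetical (user, text) pairs standing in for the archived tweets;
# the real input would be the Tweet Archivist export, not this list.
tweets = [
    ("@nigel", "Cheese bomb! #mscedc"),
    ("@philip", "Who speaks for the learner? #mscedc"),
    ("@colin", "Rankings reward volume, not quality #mscedc"),
    ("@philip", "Learning analytics shouldn't be 'done to' students #mscedc"),
    ("@philip", "Thinking about the learner voice #mscedc"),
]

# The 'Users' list is simply a count of contributions per account,
# sorted from most to fewest tweets.
user_counts = Counter(user for user, _ in tweets)
print(user_counts.most_common())
# → [('@philip', 3), ('@nigel', 1), ('@colin', 1)]
```

Nothing in that computation looks at what was said, only at how often each account said something.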

The URLs mentioned and ‘Influencer Index’ are both represented as graphs:

The most used words, user mentions, and hashtags used are represented as word clouds:
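The word cloud is, at bottom, another frequency table: words are counted, common ‘stop words’ are filtered out, and a renderer scales each word’s size by its count. A rough sketch, again with invented tweet texts:

```python
import re
from collections import Counter

# Invented tweet texts standing in for the archive's corpus.
texts = [
    "Learning analytics makes the invisible visible #mscedc",
    "Does learning analytics obscure the learner? #mscedc",
    "A cheese bomb distracts us from learning analytics #mscedc",
]

# Count lowercase word tokens, skipping a small stop-word list;
# a word-cloud renderer would then size each word by its count.
stopwords = {"the", "a", "an", "and", "us", "from", "does"}
word_counts = Counter(
    word
    for text in texts
    for word in re.findall(r"[a-z#']+", text.lower())
    if word not in stopwords
)
print(word_counts.most_common(3))
# → [('learning', 3), ('analytics', 3), ('#mscedc', 3)]
```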

The tweet source is shown as a pie chart:

And, finally, images are presented in their entirety:

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

These visualisations, summaries and snapshots present us with quantifiable data about the Tweetorial. They are based on counts. From my subjective position, the two most interesting ‘pictures’ are the word cloud showing our ‘top words’ and the image of the Eynon quote. The former does, at least, provide some sense of what we discussed, including Nigel’s ‘cheese bomb’ which distracted us from our focus on analytics. The latter is the only piece of data which, I would suggest, provides some insight into the depth and quality of the discussion, hinting at the complexity of some of the ideas which were unfolding (even within the limiting constraints of 140 characters). As to how well the Tweet Archivist data represents my perceptions of the experience as a participant and a learner: it simply doesn’t. Many of my contributions to the Tweetorial were focused on the learner voice, on ensuring that the learner was not ‘done to’ by LA. And, ironically, these representations serve to obscure the learner and the learner’s experience.

As Knox proposes, many practitioners, researchers and big data developers claim that Learning Analytics ‘“makes visible the invisible”. In other words, there is stuff going on in education that is not immediately perceptible to us, largely due to scale, distribution and duration, and Learning Analytics provides the means to “see” this world.’ I would suggest that this presentation of the #mscedc discussion does the opposite: the qualitative is hidden behind crude quantitative representations. The complexity of the discussion, the pace of interactions, the quality of contributions and, ultimately, insights into what we actually learned from the exercise are missing from these visualisations and lists. They provide no sense of what it is to be a learner and no insight into my experience as a learner within the session.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

However, Knox goes on to suggest that ‘to critique Learning Analytics simply on the grounds that it makes certain worlds visible while hiding others remains within a representational logic that diverts attention from the contingent relations involved in the process of analysis itself.’ What is important is to recognise that these visual abstractions are not reality and that they don’t provide transparent insights into learning; nor should transparency itself be the aim. Knox again: ‘if we strive for Learning Analytics to be transparent, to depict with precise fidelity the real behaviours of our students, then we are working to hide the processes inherent to analysis itself.’ To focus on how accurately LA represents reality is to miss a sociomaterial trick: ‘my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we seem to remain locked-in to the idea that a “good” Learning Analytics is a transparent one.’ What is key, he posits, is to focus on ‘the processes that have gone into the analysis itself.’ So, in terms of what is presented to us here, the number of contributions is the measure of who is a ‘Top User’. As Knox highlights in his critique of the ‘Course Signals’ traffic-light system used at Purdue University, it’s worth considering why the number of contributions should be an indicator of being ‘top’. It provides no sense of how meaningful or relevant the participants’ contributions were, nor does it capture subtler factors, such as whether a participant was engaged in conversations and moving ideas along, or was simply ‘firing out’ their own tweets without reflecting on, or engaging with, others’. The consideration of what factors (technical, social, political, etc.) contribute to this indicator being used is, Knox posits, of real interest. We touched on this in the Tweetorial itself:

What is interesting to consider is how this first experience of the Tweetorial, and the associated presentation of the analytics, might influence our future behaviours as learners if we were presented with a similar task. As Colin noted in our tutorial on Friday, learners can try to beat the machine, and this can have an adverse effect on both learning and outcomes. As a participant, I was aware that our conversation was going to be subject to analysis, but I didn’t know what form that analysis would take or what the ‘success criteria’ were. Now that I’ve seen them, and if I were being judged on these alone, I would be inclined to fire out as many tweets as possible (regardless of content) and to try to gain more followers (to improve my ‘influencer’ ranking). Neither would have a positive or meaningful impact on my learning.

Tutorial notes


It was good to catch up with James and some of my peers in the Hangout on Friday. Colin made the most impressive entrance: he’d managed to get a green screen working behind him and, as the tutorial progressed, the images (all related to the course) shifted and changed.

The key focus of our discussion was our experience of the Tweetorial, how we felt about it as a learning experience and how our thoughts and behaviours were affected by the knowledge that our contributions were going to be analysed: we were in broad agreement that this knowledge did have an impact on how we approached the Tweetorial questions. However, having seen the fairly superficial data which emerged from the activity, it’s interesting to consider how our engagement might have altered had we seen an example of the sort of analytics which would be generated before we started…

It was interesting to discuss the observation that although Twitter is, conceptually, an asynchronous communication forum, a number of us felt pressure to contribute as quickly as possible. Trying to make sense of conversations which had branched and extended over a period of hours was, it was observed, difficult. Thus, for many of us, the Tweetorial experience felt either frenetic, as we tried to keep up with the multiple threads of conversation, or discombobulating, as we joined complex conversations involving multiple participants.

The brief notes I took during the tutorial can be found here: Tutorial 24.03.17


Lifestream summary: week 9


Having taken the learning analytics course last year, at the start of this week I was back in familiar territory, reading Siemens on LA and EDM and watching Ben Williamson’s lecture on the digital university. In the second half of the week, we engaged in a two-day ‘tweetorial’ and I found myself communicating with Ben directly about LA.

The tweetorial was very much a tweetathon and I was fascinated to follow Anne’s link to some emergent analytics around our engagement and communications over the two days. Nigel’s cheesy diversion had an impact on the data which was generated via Twitter.

It will be fascinating to see what further analysis offers up, but this initial insight provided evidence of the conflicting interpretations of what algorithms can offer us: order and chaos. The data generated by our discussion were, to an extent, captured and ordered by the algorithm, but the results are simultaneously ‘messy’ and require human agency to make sense of the ‘cheese’ in the data.

For me, thinking summatively about what we’ve focused on over the last 9 weeks, I keep circling back to Bayne’s term, ‘entanglement’ (Bayne, 2015).

The sociomaterialist perspective of ‘the constitutive entanglement of the social and the material’ (Orlikowski, 2007) and, therefore, the technical, is a seam which has run throughout our blocks of study and was highlighted in both the Siemens and Williamson readings. As Siemens highlights, learning cannot be reduced to data:

The tension, the interplay, between the technical – the algorithm – and the human, informed much of our discussion during the tweetorial. Discussions circled back to the subjective agency which informs LA – both in terms of data extraction and interpretation – and to the impact of data on the subjects – both teachers and students. Kitchin and Dodge’s definition of algorithms – cited by Williamson – reminds us that data are not objective:

I’m looking forward to drawing more strands of thought together as we progress into week 10.

Orlikowski, W.J., 2007. Sociomaterial practices: exploring technology at work. Organization Studies, 28(9), pp.1435–1448.

Lifestream summary: week 8


I spent some time in other spaces this week, firstly reviewing others’ ethnographies and then starting to look at the results of our algorithmic play. As I start to think about the final assignment, it’s interesting to see such creative use of a range of tools.

Renée: Screencast-O-Matic
Chenée: Sway
Daniel: SoundCloud
Daniel: Slidely
Cathy: Storify

With the shift in our focus to algorithmic cultures, as I roam in these spaces I have been thinking about the tracks and traces I am leaving and how my experience of the content is necessarily entangled with the technologies I am using. As Knox (2015) reflects:


As I discussed with James last week, the content in my lifestream is impacted by the choices we are making about the spaces we are inhabiting as we learn and explore.

In terms of my play with algorithms, it has been, firstly, lots of fun and provided a new lens through which I observed my own internet use this week. I appear to be ‘rules-orientated’ and found myself feeling slightly transgressive as I explored some of the words on the Google blacklist. Have I subconsciously absorbed an algorithmic blacklist? Or am I attuned to the fact that my interactions are being tracked by algorithms?

My activities this week have been captured here. As we reflected upon in Learning Analytics, the implications of the use of Big Data and the permeation of algorithms into education provide us with much to reflect on. I tried to reflect some of this in my Padlet, with the video about Knewton and the School of One embedded within a cluster of quotes from Knox and Eynon about the educational implications of the use of algorithms.

Lifestream summary: week 7


This week has been comprised of four key activities:

  1. completing my own mini-autoethnography: this submission has generated the most responses from my peers and I’ve been thinking about why that might be. It’s a video – is that format more engaging than some of the other media I’ve used? I submitted it fairly early on in our weekly cycle so the forum was less crowded with submissions. It contains some insights into my personal life. As I reflected in my response to Chenée’s comment*, I’m not really sure how I feel about this context collapse, but I recognise that the personal is of interest.
  2. responding to comments on my submission; the dialogue around the netnography has stimulated more thoughts and ideas about how to approach the final assignment.
  3. commenting on others’ netnographies; I intend to continue to do more of this over the coming days. As well as offering insightful commentaries on the MOOCs, the submissions also offer a variety of creative approaches to using a range of tools. I particularly liked Eli’s use of Adobe Spark and Myles’ use of Padlet.
  4. and, in light of James’ mid-course reflections on my lifestream, revisiting some blog posts and adding more reflective metadata to them: amended posts can be found here, here, here and here. The new content within these posts is shown in orange. In terms of reflecting on and consolidating what I’ve produced already, and in beginning to think about the final assignment, this was a useful exercise to undertake.

*’I had to be pushed by my partner to include the personal images: it sits uncomfortably with me to blend my private space with this public one (I know that this is something which you reflected on in your own lifestream) but he felt that I needed to reference why the medium of the MOOC wasn’t working to deliver the sort of mindful experiences which I get from other areas of my life. I think it works but I still feel a little uneasy about this “collapse of context”.’

Lifestream summary: week 6


This week, I finally received a response from Monash University regarding my ethnographic study of their FutureLearn Mindfulness course. As expected, I will be unable to quote course participants as they haven’t provided their consent. As Marshall discusses, consent must be obtained: ‘all of the conditions of informed consent must…hold…participants must be informed in advance of the research, and all data collected and the uses made of it needs to be specified accurately and completely’ (p.257). In anticipation of this outcome, my focus had already shifted towards what James informs me is an ‘autoethnographic’ approach – focusing on my experience of being a participant in the MOOC. As I’ve reflected already this week, I am interested in the tensions inherent in the use of a MOOC to deliver a mindfulness course, and this will be the focus of my ethnography. Baggaley’s reflections on the digestibility of ‘supersized’ MOOC content and the sense some participants have of feeling ‘overwhelmed’ are pertinent here. Adams et al.’s paper suggests that the effective use of video may override some of the issues of teaching at a massive scale; however, as the paper highlights, positive feedback from engaged participants must be set against an average non-completion rate of around 90%.

After reviewing some really excellent work by previous students on the course, I have decided to present my research via video and am currently working on this.

In terms of our community’s interactions, Twitter continues to offer much activity, while interaction on the hub has petered out. Cathy made a welcome visit to my lifestream this week. My aim next week is to roam a little more into others’ blogs. I’ve made a start here.

Adams, C. et al., 2014. A phenomenology of learning large: the tutorial sphere of xMOOC video lectures. Distance Education, 35(2), pp.1–15.

Baggaley, J., 2014. MOOCS: digesting the facts. Distance Education, 35(2), pp.159–163.

Marshall, S., 2014. Exploring the ethical implications of MOOCs. Distance Education, 35(2), pp.250–262.