Week 11 Summary

In week 11, I’ve consciously tried to wind back or wind down the Lifestream a little. I understand it is still assessed at this point – but by wind down I don’t mean stop. Rather, I mean ‘refocus’. I’ve tried to be selective about the content I’m feeding in, in order to focus on the assignment, and just feed content associated with that. It’s hard though, as in a sense adding to the Lifestream has supplanted my old habits of storing ‘to read’ texts elsewhere, and telling ‘the story of’ digital cultures through the Lifestream (and my own attempts to subvert algorithms that may be at play) has actually become quite addictive.

That said, I’ve been pursuing two themes in relation to the final assignment: the way that ‘imaginaries’ help create (educational) futures, and the notion of ‘algorithmic literacy’ and the potential to develop this. There’s been more of the former in the feed, but in part I think this may be just because it is better represented in media and research – I’m not convinced that the latter is the less worthy route to pursue.

Here are the ways I’ve been thinking about those themes, and some other bits of ‘life’ that have appeared in the stream:

Date / type of post / media linked to (topic):

  • 28 March (Tweet): poor statistical models used in predictions ('weapons of math destruction', Cathy O'Neil)
  • 28 March (Comment on blog): the role of algorithms in destabilising the single author (Matthew's blog)
  • 28 March (Pocket): Ben Williamson's blog post on how the imaginaries of 'education data science', combined with affective computing and cognitive computing, are leading to a new kind of 'algorithmic governance'
  • 29 March (Pocket): Ben Williamson's analysis of the UK media's editorial line on algorithms
  • 29 March (Diigo): how the metrics of a site (Facebook, in the study) prescribe what social acts are appropriate; evidence that society is shaped by tech as well as the reverse
  • 30 March (liked on YouTube): Audrey Watters' talk at the University of Edinburgh; relates to the role of 'imaginaries' and, though less so, to the need for algorithmic literacy
  • 31 March (Pinterest): podcast from Culture Digitally featuring Tarleton Gillespie and Ted Striphas; links to a need for algorithmic awareness and/or literacy
  • 31 March (Tweet): book announcement, Digital Counterculture & the Struggle for Community; links to our community block and to cyber cultures
  • 1 April (Diigo): Kirby's 2010 paper on the role of film in promoting technological 'imaginaries', and thereby making them possible realities
  • 2 April (Tweet): confirming assignment dates
  • 3 April (Diigo): snip from a programme I was presenting in, meant as an explanation for my lack of focus!

 

Interpreting the algorithmic interpretation

Warning: my analysis became somewhat unwieldy in length; for a fast read, skip to the summary.


In week 9 of #mscedc our course leaders hosted a two-day ‘tweetorial’. Our activity during this period (and indeed, all our activity on Twitter, including at other times) was analysed by underlying algorithms – but what do the visualisations and numbers really reveal? How useful are they? And to whom?


How has the Twitter archive represented our Tweetorial?

Tweet Archivist is a freemium Twitter analytics service. On the website, the commercial usefulness of the type of analysis offered is outlined (my emphasis):

We also analyze the archive for you, bubbling up information like top users, words, urls, hashtags and more. This allows you to find the influencers, measure campaign effectiveness, determine sentiment and view the most popular images associated with the tweets for this term. In addition, we do a language breakdown and a volume analysis based on number of tweets per day.

However, the usefulness of the analysis is also asserted more generally, with the promise of 'valuable insight into trends and behaviours'. Are these 'behaviours' relevant to education?

The narrative of Tweet Archivist is one of objectivity and the ability to 'make visible the invisible', which echoes the narrative frequently encountered with regard to learning analytics (Knox, 2014).

Figure 2: list of the measurements made by Tweet Archivist

The data is presented via drop-down menus/links surrounding the most recent activity for the hashtag followed. The free version of the site does not enable isolation of date ranges, and seems to be limited with regard to the time frame for which it can retrieve particular days' tweets (which may be related to Twitter's own policies). As such, for the purpose of this interpretation, I will be using analytics related to the second day of the Tweetorial (Friday, 17 March) as well as the ongoing archive data, which runs from 5 March to the present.

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial?

Figure 3: Tweets, Impressions and date(s)

A basic presentation of the number of tweets and impressions, which refers to ‘the total number of times tweets were delivered to timelines with this search or hashtag in this archive’.

These figures are presented without ‘judgement’; at this point there is no indication of how the numbers compare with other numbers. However, the underlying sense seems to be that more activity equals more value.

top users
Figure 4a: ‘Top users’ 5/3/17 to 27/3/17
Figure 4b: 'Top users' 17/03/17 only

Here, those people who used the hashtag #mscedc are ranked according to the number of tweets they sent containing the hashtag during each period. The labelling as 'top' users could be seen to imply value, equivalent to 'best', when in fact it refers only to 'most'. There is no link between the analysis and what might have been valued in the tweetorial, either explicitly (through 'likes' or comments on content, for example) or implicitly (valued, but without a public signal).

It may be worth noting that those who used the hashtag most during the tweetorial are also, primarily, those who have used the hashtag most on Twitter over the longer period.

Figure 4c: Comparison of number of tweets on 17/3/17 and over longer period 5-27/3/17, with ‘rankings’ (data from Tweet Archivist, copied to Excel)

Colin is an exception; in his own reflection on the event he noted that he did not expect to be amongst the highest tweeters during the tweetorial.
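To make the counting behind Figures 4a-4c concrete, here is a minimal sketch of the kind of calculation a tool like Tweet Archivist appears to perform: ranking users purely by the number of hashtagged tweets they sent in a period. The names, dates and data structure are hypothetical; the point is simply that 'top' here can only mean 'most'.

```python
from collections import Counter
from datetime import date

# Hypothetical records: (author, date) pairs for tweets containing #mscedc.
tweets = [
    ("colin", date(2017, 3, 17)),
    ("eli", date(2017, 3, 17)),
    ("renee", date(2017, 3, 10)),
]

def top_users(tweets, start, end):
    """Rank users by how many hashtagged tweets they sent between start and end."""
    counts = Counter(user for user, day in tweets if start <= day <= end)
    return counts.most_common()

# 'Top' for the whole archive versus the second tweetorial day only.
print(top_users(tweets, date(2017, 3, 5), date(2017, 3, 27)))
print(top_users(tweets, date(2017, 3, 17), date(2017, 3, 17)))
```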

top words
Figure 5a: ‘Top words’ 5-27/3/17
Figure 5b: ‘Top words’ 17/3/17

The word clouds produced by Tweet Archivist are indicative of the highest frequency words during each time period. As the number of tweets recorded on the second day of the tweetorial is equivalent to 13.19% of the tweets overall, it is not surprising that 'algorithms', 'data' and 'analytics' feature in both word clouds. 'Learning' was only used four times during the second day of the tweetorial. Is this significant? Why isn't 'LA' there? Without contextual information, little more can be understood than the principal topics discussed, and even this is misleading, as jokes and asides containing 'cheese' appear to be more significant to conversation than 'students'. The topics are further obfuscated by the inclusion of words which don't convey much meaning independently (I'm, their, yes, you're, got).
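A word cloud of this kind reduces to little more than a frequency count over tokenised tweet text. The sketch below (with hypothetical tweet bodies and a deliberately tiny stop-word list, not Tweet Archivist's actual method) shows why words like 'I'm' and 'yes' surface unless they are explicitly filtered out, and why frequency alone says nothing about how a term such as 'learning' was actually used.

```python
import re
from collections import Counter

# Hypothetical tweet bodies from the archive.
tweet_texts = [
    "Algorithms shape what data we see #mscedc",
    "I'm not sure the analytics capture learning at all",
    "Yes, but who owns the data? And the cheese?",
]

# A deliberately tiny stop-word list, for illustration only.
STOP_WORDS = {"the", "and", "but", "not", "a", "at", "we", "who", "what", "i'm", "yes", "all"}

def word_frequencies(texts, drop_stop_words=True):
    """Count word occurrences across tweets: the raw material of a word cloud."""
    words = []
    for text in texts:
        words += re.findall(r"[a-z']+", text.lower())
    if drop_stop_words:
        words = [w for w in words if w not in STOP_WORDS]
    return Counter(words).most_common(10)

print(word_frequencies(tweet_texts, drop_stop_words=False))  # "i'm", "yes" etc. surface
print(word_frequencies(tweet_texts))  # topics remain, but still with no context
```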

top URLs
Figure 6a: ‘Top URLs’ 5-27/3/17
Figure 6: ‘Top URLs’ 17/03/17

The 'top URLs' list illustrates how many times sites were linked to. Since the lowest number is 1 (once) for each URL provided, for both the short, one-day data and the longer, 22 days' data, it is difficult to know how Tweet Archivist decides what to include. Many more links that were shared 'once' could have been included.

Many of the URLs linked to are simply other tweets (4 of 12 for 17/03/17 and 2 of 25 for 5-27/3/17). Four of the links for the longer period are to the EDC course blog or individual course blogs. Other URLs are obscured (at a glance; they are still hyperlinked) as they use TinyURLs or Google shortcodes. One would think that these URLs would be indicative of what is valued enough by students/tutors to share. However, as noted, the inclusion of some links shared just once, when many other links were also shared just once, makes it unclear how Tweet Archivist selected what to include in the measure.
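The ambiguity noted above is easy to reproduce: once URL shares are counted, everything below the handful of repeatedly shared links is a long tail of ties at one, and any cut-off through that tail is arbitrary. A minimal sketch, with made-up placeholder URLs:

```python
from collections import Counter

# Hypothetical URLs extracted from archived tweets, one entry per share.
shared_urls = [
    "https://example.org/course-blog",        # a course blog, shared repeatedly
    "https://example.org/course-blog",
    "https://twitter.com/someone/status/1",   # 'links' that are simply other tweets
    "https://tinyurl.com/abc123",             # shorteners obscure the destination
    "https://example.org/paper-on-analytics",
]

counts = Counter(shared_urls)
top = counts.most_common(12)  # an arbitrary cut-off, like a 'top URLs' list
ties_at_one = [url for url, n in counts.items() if n == 1]

print(top)
print(len(ties_at_one), "URLs were shared exactly once; any subset shown is arbitrary.")
```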

source of tweet
Figure 7a: Source of tweets 5-27/3/17
Figure 7b: Source of tweets 17/03/17

Students were advised to use Tweetdeck for the tweetorial. It appears (though cannot be confirmed) that some students who would ordinarily have used Twitter Web Client switched to using TweetDeck as instructed. However, the actual figures represent not users, but Tweets sent from particular sources/platforms. All we really know is that, on the second day of the tweetorial, a larger proportion of Tweets were sent from TweetDeck. Is this because more students used TweetDeck, or because those students that used TweetDeck were more prolific in their tweeting than those using other platforms/devices?

Similarly, in the (slightly) longer term, more tweets have been sent from TweetDeck than from other sources. Do students who are inclined to tweet more have a preference for TweetDeck, does TweetDeck enable increased participation through Twitter, or do more students use TweetDeck than other sources? If we are more inclined to use TweetDeck and Twitter Web Client than iPhone, Android and Hootsuite, is anything suggested about our mobility, or lack thereof?

More students tweet from iPhones than from Android devices. Does this suggest more students have and use iPhones, or that iPhone users are more prolific tweeters than Android users? To whom is this information useful? App developers? iPhone marketing? It is unclear.
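The distinction drawn here, between counting tweets per source and counting users per source, is easy to show. In the hypothetical data below, TweetDeck 'wins' on tweet volume even though more distinct users tweeted from the web client; a single source-of-tweet chart cannot tell these situations apart.

```python
from collections import Counter, defaultdict

# Hypothetical (user, source) pairs, one per tweet.
tweets = [
    ("colin", "TweetDeck"), ("colin", "TweetDeck"), ("colin", "TweetDeck"),
    ("eli", "TweetDeck"),
    ("renee", "Twitter Web Client"), ("nigel", "Twitter Web Client"),
    ("james", "Twitter Web Client"),
    ("anne", "Twitter for iPhone"),
]

# What the chart shows: tweets per source.
tweets_per_source = Counter(source for _, source in tweets)

# What it does not show: distinct users per source.
users_per_source = defaultdict(set)
for user, source in tweets:
    users_per_source[source].add(user)
distinct_users = {source: len(users) for source, users in users_per_source.items()}

print(tweets_per_source)  # TweetDeck leads on tweet volume...
print(distinct_users)     # ...while more distinct users tweeted from the web client
```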

language

As far as I know, all conversations for the tweetorial were in English. However, at one point it was suggested that Eli was tweeting in Swedish, despite Twitter's translation tool not being able to translate the tweet. These metrics don't reveal much, as we were communicating monolingually and the contrary indications are erroneous. The limitations of the tools are revealed, and the data suggests English is the main language spoken by participants. However, this last point is a generalisation: we can only surmise that English is the lingua franca based on the given 'evidence', not that it is students' preferred language.

Volume over time
Figure 8a: ‘Number of Tweets per day’ 5-27/3/17

The longer term view shows that, as would be expected, the volume of Tweets spiked during the tweetorial.

To gain more detailed information about Friday 17 March, I used Mozdeh to extract a timeline. The information could not be obtained using the freemium version of Tweet Archivist.

Figure 8b: Volume of Tweets by hour, 9am to 10pm 17/03/17

The graph shows that the highest volume of Tweets was sent at 10am (GMT), with smaller peaks at noon, 3pm, 5pm and 8pm. From this style of presentation, it is unclear whether these peaks were created by more prolific tweeting by individuals, or because collectively more people were online at these times. It is also unclear whether participation was influenced by geographical location and time zone. Were particular users active over the entire day, others for short bursts, or is there a different explanation? The patterns behind active periods are not revealed in any detail here. Looking at specific users could reveal something more about the conditions the data is produced in, but such analytics are not provided by Tweet Archivist.
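The hourly timeline is essentially a histogram of tweet timestamps bucketed by hour, which is exactly why it cannot distinguish one prolific tweeter from many simultaneous ones: both produce the same bar. A minimal sketch with hypothetical timestamps, contrasting tweets per hour with distinct users per hour:

```python
from collections import Counter
from datetime import datetime

# Hypothetical (user, timestamp) pairs for 17 March 2017.
tweets = [
    ("james", datetime(2017, 3, 17, 10, 5)),
    ("james", datetime(2017, 3, 17, 10, 12)),
    ("eli", datetime(2017, 3, 17, 10, 40)),
    ("colin", datetime(2017, 3, 17, 15, 3)),
]

# The bars a timeline tool draws: tweets bucketed by hour.
volume_per_hour = Counter(ts.hour for _, ts in tweets)

# The distinction those bars cannot show: how many distinct people were tweeting.
users_per_hour = {h: len({u for u, ts in tweets if ts.hour == h}) for h in volume_per_hour}

print(sorted(volume_per_hour.items()))
print(sorted(users_per_hour.items()))
```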

user mentions
Figure 9a: ‘User mentions’ 5-27/3/17
Figure 9b: ‘User mentions’ 17/03/17

Tweet Archivist counts the number of mentions users receive in tweets containing the #mscedc hashtag. What does this reveal? Presumably, receiving mentions, for example through replies, or because more people choose to involve you in their conversations, is viewed as a positive measurement; commercially it might be indicative of how engaged you are with clients, for instance. However, since question numbers were not used within our tweetorial, mentioning other users (through reply) was the only way to keep messages organised. Because of this, more mentions may simply be related to starting conversation threads/posing questions (e.g. James on the Friday of the tweetorial), and responding to questions earlier, which could result in gaining mentions in subsequent replies.

There does not seem to be a correlation between the number of tweets sent and the number of mentions received. Without completing a content analysis, it is impossible to know whether mentions were received in response to questions posed (as they presumably were by James in the tweetorial), or whether the mentions added information to points made or challenged/complimented/complained about them. Further, since Tweet Archivist reports on mentions received but not on mentions given, there seems to be a perceived value in receiving mentions, but not in giving them. Or is the data just more readily available for the former? I would have thought such 'data' to be equally accessible.

Tweet Archivist does not provide network mapping to see the tweets between individual users. I openly admit to not knowing how to control the parameters within Mozdeh, but this is the network map produced for 17/03/17:

Figure 10: network of user connections in tweets for the top 20 users: a) tweeter to @tweetee and b) @tweetee1 to @tweetee2 networks (where 'top' is determined by most tweets TO a user)

This network map suggests (I think; I am not experienced at reading such diagrams) that the most communication on Friday 17 March connected James, Philip, Eli, Nigel, Daniel and Colin. It suggests my interactions were greatest with Colin, Eli and Nigel. This is based on being mentioned in these participants' tweets, but it is deceptive, as just two more mentions result in a considerably thicker connection line (connecting me to Colin versus me to Daniel, based on their mentions of me in tweets).

It's also important to look at how the choice of who to include changes the visualisation. In the presented map, 'top' users are those who had the most tweets addressed to them. How would changing the criteria for 'top' influence who is included in and excluded from the visualisation? For instance, 'top' could be determined by tweets from, or tweets from and to, or be based on content analysis. If used for assessment purposes, or to influence algorithmic 'nudges' in education, using basic quantitative measures may encourage students to 'game' the system and interfere with actual engagement, while simultaneously producing a picture which is perceived to represent it.
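A sketch of how such a map might be built, and of how the choice of 'top' criterion changes who appears in it. The names, edges and mention counts are hypothetical, and networkx is used only because it is a common choice for this kind of graph, not because it is what Mozdeh does.

```python
import networkx as nx

# Hypothetical directed mention edges: (tweeter, mentioned user, number of mentions).
mentions = [
    ("james", "eli", 5), ("eli", "james", 4), ("colin", "renee", 3),
    ("renee", "colin", 1), ("nigel", "james", 2), ("daniel", "renee", 1),
]

G = nx.DiGraph()
for source, target, weight in mentions:
    G.add_edge(source, target, weight=weight)

def top_by_mentions_received(graph, n=3):
    """'Top' as in Figure 10: users receiving the most mentions."""
    return sorted(graph.nodes, key=lambda u: graph.in_degree(u, weight="weight"), reverse=True)[:n]

def top_by_mentions_given(graph, n=3):
    """An alternative criterion: users giving the most mentions."""
    return sorted(graph.nodes, key=lambda u: graph.out_degree(u, weight="weight"), reverse=True)[:n]

print(top_by_mentions_received(G))  # one picture of who 'matters' in the network...
print(top_by_mentions_given(G))     # ...and a different picture from the same data
```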

Hashtags

Analysis of hashtags primarily reveals that as a group we are not very inclined to use hashtags, and that when we do, the hashtag is frequently not directly connected to the topic we are discussing. Course tags seem to be most used.

Figure 11a: ‘top’ hashtags used 5-27/03/17 – highest used hashtag was used 8 times
Figure 11b: hashtags used 17/03/17 – 1 use of each #

Note that figure 11a needs cross-checking, as it does not even include #mscedc; the included hashtags seem to be faulty. If #mscedc is deliberately excluded from the set, why does it appear in figure 11b?

Images
Figure 12: image shared on 17/03/17

Just one image was shared on 17/03/17 and, pertinently, it was an image of text. Could this be construed as a subversive act against the 'infrastructure' hosting our tweetorial? Pushing back against the 140-character limit in an attempt to engage more profoundly with the subject matter?

Or is it a subversive act against algorithmic surveillance? In using an image, does Anne's tweet elude automatic analysis and typical data mining approaches?

Influencer Index
Figure 13a: Follower counts of those who used the #mscedc hashtag 5-27/03/17
Figure 13b: follower counts of those who used the #mscedc hashtag 17/03/17

The follower counts reveal that we have attracted some people who happen to have a lot of followers to our #mscedc discussions. They do not indicate how involved these people are in our discussions. Nor do they indicate what the value is of having people with more (or fewer) followers involved in your discussion. For some students, an increased chance of being seen when participating online, with one's emergent identity, could actually be perceived as a threat. In this case, it might be desirable to have people with fewer followers involved in a conversation. Alternatively, students may see those with many followers as gateways to more resources, and to more people who could potentially answer their questions or connect them with ideas. How does this work in reality, though? For many of our conversations we replied to previous tweets on the same thread or question. As a result (as I understand it), only people who follow both the tweeter and the tweeter(s) they replied to would see the message. The visibility of any tweet would further be affected by Twitter's algorithmic filtering. Yet the number of followers, and the scale and reach that a large following is perceived to entail, is valued, hence this metric is included in the analysis.

Do these visualisations, summaries and snapshots accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

Eynon (2013), who is also referred to in Anne's image tweet above, reminds us that while we can count all kinds of things, the numbers alone are not a measurement of the value people place on them. The same is true of these pictures of our tweetorial. For me, on the second day of the tweetorial the significant discussions (or comments that triggered 'significant thinking') were around:

  1. ownership of data, and rights and responsibilities related to how that data is used (and by whom);
  2. the impact (and interference) of algorithms on the research process; and
  3. ways in which perceptions and values are algorithmically shaped.

Neither these topics, nor my/our thinking about them are conveyed clearly by the focus on most used words. The word cloud is reductionist, and to me, does not seem reflective of the thought that went into the discussions.

In addition to 'academic conversations', I enjoyed the supportive, sometimes amusing banter of my peers. The record of cheeses, rollerskates or tales of spam may make it into the archive as decontextualised words, but stripped of their context they are meaningless, perhaps even making us all look a little deranged!

Positives of the experience aside, there was a point at which 'life interrupted' and I had to disconnect. This also happened on the first day, when I had back-to-back teaching and could not engage with questions/ideas. This experience remains invisible within algorithmic reporting.

It is perhaps most useful to look at the visualisations to see the world in which they are produced, rather than the events they report on. In this sense we would be following Knox's (2014) advice, in seeking not to see "the reality 'behind' the image, but how and why the image itself was produced". In this case, it seems that many of the measurements are produced with a commercial/advertising model of getting maximum eyes on content, through the scale and reach of participants. For these purposes, it is important how many times a key (or brand) word is used, and how many people might see it being used, in order to establish brand identity and identification with the brand. Is this a model we can apply to learning and, more broadly, to education?

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

Previously I've blogged about how network mapping could be used to help teachers monitor online group work and ensure all participants are 'involved'. However, the automated version of this seems too rudimentary. Counting instances of interaction or reference to other learners not only fails to identify whether interaction is meaningful, but is also easily 'gamed', with the potential to alter behaviours based not on engagement but on giving the illusion of engagement.

Similarly, reporting of most used words does not capture the complexities of learning, nor advise teachers about whether students have used terms appropriately and meaningfully to talk about the concepts behind them. This is not to say that there is no use for such visualisations but, I feel, it is necessary to use such summaries in consultation with students, or in concert with ethnographic approaches, so that whatever is captured can be interpreted meaningfully.

Measurements of periods in which students are most active could help with the provision of support, within reason. For instance, my students tend to use our LMS most in the very early hours of the morning. It’s practical to work with this information by ensuring IT capacity during their active times; less so to provide teaching support.

I am less able to find a use for measurements such as the influencer index. Within education, I fear such measures would be more likely to reinforce existing inequalities than to improve education.

Week 9 Summary

This week the focus on algorithms turned more specifically to learning analytics (LA), with a video lecture by Ben Williamson (2014), and George Siemens' (2013) paper on the emergence of LA as a discipline. In my own explorations, the focus has been more on how analytics might be used to improve learning, and obstacles to this, including the potential subversion of analytics by competing organisations. For the opportunities to improve learning, I wrote in response to Lockyer, Heathcote and Dawson's proposal for a check-points and processes framework to evaluate learning design, Durall & Gros' (2014) suggestion that LA could be used by students as a metacognitive tool to underpin self-directed and self-regulated learning, and Wintrup's (2017) assertion that LA could be used by teachers and students to work on the process of learning. Wintrup's paper also contributed significantly to the development of my understanding of the risks of using LA to "improve" student engagement and learning, alerting me to a lack of alignment between the way 'student engagement' is used within LA and the way it is used in established research (Koh, 2001, for example). Wintrup also writes about how our ability to measure specific data points can result in these points being viewed as significant within assumptions about quality, thereby shaping learning in potentially unwelcome ways. This notion of analytics producing worlds rather than just reporting on them (Knox, 2015; Kitchin & Dodge, 2011) was also taken up in discussion of an article on the potential for AI to be used by authoritarian regimes, and in reference to IBM's visions of cradle-to-career tracking.

The impact of how – and by whom – analytics are applied was a recurring theme within most posts this week. It also appeared in the Tweetorial – to which many of my lifestream posts this week relate. These will be unpacked more fully in week 10, but the main concerns expressed within discussions I took part in related to ownership of data, the intrusion of corporate motives into learning, value-laden assumptions implicit in LA, and the partial nature of the picture that LA offers about learning and student engagement.


Week 7 Summary

For such a busy week my lifestream seems relatively quiet. Mostly, I have been commenting on other people's micro-ethnographies, and working on creating my own: final observations, analysis of data and presenting findings. The lack of observable data generated while working on my ethnography speaks to a post from last week in which I referenced Lesley Gourlay (2015): the narrative of student engagement privileges publicly observable ways of being a student and undervalues quiet, solitary acts. Yet, in the end, the product of my silence is observable, both in the Prezi-turned-video 'breaking up with MOOC' and the wordier, text-based Sway presentation 'looking for community'.

Key themes arising out of my own ethnography and those of my peers included:

  • an instructionist or behaviourist focus and transmission pedagogies (Dirk)
  • discordance between subject matter and delivery (Helen)
  • constructionist pedagogy and participant formation of connections around the materials within their own, place-based communities (Clare)
  • the scale of the MOOC, course design and student motivations impeding community formation (Stuart)
  • the potential to enrich and strengthen community through an expansion of participant roles to teachers, contributors and storytellers, and the role of personally meaningful disclosure in creating a sense of kinship (Anne)
  • the role of the LMS/digital infrastructure in opening up or shutting down participant interaction (mine)
  • the impact of shortness of time and lack of anticipated future interaction on the developmental progression of communities (mine)
  • the importance of personal motivations (Dirk, Linzi) and validation (Linzi)
  • financial incentives for MOOC providers (Linzi)
  • the role of empathetic listening in community building (Anne).
Kozinets, 2010, p. 28. The interaction period in my MOOC was too short to see norm development or much beyond identity exchange.

In other (non-comment/non-ethnography) posts, connections were made to some of the ideas arising out of the ethnographies. From Pinterest, a connection was made to the importance of empathetic listening in building a MOOC community, as well as to the value of facilitating location-based communities for MOOC participants. Another Pin, from Martin Weller, focused on the need for financial sustainability in order to make MOOCs viable. Through Diigo, I shared an article which gave me insight into research approaches for examining social learning within MOOCs.

Also through Diigo, I followed up on my questions about materiality and discourse from last week. I hope to return to these ideas, looking further at agency.

And now: onward to algorithmic cultures!

Week One Summary

There have been several recurring themes for me during the first week of #mscedc:

  • The need for diversity


While setting up IFTTT, a Twitter conversation jumped to algorithmic cultures and the 'filter bubble' (Pariser, 2011). Starting with boyd's (2017) ideas of self-segregation, talk turned to motives for such segregation and the need for diversity in networks to support democratic process. The call for diversity was echoed in posts about Ghost in the Shell, in which characters suggest that similarity weakens the group and difference is the foundation of life.

  • Memory


The short film Memory 2.0, as well as Eter9 (which promises an eternal digital life), caused discomfort connected to memories being recreated potentially without the consent or presence of those involved. Similarly, encounters with extropianism through Dahl's 'William and Mary' (1961) and a comic ('transhumanism gift cards') raised questions about the ownership of disembodied minds (including memory data) and potential changes in the terms and conditions of service by corporate 'body' or 'eternity' providers. Memory was also considered in connection to identity in discussion of Robot & Frank (2012) and the character Motoko in Ghost in the Shell.

  • Lack of clarity about the ‘natural’ human

This arose from readings of the body as a site of cultural activity and quest for social distinction (Bourdieu, 1984; Williams & Bendelow, 1998), as well as recognition of the difficulty in defining ‘natural’ human effort in sport.

  • Technology’s influence on culture
Image: Neolithic tools by Michael Greenhalgh (CC BY-SA 2.5)

From the impact of changed affirmation practices on self-segregation to questions of whether being assessed changes participation and musings about the affordances of print vs film, I was repeatedly drawn to the idea of technology not just as tool but as co-creator of culture.