The Tweetorial

How has the Twitter archive represented our Tweetorial?

1. Ranked league tables

Winners’ podium (image from http://www.eventprophire.com)

The first thing that struck me about the analytics is that many of them are ranked, using words like ‘top’, or are heat-map style representations in which the most used words or most frequently mentioned contributors are shown in decreasing order of size.

Studies such as Cherry and Ellis (2005) indicate that norm-ranked grading drives competition.  Whilst this is not the intention of the analytics, I did find myself taking an interest in where I ‘ranked’ in the various tables and charts, and had some sense of achievement in appearing in the ‘top’ half or higher in most of them.

This sense of achievement is, of course, entirely spurious.  Most of the results indicate quantity rather than quality.  Siemens (2013) raises this as an issue: “Concerns about data quality, sufficient scope of the data captured to reflect accurately the learning experience, privacy, and ethics of analytics are among the most significant concerns”.  Dirk’s Tweetorial analysis highlights this well, and he asks a similar question: “Is any of the presented data and data analysis relevant at all? Does it say anything about quality?” While it doesn’t for the use we are making of it, for a marketeer, knowing who the key influencers are would be very useful.

2. Participation

I was working from home on the day of the Tweetorial so was able to join in for several hours.  Borrowing from Kozinets’ (2010) classifications of community participation, my own contribution to the event felt like a reasonable balance between ‘mingler’, ‘lurker’ and ‘insider’.  The quantitative nature of the analytics does not enable any distinction between these types of participation.

The high number of mentions I had was largely due to my experiment early in the day, attempting to ‘attract’ algorithm-driven followers through keywords.  I had noticed that I was gaining followers from the data analytics and artificial intelligence fields, presumably based on the content of my tweets, so I decided to try tweeting the names of cheeses to find out if this yielded similar results.  Helen took up the theme and ran with it, and this became the source of some playful behaviours and social cohesion over the course of the day.  A good example, perhaps, of the ‘first follower’ principle in Derek Sivers’ (much debated) ‘Leadership lessons from dancing guy’ video:

3. Most used words

Interestingly, the word ‘cheese’ and other words such as ‘need’, which appear quite prominently on the heatmap below shared by Anne, do not appear in the analytics linked from the course site.  This is likely to be due to the capture period selected and, if so, it illustrates how statistics can be manipulated, both intentionally and unintentionally, to convey a particular narrative.
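To illustrate how sensitive the ‘most used words’ are to the capture period, here is a minimal sketch in Python. It assumes the archive has been exported to a hypothetical CSV file with ‘created_at’ and ‘text’ columns (not the actual Tweet Archivist format); the same counting code returns noticeably different rankings depending on the window passed in.

```python
# Sketch: how the chosen capture window changes the 'most used words' ranking.
# Assumes a hypothetical CSV export with 'created_at' (ISO format) and 'text' columns.
import csv
import re
from collections import Counter
from datetime import datetime

def top_words(path, start, end, n=10):
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            when = datetime.fromisoformat(row["created_at"])
            if start <= when <= end:
                counts.update(re.findall(r"[a-z']+", row["text"].lower()))
    return counts.most_common(n)

# Placeholder dates: a one-day window and a two-day window over the same archive
# will usually produce two different 'top ten' lists.
print(top_words("tweetorial.csv", datetime(2017, 3, 16), datetime(2017, 3, 17)))
print(top_words("tweetorial.csv", datetime(2017, 3, 16), datetime(2017, 3, 18)))
```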

The most used words seem fairly constrained: the words I’d expect to see, given the nature of the questions, are there, but having taken part in the Twitter ‘conversation’ I can see that they do not capture the diversity of the topics discussed.  Some of the more diverse words do show up in the hashtag heat map.

Cathy’s Thinglink summary of the Tweetorial points out the frequent use of the word ‘perhaps’, and she offers a possible explanation: “It may reflect a tentative and questioning aspect of some tweets”.  I know I tend to use the word when I have not qualified a statement with a source, or when I feel I’m interpreting a source differently to the way the author intended, so this might be another explanation…perhaps.

Overall, while the analytics impose some order on the presentation of the data, human interpretation by someone who was present during the event (shades of the ethnography exercise here) is necessary to make sense of them.  As Siemens (2013) points out, “The learning process is essentially social and cannot be completely reduced to algorithms.”

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

1. Volume over time

This is another example of the time frame used providing only limited insight.  In this case, the fact that the number of tweets increased markedly on the days of the Tweetorial is hardly an insight at all.  I’ll refrain from using a popular British idiom involving a fictional detective here, but this would only have been an insight had the reverse been true, or had there been no increase.  Had they happened, both of these alternative scenarios would also have required human interpretation to make any sense of them.

A more useful time frame might have been 15-minute slots over the course of the two days (or even each minute), as the data could then have been aligned to when new questions were asked by Jeremy or James.  It would then have been possible to see the different levels of activity following each question and pass judgement on which were the most effective at generating debate.  However, even with a greater degree of granularity, it still wouldn’t have been possible to attribute an increase in activity to a tutor question, as it could also have been due to a supplementary question asked by one of the students.
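As a rough sketch of what that finer granularity could look like (again assuming a hypothetical CSV export with a ‘created_at’ timestamp column), the tweets could be binned into 15-minute slots and compared against the times each question was posted:

```python
# Sketch: bin tweets into 15-minute slots so activity levels can be compared
# against the times the tutors' questions were posted.
# Assumes a hypothetical CSV export with a 'created_at' timestamp column.
import pandas as pd

tweets = pd.read_csv("tweetorial.csv", parse_dates=["created_at"])
per_slot = (
    tweets.set_index("created_at")
          .resample("15min")   # one row per 15-minute slot
          .size()
          .rename("tweet_count")
)

# Placeholder question times; the real ones would be read from the archive itself.
question_times = pd.to_datetime(["2017-03-16 10:00", "2017-03-16 14:00"])
for t in question_times:
    slot = t.floor("15min")
    print(t, "->", per_slot.get(slot, 0), "tweets in that slot")
```

Even then, as noted above, a spike after a question could just as easily be a reply chain started by another student.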

2. The contribution of others

The user mentions heat map has Jeremy and James as central to the discussions, presumably because a lot of the tweets were posted as replies to their questions.  While they were active contributors, I don’t think they were as central to the discussions as the heat map would suggest; indeed, the focus moved around between contributors as the discussions progressed.

3. My own contributions

I’ve already made some observations about quantity versus quality, and the top tweeter, Philip, has rather humbly (and unnecessarily) made similar self-deprecating comments about his own contributions.

Being purely quantitative, the analytics would provide no useful data if students’ contributions were being assessed and graded for educational purposes.  I made a similar point during the Tweetorial: simply counting the number of tweets is similar to the way some learning management systems count learning as ‘completed’ if a learner opens a PDF or other document.

As well as academic discourse, I believe some social interaction and risk-taking by participants is good for healthy debate, but again the limited analytics we have available do not provide any insight into this type of community participation.

4. Images 

I’m not sure if it’s because I’m a particularly ‘visual’ person, but I found the images gave by far the most accurate representation of how the Tweetorial felt to take part in.  They capture both the academic and social aspects of the conversations and they provide a useful ongoing resource.

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

1. The format

As a medium for education, the format would take some getting used to.  The multiple streams of discourse can be difficult to follow, and I felt the conversation had often moved on by the time I had reflected on a particular point and formulated my answer.  I experienced a very similar situation during a previous course, when I took part in a synchronous reading of one of the set papers and an accompanying Twitter chat.  It was soon clear that everyone read at a different pace, and before long the whole thing was out of sync and one paragraph was being confused with another.  Tools such as TweetDeck and Hootsuite do help visualise the conversation by allowing the user to split a continuous stream into multiple columns, for example one column for a specific person, another for key word(s), and so on.

I see some potential as a means of kick-starting a discussion: the pace and multi-modality can generate a lot of ideas and links to resources very quickly.  Follow-up activities could then explore the various threads in more detail, with further Tweetorial(s) to reinvigorate any topics that slow down or stall.

In this experiment there was some value in not knowing exactly what analytics were going to be recorded, as this made it less likely that our behaviours would be influenced.  Personally, I had forgotten there would be any analysis by the time the second question was asked.  If I was going to use this format with my learners and analytics were going to be used, I think I would adopt an open model and be clear up front about the limited nature of what was going to be recorded and how it would be used.

2. The analytics

In his blog post ‘Abstracting Learning Analytics’, Jeremy Knox writes: “… my argument is that if we focus exclusively on whether the educational reality depicted by analysis is truthful or not, we seem to remain locked-in to the idea that a ‘good’ Learning Analytics is a transparent one.”

In the same post, Knox refers to a painting of Stalin lifting a child and points out that there might be more to be understood from abstracting this depiction than from “attempting to come up with a new, more accurate painting that shows us what Stalin was really doing at the time.”

So, what if we take a more abstract view of the depictions of the Tweetorial presented by the Tweet Archivist analytics?  Following Knox’s lead, perhaps the real questions we should be asking include:

  • Why have these particular data been selected as important?
  • Why is the number of mentions an individual receives considered more important than, for example, the number of links to external resources they provide?
  • Why is a ranked or heat map view used rather than a spider graph or other mechanism that might better demonstrate connections?
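On that last point, a connection-centred view is easy to prototype.  Here is a minimal sketch using the networkx library, assuming the archive has already been reduced to (author, mentioned user) pairs; the pairs shown are placeholders rather than the real data.

```python
# Sketch: a mentions network as an alternative to ranked tables or heat maps.
# Assumes the archive has been reduced to (author, mentioned_user) pairs;
# the pairs below are placeholders standing in for the real parsed tweets.
import networkx as nx
import matplotlib.pyplot as plt

mentions = [
    ("student_a", "jeremy"), ("student_b", "james"),
    ("student_a", "student_b"), ("student_c", "student_a"),
]

graph = nx.DiGraph()
graph.add_edges_from(mentions)

# Node size still reflects how often someone is mentioned (in-degree),
# but the edges keep the 'who talks to whom' structure visible.
sizes = [300 + 600 * graph.in_degree(node) for node in graph.nodes]
nx.draw_networkx(graph, node_size=sizes, arrows=True)
plt.axis("off")
plt.show()
```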

Knox brings this idea of taking a more abstract view of analytics back to education: “What may be a far more significant analysis of education in our times is not whether our measurements are accurate, but why we are fixated on the kinds of measurements we are making, and how this computational thinking is being shaped by the operations of the code that make Learning Analytics possible.”

In the case of the Tweetorial, analytics were provided to us, possibly in the knowledge that they would raise precisely the sort of ‘lack of transparency’ questions I have discussed above.  In reality, I could take Dirk’s example a step further and carry out my own data collection and analysis, or use a different tool such as ‘Keyhole’ (shown below), which provides additional views such as ‘sentiment’ scores (the percentage of tweets that are positive, negative or neutral) and any gender bias.

Analytics from keyhole.co
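Out of curiosity, that kind of ‘sentiment’ split is also straightforward to approximate at a toy level.  The sketch below is a deliberately crude stand-in, using a hand-rolled word list and assuming the tweets are available as a plain list of strings; a tool such as Keyhole will use far more sophisticated models, so this only illustrates the idea, not the method.

```python
# Sketch: a very crude positive/negative/neutral split over the archive,
# as a toy stand-in for the kind of 'sentiment' percentages Keyhole reports.
# The word lists are placeholders, not a real sentiment lexicon.
POSITIVE = {"good", "great", "useful", "interesting", "love"}
NEGATIVE = {"bad", "poor", "useless", "boring", "hate"}

def sentiment_split(tweets):
    counts = {"positive": 0, "negative": 0, "neutral": 0}
    for text in tweets:
        words = set(text.lower().split())
        if words & POSITIVE and not words & NEGATIVE:
            counts["positive"] += 1
        elif words & NEGATIVE and not words & POSITIVE:
            counts["negative"] += 1
        else:
            counts["neutral"] += 1
    total = max(len(tweets), 1)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

print(sentiment_split(["Great discussion today", "This heat map is useless", "Perhaps…"]))
```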

Similarly, in my own professional practice I could take a critical look at the data we’re collecting and ask some fundamental questions about what it tells us about our organisation and what we value in our learners.

References:

Cherry, T. and Ellis, L.V. (2005) ‘Does Rank-Order Grading Improve Student Performance? Evidence from a Classroom Experiment’, International Review of Economics Education, 4(1), pp. 9-19.

Kozinets, R.V. (2010) ‘Understanding Culture Online’, in Netnography: doing ethnographic research online. London: Sage, pp. 21-40.

Siemens, G. (2013) ‘Learning Analytics: the emergence of a discipline’, American Behavioral Scientist, 57(10), pp. 1380-1400.

Sivers, D. ‘How to start a movement’, TED talk. Available at: https://www.ted.com/talks/derek_sivers_how_to_start_a_movement
