More on Tweetorial Analysis
Questions from EDC:
- How has the Twitter archive represented our Tweetorial?
- What do these visualisations, summaries and snapshots say about what happened during our Tweetorial, and do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?
- What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?
In terms of quantitative data, the Twitter archive for #mscedc has represented our Tweetorial event in the following ways (see the quotation below):
“Besides providing us with quantitative data (top users, top words, top URLs, source of tweets, language, volume over time, user mentions, hashtags, images and influencer index), what meaning do these numbers give us and what is the significance?” (quote taken from my other blog post found HERE)
But what are we missing? What about the qualitative side? For instance, we know that Philip posted the most tweets, and that Angela did not participate (she did not contribute any tweets during this time). What does this say about these two learners? Given the numbers, did Philip learn more than Angela? This is a tough question, and one we cannot answer with certainty, because Angela may have learned a lot – perhaps she followed every tweet in our Tweetorial and read all the links, etc. If that was the case, then Angela probably gained plenty of knowledge and experience from the Tweetorial. It is difficult to assess accurately how much knowledge was gained, or whether any learning took place, simply by analysing the metrics.
In looking at my own contributions during our Tweetorial, I found the analytics section on my own Twitter profile (something I hadn’t really investigated before).
Here are some screenshots I took of the data:
From this data, I see that from day one of the Tweetorial to day two (Mar. 16-17) I gained more ‘likes’, engagements and clicks. Does this mean that I was beginning to learn more, or that I was gaining more influence in our Tweetorial #mscedc community? I’m not sure of the answers, but it is certainly interesting (and exciting) to see the progression via analytical information.
And here are some screenshots from the Audience section of my Twitter analytics:
From this data, it is interesting to note the gender split: the percentage of males in my audience is higher than females (what does this say/mean?)… And the largest share of my audience falls outside my own age range (I’m engaging a younger audience!). There are no surprises, however, in that the highest percentage of my audience comes from Canada (where I live). I credit the 16% UK audience engagement to being a student at the University of Edinburgh (obviously!).
I wonder how this use of analytical data could be applied to my primary professional practice of teaching figure skating; how can a physical (kinesthetic) sport be analysed via Twitter?
I sometimes have my figure skating students track their jump attempts in a log over a period of time. They perform either five or 10 of the same jump, then record how many out of those five or 10 they landed successfully (‘successfully’ meaning they landed on one foot, with minimal shakiness and with adequate flow, etc.). At the end of a month or so, my skaters can see if they have improved their consistency with a particular jump – the data reveals trends and results that are beneficial for learning and for their own progression in the sport. Perhaps, in future, I could have my students incorporate Twitter in this exercise to have a digital analytical log of their jump progress!
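To make this tracking concrete, here is a minimal sketch (in Python) of what such a digital jump log might look like. The dates, jump names and counts below are invented for illustration, not real student data:

```python
from datetime import date

# Hypothetical jump log: each practice session records the jump type,
# the number of attempts, and how many were landed cleanly.
sessions = [
    {"date": date(2017, 3, 1),  "jump": "double loop", "attempts": 5, "landed": 2},
    {"date": date(2017, 3, 15), "jump": "double loop", "attempts": 5, "landed": 3},
    {"date": date(2017, 3, 29), "jump": "double loop", "attempts": 5, "landed": 4},
]

# The success rate per session reveals the consistency trend over the month.
for s in sessions:
    rate = s["landed"] / s["attempts"] * 100
    print(f"{s['date']}: {s['jump']} landed {s['landed']}/{s['attempts']} ({rate:.0f}%)")
```

Even a log this simple would show a skater their consistency climbing from 40% to 80% over a month – the kind of trend that is motivating to see in black and white.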
Knox (2014) points to the significance of how we interpret learning analytics and of how we ‘frame’ the results. Coming back to my example of jump tracking in figure skating, it is important to note that basing achievement simply on the number of successful landings is not always accurate. For instance, I have a skater who is trying to progress from a double jump (two rotations in the air) to a triple jump (three rotations in the air). Going from a double to a triple is a big step up and can, in some cases, take years to master. If this skater tracked their triple jump attempts and found they were ‘unsuccessful’ over a period of time (i.e. they landed zero out of five), I could take this at face value and conclude that the skater was not progressing. This conclusion is not always accurate, though: what if those jump attempts were (fairly) well done and technically correct? What if the skater achieved the full rotation in the air (a positive thing), but just couldn’t quite get the landing right? This long-winded example tells me that the numbers cannot always convey an accurate representation of learning or of so-called ‘successful’ results.
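The same hypothetical log can illustrate how the ‘framing’ changes the story: counting only clean landings suggests zero progress, while also crediting fully rotated attempts reveals real technical improvement. Again, the numbers here are invented:

```python
# Hypothetical triple-jump attempts: a landings-only count reads 0/5,
# but crediting full rotation tells a different story about progress.
attempts = [
    {"rotated": True,  "landed": False},
    {"rotated": True,  "landed": False},
    {"rotated": False, "landed": False},
    {"rotated": True,  "landed": False},
    {"rotated": True,  "landed": False},
]

landed = sum(a["landed"] for a in attempts)
rotated = sum(a["rotated"] for a in attempts)

print(f"Clean landings: {landed}/{len(attempts)}")   # 0/5 - looks like no progress
print(f"Fully rotated:  {rotated}/{len(attempts)}")  # 4/5 - real technical progress
```

The raw data is identical in both framings; only the question we ask of it changes, which is exactly Knox’s point about interpretation.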
I will end with a fantastic quotation from Pegrum (2010) in reference to digital networks:
“Digital resources are distributed across countless sites, services and channels, and can encompass material which students have located and evaluated; collections they have tagged and curated; and even artefacts they have individually or collaboratively created.”
I feel that our Tweetorial embodied Pegrum’s observation and demonstrated learning and engagement through the theories of connectivism and social constructivism, suggesting that the teacher’s role becomes one of a facilitator who shapes the discussion (Pegrum 2010). As Jeremy and James posted questions during the Tweetorial, they were shaping our discussion and leading it in certain directions to keep us on our course topics. We did get derailed at some points, though, with the endless yet hilarious posts about cheese!
References
Knox, J. (2014). Abstracting Learning Analytics. Code Acts in Education ESRC seminar series blog. http://codeactsineducation.wordpress.com/2014/09/26/abstracting-learning-analytics/
Pegrum, M. (2010). I link, therefore I am: Network literacy as a core digital literacy. E-Learning and Digital Media, 7(4), 346-354.