Tweetorial and TweetArchivist – my thoughts

How has the Twitter archive represented our Tweetorial?

TweetArchivist has represented our “intensive tweeting” period as a series of graphs, tables and word clouds. It has used quantitative measurements such as sums (e.g. total number of tweets per user), frequencies (e.g. word clouds) and counts (e.g. number of hashtags used). It has done so with a tool that is not specifically designed to offer learning analytics, though that does not preclude its presentation from being used in such a manner, which I’ll get to later in this post.
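
To make those categories concrete, here is a minimal sketch of the kinds of calculations involved, using invented sample tweets in place of the real #mscedc archive:

```python
from collections import Counter
import re

# Invented sample tweets standing in for the real #mscedc archive
tweets = [
    ("alice", "Thinking about learning analytics #mscedc"),
    ("bob", "Is cheese algorithmically interesting? #mscedc #cheese"),
    ("alice", "Replying to @bob: apparently yes #mscedc"),
]

# Sum: total number of tweets per user
tweets_per_user = Counter(user for user, _ in tweets)

# Frequency: word counts, the raw material of a word cloud
word_freq = Counter(
    word for _, text in tweets
    for word in re.findall(r"[a-z']+", text.lower())
)

# Count: hashtags used
hashtags = Counter(
    tag for _, text in tweets
    for tag in re.findall(r"#\w+", text.lower())
)

print(tweets_per_user.most_common())
print(word_freq.most_common(5))
print(hashtags.most_common())
```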

The period of the representation is “3/5/2017 – 3/26/2017” at the time of writing this blog post. A full archive of each graph, as it stood at the time of writing, has been included in this image library (note that “images” is not included). That is to say, the actual period of the tweetorial was far shorter than the period represented. This leaves me with considerably more data than I can use, and with no obvious means of narrowing it down to just the dates of the tweetorial.
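
If the archive could be exported to something like a CSV file with a timestamp column (an assumption on my part; the file name, the 'created_at' column and the dates below are all hypothetical), trimming it down would be straightforward:

```python
import pandas as pd

# Hypothetical CSV export of the archive; the file name and the
# 'created_at' column are assumptions, not a documented TweetArchivist format
df = pd.read_csv("mscedc_archive.csv", parse_dates=["created_at"])

# Illustrative tweetorial dates; substitute the real ones
start = pd.Timestamp("2017-03-16")
end = pd.Timestamp("2017-03-18")

tweetorial = df[df["created_at"].between(start, end)]
print(f"{len(tweetorial)} of {len(df)} tweets fall within the tweetorial window")
```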

The information presented in our tweetorial has been stripped of any meaning, emotional intent and humour, and pared right back to the machine-readable content of ASCII.

It has scraped the information publicly visible on the Twitter stream via a single hashtag, #mscedc. It has likely done so using Python or something similar, pulling the tweets into some raw format which can then be parsed and manipulated using complex routines to create the graphs and other visual representations presented online.
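
I can only guess at TweetArchivist’s actual mechanism, but a harvest of that kind might look something like this sketch, which uses the tweepy library’s (then-current) search interface with placeholder credentials:

```python
import tweepy

# Placeholder credentials; real ones come from registering a Twitter app
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Pull everything matching the hashtag into a raw list of dicts,
# ready to be parsed into graphs, tables and word clouds
raw = [
    {"user": status.user.screen_name,
     "created_at": status.created_at,
     "text": status.text}
    for status in tweepy.Cursor(api.search, q="#mscedc").items(1000)
]
```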

What do these visualisations, summaries and snapshots say about what happened during our Tweetorial?

[Image: the full list of measurements made by TweetArchivist]

The above image is a list of each measurement made by the TweetArchivist system. As the period of data is not limited to the period of the tweetorial, there is only a single measurement that can actually be tied back to the activity during the tweetorial:

[Image: graph of the number of tweets across the archive period]

This graph clearly demonstrates the number of tweets spiking in and around the period of the tweetorial. As we might expect, the “intensive tweeting” (Knox, 2017) required of us resulted in a substantial surge in the number of tweets made. The drop-off after the tweetorial was the lowest point recorded on the graph, possibly suggesting that people were taking a breather from Twitter after that burst of activity.

Do they accurately represent the ways you perceived the Tweetorial to unfold, as well as your own contributions?

No. The results as I view them today do not represent how I perceived the event as it unfolded. I have already hinted at the problem, but now I openly question the validity of the results recorded.

From observations of the figures captured on other participants’ blogs, as well as my own view of the figures earlier in the week, I can say that they generally suggested my own frequency of posting on Twitter was higher than that of all but two other participants. I did not expect this; I thought I would be further down the list. However, accessing the discussion via my Android phone gave me the opportunity to check in on an ad-hoc basis throughout the day, from breakfast time right through to the evening, so it was perhaps this that pushed my contribution count higher. That said, I make no claims as to the overall quality of my contributions: we found ourselves posting about cheese to test a theory that developed about algorithms picking up on tweets and either making automated retweets or follows.

I felt that activity came in sustained bursts: short periods (perhaps around morning coffee time, GMT) in which posts would be responded to and questions posted. I would be interested in drilling down to quarter-hour segments over the two days to see if this was the case.
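
Given a timestamp for each tweet, that drill-down is simple to express; this sketch assumes the same hypothetical CSV export as above:

```python
import pandas as pd

# Same hypothetical CSV export as before, with a 'created_at' column
df = pd.read_csv("mscedc_archive.csv", parse_dates=["created_at"])

# Count tweets in quarter-hour bins
per_quarter_hour = df.set_index("created_at").resample("15min").size()

# The busiest slots would confirm (or refute) the morning-coffee theory
print(per_quarter_hour.sort_values(ascending=False).head(10))
```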

What might be the educational value or limitations of these kinds of visualisations and summaries, and how do they relate to the ‘learning’ that might have taken place during the ‘Tweetorial’?

Participation does not equal engagement. That said, without being measured as present, you would be hard-pressed to say that someone was engaged. Last year, in IDEL, we looked at a method of taking similar quantitative measurements and drawing some formative feedback out of them (LARC). Gasevic, Dawson and Siemens (2015, p. 2) said:

“learning analytic tools are generally not developed from theoretically established instructional strategies, especially those related to provision of student feedback”

In comparison, LARC actually gives you feedback from the perspective of the academic, with suggestions on how the stats could look if you were operating in a manner more akin to good scholarship.

[Image: example of LARC’s feedback presentation]

(Source: Miller, C. A. (2016) Video. Online. My IDEL video covering some points which would make for an interesting comparison between LARC and TweetArchivist.)
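
LARC’s actual rules are not something I can reproduce here, but the idea can be sketched as a mapping from the same kinds of quantitative measurements to advice phrased from the academic’s perspective. The function name, thresholds and wording below are all invented for illustration:

```python
# Not LARC's actual logic: an invented illustration of rule-based formative
# feedback driven by the same kinds of quantitative measurements
def formative_feedback(posts: int, replies: int) -> str:
    if posts == 0:
        return "No visible participation yet; try joining the discussion."
    if replies / posts < 0.3:
        return ("Mostly broadcasting; responding to peers would look "
                "more like scholarly dialogue.")
    return "A healthy balance of original posts and responses to peers."

print(formative_feedback(posts=12, replies=2))
```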

As we see from Helen’s excellent analysis of the Tweetorial, it’s quite possible to arrive at a multitude of interpretations from the information presented by TweetArchivist, not all of which suggest that educational value has been drawn from the results. With the type of statistics presented by TweetArchivist, we are left to draw our own meaning, and sometimes our own personality will impose particular conclusions on that: if we are of a particularly competitive nature, we learn to post more to boost our placement; if we are less keen on standing out, we post less frequently. We may learn to pad our discussions with keywords to play the system, knowing what the measurements will show.

This said, I do think there is merit in knowing, for example, that there was interaction between the participants. As I found in my micro-ethnography, a learning community which responds only to the instructor’s original posts is not forming the type of community conducive to the kind of constructivist activity that may have been intended by the organisers of the event in question.

Equally, a word cloud which does not include the key phrases or words that are part of the topic under scrutiny would certainly suggest to the teaching team the possibility that a group of students had missed the point entirely.
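
That check could even be automated: compare the topic’s expected key terms against the top of the frequency list that feeds the word cloud. The term list and sample texts here are invented:

```python
from collections import Counter
import re

# Illustrative key terms; the real list would come from the teaching
# team's intended learning outcomes
expected = {"algorithm", "analytics", "learning", "data"}

# Stand-in tweet texts; in practice, the texts from the archive
texts = ["Pictures of cheese #mscedc", "More cheese, anyone? #mscedc"]

word_freq = Counter(
    word for text in texts
    for word in re.findall(r"[a-z']+", text.lower())
)
top_words = {word for word, _ in word_freq.most_common(50)}

missing = expected - top_words
if missing:
    print("Possibly missed the point: no sign of", sorted(missing))
```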

A list of relevant URLs could also show a high level of engagement with the subject, and may well indicate that participants were keen to make well-referenced points, perhaps drawing from current affairs, history, or other academic disciplines. Though, to take inspiration from Helen’s analysis, it could also mean that they were sharing pictures of cats, or perhaps even cheese.
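
The same caveat can be tested mechanically by pulling the URLs out of the tweets and counting the domains they point to; the sample texts below are invented:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Stand-in tweet texts; in practice these come from the archive.
# (Resolving Twitter's t.co short links is left aside.)
texts = [
    "A good piece on learning analytics https://example.ac.uk/analytics #mscedc",
    "Obligatory cheese http://cheese.example.com/pic.jpg #mscedc",
]

urls = [u for text in texts for u in re.findall(r"https?://\S+", text)]
domains = Counter(urlparse(u).netloc for u in urls)

# The counts alone can't distinguish scholarship from cheese pictures,
# but they show where to look
print(domains.most_common())
```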

Conclusion

The results presented, with one exception, did not actually measure the activity surrounding the tweetorial event. That said, through comparison with the type of feedback gained via The University of Edinburgh’s own LARC system, as well as some suggestions on how to draw meaning from the quantitative figures, I hope to have shown how such measurements could be of use to both learner and teacher.
