Week 10 – Weekly Synthesis

It was good to catch up with the group during this week’s Google Hangout. I really enjoy discussing recent tasks and themes with my peers, as I always come away with new and interesting points to consider.

One such point was the difference between text-based communication on Twitter and that within a MOOC. My hand-drawn diagram in the ‘#mscedc Digital Cacophony – Tweetorial vs MOOC’ post suggests that, despite Twitter not being purposely built for education, I found it to be a more suitable forum for group discussion.

Following the Tweetorial I further investigated the need for analytics and big data within education. In the post entitled ‘Big Data, the Science of Learning, Analytics, and Transformation of Education’, Candace Thille noted that online environments encourage students to move collaboratively towards set goals whilst synthesising knowledge that they can apply in new contexts. It was that ability that held my interest throughout the week and became a consideration I carried into my critical analysis of the Tweetorial.

For my critical analysis I examined the analytical data from the Tweetorial. I found myself comparing my performance to that of my fellow students and documenting my thoughts from both an individual and a collaborative perspective. The data would indicate that I made a lower-than-average contribution, which on initial observation could be interpreted negatively. However, I felt that I both contributed and received useful information throughout the activity and constructed new knowledge as a result.

I felt that this week afforded me the opportunity to gain first-hand experience of the topics and themes that I have been studying.


Stuart Milligan the Tweetorial participant vs Stuart Milligan the student – A critical analysis

Introduction

Week 9 of Education and Digital Cultures was my first experience of a ‘Tweetorial’. It was a very public way for our group to explore the topic of ‘Learning Analytics and Calculating Academics’, and that openness was certainly consistent with the ethos of the course as a whole. The activity encouraged the group to engage with each other (and indeed the wider Twitter community) to discuss a range of topics explored over the previous few weeks. The benefit of using Twitter to facilitate the activity was that data and analytics could be gathered via the #mscedc hashtag and some Twitter-related data-archiving tools.

I had mixed feelings about participating in such an expanded forum. A combination of factors, including the fear of exposing my learning to a huge and unfamiliar audience, time constraints and the 140-character message limit, contributed to my lower-than-average participation throughout the activity. Overall, however, I felt that I had made a decent contribution to the Tweetorial.


Summary

The Tweet Archivist data added much-needed context to a seemingly fathomless digital abyss. One immediately surprising statistic was that around 700 tweets (at the time of writing) were posted during a 19-day period. In my self-defined role as a ‘small contributor/big lurker’, at no point during the Tweetorial did I feel aware of the high volume of activity going on around me; it is only on reflection that I can accept this statistic as accurate. I find it interesting that the total number of text-based contributions during the Tweetorial mirrors that of an average discussion forum that I observed within the ‘Internet of Things’ MOOC. Despite this similarity, I cannot say that I was aware of the same “digital cacophony” (Milligan 2017) that I experienced whilst conducting the micro-ethnography on the IoT MOOC.

The Tweetorial can be considered a success when the final analysis is compared with the objectives identified before the activity began. The aim of the Tweetorial was to conduct “some intensive tweeting around the ideas raised in weeks 8 and 9 of the course”. The top word analysis successfully identified and summarised the key words and discussion topics that had emerged throughout the preceding eight weeks of the Digital Cultures course.
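To illustrate how this kind of summary is produced, the sketch below recreates a total tweet count and a crude top-word list from a hashtag archive. It is a minimal example rather than Tweet Archivist’s actual method, and the tweets.csv file and its user, date and text columns are assumptions made purely for illustration.

```python
import csv
from collections import Counter

# Hypothetical export of the #mscedc archive: one row per tweet
# with 'user', 'date' (YYYY-MM-DD) and 'text' columns (assumed layout).
with open("tweets.csv", newline="", encoding="utf-8") as f:
    tweets = list(csv.DictReader(f))

print(f"Total tweets: {len(tweets)}")              # e.g. ~700 over the 19-day window
print(f"Days covered: {len({t['date'] for t in tweets})}")

# Crude 'top words' list: split tweet text, drop short and very common words.
stopwords = {"the", "and", "that", "with", "this", "for", "are", "was"}
words = Counter(
    w.lower().strip(".,!?")
    for t in tweets
    for w in t["text"].split()
    if len(w) > 3 and w.lower() not in stopwords
)
print(words.most_common(10))                       # the key discussion topics
```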


Analysis

Some of the final statistics had a sobering effect on me when I contrasted them with my own evaluation of my contribution to the Tweetorial – most notably the top user and user mention statistics. Prior to reading the final analysis I was content with my input, feeling that I had engaged with most discussion threads and made a decent contribution to the Tweetorial. However, after realising that I was ranked 18th (out of 25) in the top user table and that I did not feature in the user mention rankings at all, I felt somewhat deflated. Based on this, I felt relatively insignificant to both the activity and the wider Twitter community, whilst also feeling slightly embarrassed and disappointed in myself. As Kohn (1999) suggests, exposing students to ranking systems turns education into a competitive process rather than a learning one.

Seeking solace, I investigated the analytics associated with my own Twitter account. I was uplifted to read that, during the same 19-day period, my own tweets:

  • had 3600 impressions
  • received 39 likes (avg 2 per day)
  • received 15 replies (avg 1 per day)

From an individual perspective I was generally happy with these statistics, and I was relieved when I compared them with the same metrics for the group. This afforded me the opportunity to appreciate that general analysis of big data often neglects the circumstances and performance of the individual. Though my performance was considerably lower than that of my peers, I certainly felt that I constructed knowledge and made a contribution to the Tweetorial with which I am happy.
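To make that comparison concrete, the sketch below works through the same kind of arithmetic: a per-user ranking by tweet count and the per-day averages quoted above (39 likes over 19 days is roughly 2 per day). The records and usernames are illustrative placeholders, not Tweet Archivist’s real schema.

```python
from collections import Counter

DAYS = 19  # length of the Tweetorial window

# Illustrative per-tweet records from a hashtag archive:
# author handle plus engagement counts (placeholder data).
tweets = [
    {"user": "smilligan", "likes": 3, "replies": 1},
    {"user": "jlamb", "likes": 5, "replies": 2},
    # ... remaining archived tweets ...
]

# The 'top user' table: rank participants by number of tweets posted.
tweet_counts = Counter(t["user"] for t in tweets)
ranking = [user for user, _ in tweet_counts.most_common()]
print(f"My rank: {ranking.index('smilligan') + 1} of {len(ranking)}")

# Per-day averages for a single user, e.g. 39 likes / 19 days ≈ 2 per day.
mine = [t for t in tweets if t["user"] == "smilligan"]
likes = sum(t["likes"] for t in mine)
replies = sum(t["replies"] for t in mine)
print(f"Likes per day: {likes / DAYS:.1f}; replies per day: {replies / DAYS:.1f}")
```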

[Image: Personal analytics]


Conclusion

In conclusion, as a learner I feel that there was little educational value in having access to analytics data about my performance within the Tweetorial. If anything, reviewing the data made me feel apprehensive and worried about my performance in comparison to my peers – whereas my individual analysis proved to be quite pleasing. I felt that I had participated enough both to learn from and to contribute to the activity; the only doubts I had were a direct result of comparing myself with others.

Due to the nature of the activity, I felt limited by having no opportunity to revisit the Tweetorial and make additional contributions to alleviate my concerns. I do wonder whether further learning could be achieved if I had the opportunity to contribute more; however, I could potentially fall into the trap of tweeting for the sake of tweeting, just to improve my statistics, which would have little or no benefit for either the group or myself.


References

Kohn, A. (1999). From Degrading to De-Grading. Retrieved 24 March 2017, from http://www.alfiekohn.org/article/degrading-de-grading/

Milligan, S. (2017). ‘The Internet of Things MOOC’ – First Impressions. Retrieved 24 March 2017, from http://edc17.education.ed.ac.uk/smilligan/2017/02/12/the-internet-of-things-mooc-first-impressions/

#mscedc Digital Cacophony – Tweetorial vs MOOC

During this week’s Google Hangout, James Lamb asked me about the differences between the Tweetorial and the discussion forums within a MOOC, and how each contributes to what I describe as digital cacophony. The image above notes some key differences that I observed whilst participating in each.

via Instagram http://ift.tt/2mqiVZ3

Week 9 – Weekly Synthesis

Week 9 already! Wow!

This week’s Lifestream activity has been dominated by the group ‘Tweetorial’ in which we investigated some topics and issues highlighted in the recommended viewings and readings. In summarising my Tweetorial activity, I would note that I contributed to discussion threads surrounding the following key themes concerning Big Data and Learning Analytics (LA):

  • Ethical considerations
  • Social media influence on algorithmic culture
  • Big data influence over students
  • Algorithmic pattern identification
  • Dependence on analytics

I felt it essential to explore the vastness of Big Data and to consider the implications of identifying patterns within it once analysed. This week’s recommended material seemed to focus either on how data is gathered and analysed or on the resulting consequences for students. I therefore became increasingly interested in the gap between big data and hypotheses, and in what new knowledge we can discover from the space in between. My ‘Analyzing and modeling complex and big data’ post attempted to address this issue.

Following on from the ‘Tweetorial’, I was motivated to explore some of the issues raised and to put them into a relevant context. My ‘Learning Analytics – A code of practice’ post summarised my investigation into a JISC-funded LA project in which the project team addressed many (if not all) of my concerns around ethics and student intervention. In hindsight, I had only really considered LA from the perspective of the institution and the learner – not of the individual as a person.

It was another enjoyable week and I’d like to thank my tutors and peers for a very engaging Tweetorial.


Learning Analytics – A code of practice

This week’s Tweetorial highlighted areas of Learning Analytics (LA) that I was interested in investigating further – in particular ethics and student intervention.

Until recently I had only a vague awareness of a JISC-funded project aimed at developing a Learning Analytics service for UK colleges and universities (JISC, 2015). I decided to delve into the project’s Code of Practice to gain a clearer understanding of how the education sector currently addresses some of the issues that we have been discussing this week.

During the Tweetorial, James Lamb asked the #mscedc group:

[Image: James’ Tweet]

I responded by tweeting:

[Image: Stuart’s Tweet]

Therefore, I was relieved to read that JISC acknowledge that “Institutions recognise that analytics can never give a complete picture of an individual’s learning and may sometimes ignore personal circumstances”.

What I also found highly interesting when reviewing the Code of Practice were the guidelines relating to student access to analytical data. JISC stress that “If an institution considers that the analytics may have a harmful impact on the student’s academic progress or wellbeing it may withhold the analytics from the student, subject to clearly defined and explained policies.”

I found this fascinating, as we have been considering the potential consequences for students when analytical output is compared against an institution’s performance benchmarks. What I hadn’t considered was how a student’s performance might be affected by viewing their own analytical data.


References

JISC. (2015). Code of practice for learning analytics. Retrieved 18 March 2017, from https://www.jisc.ac.uk/sites/default/files/jd0040_code_of_practice_for_learning_analytics_190515_v1.pdf