Tweet

I love attending seminars; I’ve posted quite regularly on my blog about some of the seminars I’ve been lucky enough to attend over the course of my studies. I’m not too fussed about the way in which the content is delivered: usually I find it boring and dry whether it comes through a discussion panel or a lecture. Unlike some of my classmates, I don’t love a good old-fashioned lecture. I’d much rather be doing something for at least some of the time.

Audrey Watters has featured quite heavily on my blog; I’ve linked to her talks on Ed-Tech in the time of Trump and the algorithmic future of education. I wish I had been able to go to her talk at the Moray School of Education, right at the time I was using her as a reference.

Tweet

I have been reflecting a lot this week on my experience of participating in Education and Digital Cultures (#MSCEDC). Never have I felt so vulnerable as a student as I have on #MSCEDC. When the course first started, I was very uncomfortable about displaying my work to my peers, and even more uncomfortable that most of my feedback would be visible to other participants.

I recently had a tutorial with my personal tutor, and I surprised myself when I told her and other students on the MSc that I loved the open spaces afforded to us in #MSCEDC; I didn’t really care any more that people could see all my academic flaws. I realised that they, too, were probably trying to keep up with the workload. They weren’t judging me but looking to me for inspiration, just as I looked toward them. I was forced to be vulnerable on this course, and I am a better student for it.

Writing: A method of inquiry

How do we construct knowledge?

from Diigo

Assignments and assessed work often leave me frantic and anxious about what to do. This text has really helped me to understand scholarly processes much better. After reading it, I was able to accept that sometimes I just need to start writing in order to find out what to write about.

Do we really need to measure everything? Week 10

It is seldom, if ever, that my professional life and my studies come together so whole-heartedly. I don’t really know if this is a good or a bad thing, because the short snippets in which my studies present these issues inevitably race ahead of my professional life and leave it behind while I’m still grappling with the issues at hand.

Last week I posted about attending training on Moodle analytics. I thought it would really help me make sense of one of the courses I work on, as well as contribute to my understanding of this course.

Unwieldy data, how do we manage it?

How does one make sense of a course that has over 2,480 pages and 12,000 students enrolled on it?
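One way in, before reaching for any dedicated analytics tool, is simply to summarise the raw activity log. Here is a minimal sketch of that first pass; the file name and column names (‘Event context’, ‘User full name’) are my assumptions based on a typical Moodle log export, not the tool from the training:

```python
import csv
from collections import Counter

# First-pass summary of a Moodle activity-log export (CSV).
# Assumed columns: "Event context" and "User full name";
# adjust these to match your own export.
events_per_page = Counter()
active_users = set()

with open("course_log.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        events_per_page[row["Event context"]] += 1
        active_users.add(row["User full name"])

print(f"{len(active_users)} users generated any logged activity")
for page, hits in events_per_page.most_common(10):
    print(f"{hits:6d}  {page}")
```

Even something this crude shows which of those thousands of pages are actually visited, and how many of the enrolled students ever generate any logged activity at all.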

The Tweetorial was much easier to manage, but it did make me wonder whether there is any purpose in recording students’ activity in open educational spaces. How can we really know that they are engaged? What struck me in our Hangout this week was that one of the participants had a different handle and gender from those I have come to recognise. Since he was not in the tutorial as himself, does that mean he didn’t participate?

I went to the London and South East Learning Technologies Conference for Health Education. A lot of the discourse centred on ‘technology enhanced learning’. The first seminar I attended was on wearable technology. The speaker spoke of physiolytics, the study of the information retrieved from a device worn on one’s body, such as a staff ID card or a Fitbit. The doctor presenting spoke of a “smart condom” that measured one’s performance and then fed that information back to an app on a phone. I had to wonder at this point whether man’s obsession with measuring his performance has gone a step too far.

Tweetorial: A critical analysis

We can’t use data alone to measure student success.

The data from the Tweetorial was graphically presented in charts and lists. It was easy to understand, but it is limited in what it records for educational purposes. The analytics tool can only measure the participation of those students who are active (tweeting, retweeting and responding) on Twitter. It provides no information about those who are passive (scrolling, liking and direct messaging) in the environment. The analytics are problematic because they visualise participation, but not necessarily learning, and because they rate students against one another.

Our Tweetorial analytics consisted of data comprising top users, top words, top URLs, source of tweet, language, volume over time, user mentions, hashtags, images and influencer index. This kind of analytic data is helpful for showing ‘what people actually do’ (Eynon 2013): who tweeted the most, what words they used in their tweets, where they got the information they tweeted about, what language they tweeted in, how many tweets they produced, whether they were mentioned by others, what hashtags and images they used and how many followers they have. It is more problematic when looking at the content of the tweets and measuring learning. Perhaps a tool like NVivo would help in pulling together the quality of the content being discussed, but this still limits the understanding, because not all participants’ learning is evident: content can only be measured through active participation.
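To make concrete how shallow these measures are, here is a hedged sketch of the kind of counting that produces metrics like ‘top users’, ‘top words’, ‘hashtags’ and ‘user mentions’. The tweets and field names below are hypothetical stand-ins, not the actual tool’s data model:

```python
from collections import Counter

# Hypothetical tweet records; a real tool would pull these
# from the Twitter API for a given hashtag.
tweets = [
    {"user": "alice", "text": "Analytics measure activity, not learning #mscedc"},
    {"user": "bob", "text": "@alice agreed, and silent readers vanish entirely #mscedc"},
]

top_users = Counter(t["user"] for t in tweets)
words, hashtags, mentions = Counter(), Counter(), Counter()
for t in tweets:
    for raw in t["text"].lower().split():
        token = raw.strip(".,!?")
        if token.startswith("#"):
            hashtags[token] += 1
        elif token.startswith("@"):
            mentions[token] += 1
        elif token:
            words[token] += 1

print(top_users.most_common(3))
print(hashtags.most_common(3), mentions.most_common(3))
```

Everything here derives from tweets that were actually sent, which is exactly the limitation: a silent reader leaves no trace at all in any of these counters.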

There is a flaw in the Tweetorial analytics: students who did not actively participate were not included in the data. If we compare the Tweetorial to a traditional tutorial in which the tutor asks the same questions, in both environments there will be students who dominate the conversation and those who are more comfortable watching without actively contributing. Those who do not actively contribute are still present, but this is not measured in the Tweetorial analytics.

It was interesting to see that one of the students in our cohort, who could be perceived to have been inactive in the Tweetorial, was also very quiet in the Hangout tutorial. As an ethical consideration, I will not name the individual. In other tutorials in which I have participated, this individual has contributed much more, and I have to wonder whether they were more withdrawn because the analysis did not show them in a favourable light and they felt reluctant to contribute. I have since looked at their own blog post about the Tweetorial and their weekly synthesis; both make for very engaging reading and brought a unique perspective to my own scholarly thought. They mention their inactivity, but it did not seem to affect their learning. This person is clearly engaged with the course and has made excellent contributions, just not in the space that was being measured. The data therefore does not represent reality accurately.

Part of the problem of being a student in an open educational space is accepting the vulnerable position of having your academic work available to both your peers and the wider online world. The online world is far less of a risk, because the likelihood of strangers being interested in what you are talking about is substantially lower than the certainty of your work being visible to your peers. Peer review is a common academic practice, but for those working outside academia, and not necessarily wanting to pursue a career in it, this openness can be daunting. In an open course such as Education and Digital Cultures, students can feel the added pressure of their peers judging not only the quality of their work but also their participation. While this outlook is probably exaggerated in my case, the public nature of participation in the Tweetorial on the whole motivated me to take part. I felt relief that my participation had been recorded, but at the same time I struggled with the competitive nature of learning in an open environment.

The visualisations, summaries and snapshots measure participation, and although they do not ultimately measure performance, they function like grades, rating student success. There are particular issues with using analytic data in this way, not least that if students are graded poorly in front of their peers, this can lead to resentment, anger and demotivation (Kohn 1999). The most interesting factor is that the results of the Tweetorial do not actually measure learning, so neither my peers nor my tutors could see how much I had attained, nor could we see that attainment for others.

As educational researchers, we find that the content provided by analytic tools such as the one used in the Tweetorial limits the kinds of questions we can ask about learning (Eynon 2013), because the recording of learning in these environments is problematic. We can only study what is recorded, and we can only ask questions around that data. The data presents a snapshot, and it relates to participation, not attainment. If our research focuses on how students learn, we have to build relationships with those students, because for the data to be effective it needs to be interpreted in context through observation and manipulation (Siemens 2013).

The data that is presented allows teachers to identify trends and patterns exhibited by users (Siemens 2013), which in turn gives tutors the opportunity to adjust the course accordingly. Although this did not happen as such in the Tweetorial, our discussion around cheese is a case in point: if the tutor could see content that was not explicitly related to the course, they could adjust their questions or add additional material accordingly.
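As a toy illustration of that adjustment loop, a tutor-facing script might simply flag tweets that mention none of the course’s key terms. The keyword list and tweets here are hypothetical, and real topic detection would need something more robust than keyword matching:

```python
# Flag tweets that share no vocabulary with the course's key terms,
# so a tutor can spot digressions (like our cheese discussion).
COURSE_TERMS = {"analytics", "learning", "data", "algorithm", "mscedc"}

tweets = [
    "Big data can't capture attainment #mscedc",
    "Honestly the best cheese is a ripe brie",
]

for text in tweets:
    tokens = {w.strip("#@.,!?").lower() for w in text.split()}
    if tokens.isdisjoint(COURSE_TERMS):
        print("possible digression:", text)
```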

Analytics tools only provide information about part of the students’ experience. Although useful, this data should be used in the context of the wider course and interpreted concurrently with other data gathered through observation and evidence. It can assist the tutor in monitoring the trajectory of the course and showing who is actively participating, but it is limited when trying to establish attainment. Tutors should also be mindful that data such as that presented in our Tweetorial can affect student motivation and participation.


Eynon, R. (2013) The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3): pp. 237-240.

Kohn, A. (1999) From Degrading to De-Grading. Retrieved 24 November 2016, from http://www.alfiekohn.org/article/degrading-de-grading/

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): pp. 1380-1400.

Instagram: The history of algorithms.

How are all these people linked to algorithms?

This was in a presentation I saw today. I wish the speaker had expanded on this and explained how all these people are linked to algorithms. #mscedc March 22, 2017 at 04:43PM


I managed to look into this a little after the conference. Something that struck me while looking back at the speaker’s slides was that all the pictures were of very old white men, except for Al-Khwarizmi, who was Persian. I found a nice timeline representing the history of algorithms; at least there they mentioned Ada Lovelace.

Tweet

This post of mine didn’t particularly relate to this week’s theme, but I thought it was interesting. Students training to be paramedics and working for the ambulance service are now able to train in environments that simulate real-life situations in real time. Immersive Technologies, the company that makes this possible, can provide any situation as long as it can be filmed by a camera. I thought that learning like this gives students opportunities they would not otherwise have, and it demonstrated explicitly how technology can open up many more possibilities for students.

Tweet

In one of the seminars I attended this week, I heard the term physiolytics being bandied about. From what I understand (I haven’t been able to find much information on it), this is the practice of measuring and making sense of data extracted from wearable technology. It is the information on your staff ID card, Fitbit or smartwatch that will be used for this kind of analysis.