I thought this might be a useful guide on how to manage data when it seems unwieldy.
When you can’t just cram it into tables
from Diigo
The data from the Tweetorial was presented graphically in charts and lists. It was easy to understand, but it is limited in what it records for educational purposes. The analytics tool can only measure the participation of students who are active (tweeting, retweeting and responding) on Twitter. It provides no information about those who are passive (scrolling, liking and direct messaging) in the environment. The analytics are problematic because they visualise participation rather than learning, and they rate students against one another.
Our Tweetorial analytics consisted of data comprising top users, top words, top URLs, source of tweet, language, volume over time, user mentions, hashtags, images and influencer index. This kind of analytic data is helpful when showing ‘what people actually do’ (Eynon 2013): for example, who tweeted the most, what words they used in their tweets, where the information they tweeted about came from, what language they tweeted in, how many tweets they produced, whether they were mentioned by others, what hashtags and images they used and how many followers they have. It is more problematic when looking at the content of the tweets and measuring learning. Perhaps a tool like NVivo would help in pulling together the quality of the content being discussed, but this would still give a limited picture, because content can only be measured through active participation and so not all participants’ learning is evident.
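To make this concrete for myself, here is a rough Python sketch of the kind of counting such an analytics tool performs. The tweets and usernames below are invented for illustration only; they are not the actual Tweetorial data.

```python
from collections import Counter

# Invented tweets standing in for a Tweetorial export; the structure
# (a user plus the tweet text) is an assumption for illustration.
tweets = [
    {"user": "student_a", "text": "Learning analytics measure participation #mscedc"},
    {"user": "student_b", "text": "But do they measure learning? #mscedc #data"},
    {"user": "student_a", "text": "Top users are just the loudest voices #mscedc"},
]

top_users = Counter(t["user"] for t in tweets)
words = [w.lower().strip(".,!?") for t in tweets for w in t["text"].split()]
top_words = Counter(w for w in words if not w.startswith("#"))
top_hashtags = Counter(w for w in words if w.startswith("#"))

print(top_users.most_common(3))     # who tweeted the most
print(top_words.most_common(5))     # what words they used
print(top_hashtags.most_common(3))  # what hashtags they used

# Anyone who only scrolled, liked or read along never appears in `tweets`,
# so they never appear in any of these counts.
```

The last comment is really the point: everything the tool reports is derived from tweets that were actually sent, so passive participation simply does not exist in the data.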
There is a flaw in the Tweetorial analytics: students who did not actively participate were not included in the data. If we compare the Tweetorial to a traditional tutorial in which the tutor asks the same questions, in both environments there will be students who dominate the conversation and those who are more comfortable watching without actively contributing. Those who do not actively contribute are still present, yet this presence is not captured in the Tweetorial analytics.
It was interesting to see that one of the students in our cohort, who could be perceived to have been inactive in the Tweetorial, was also very quiet in the Hangout tutorial. As an ethical consideration, I will not name the individual. In other tutorials in which I have participated, this person has contributed much more, and I have to wonder whether they were more withdrawn because the analysis did not show them in a favourable light and they felt reluctant to contribute. I have subsequently looked at their own blog post about the Tweetorial and their weekly synthesis; both make for very engaging reading and brought a unique perspective to my own scholarly thought. They mention their inactivity, but it did not seem to affect their learning. This person is clearly engaged with the course and has made excellent contributions, just not in the space that was being measured. The data therefore does not represent reality accurately.
Part of the problem when one is a student using an open educational space for learning is accepting the vulnerable position of having your academic work available to both your peers and the wider online world; the online world is far less of a risk, because the likelihood of strangers being interested in what you are talking about is substantially lower than that of your work being visible to your peers. Peer review is a common academic practice, but for those working outside academia and not necessarily wanting to pursue an academic career, this openness can be daunting. In an open course such as Education and Digital Cultures, students can often feel the added pressure of their peers judging not only the quality of their work but also their participation. While this outlook is probably exaggerated in my case, the public nature of participation in the Tweetorial ultimately motivated me to take part. I felt relief that my participation had been recorded, but at the same time I struggled with the competitive nature of learning in an open environment.
The visualisations, summaries and snapshots measure participation, and although they are not ultimately measuring performance, these visualisations function like grades, rating student success. There are particular issues with using analytic data in this way, not least that if students are rated poorly in front of their peers, this can lead to resentment, anger and demotivation (Kohn 1999). The most interesting factor is that the results of the Tweetorial do not actually measure learning, so neither my peers nor my tutors could see how much I had attained, nor could we see that attainment for others.
For educational researchers, the content provided by analytic tools such as the one used in the Tweetorial limits the kinds of questions we can ask about learning (Eynon 2013), because the recording of learning in these environments is problematic. We can only study what is recorded, and we can only ask questions around that data. The data presents a snapshot, and it relates to participation rather than attainment. If our research focuses on how students learn, we have to build relationships with those students, because in order for data to be effective it needs to be interpreted in context through observation and manipulation (Siemens 2013).
The data that is presented allows teachers to identify trends and patterns exhibited by users (Siemens 2013), which then gives tutors the opportunity to adjust the course accordingly. Although this was not exhibited as such in the Tweetorial, our discussion around cheese could be seen in the same way: if the tutor could see content that was not explicitly related to the course, they could adjust their questions or add additional material accordingly.
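As a small illustration of how a tutor might automate that kind of spotting, the sketch below flags tweets that share no vocabulary with the course topic. The keyword list and sample tweets are my own inventions, not the actual course vocabulary or data.

```python
# Invented course vocabulary; a real list would be far richer and would
# probably need stemming or topic modelling rather than exact matches.
COURSE_KEYWORDS = {"analytics", "data", "learning", "algorithm", "education"}

def is_off_topic(text: str) -> bool:
    """Flag a tweet that shares no words with the course vocabulary."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not (words & COURSE_KEYWORDS)

sample_tweets = [
    "Learning analytics only capture active participation",
    "Which cheese goes best with crackers?",
]

for text in sample_tweets:
    if is_off_topic(text):
        print("Possibly off-topic:", text)
```

Even this toy version shows the limits: it can notice the cheese, but it cannot tell whether the digression helped or hindered anyone’s learning.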
Analytics tools only provide information about part of the students’ experience. Although useful, this data should be used in the context of the wider course and interpreted alongside other data gathered through observation and evidence. It can help the tutor monitor the trajectory of the course and show who is actively participating, but it is limited when trying to establish attainment. Tutors should also be mindful that data such as that presented in our Tweetorial can affect student motivation and participation.
Eynon, R. (2013) The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3): pp. 237-240.
Kohn, A. (1999) From Degrading to De-Grading. Retrieved 24 November 2016. http://www.alfiekohn.org/article/degrading-de-grading/
Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): pp. 1380-1400.
#LaSETEL 👄Physio-lytics = practice of linking wearing computing devices to collect data. #mscedc #TELdiscourse?
— Chenée Psaros (@Cheneehey) March 22, 2017
In one of the seminars I attended this week I heard the term physio-lytics being bandied about. This is the practice, from what I understand (I haven’t been able to find much information on it), of measuring and making sense of data extracted from wearable technology. It is the information from your staff ID card, Fitbit or smartwatch that will be used for this kind of analysis.
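I can only guess at what the analysis looks like in practice, but as a minimal sketch (the card IDs, locations and CSV layout below are pure assumptions), it is essentially counting who was ‘seen’ where:

```python
import csv
from collections import Counter
from io import StringIO

# An invented staff ID card "swipe" export; the column names and CSV
# format are assumptions made purely to illustrate the idea.
swipes_csv = """card_id,location,timestamp
1001,Library,2017-03-20T09:02
1001,Gym,2017-03-20T12:30
1002,Library,2017-03-20T09:15
1001,Library,2017-03-21T08:58
"""

visits_per_card = Counter(row["card_id"] for row in csv.DictReader(StringIO(swipes_csv)))
visits_per_location = Counter(row["location"] for row in csv.DictReader(StringIO(swipes_csv)))

print(visits_per_card)      # how often each card holder was "seen"
print(visits_per_location)  # which spaces generate the most data
```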
I started this week still stuck on how algorithms work and how they might be seen to influence education, which led me to send my first tweet asking whether database results could shape research. It was in this vein of thought that I went looking for academic papers that could support what I suspected. There were a lot about bias, but I found an example, which I saved on Diigo, that focused on some of the issues around systematic reviews with regard to database searches. It prompted my thinking on how research could be adversely affected by search results, but more importantly it highlighted the human element: how important information literacy is for scholarly processes.
It was only during the tweetathon that I finally felt I had joined the party with regard to how data and learning analytics play a role in shaping education, but it was quite difficult making sense of what was going on. I felt I was more active than I appeared to be.
I pinned a graphic from Pinterest promising that data mining and learning analytics enhance education, which was reminiscent of the instrumentalist discourse (Bayne 2014) we explored in Block 1.
The TED talk presented how big data can be used to build better schools and transform education by showing governments where to spend their money in education. It made me realise that, when looked at quite broadly, data can revolutionise education.
Finally, I reflected on the traffic-light systems that track and rate students, something I’d like to explore further. Ironically, on the first day of Week 10, while I was still playing catch-up on Week 9, I attended some staff training on learning analytics, ‘Utilizing Moodle logs and recorded interactions’, where I was shown how to analyse quantitative data to monitor students’ use and engagement.
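As a rough idea of what that kind of analysis involves, the sketch below groups log entries per user per week. It is a minimal sketch assuming a Moodle activity log with a user, a timestamp and an action per row; the column names and data are my own assumptions, not Moodle’s actual export format.

```python
import pandas as pd

# Invented log rows standing in for a Moodle activity export.
logs = pd.DataFrame({
    "user": ["alice", "alice", "bob", "alice", "bob"],
    "time": pd.to_datetime([
        "2017-03-13 10:00", "2017-03-14 09:30", "2017-03-14 11:00",
        "2017-03-20 10:15", "2017-03-27 08:45",
    ]),
    "action": ["viewed", "posted", "viewed", "viewed", "viewed"],
})

# Count actions per user per ISO week to see who is (in)active and when.
logs["week"] = logs["time"].dt.isocalendar().week
engagement = logs.groupby(["user", "week"]).size().unstack(fill_value=0)
print(engagement)
```

Like the Tweetorial analytics, this shows use and engagement, not attainment.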
Bayne, S. (2014) What’s the matter with ‘Technology Enhanced Learning’? Learning, Media and Technology, 40(1): pp. 5-20.
Learning analytics, when your personal and professional life collide. #learninganalytics #mscedc #data #ethics #monitoringsystem March 20, 2017 at 06:05PM
via IFTTT
#mscedc #educationaldatamining #learninganalytics March 19, 2017 at 04:56PM
These are some of the systems for collecting data for educational purposes that I’ve come across while working in education.
I’ve been grappling with how Donna Haraway’s utopian metaphor of the cyborg relates to our relationship with technology and contemporary politics, as well as how it fits in with digital education.
If we are to live as cyborgs as Haraway’s metaphor suggests, we cannot divorce our own nature and history from that of our future selves. This seems implausible, unachievable and very much like an allegorical fairy tale from bygone times. But much like those fairy tales about power and loss, we see the dominations of ‘race’, ‘gender’, ‘sexuality’ and ‘class’, by those in positions of power, evident throughout our technological world.
There are countless examples of oppression in relation to technology. There are examples of the disparities: of how wealthy (white) companies still exploit poor (black) countries and their people for their resources without supporting the connectivity needs of those countries. Since The Cyborg Manifesto was published we have seen the gender gap in technology careers widen. The digital divide persists in developed countries with regard to location, income and ethnic background, while developing countries struggle to find alternative ways to access information because of a lack of infrastructure.
In relation to education, Watters, in her article Ed-Tech in a Time of Trump, gives examples of how universities can carry out surveillance on students and staff through the collection of data. Using data, universities, big companies, governments and powerful individuals are able to control what we see, where we go and how we access information. This is evident in the UK with the Government’s counter-terrorism strategy, under which universities are tasked with monitoring extremism through the Prevent duty agenda. Students are being monitored more than ever before.
The ‘ubiquity’ and ‘invisibility’ of the cyborg that Haraway dreams of is simply not possible, because the technology and the spaces we inhabit when online have been taught to recognise us. Technology has been taught to read us, tasked to find out what we like, see what we look like and with whom we engage. It knows what we buy, sell, watch, read and search for. It knows where we worship and whom we love. It knows us. Most importantly, technology has been taught to remember this information, and this information then shapes our experiences online.
The control universities, companies and governments have over our information perpetuates the injustices and exclusions that occur in the physical world. If individuals are not aware of the information that is being collected, and of how that information is being used, they could be marginalised without knowing it.
Haraway, D. (1991) ‘A Cyborg Manifesto’. In Bell, D. & Kennedy, B. M. (eds) The Cybercultures Reader, pp. 34-65. London: Routledge.
Watters, A. (2017) Ed-Tech in a Time of Trump. Retrieved 6 February 2017. http://hackeducation.com/2017/02/02/ed-tech-and-trump