Red, amber, green: Learning analytics. Week 9

Is there any benefit to rating students’ success?

I started this week still stuck on how algorithms work and how they might be seen to influence education. This led me to send out my first tweet, asking whether database results could shape research. It was in this vein of thought that I went looking for academic papers that could support what I suspected. There were plenty about bias, but I found one example, which I saved on Diigo, focusing on some of the issues that database searches raise for systematic reviews. It prompted my thinking on how research could be adversely affected by search results, but more importantly it highlighted the human element: how important information literacy is for scholarly processes.

It was only during the tweetathon that I finally felt I had joined the party with regard to how data and learning analytics play a role in shaping education, though it was quite difficult to make sense of what was going on. I felt I was more active than my visible contributions suggested.

I pinned a graphic from Pinterest promising that data mining and learning analytics enhance education, which was reminiscent of the instrumentalist discourse (Bayne 2014) we encountered in Block 1.

The TED talk presented how big data can be used to build better schools and transform education by showing governments where to spend their money. It made me realise that, viewed broadly enough, data can revolutionise education.

Finally, I reflected on the traffic light systems that track and rate students, something I’d like to explore further. Ironically, on the first day of Week 10, while I was still playing catch-up with Week 9, I attended staff training on learning analytics, ‘Utilizing Moodle logs and recorded interactions’, where I was shown how to analyse quantitative data to monitor students’ use and engagement.
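
To make that idea concrete for myself, here is a very rough Python sketch of how such a red/amber/green flag could be derived from nothing more than counts of logged actions. The log rows, action names, thresholds and rating rule are all hypothetical, my own invention rather than anything Moodle or the training session actually uses.

```python
# A minimal, hypothetical sketch of a traffic-light (red/amber/green) rating
# built from raw activity counts. All names and thresholds are invented.

from collections import Counter

# Pretend rows exported from a Moodle-style activity log: (student, action)
log_rows = [
    ("student_a", "course_viewed"), ("student_a", "quiz_attempted"),
    ("student_a", "forum_posted"), ("student_a", "course_viewed"),
    ("student_b", "course_viewed"),
    # student_c never appears in the log at all
]

# Count how many logged actions each student has
activity = Counter(student for student, _ in log_rows)

def rag_rating(count, amber_threshold=1, green_threshold=3):
    """Crude traffic-light rating based only on a raw activity count."""
    if count >= green_threshold:
        return "green"
    if count >= amber_threshold:
        return "amber"
    return "red"

for student in ["student_a", "student_b", "student_c"]:
    print(student, rag_rating(activity[student]))
# student_a green, student_b amber, student_c red
```

Even in this toy form it is obvious how much the rating hinges on arbitrary thresholds, and how invisible any engagement that never touches the VLE would be.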


Bayne, S. (2014). What’s the matter with ‘Technology Enhanced Learning’? Learning, Media and Technology, 40(1): pp. 5-20.

Can the way in which databases present information affect research?

I have been wondering about how algorithms may affect education. At the moment much of my everyday work overlaps with ‘information literacy’ and how to prepare students to engage critically with different texts. I know from my own studies that finding sources is tedious and requires patience, and that if students aren’t able to analyse texts critically their work becomes very difficult. It is arduous work finding both primary and secondary texts to support an argument. This has got me thinking about how academic databases present information to those searching them. The consequences of this are perhaps less apparent for those looking for information within the Social Sciences, but could potentially be very harmful for those doing degrees in Medicine.

As an example, I refer back to the late 1990s and Dr Andrew Wakefield’s claims that the Measles, Mumps and Rubella vaccine was linked to autism. His theory was found to be untrue, and his assertions and the way in which he conducted his research saw him struck off the medical register. His article has been retracted but is still available to read, so that others can learn from it. The consequences of his claims have proved far reaching: there has been an increase in measles as many parents chose not to inoculate their children after Wakefield’s claims. (I noticed on my last trip to the University of Edinburgh that there was an outbreak recently.) The article in question, ‘Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children’, has been cited nearly two and a half thousand times according to Google Scholar. Although I haven’t looked at all of the texts citing Wakefield et al. (1998), I would hazard that many of them disprove his theory or demonstrate what not to do when carrying out medical research. My interest in this particular example is that not all academic articles will be as contentious, and certainly not all subjects will have hard data to assist in providing a critical lens.

Google Scholar results for Wakefield.

The reason I mention this episode is that I think it demonstrates that databases only present information; we cannot trust algorithms to sort that information for us. We may not be aware of why certain sources have been cited as often as they have, and those which have been cited a lot haven’t necessarily been cited because they are good. The Wakefield example is extreme. There were almost seventy papers published this week in the British Medical Journal alone; it would be impossible to expect doctors to read all new research, just as it is impossible for academics to read everything in their fields. Databases are a key tool in higher education, but it is not often explicit how information is displayed. By relevance? By popularity? By date? By number of citations? Is it clear how this information is being presented? Could the way in which information in databases is being prioritised (Knox 2015) be affecting the way in which research is carried out?
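
To see how much difference the sort key alone makes, here is a toy Python sketch with three invented records. A real database’s ranking is of course far more opaque and complex than this, but the reordering effect is the same.

```python
# A toy illustration of how the same three (made-up) search results
# reorder themselves depending on which key they are sorted by.
# None of these records are real; the point is only the reordering.

articles = [
    {"title": "Retracted but highly cited",  "year": 1998, "citations": 2450, "relevance": 0.6},
    {"title": "Recent replication study",    "year": 2017, "citations": 40,   "relevance": 0.9},
    {"title": "Older methodological review", "year": 2005, "citations": 310,  "relevance": 0.7},
]

# Sort the same result set three different ways
orderings = {
    "by citations": sorted(articles, key=lambda a: a["citations"], reverse=True),
    "by date":      sorted(articles, key=lambda a: a["year"], reverse=True),
    "by relevance": sorted(articles, key=lambda a: a["relevance"], reverse=True),
}

for label, ranked in orderings.items():
    print(label, "->", [a["title"] for a in ranked])
```

The same three results appear in a completely different order depending on the key, and nothing on a results page necessarily tells the reader which key was used.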


Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In M. A. Peters (ed.), Encyclopedia of Educational Philosophy and Theory. DOI: 10.1007/978-981-287-532-7_124-1

Wakefield, A. J., Murch, S. H., Anthony, A., Linnell, J., Casson, D. M., Malik, M., Berelowitz, M., Dhillon, A. P., Thomson, M. A., Harvey, P., Valentine, A., Davies, S. E. & Walker-Smith, J. A. (1998). Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103): pp. 637-641.