Final Reflection. Week 12

Reflection: Looking through a lens (Image: @bodzofficial Instagram)

A hundred years after Dewey published Democracy and Education (1916), championing education as a communal process, I wonder how being a scholar of education in the digital age compares with being one then. The key principle of reflection in Dewey’s theory is still relevant today. Dewey claims that ‘[e]ducation, in its broadest sense, is the … social continuity of life’ (p. 4). Since we live so much of our lives online, it makes sense that educational communities have evolved and that we study them there.

The pressure on academics to publish across different media shows that scholars are required to do much more than think and write alone. They are tasked with ‘new ways of working and new ways of imagining [themselves]’ (Fitzpatrick, 2011, p. 3). This was certainly true of using a lifestream blog as a scholarly record. The constant pressure to be creative, publishing in a range of media and working quickly to meet tight deadlines, is what it means to be a scholar in a digital world.

In Cybercultures we discussed how discourse contributed to instrumentalism (Bayne 2014) in relation to digital education. The discourse around ‘enhancement’ evolved into how our bodies are being changed by technology; this was echoed in my visit to a learning technologies conference on health education. We looked at how we are no longer limited to text when trying to portray scholarly thought (Sterne 2006), and I was able to explore this by creating digital artefacts. It was interesting to see how other participants constructed meaning in ways I did not anticipate.

Community Cultures allowed us to see how educational communities are constantly evolving. The Massive Open Online Courses in which we participated supported our roles as researchers and students. Here we could see how digital education is changing: over the last few years cMOOCs have morphed into more individualistic xMOOCs, which have evolved to be smaller, less focused on community, and more geared towards promoting participating universities and encouraging employability.

In Algorithmic Culture we reviewed how algorithms relate to pedagogical issues like sequencing, pacing, goal setting and evaluation of learning (Fournier et al. 2014), and how these algorithms help our machines ‘remember’ us, thereby determining the content we access. The discourse around Learning Management Systems (LMS) and their effectiveness in capturing data about students and their learning (Siemens 2013) was reminiscent of the discourse in Cyberculture. The way in which institutions track and monitor students using data echoed the issues around discrimination and invisibility I looked at earlier in the course.

I was daunted and anxious about my lifestream at the beginning of the course; having to do so much, so publicly, was overwhelming. Seeing what other people did also inspired me. Having a reflective piece of work to map my learning is helpful, as I can see how my development progressed across the lifestream. I feel it highlights not only my reflection (Dewey 1916), but my creativity and my technical skill. It has given me a new way of imagining myself as a student (Fitzpatrick 2011).


References

Bayne, S. (2014). What’s the matter with ‘Technology Enhanced Learning’? Learning, Media and Technology, 40(1): pp. 5-20.

Dewey, J. (1916). Democracy and Education: An Introduction to the Philosophy of Education. Retrieved: 4 April 2017. https://s3.amazonaws.com/arena-attachments/190319/2a5836b93124f200790476e08ecc4232.pdf

Fitzpatrick, K. (2011). The Digital Future of Authorship: Rethinking Originality. Culture Machine 12: pp. 1-26.

Fournier, H., Kop, R. and Durand, G. (2014). Challenges to Research in MOOCs. Journal of Online Learning and Teaching, 10(1), pp. 1–15.

Siemens, G. (2013). Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): pp. 1380-1400.

Sterne, J. (2006). The historiography of cyberculture. In Critical Cyberculture Studies. New York: New York University Press, pp. 17-28.

What to do? Week 11

Could I use this as a post-human metaphor?

I have come to enjoy my blog more as I have gone back through it. I feel like it is something I can be proud of, but I’m still struggling with what I would like to do for my final assignment.

I am not one of those people who has an idea of what they want to write about halfway into the course. This week I’ve trawled through my blog to find inspiration. I was hoping that something I’d written or posted would set my imagination alight and I would find a topic, formulate a question and set the criteria on which I would like to be graded. I thought by the end of the week I would at least have a topic. Unfortunately, I don’t. I’ve looked through others’ blogs and have seen the amazing creativity of my peers! But I’m still wondering what to write.

There’s a PDF on how to develop academic writing, and I plan to start doing some of the exercises to formulate my question and help develop my ideas. I can’t help but be reminded of the Hemingway quote, ‘There is nothing to writing. All you do is sit down at a typewriter and bleed.’ I’m sure I could work wonders with that metaphor and post-humanism, especially considering we covered the body and its relation to technology in Block 1. The Dory meme is there to remind me what to do. All the while I am also mindful that writing isn’t the only option for submitting my work. Alas, I feel overwhelmed again.

I saw that Audrey Watters was at Edinburgh; I really wish I could have attended her talk! There’s always something magical about being able to experience something like that in context. Maybe I would have found the inspiration I need. Most of my feed this week related to my assignment and blog. I just need to write now.

Do we really need to measure everything? Week 10

It is seldom, if ever, that my professional life and my studies amalgamate so wholeheartedly. I don’t really know if this is a good or a bad thing, because the short bursts in which my studies present themselves inevitably surpass my professional life and leave it behind while I’m still grappling with the issues at hand.

Last week I posted about attending training on Moodle analytics. I thought this would really help me make sense of one of the courses I work on, as well as contribute to my understanding of this course.

Unwieldy data: how do we manage it?

How does one make sense of a course that has over 2,480 pages and 12,000 students enrolled on it?

The Tweetorial was much easier to manage, but it did make me wonder whether there is any purpose to recording students’ activity in open educational spaces. How can we really know that they are engaged? What struck me in our hangout this week is that one of the participants had used a handle and gender different from the ones I had come to recognise. Since he was not in the tutorial as himself, does that mean he didn’t participate?

I went to the London and South East Learning Technologies Conference for Health Education. A lot of the discourse centred on ‘technology enhanced learning’. The first seminar I attended was on wearable technology. The speaker spoke of physiolytics, the study of the information retrieved from a device worn on one’s body, such as a staff ID card or a Fitbit. The doctor presenting spoke of a ‘smart condom’ that measured one’s performance and then fed that information back to an app on a phone. I had to wonder at this point whether man’s obsession with measuring his performance has gone a step too far.

Tweetorial: A critical analysis

We can’t use data alone to measure student success.

The data from the Tweetorial were presented graphically in charts and lists. They were easy to understand, but limited in what they record for educational purposes. The analytics tool can only measure the participation of those students who are active (tweeting, retweeting and responding) on Twitter. It provides no information about those who are passive (scrolling, liking and direct messaging) in the environment. The analytics are problematic because they visualise participation, not necessarily learning, and they rate students against one another.

Our Tweetorial analytics comprised top users, top words, top URLs, source of tweet, language, volume over time, user mentions, hashtags, images and influencer index. This kind of analytic data is helpful for showing ‘what people actually do’ (Eynon 2013): who tweeted the most, what words they used in their tweets, where they got the information they tweeted about, what language they tweeted in, how many tweets they produced, whether they were mentioned by others, what hashtags and images they used, and how many followers they have. It is more problematic when looking at the content of the tweets and measuring learning. Perhaps a tool like NVivo would help in drawing together the quality of the content being discussed, but this still limits understanding, because not all participants’ learning is evident: content can only be measured through active participation.
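To make that limitation concrete, here is a minimal sketch of the kind of counting such an analytics tool performs; the tweet records and field names are invented for illustration, not taken from the actual Tweetorial data. Whatever is counted, a purely passive reader never enters the data at all:

```python
from collections import Counter

# Hypothetical tweet records: the structure below is invented for this sketch.
# Crucially, only *active* traces (tweets) appear here at all; a student who
# merely read, scrolled or liked the feed leaves no row to count.
tweets = [
    {"user": "student_a", "text": "Is #analytics measuring learning or just participation?"},
    {"user": "student_a", "text": "Top-user charts reward volume #analytics"},
    {"user": "student_b", "text": "What about the lurkers? #analytics"},
]

top_users = Counter(t["user"] for t in tweets)
top_words = Counter(
    w.strip("#?.,!").lower() for t in tweets for w in t["text"].split()
)
top_hashtags = Counter(
    w.lower() for t in tweets for w in t["text"].split() if w.startswith("#")
)

print(top_users.most_common(3))     # who tweeted the most
print(top_words.most_common(5))     # what words they used
print(top_hashtags.most_common(3))  # which hashtags they used
```

Every metric above is a count of visible acts, which is exactly why the tool can rank participation but says nothing about the silent reader’s learning.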

There is a flaw in the Tweetorial analytics: students who did not actively participate were not included in the data. If we compare the Tweetorial to a traditional tutorial, where the tutor could ask the same questions, in both environments there will be students who dominate the conversation and those who are more comfortable watching without actively contributing. Those who do not actively contribute are still present. This is not captured in the Tweetorial analytics.

It was interesting to see that one of the students in our cohort, who could be perceived to have been inactive in the Tweetorial, was also very quiet in the Hangout tutorial. As an ethical consideration, I will not name the individual. In other tutorials in which I have participated, this individual has contributed much more, and I have to wonder whether they were more withdrawn because the analysis did not show them in a favourable light and they felt reluctant to contribute. I have subsequently looked at their own blog post about the Tweetorial and their weekly synthesis; both make for very engaging reading and brought a unique perspective to my own scholarly thought. They mention their inactivity, but it did not seem to affect their learning. This person is clearly engaged with the course and has made excellent contributions, just not in the space that was being measured. The data therefore do not represent reality accurately.

Part of the problem when one is a student using an open educational space for learning is accepting the vulnerable position of having your academic work available to both your peers and the online world; the online world is far less of a risk, because the likelihood of strangers being interested in what you are talking about is substantially smaller than the certainty of your work being visible to your peers. Peer review is a common academic practice, but for those working outside academia, and not necessarily wanting to pursue a career in it, this openness can be daunting. In an open course such as Education and Digital Cultures, students can feel the added pressure of their peers judging not only the quality of their work but also their participation. While this outlook is probably exaggerated in my case, the public nature of participation in the Tweetorial overall motivated me to take part. I felt relief that my participation had been recorded, but at the same time I struggled with the competitive nature of learning in an open environment.

The visualisations, summaries and snapshots measure participation, and although they do not ultimately measure performance, these visualisations are similar to grades, rating student success. There are particular issues with using analytic data in this way, not least that if students are graded poorly in front of their peers, this can lead to resentment, anger and demotivation (Kohn 1999). The most interesting factor is that the results of the Tweetorial do not actually measure learning, so neither my peers nor my tutors could see how much I had attained, nor could we see that attainment for others.

As educational researchers, we find that the content provided by analytic tools such as the one used in the Tweetorial limits the kinds of questions we can ask about learning (Eynon 2013), because the recording of learning in these environments is problematic. We can only study what is recorded, and we can only ask questions around that data. The data present a snapshot, one related to participation rather than attainment. If our research focuses on how students learn, we have to build relationships with those students, because for data to be effective it needs to be interpreted in context through observation and manipulation (Siemens 2013).

The data presented will allow teachers to identify trends and patterns exhibited by users (Siemens 2013), which in turn gives tutors the opportunity to adjust the course accordingly. Although this was not exhibited as such in the Tweetorial, our discussion around cheese could be related in a similar way: if the tutor could see content being discussed that was not explicitly related to the course, they could adjust their questions or add material accordingly.

Analytics tools only provide information about part of the students’ experience. Although useful, this data should be used in the context of the greater course. It needs to be interpreted alongside other data gathered through observation and evidence. It can assist the tutor in monitoring the trajectory of the course and show who is actively participating, but it is limited when trying to establish attainment. Tutors should also be mindful that data such as those presented in our Tweetorial can affect student motivation and participation.


Eynon, R. (2013). The rise of Big Data: what does it mean for education, technology, and media research? Learning, Media and Technology, 38(3): pp. 237-240.

Kohn, A. (1999). From Degrading to De-Grading. Retrieved 24 November 2016. http://www.alfiekohn.org/article/degrading-de-grading/

Siemens, G. (2013). Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10): pp. 1380-1400.


Red, amber, green: Learning analytics. Week 9

Is there any benefit to rating students’ success?

I started this week still stuck on how algorithms work and how they might be seen to influence education, which led me to send out my first tweet asking whether database results could shape research. In this vein of thought I went looking for academic papers that could support what I suspected. There were a lot about bias, but I found an example, which I saved on Diigo. This article focused on some of the issues around systematic reviews with regard to database searches. It prompted my thinking on how research could be adversely affected by search results, but more importantly it highlighted the human element: how important information literacy is for scholarly processes.

It was only during the Tweetorial that I finally felt I had joined the party with regard to how data and learning analytics play a role in shaping education, but it was quite difficult making sense of what was going on. I felt I was more engaged than my activity demonstrated.

I pinned a graphic from Pinterest promising that data mining and learning analytics enhance education, which was reminiscent of the instrumentalist discourse (Bayne 2014) in Block 1.

The TED talk showed how big data can be used to build better schools and transform education by showing governments where to spend their money. It made me realise that, when looked at quite broadly, data can revolutionise education.

Finally, I reflected on the traffic-light systems that track and rate students, something I’d like to explore further. Ironically, on the first day of Week 10, while I was still playing catch-up on Week 9, I attended staff training on learning analytics, ‘Utilizing Moodle logs and recorded interactions’, where I was shown how to analyse quantitative data to monitor students’ use and engagement.
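Out of interest, the sketch below shows roughly the kind of quantitative analysis the training covered, written as a minimal example; the file name and column names (user_id, time) are my assumptions, not the actual fields of a Moodle log export:

```python
import pandas as pd

# Hypothetical Moodle log export; the file name and columns are assumptions
# made for this sketch, not the exact fields Moodle produces.
logs = pd.read_csv("moodle_logs.csv", parse_dates=["time"])

# Events per student: a crude proxy for "use and engagement".
events_per_student = logs.groupby("user_id").size().sort_values(ascending=False)

# Weekly activity trend across the whole course.
weekly_activity = logs.set_index("time").resample("W").size()

print(events_per_student.head(10))  # most active students by raw event count
print(weekly_activity)              # does engagement tail off over the course?
```

Even here the same caveat applies as in the Tweetorial: an event count measures clicks, not learning.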


Bayne, S. (2014). What’s the matter with ‘Technology Enhanced Learning’? Learning, Media and Technology, 40(1): pp. 5-20.

Can the way in which databases present information affect research?

I have been wondering about how algorithms may affect education. At the moment much of my everyday work overlaps with ‘information literacy’ and how to prepare students to engage critically with different texts. I know from my own studies that finding sources is tedious and requires patience, and if students aren’t able to analyse texts critically their work becomes very difficult. It is arduous trying to find both primary and secondary texts to support an argument. It has got me thinking about how academic databases present information to those looking for it. The consequences of this are perhaps less apparent for those studying subjects within the social sciences, but could potentially be very harmful for those doing degrees in medicine.

As an example, I refer back to the late 1990s and Dr Andrew Wakefield’s claims that the measles, mumps and rubella vaccine was linked to autism. His theory was found to be untrue. His assertions, and the way in which he conducted his research, saw him struck off the medical register. His article has been retracted, but it is still available to read, so others can learn from it. The consequences of his claims have proved far-reaching: there has been an increase in measles, as many parents chose not to inoculate their children after Wakefield’s claims. (I noticed on my last trip to the University of Edinburgh that there had been an outbreak recently.) The article in question, Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children, has been cited nearly two and a half thousand times according to Google Scholar. Although I haven’t looked at all of the texts citing Wakefield et al. (1998), I would wager that many of them disprove his theory and demonstrate what not to do when carrying out medical research. My interest in this particular example is that not all academic articles will be as contentious, and certainly not all subjects will have hard data to assist in providing a critical lens.

Google Scholar results for Wakefield.

The reason I mention this example is that I think it demonstrates how databases only present information; we cannot trust algorithms to sort that information for us. We may not be aware of why certain sources have been cited as much as they have, and those which have been cited a lot haven’t always been cited because they are good. The Wakefield example is extreme. There were almost seventy papers published this week in the British Medical Journal alone. It would be impossible to expect doctors to read all new research, just as it is impossible for academics to read everything in their fields. Databases are a key tool in higher education, but it is not often explicit how information is displayed. By relevance? By popularity? By date? By number of citations? Is it clear how this information is being presented? Could the way in which information in databases is being prioritised (Knox 2015) be affecting the way in which research is carried out?
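To illustrate why the ordering matters, here is a toy ranking function; the fields and weights are entirely invented, since real databases rarely disclose their criteria. Notice how a heavily cited paper floats to the top regardless of why it was cited:

```python
# Toy illustration of how a database *might* order search results.
# The articles, fields and weights are invented for this sketch.
articles = [
    {"title": "Paper A", "citations": 2400, "year": 1998, "relevance": 0.6},
    {"title": "Paper B", "citations": 15, "year": 2016, "relevance": 0.9},
    {"title": "Paper C", "citations": 300, "year": 2010, "relevance": 0.8},
]

def rank_score(article, w_citations=0.5, w_relevance=0.3, w_recency=0.2):
    # A citation count says nothing about *why* a paper was cited;
    # a retracted paper cited thousands of times still scores highly.
    recency = (article["year"] - 1990) / 30  # crude normalisation
    return (w_citations * article["citations"] / 2500
            + w_relevance * article["relevance"]
            + w_recency * recency)

# Paper A tops the list despite having the lowest topical relevance.
for a in sorted(articles, key=rank_score, reverse=True):
    print(a["title"], round(rank_score(a), 3))
```

Change the weights and the ‘best’ result changes too, which is precisely the opacity I am worried about.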


Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Wakefield, A., Murch, S. H., Anthony, A., Linnel, J., Casson, D. M., Malik, M., Berelowitz, M., Dhillon, A. P., Thompson, M. A., Harvey, P., Valentine, A., Davies, S. E., Walker-Smith, J. A. (1998). Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. The Lancet, 351(9103): pp. 637-641.

Co-constructed ecosystems. Week 8

An ecosystem (Photo: Flickr @giveaphuk)

I started my week trying to find out what exactly algorithms are. I had a vague understanding that they are part of the code that looks for patterns and then changes the functionality of certain online spaces, usually to do with shopping and social media. I’ve mostly come across them through social media feeds, where influencers are usually advocating that you turn on notifications for their posts. What surprised me when I started looking for information about how algorithms work was that, almost as often, information on how to manipulate them popped up.
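As a toy illustration of what I mean by ‘looking for patterns and changing functionality’, the sketch below re-orders a feed by a user’s past clicks; all the data are invented:

```python
# Toy sketch of a feed algorithm: spot a pattern in past behaviour,
# then change what the space shows. All data here are invented.
past_clicks = {"education": 12, "memes": 30, "politics": 3}

posts = [
    {"id": 1, "topic": "education"},
    {"id": 2, "topic": "memes"},
    {"id": 3, "topic": "politics"},
]

# Rank the feed by how often this user engaged with each topic before:
# the more you clicked on a topic, the more of it you are shown.
feed = sorted(posts, key=lambda p: past_clicks.get(p["topic"], 0), reverse=True)
print([p["topic"] for p in feed])  # ['memes', 'education', 'politics']
```

Once you can see the pattern being rewarded, it is obvious why so much advice exists on how to manipulate it.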

I was trying to think about how algorithms may influence education, and where they might fall short, when I stumbled upon the amazing Joy Buolamwini. She highlighted the real consequences of how a lack of diversity in programming can impact technology in ways we do not expect. It was evident from her experience that technology rendered her invisible by not being able to read her features. I wonder how many other invisibilities are not yet evident.

We met for our weekly Skype group, and some of the bigger themes emerging from that conversation were about how algorithms are used for control and surveillance. We wondered if this might cause students from certain ethnic and socio-economic backgrounds to be marginalised.

The TED talk How algorithms shape our world was really insightful about how algorithms interconnect. The ‘ecosystem’ metaphor Slavin used echoed Active algorithms: Sociomaterial spaces in the E-learning and Digital Cultures MOOC (Knox 2014).

It was in this vein that I found Hack Education’s article on the algorithmic future of education. Watters highlights the marketisation of education and how important ‘care’ is when dealing with students.

I rounded off the week working with Stuart, comparing how algorithms work in different online spaces.


Knox, J. (2015). Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1

Communities. Week 7

The Indignados used social media to mobilise. Photo: @thecommenator

I attended a Digital Cultures seminar, The People’s Memes: Populist Politics in a Digital Society, held at King’s College London. There were interesting comments about how political movements developed out of the inequalities and disenfranchisement felt by those outside the political elite. Digital communities like the Indignados were the birthplace of Podemos, a Spanish party formed as a more accessible alternative. What I found particularly interesting about the research being done in this field is that many of the hierarchical systems these new movements were responding to, with their inequalities and inaccessibility, are now being replicated online. I thought this example linked well to the Knox (2015) paper and how technology is seen to be ‘anti-institutional and emancipatory’ but in fact just continues to replicate what is already present in society.

After receiving feedback, I commented on other participants’ blogs, trying to get inspiration so I could link more feeds to my lifestream with IFTTT.

On Wednesday, a few of the participants had a Skype chat to share the feedback they had received about their lifestreams. It was here, talking to others, that I realised that the narrative of my lifestream synthesis had been more about what I had posted and less about what I was thinking.

This interaction with my peers, and my dabbling within my MOOCs, led me to question how communities are built, which is why I bookmarked the Abbott (1995) paper, Community participation and its relationship to community development, on Diigo.

Most experiences of MOOCs seemed to be negative, which led me to question whether they are sustainable.

Finally, I browsed the ethnography posts within MSCEDC to get inspiration for exhibiting my own.


References

Abbott, J. (1995). Community participation and its relationship to community development. Community Development Journal, 30(2): pp. 158-168.

Knox, J. (2015). Community Cultures. Excerpt from Critical Education and Digital Cultures. In Encyclopedia of Educational Philosophy and Theory. M. A. Peters (ed.). DOI 10.1007/978-981-287-532-7_124-1