EDC Week 9 Summary

DCIM142GOPRO (Image: Author)


For data decryption, copy the text above and go to:
Paste it into the Input text box.

Use the 'Decrypt' option.

Click in the Output area to decrypt.
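The post doesn't name the cipher the tool uses, so as an illustration only: many in-browser text tools apply a reversible encoding such as Base64, which can be undone in a few lines of Python. The sample string below is made up for the example; it is not the post's actual ciphertext.

```python
import base64

def decode_text(encoded: str) -> str:
    """Decode a Base64-encoded UTF-8 string back to plain text."""
    return base64.b64decode(encoded).decode("utf-8")

# Made-up sample, not the actual encrypted post:
sample = base64.b64encode("EDC Week 9 Summary".encode("utf-8")).decode("ascii")
print(decode_text(sample))  # → EDC Week 9 Summary
```

If the tool instead uses a keyed cipher (AES, for instance), a key or passphrase would also be required, which simple encoding schemes like Base64 do not need.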

2 thoughts on “EDC Week 9 Summary”

  1. Ah, this is clever. In order to access your thoughts on algorithmic code I first need to decrypt the computer jargon. Truth be told, I was on the point of e-mailing you to highlight a problem when I noticed the link at the bottom of the page: I’m very glad I did, if nothing else to spare my own blushes.

    Anyway, onto the summary itself. Thanks for pulling your thoughts together here, Myles. More generally, thanks for your input to last week’s tweetorial which I found to be captivating. The success really depended on members of the group getting into the spirit of the exercise and, along with everyone else, you really did that.

    ‘While data scientists and administrators and many other stakeholders have successfully created and refined tools to gather every possible endpoint of interaction, movement, engagement and even indirect activity from learners it has resulted in an avalanche of data and information but no single clear and explicit way of determining the most effective path to learning success.’

    The first thing I like about what you have put here is the mention of ‘data scientists, administrators and many other stakeholders’. I think this is a much-needed reminder that when we loosely talk about ‘universities’ or ‘institutions’, we’re in fact talking about complex organisations that house a range of people with different – possibly competing – priorities. In fact, this is something which came up during Friday’s tutorial in response to ethical questions about designating students using big data. It’s conceivable that university managers, teachers and student support officers (and others) all might have somewhat competing interests, which in turn would make it more difficult to decide how to use gathered data: as you say, there’s ‘an avalanche of data and information but no single clear and explicit way of determining the most effective path to learning success.’

    ‘It stands to reason then that as educators we need to ensure we remain, at all times when studying learning analytical data, a tiny bit cynical perhaps (as indicated in the Dark Side section in Siemens (2013)), to guard against the dreaded assumption and reliance of cold calculation.’

    Earlier this evening I read (via Stuart’s blog) about some JISC-funded research which outlines a code of conduct around the use of big data within education. Based upon the conversations that took place last Thursday and Friday, it would seem that this recognition of the ethical issues surrounding big data should come to the fore. Within this – and as you allude to here through the work of Siemens – we need to be careful about how we understand the results of this ‘cold calculation’. There has been interesting research (for instance by Introna & Hayes, in case this is of particular interest) where an over-reliance on reports from ‘plagiarism detection software’ – the cold calculation – has seen students wrongly suspected of academic misconduct. The ‘cold calculation’ of the algorithm lacks cultural nuance, while at the same time the procedures within some universities would seem to be set up in a way that doesn’t allow for the type of care you (and Siemens) highlight as necessary.

    Based upon your thoughts in this weekly summary I’m looking forward to your blog post where you’ll discuss the learning analytics from the Tweetorial.

  2. Yes, the JISC-sourced research is also some of what I have encountered that asks important questions about the use of the data collected on learners. It frustrates me that too few of my colleagues, educators and people in general question this type of development, or recognise that data (we are not all just students) is being collected about all of us all the time. I suspect that, because of the absence of a tactile and direct experience of it, it’s easy to dismiss the collection process as ‘something that happens to other people’. The sense of distance in the process of collection makes it somehow feel less personal.

    And likewise, these administrators we speak of potentially see those from whom data is collected as subjects within an experiment: faceless entities to be acted upon and cajoled into predictable action or hypothesised patterns of response to direct or indirect stimuli.

    Perhaps education needs its own Edward Snowden to alert us to the issues at hand?

Comments are closed.