Tweet! Chatting with Myles

I sometimes forget that we don’t all work in higher education institutions, and that the actions and behaviours I experience as normal every day can seem very different to those outside the H.E. circuit.

I feel that learning analytics is one of those very things, but with the shoe on the other foot. As a learning consultant for a non-education establishment, my entire job was based around being able to provide data to prove the value of the training we offered. If we couldn’t prove, with analytics, that the training was making a positive impact on the business, then it wasn’t deemed important enough to have. There was no place for personal development or learning that didn’t directly feed into the bottom line.

I sometimes wonder now, as I progress through the different courses in the MSc, how my classmates from different backgrounds and even other H.E. institutions take on board our learning, how it relates to their workplace, and whether they are on the same journey as me or are taking a very different perspective on the things we learn.

I guess it’s a fantastic way to introduce the idea of interpretation to this week’s workload. We all start with the same raw data; how we interpret it then factors into how we use it. That is something I am sure will come up frequently as we continue our fun with algorithms and learning analytics this week.

What to share and what to withhold

If humans are programming algorithms, can we really assume human biases won’t affect the algorithm? What about an unconscious bias we are not aware of ourselves: can we ensure we won’t influence the algorithm if we are not aware of our own bias? From another perspective, if we are using big data and algorithms to identify things deemed important in education, for instance the potential failure of a student, what steps do we need to take, if any, to ensure that the data doesn’t negatively influence the student or bias those receiving it? The example in this week’s reading is the “Course Signals” system at Purdue University, one of the earliest and most-cited learning analytics systems.

Using an algorithm to mine data collected from a VLE, the purpose of Signals is to identify students at risk of academic failure in a specific course. It does this by sorting students into three main outcome types: high risk, moderate risk, and not at risk of failing the course. These three outcomes are then represented as a traffic light (red, orange and green respectively), which serves to provide an early warning “signal” to both instructor and student (Gašević, Dawson & Siemens, 2014).

Few could argue that this is not a noble gesture; the opportunity to intervene before a student fails, and potentially change their path, is every educator’s dream. However, with only data and algorithms to make these decisions, we run a very real risk of this knowledge adversely influencing the student. What of the student who, for one reason or another, is not in a frame of mind to be told they must work harder? Might being told they are at risk of failing the course nudge them in an unwanted direction, potentially one where they give up instead of trying harder? In this instance, surely human intervention in the decision of whether or not students see data about themselves is essential? Are the choices to use technology in this instance for the benefit of the student? Or is it a case of the “warm human and institutional choices that lie behind these cold mechanisms” (Gillespie, 2014), where good intentions are at the heart of the introduction of the technology but the cold heart of administration and the economies of business are the driving force behind its use?
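
To make the traffic-light mechanism concrete, here is a minimal sketch of how such a classifier might work. Purdue’s actual student success algorithm is proprietary, so everything below is an assumption: the feature names, weights and thresholds are invented purely for illustration. The point is that every one of those numbers is a warm human choice hiding inside the cold mechanism.

```python
# Illustrative sketch only -- Course Signals' real "student success
# algorithm" is proprietary; the features, weights and thresholds
# below are invented for this example.
from dataclasses import dataclass


@dataclass
class StudentActivity:
    current_grade: float        # 0-100, performance so far in the course
    vle_logins_per_week: float  # a crude proxy for "effort" in the VLE
    assignments_submitted: int
    assignments_due: int


def risk_signal(student: StudentActivity) -> str:
    """Return a traffic-light label: 'red' (high risk), 'orange'
    (moderate risk) or 'green' (not at risk)."""
    submission_rate = (student.assignments_submitted / student.assignments_due
                       if student.assignments_due else 1.0)
    # A weighted score on a 0-100 scale. Every weight here is a choice
    # made by a human programmer -- exactly where bias can creep in.
    score = (0.5 * student.current_grade
             + 0.3 * 100 * submission_rate
             + 0.2 * 10 * min(student.vle_logins_per_week, 10))
    # The cut-offs are equally arbitrary human choices.
    if score < 50:
        return "red"
    if score < 70:
        return "orange"
    return "green"


# A student on 62%, logging in rarely, with 3 of 5 assignments in:
print(risk_signal(StudentActivity(62.0, 1.5, 3, 5)))  # prints 'orange'
```

Notice that the boundary between ‘orange’ and ‘red’ is just a number someone typed; nudge it and a different set of students gets flagged, sees a warning, and is potentially pushed in one direction or the other.
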
However, the decision to withhold information also comes with its pitfalls in relation to bias. Could, for example, the knowledge that a student behaves in a particular way, is weak in some areas or stronger in others, influence how their teacher interacts with them? Would a teacher have different expectations of one new student over another if they had prior knowledge of that student’s strengths and weaknesses? This may not be solely about big data and algorithms, as this type of information can be known on a much smaller scale. But take it up a notch and say a student’s record shows that they are prone to anger, outbursts and potentially violence. If we choose not to share that information so as not to unduly bias interactions with that student, would the person who decided to withhold it then be responsible if the student attacked a teacher or a classmate, when we had knowledge which could have prevented it?

References

Sclater, N., Peasgood, A. & Mullan, J., 2016. Learning analytics in higher education: a review of UK and international practice. Jisc. Available at: https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v3.pdf.

Gašević, D., Dawson, S. & Siemens, G., 2014. Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), pp.64–71. Available at: http://link.springer.com/article/10.1007/s11528-014-0822-x [Accessed December 7, 2016].

Gillespie, T., 2014. The relevance of algorithms. In Gillespie, T., Boczkowski, P.J. & Foot, K.A., eds. Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press. Available at: https://books.google.com/books?hl=en&lr=&id=zeK2AgAAQBAJ&oi=fnd&pg=PA167&dq=Relevance+Algorithms+Gillespie&ots=GmoJNXY0we&sig=BwtHhKix2ITFbvDg5jrbdZ8zLWA.