Tweet! Chatting with Myles

I sometimes forget that we don’t all work in higher education institutions, and that the actions and behaviours I experience as normal every day can seem very different to those outside the H.E. circuit.

I feel that learning analytics is one of those very things, but with the shoe on the other foot. As a learning consultant for a non-education establishment, my entire job was based around being able to provide data to prove the value of the training we offered. If we couldn’t prove with analytics that the training was making a positive impact on the business, then it wasn’t deemed important enough to have. There was no place for personal development or learning that didn’t directly feed into the bottom line.

I sometimes wonder now, as I progress through the different courses in the MSc, how my classmates from different backgrounds and even other H.E. institutions take on board our learning, how it relates to their workplace, and whether they are on the same journey as me or are indeed taking a very different perspective on the things we learn.

I guess it’s a fantastic way to introduce the idea of interpretation to this week’s workload. We all start with the same raw data; how we interpret it will then factor into how we use it. Something which I am sure will come up frequently as we continue our fun with algorithms and learning analytics this week.

A summary of my tweetorial

Linked from Pocket: Google tells invisible army of ‘quality raters’ to flag Holocaust denial

10,000 contractors told to flag ‘upsetting-offensive’ content after months of criticism over hate speech, misinformation and fake news in search results Google is using a 10,000-strong army of independent contractors to flag “offensive or upsetting” content, in order to ensure

from Pocket http://ift.tt/2nIzpbb
via IFTTT

This appeared at the perfect time, as I was finishing reading “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem” by Gillespie, where he talks about how easy it is to manipulate the Google algorithm into boosting a preferred site for a given keyword search. The problem with this is that we blindly trust the Google search algorithm, assuming there is no ulterior motive behind the results it gives us. We incorrectly assume that the page ranking we are met by is genuinely a list of pages which meet our search criteria in order of relevance, and has not been falsely inflated.

If this is the case for search engine optimisation, is it also something to consider in terms of research? We often choose which papers to read based on the order they appear in a search, again assuming this is a pure result. However, if search results can be affected, maliciously or as a by-product of behaviour, should we be assessing our own behaviour?

For instance, how is the “relevance” of a research paper decided? Is it by how often that paper is cited, how often it is read or checked out of an electronic library system, how often it has been shared or added to external referencing and storage tools like Paperpile, or by the keywords the author or publisher has assigned to it? Thinking back to the idea that a user may choose papers based on the return order of their search query, this may inappropriately inflate the rankings: papers which meet the search criteria more appropriately end up listed further down, while those which have been read more often, and therefore cited and shared more often simply through convenience, climb the ranking further, which in turn restarts the cycle (see the sketch below). This, then, could affect the view of the work itself, where certain papers or academics become more highly associated with particular areas or ideas purely because their name is being seen more often.
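To make that feedback loop concrete, here is a toy simulation. Everything in it is invented for illustration: the weights, the click probabilities and the scoring formula are not how any real library search or citation index works; they just show how a head start gained through convenience can entrench itself.

```python
import random

# Toy model: each paper has a fixed topical relevance and a read count.
# Paper C starts with reads gained purely through convenience (early
# sharing), despite being the least relevant of the three.
papers = [
    {"title": "Paper A", "relevance": 0.9, "reads": 0},
    {"title": "Paper B", "relevance": 0.7, "reads": 0},
    {"title": "Paper C", "relevance": 0.5, "reads": 30},
]

def score(paper):
    # Invented weighting: topical relevance plus a popularity bonus per read.
    return paper["relevance"] + 0.05 * paper["reads"]

for _ in range(200):
    ranked = sorted(papers, key=score, reverse=True)
    # Searchers mostly click whatever sits at the top of the list,
    # so rank position feeds straight back into the read count.
    clicked = random.choices(ranked, weights=[0.7, 0.2, 0.1])[0]
    clicked["reads"] += 1

for p in sorted(papers, key=score, reverse=True):
    print(p["title"], "score:", round(score(p), 2), "reads:", p["reads"])
```

Run it a few times and Paper C almost always stays on top: its early read count keeps it first in the list, being first earns it most of the clicks, and the clicks keep it first. The cycle never gets a chance to correct itself.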

I wonder how often, for example, citations of Knox or Bayne could be attributed to students on the MSCDE versus students on the O.U. course? Are we falsely inflating the search ranking of those papers?

References

Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication and Society, 20(1), pp.63–80.

Tweet! Respecting IFTTT

It’s no secret that I’ve fought a long battle with IFTTT.com to get it to act the way I want and do the things I expect of it. This week I chose to step outside the setting of my course blog and look at the tool in its “natural” habitat, and I was actually a bit impressed. It can do some really useful things to make life a bit easier, like sending a text to my wife to let her know I’ve left work, or sending one when I am at a certain point on the journey home. That one is handy for knowing when to put the tea on, but I found another use for it: I thought it would be a great way to show that algorithmic data can easily be misinterpreted, and how different people might interpret it differently.

I set up an applet to publish a post to my blog, letting the world know every time my phone’s GPS picked up that I was at Moray House, School of Education. My thinking was that, as a student of Moray House, this would be seen as significant and could be interpreted as my being there to visit the library or for studies. The fact that the algorithm should kick off twice every day, once in the morning and once at around 5:15 pm, I thought might imply that I was arriving for and leaving my day’s studies.
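For the curious, the applet is really just a trigger-action pair. IFTTT hides its internals, so the sketch below is purely illustrative: the coordinates, the radius, and the helper functions are my own stand-ins for what the service presumably does under the hood.

```python
import math

# Illustrative geofence trigger. The coordinates, radius, and the publish
# action are invented stand-ins; IFTTT's real plumbing is hidden from us.
MORAY_HOUSE = (55.9493, -3.1834)  # approximate lat/lon of the campus
RADIUS_M = 150                    # geofence radius in metres

def distance_m(a, b):
    # Rough equirectangular distance; accurate enough at this scale.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # Earth's radius in metres

def publish_blog_post(title, body):
    # Stand-in for the "publish to blog" action.
    print(f"POST: {title} | {body}")

def on_location_update(phone_position):
    # Trigger: fires whenever the phone reports a new GPS fix.
    if distance_m(phone_position, MORAY_HOUSE) <= RADIUS_M:
        publish_blog_post(
            "At Moray House",
            "GPS places my phone at the School of Education right now.",
        )

on_location_update((55.9494, -3.1835))  # inside the geofence: publishes
on_location_update((55.9000, -3.2000))  # elsewhere in town: stays silent
```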

My intention in playing around with the algorithm was to see what conclusions my classmates would draw from the minimal data:

  1. Eli is a student of Moray House School of Education,
  2. Eli’s phone GPS shows her as being at Moray House School of Education each morning at the same time, and
  3. each evening at the same time.

It’s not a lot of information to go on, and therefore it involved “interpreting” what this information means. This was exactly the point I wanted to make: that with learning analytics, we are interpreting data when we may not actually have enough of the picture to fully understand that data in context. As Yeung (2017) stated of algorithmic use, in her paper concerning the use of data to affect behaviours,

Big Data’s extensive harvesting of personal digital data is troubling, not only due to its implications for privacy, but also due to the particular way in which that data are being utilised to shape individual decision-making…

Unfortunately, my experiment didn’t happen because, yes, you guessed it, the IFTTT algorithm didn’t work, not even once. So instead of having a minimal amount of data whose interpretation could represent the possible failures of learning analytics, we have an algorithm that doesn’t fire at all and returns no data. I guess this gives us a whole different learning experience, and another algorithmic potential to be critical of.

References

Yeung, K., 2017. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.

Tweet!

Helen’s tweet about the lecture this week raised a smile, as I was feeling just as excited about the opportunity to participate in a lecture. Even though I know there is a raging debate about the benefits and drawbacks of our lecture-based education system and how effective it may be, I thrive when there is an opportunity to listen instead of reading. That is something there have not been a lot of opportunities for in the ODL offerings I’ve experienced, and so I was gleeful.

I wondered if Helen’s joy at a lecture was because it felt more like being an on-campus student, and therefore a stronger connection to our assumptions about what it would be like to study at university, or if her joy was because, like me, she found listening or watching a better tool for her learning.

Whether or not you think learning styles are a real thing (which is a whole different educational conversation), people do have different strengths and weaknesses, different habits and different abilities. Yet reading and writing are so central to the education system that they are the backbone of almost every course.

I raise this as an opportunity for my classmates studying Digital Education with the intention of moving into a career in an educational setting, or indeed who already work in one: a chance to ask you to think about your course design, how it brings out the best in your students, and how it gives them the best opportunity to learn. How will a student with reading difficulties fare in your course? Is there an opportunity to use digital tools in a way that flips traditional teaching on its head, a way to level the opportunity for all the students on your course?

If you were designing a new course for the Digital Education programme, how would you do it? What tools would you use? What would you keep from your experiences and what would you change?

Just some random food for thought.


What to share and what to withhold


If humans are programming algorithms, can we really expect human biases not to affect the algorithm? What about an unconscious bias that we are not aware of ourselves: can we ensure we won’t influence the algorithm if we are not aware of our own bias? From another perspective, if we are using big data and algorithms to identify things deemed important in education, for instance the potential failure of a student, what steps do we need to take, if any, to ensure that the data doesn’t negatively influence the student or bias those receiving the data? The example in this week’s reading is the “Course Signals” system by Purdue University, one of the earliest and most-cited learning analytics systems.

Using an algorithm to mine data collected from a VLE, the purpose of Signals is to identify students at risk of academic failure in a specific course. It does this by sorting students into three main outcomes: at high risk, at moderate risk, or not at risk of failing the course. These three outcomes are then represented as a traffic light (red, orange, and green respectively). The traffic lights serve to provide an early-warning “signal” to both instructor and student (Gašević, Dawson & Siemens, 2014).

Few could argue that this is not a noble gesture; the opportunity to intervene before a student fails, and potentially change their path, is every educator’s dream. However, with only data and algorithms making these decisions, we run a very real risk of this knowledge adversely influencing the student. What of the student who, for one reason or another, is not in a frame of mind to be told they must work harder? Might being told they are at risk of failing the course nudge them in an unwanted direction, potentially one where the student gives up instead of being spurred to try harder? In this instance, surely human intervention to decide whether or not students see data about themselves is essential? Are the choices to use technology in this instance for the benefit of the student? Is it a case of the “warm human and institutional choices that lie behind these cold mechanisms” (Gillespie, 2014), where good intentions are at the heart of the introduction of the technology, but the cold heart of administration and the economies of business are the driving force behind its use?
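Purdue’s actual Student Success Algorithm is proprietary, so the sketch below is purely illustrative: the inputs, weights, and thresholds are all invented. It only shows the general shape of the idea, a score mined from VLE-style indicators mapped onto the three traffic lights.

```python
# Purely illustrative: Purdue's real algorithm is proprietary, and these
# inputs, weights, and thresholds are invented for the sake of the sketch.
def risk_signal(grade_pct, logins_per_week, assignments_submitted_pct):
    # Hypothetical weighted score built from VLE-style indicators.
    score = (
        0.5 * grade_pct
        + 0.2 * min(logins_per_week / 7, 1.0) * 100
        + 0.3 * assignments_submitted_pct
    )
    if score >= 70:
        return "green"   # not at risk
    if score >= 50:
        return "orange"  # moderate risk
    return "red"         # high risk: triggers the early-warning signal

print(risk_signal(grade_pct=82, logins_per_week=10, assignments_submitted_pct=95))  # green
print(risk_signal(grade_pct=55, logins_per_week=2, assignments_submitted_pct=60))   # orange
print(risk_signal(grade_pct=35, logins_per_week=1, assignments_submitted_pct=20))   # red
```

Even in a sketch this crude, the uncomfortable questions are visible: someone chose those weights and cut-offs, and someone decides who gets shown the red light and when.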
However, the decision to withhold information also comes with its pitfalls in relation to bias. Could, for example, the knowledge that a student behaves in a particular way, or is weak in particular areas and stronger in others, influence how their teacher interacts with them? Would a teacher have different expectations of one new student over another if they had prior knowledge of that student’s strengths and weaknesses? This may not be solely about big data and algorithms, as this type of information can be known on a much smaller scale. But take it up a notch and say a student’s behaviour is on record, and the record shows that the student is prone to anger, outbursts and potentially violence. If we choose not to share that information so as not to unduly bias interactions with that student, would the person making the decision to withhold it then be responsible if that student attacked a teacher or another student, when we had knowledge which could have prevented it?


References

Sclater, N., Peasgood, A., Mullan, J., 2016. Learning Analytics in Higher Education, JISC. Available at: https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v3.pdf.

Gašević, D., Dawson, S. & Siemens, G., 2014. Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), pp.64–71. Available at: http://link.springer.com/article/10.1007/s11528-014-0822-x [Accessed December 7, 2016].

Gillespie, T., 2014. The relevance of algorithms. In: Gillespie, T., Boczkowski, P.J. & Foot, K.A. (eds.) Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press, pp.167–194. Available at: https://books.google.com/books?hl=en&lr=&id=zeK2AgAAQBAJ&oi=fnd&pg=PA167&dq=Relevance+Algorithms+Gillespie&ots=GmoJNXY0we&sig=BwtHhKix2ITFbvDg5jrbdZ8zLWA.


Linked from Pocket: How We Trained an Algorithm to Predict What Makes a Beautiful Photo

“To me, photography is the simultaneous recognition, in a fraction of a second, of the significance of an event.” — Henri Cartier Bresson As a child I waited anxiously for the arrival of each new issue of National Geographic Magazine.

from Pocket http://ift.tt/1r0dQne
via IFTTT

I am quite interested in reading more on this topic, and I also noticed this article:

http://www.slate.com/articles/technology/future_tense/2017/03/the_surprising_creepy_things_algorithms_can_glean_from_photographs.html

So these are my “fun” study readings for this week.