Commenting on Clare’s blog

I can recommend this show, I loved it 🙂
Not sure if it would be ruined for me now with all this algorithm talk though 🙁
Eli

from Comments for Clare’s EDC blog http://ift.tt/2m3tdxR
via IFTTT

Commenting on Renée’s blog

Great find, Renée. I think it explains the difference really well.

Could we possibly say that Data Mining is about helping us understand how to personalise and adapt teaching and learning in the hope of making improvements (Siemens 2013), whilst Learning Analytics is about measuring the individual?
Eli

from Comments for Renée’s EDC blog http://ift.tt/2m3fbfM
via IFTTT

Pinned to #MSCDE on Pinterest

Just Pinned to #MSCDE: Lifestream, Pinned to #mscedc on Pinterest – Renée’s EDC blog http://ift.tt/2nHeIfE
Sometimes infographics just find the simplest way of explaining complicated things. I loved this one Renée had found, and it seemed to align so nicely with the readings from Gillespie.

Linked from Pocket: Google tells invisible army of ‘quality raters’ to flag Holocaust denial

10,000 contractors told to flag ‘upsetting-offensive’ content after months of criticism over hate speech, misinformation and fake news in search results. Google is using a 10,000-strong army of independent contractors to flag “offensive or upsetting” content, in order to ensure…

from Pocket http://ift.tt/2nIzpbb
via IFTTT

This appeared at the perfect time, as I was finishing reading “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem” by Gillespie, in which he talks about how easy it is to manipulate the Google algorithm into ranking a preferred site or keyword search. The problem with this is that we blindly trust that the Google search algorithm has no ulterior motive behind the results it gives us. We incorrectly assume that the page ranking we are met with is genuinely a list of pages meeting our search criteria in order of relevance, and has not been falsely inflated.

If this is the case for search engine optimisation, is it also something to consider in terms of research? We often choose which papers to read based on the order in which they appear in a search, again assuming this is a pure result. However, if search results can be affected, maliciously or simply as a result of behaviour, should we be assessing our own behaviour?

For instance, how is the “relevance” of a research paper decided? By how often the paper is cited, how often it is read or checked out of an electronic library system, how often it has been shared or added to external referencing and storage tools like Paperpile, or by the keywords the author or publisher has assigned to it? Thinking back to the idea that a user may choose papers based on the order in which their search query returns them, convenience alone may inflate the rankings: papers that happen to sit at the top get read more often, therefore cited more often and shared more often, and so climb further up the ranking, which in turn restarts the cycle, while papers which meet the search criteria more appropriately slip further down the list. This could then affect the view of a field, where certain papers or academics become more highly associated with particular areas or ideas purely because their names are seen more often.
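
To make that feedback loop concrete, here is a minimal sketch in Python. The scoring rule, the click weights and the papers are all invented for illustration; no real search engine or library system works exactly like this.

```python
import random

# Four hypothetical papers, all equally relevant to the query.
papers = ["paper_A", "paper_B", "paper_C", "paper_D"]
reads = {p: 0 for p in papers}
citations = {p: 0 for p in papers}

def score(paper):
    # Invented popularity score: reads and citations stand in for
    # the engagement signals a ranker might fold into "relevance".
    return reads[paper] + 2 * citations[paper]

random.seed(1)
for _day in range(1000):
    ranking = sorted(papers, key=score, reverse=True)
    # Convenience bias: the top result gets most of the reads.
    chosen = random.choices(ranking, weights=[0.6, 0.25, 0.1, 0.05])[0]
    reads[chosen] += 1
    if random.random() < 0.1:  # a fraction of reads become citations
        citations[chosen] += 1

print("final ranking:", sorted(papers, key=score, reverse=True))
print("reads:", reads)
```

Run it a few times: every paper starts equally relevant, yet whichever one happens to collect a few early reads tends to lock in the top spot.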

I wonder how often, for example, citations of Knox or Bayne could be attributed to students on the MSCDE programme versus students on the O.U. course? Are we falsely inflating the search ranking of these papers?

References

Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication and Society, 20(1), pp.63–80.

Tweet! Respecting IFTTT

It’s no secret that I’ve fought a long battle with IFTTT.com in order to get it to act the way I want and do the things I expect of it. This week I chose to step outside the setting of my course blog and look at the tool in its “natural” habitat, and I was actually a bit impressed. It can do some really useful things to help make life a bit easier, like sending a text to my wife to let her know I’ve left work, or sending a text when I am at a certain point on the journey home. That one is handy for knowing when to put the tea on, but I found another use for it: I thought it would generate a great piece of data to show that algorithmic data can easily be misinterpreted, and that different people might interpret it differently.

I set this up to publish a post to my blog to let the world know every time my phone’s GPS picked up that I was at Moray House, School of Education. My thinking was that, as a student of Moray House, this would be seen as significant and could be interpreted as my being there to visit the library or for studies. The fact that the applet should fire twice every day, once in the morning and once at around 5:15 pm, I thought might imply that I was arriving for and leaving after a day’s studies.
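
For the curious, the applet behaves roughly like a geofence check. The sketch below is only my illustration of the general idea in Python: the coordinates, radius and post_to_blog helper are all assumptions, not IFTTT’s actual implementation.

```python
import math

# Illustrative coordinates for Moray House and an invented radius;
# neither is taken from the real applet.
MORAY_HOUSE = (55.9493, -3.1836)
RADIUS_M = 150

def distance_m(a, b):
    # Haversine distance in metres between two (lat, lon) points.
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def post_to_blog(message):
    # Stand-in for the real publish step (IFTTT would call the
    # blog platform's API); here we just print.
    print(message)

def on_location_update(lat, lon, was_inside):
    # Fire only on the transition from outside to inside the fence,
    # so one arrival produces one post.
    inside = distance_m((lat, lon), MORAY_HOUSE) <= RADIUS_M
    if inside and not was_inside:
        post_to_blog("Eli is at Moray House, School of Education")
    return inside

# Example: the first GPS ping of the morning lands inside the fence.
state = on_location_update(55.9494, -3.1837, was_inside=False)
```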

My intention with this play on the algorithm was to see what conclusions my classmates would draw from the minimal data:

  1. Eli is a student of Moray House School of Education
  2. Eli’s phone GPS shows her as being at Moray House School of Education each morning at the same time, and
  3. each evening at the same time.

It’s not a lot of information to go on, and therefore it involves “interpreting” what this information means. This was exactly the point I wanted to make: with learning analytics, we are interpreting data when we may not actually have enough of the picture to fully understand that data in context. As Yeung (2017) stated of algorithmic use, in her paper concerning the use of data to affect behaviours,

Big Data’s extensive harvesting of personal digital data is troubling, not only due to its implications for privacy, but also due to the particular way in which that data are being utilised to shape individual decision-making…

Unfortunately, my experiment didn’t happen because, yes, you guessed it, the IFTTT algorithm didn’t work, not even once. So instead of having a minimal amount of data to interpret, representing the possible failures of learning analytics, we have an algorithm that doesn’t fire at all and returns no data. I guess this gives us a whole different learning experience and another algorithmic potential to be critical of.

References

Yeung, K., 2017. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.

What questions should we be asking about algorithmic culture?

  • we must know more about the assumptions upon which they are based, the information about us upon which they act, the priorities they serve, and the ways in which they shape, distort, or tip the process (Gillespie 2017, p. 64)
  • treating the world in which the algorithmic system operates as otherwise simple, untouched, and vulnerable to manipulation (Gillespie 2017, p. 64)

Just some thoughts to remember

References

Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication and Society, 20(1), pp.63–80.

Pinned to #MSCDE on Pinterest

Just Pinned to #MSCDE: What is a digital footprint, protecting your digital foot… – ThingLink http://ift.tt/2n73ynB

What kind of digital footprint do students have as a result of their attendance at university?

Swiping a digital identity card to access the school or library, logging into a virtual learning environment, submitting assessments online, using a cloud or network printer to print, charging that same identity card to pay for meals in the canteen or indeed for that printing. When using the VLE provided, clicking links, navigation, patterns of use, reading habits and writing habits can all be recorded. Connecting to campus wifi opens up student personal devices to the same potential for data mining – students are leaving behind them both passive and active digital footprints. But are we being honest with our students about the data that is being collected, or indeed that it is being collected at all? Have we explained why this information is being collected, how it will be used, and essentially how and for how long it will be stored? Most importantly, does the university have a duty of care to ensure students understand the concept of a digital footprint so that they can make informed choices about how and when they will participate?
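
As a purely invented illustration of what that passive collection might look like, each swipe or click can end up as a small structured record, and joined together the records tell a story. The field names below are assumptions of mine, not any real university’s schema.

```python
from datetime import datetime, timezone

def stamp(hour, minute):
    # Timestamps for one invented day on campus.
    return datetime(2017, 3, 20, hour, minute, tzinfo=timezone.utc).isoformat()

# Invented records; the field names are not any real system's schema.
events = [
    {"student": "s1234567", "time": stamp(9, 2), "source": "library_gate", "action": "card_swipe"},
    {"student": "s1234567", "time": stamp(9, 40), "source": "vle", "action": "open_reading"},
    {"student": "s1234567", "time": stamp(12, 15), "source": "canteen_till", "action": "card_payment"},
    {"student": "s1234567", "time": stamp(17, 14), "source": "library_gate", "action": "card_swipe"},
]

# Each record is trivial on its own, but joined across systems they
# reconstruct a student's whole day: arrival, reading, lunch, departure.
for e in sorted(events, key=lambda e: e["time"]):
    print(e["time"], e["source"], e["action"])
```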

Uses for the data being collected

There are two types of data which can be collected and used with the intent of improving the student experience at a university:

  1. Collecting information about students’ activities on campus can help manage timetables, staffing, and equipment availability, reducing bottlenecks and improving services.
  2. Information about reading and writing habits, VLE use and online submissions can be used to better understand teaching and learning, and to personalise or adapt learning to the student’s needs (Siemens 2013).

However, one option I’m hearing spoken about frequently on our course is that of the recommendation algorithms used by commercial-sector companies like Amazon and Tesco. Such an algorithm can take information about a student and make recommendations, for instance: students who took your current course also found this course engaging. As has been pointed out in our tweetorial this week, in many cases this could open up options which a student may not have considered otherwise, leading to paths of study and indeed career choices which might have been missed without such a recommendation algorithm. However, as much as I can see benefits in this, and do enjoy the use of this algorithm in my personal life, I can also see the opportunity for misuse by both the information provider and the student themselves.
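
The simplest version of such a recommender is co-occurrence counting: suggest the courses most often taken alongside yours. Here is a minimal sketch with invented enrolment data and course codes; commercial systems like Amazon’s are of course far more sophisticated.

```python
from collections import Counter

# Invented enrolment histories: one set of course codes per student.
enrolments = [
    {"EDC", "IDEL", "DGC"},
    {"EDC", "IDEL"},
    {"EDC", "DGC", "CDCS"},
    {"IDEL", "CDCS"},
]

def recommend(course, k=2):
    # Count how often other courses co-occur with `course` across
    # all histories, then suggest the k most frequent companions.
    co_taken = Counter()
    for history in enrolments:
        if course in history:
            co_taken.update(history - {course})
    return [c for c, _count in co_taken.most_common(k)]

# "Students who took EDC also took..." -> e.g. ['IDEL', 'DGC']
print(recommend("EDC"))
```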

There has been extensive concern over the black-boxing of the algorithms being used: our lack of understanding of how they work, what information is being used and what the intent is. Yeung (2017) talks of reliance on a mechanism of influence in algorithms of this nature called “nudge”, where essentially the intention is to gently nudge the consumer in the desired direction. Think of Tesco and the vouchers they send out for money off certain items, nudging customers to come back to the store, use those vouchers and hopefully also spend more money on things they didn’t intend to buy until they entered the store. I can see a similar use of the recommendation algorithm in universities. After all, universities are businesses which need the income of students taking courses, so encouraging students to buy more courses would be beneficial.

There is another potential for misuse of this algorithm that doesn’t seem to have been addressed in our conversations this week, and that is from the student. Depending on the information and recommendations given, could a student choose courses based on the perception, drawn from student reviews and feedback, that one may be easier than another, much like the students who currently choose courses based on assessment criteria because they don’t like group work or don’t want to sit an exam? Does this press higher education into a consumer culture, against the premise of improving learning or understanding education, where students choose studies not based on the things they want to learn but rather on an easier route to attaining the big bit of paper with the university crest on it?

References

Siemens, G., 2013. Learning Analytics: The Emergence of a Discipline. The American behavioral scientist, 57(10), pp.1380–1400.

Yeung, K., 2017. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.

Tweet!

Helen’s tweet about the lecture this week raised a smile, as I was feeling just as excited about the opportunity to participate in a lecture. Even though I know there is a raging debate about the benefits and drawbacks of our lecture-based education system and how effective it may be, I thrive when there is an opportunity to listen instead of reading. That is something there have not been a lot of opportunities for in the ODL offerings I’ve experienced, and so I was gleeful.

I wondered if Helen’s joy at a lecture was because it felt more like being an on-campus student, and therefore a stronger connection to our assumptions about what it would be like to study at university, or if her joy was because, like me, she found listening or watching a better tool for her learning.

Whether or not you think learning styles are a real thing (which is a whole different educational conversation), people do have different strengths and weaknesses, different habits and different abilities. Reading and writing are such core values of the education system that they are the backbone of almost every course.

I raise this as an opportunity for my classmates studying Digital Education with the intention of moving into a career in an educational setting, or indeed who already work in one: a chance to ask you to think about your course design, how it brings out the best in your students and how it gives them the best opportunity to learn. I wonder, how will a student with reading difficulties fare in your course? Is there an opportunity to use digital tools in a way that flips traditional teaching on its head, a way to level the opportunity for all the students on your course?

If you were designing a new course for the Digital Education programme, how would you do it? What tools would you use? What would you keep from your experiences and what would you change?

Just some random food for thought.