Tweet! Chatting with Myles

I sometimes forget that we don’t all work in a higher education institution, and that the actions and behaviours I experience as normal every day can seem very different to those outside the H.E. circuit.

I feel that learning analytics is one of those very things, but with the shoe on the other foot. As a learning consultant for a non-education establishment, my entire job was based around being able to provide data to prove the value of the training we offered. If we couldn’t prove with analytics that the training was making a positive impact on the business, then it wasn’t deemed important enough to have. There was no place for personal development or learning that didn’t directly feed into the bottom line.

I sometimes wonder now, as I progress through the different courses in the MSc, how my classmates from different backgrounds and even other H.E. institutions take on board our learning, how it relates to their workplace, and whether they are on the same journey as me or are taking a very different perspective on the things we learn.

I guess it’s a fantastic way to introduce the idea of interpretation to this week’s workload. We all start with the same raw data; how we interpret it will then factor into how we use it. Something which I am sure will come up frequently as we continue our fun with algorithms and learning analytics this week.

Linked from Pocket: Google tells invisible army of ‘quality raters’ to flag Holocaust denial

10,000 contractors told to flag ‘upsetting-offensive’ content after months of criticism over hate speech, misinformation and fake news in search results Google is using a 10,000-strong army of independent contractors to flag “offensive or upsetting” content, in order to ensure…

from Pocket http://ift.tt/2nIzpbb
via IFTTT

This appeared at the perfect time as I was finishing reading “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem” by Gillespie, in which he talks about how easy it is to manipulate the Google algorithm into ranking a preferred site or keyword search. The problem with this is that we blindly trust that the Google search algorithm has no ulterior motive behind the results it gives us. We incorrectly assume that the page ranking we are met with is genuinely a list of pages which meet our search criteria in order of relevance, and that it has not been falsely inflated.

If this is the case for search engine optimisation, is it also something to consider in terms of research? We often choose which papers to read based on the order they appear in a search, again assuming this is a pure result. However, if search results can be affected, maliciously or as a result of behaviour, should we be assessing our own behaviour?

For instance, how is the “relevance” of a research paper decided? Is it by how often that paper is cited, how often it is read or checked out of an electronic library system, how often it has been shared or added to external referencing and storage tools like Paperpile, or by the keywords the author or publisher has assigned to it? Thinking back to the idea that a user may choose papers based on the order in which their search query returns them, this may inappropriately inflate the rankings: papers which meet the search criteria more appropriately slip further down the list, while those which have been read more often, and therefore cited and shared more often simply out of convenience, climb further up the ranking, which in turn restarts the cycle. This could then affect how the work is viewed, with certain papers or academics becoming more strongly associated with particular areas or ideas purely because their name is seen more often.
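
To make that feedback loop concrete, here is a minimal, entirely hypothetical sketch in Python: the paper names, starting scores and click weights are all invented, and this is not how any real library search ranks results, but it shows how a small convenience bias towards the top of the list can snowball into a large ranking gap.

```python
import random

# Hypothetical sketch of the read -> cite/share -> rank higher -> read more cycle.
# All numbers are invented purely to illustrate the feedback loop described above.

papers = {f"paper_{i}": 1.0 for i in range(10)}  # everyone starts with equal "relevance"

def run_searches(papers, rounds=1000):
    for _ in range(rounds):
        ranked = sorted(papers, key=papers.get, reverse=True)
        # A reader is far more likely to open something near the top of the results.
        weights = [1.0 / (position + 1) for position in range(len(ranked))]
        chosen = random.choices(ranked, weights=weights, k=1)[0]
        papers[chosen] += 1.0  # the read/citation/share feeds back into the score
    return papers

if __name__ == "__main__":
    random.seed(42)
    for name, score in sorted(run_searches(papers).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.0f}")
```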

I wonder, for example, how many citations of Knox or Bayne could be attributed to students on the MSCDE versus students on the O.U. course? Are we falsely inflating the search ranking of certain papers?

References

Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication and Society, 20(1), pp.63–80.

Tweet! Respecting IFTTT

It’s no secret that I’ve fought a long battle with IFTTT.com to get it to act the way I want and do the things I expect of it. This week I chose to look at it outside the setting of my course blog, in the tool’s “natural” habitat, and I was actually a bit impressed. It can do some really useful things to help make life a bit easier, like sending a text to my wife to let her know I’ve left work, or sending a text when I am at a certain point on the journey home. That one is handy for knowing when to put the tea on, but I found another use for it. I thought it would be a great way to show that algorithmic data can easily be misinterpreted, and how different people might interpret it differently.

I set this up to publish a post to my blog to let the world know every time my phone GPS picked up that I was at Moray House, School of Education. My thinking was that, as a student of Moray House, this would be seen as significant and could be interpreted as me being there to visit the library or to study. The fact that the algorithm should kick off twice every day, once in the morning and once at around 5:15 pm, I thought might imply that I was arriving for and leaving my day’s studies.
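
IFTTT wires this up without any code, but the logic of the applet amounts to something like the hypothetical sketch below. The coordinates, blog endpoint and token are made up for illustration; a real setup would go through IFTTT’s location trigger and its WordPress service rather than a hand-rolled check like this.

```python
import math
import requests  # assumed available; the posting endpoint below is hypothetical

MORAY_HOUSE = (55.9490, -3.1830)   # approximate coordinates, illustration only
BLOG_URL = "https://example-blog.test/wp-json/wp/v2/posts"  # hypothetical endpoint

def distance_m(a, b):
    """Rough haversine distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def maybe_post(current_location, auth_token):
    """If the phone reports a location within ~100 m of Moray House, publish a post."""
    if distance_m(current_location, MORAY_HOUSE) < 100:
        requests.post(
            BLOG_URL,
            headers={"Authorization": f"Bearer {auth_token}"},
            json={"title": "Eli is at Moray House", "status": "publish"},
            timeout=10,
        )
```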

My intention with this play around with the algorithm was to see what conclusions my classmates would draw from the minimal data:

  1. Eli is a student of Moray House School of Education
  2. Eli’s phone GPS shows her at Moray House School of Education each morning at the same time, and
  3. each evening at the same time.

It’s not a lot of information to go on, and it therefore involves “interpreting” what this information means. This was exactly the point I wanted to make: with learning analytics, we are interpreting data when we may not actually have enough of the picture to fully understand that data in context. As Yeung (2017) stated in her paper concerning the use of data to affect behaviours,

Big Data’s extensive harvesting of personal digital data is troubling, not only due to its implications for privacy, but also due to the particular way in which that data are being utilised to shape individual decision-making…

Unfortunately, my experiment didn’t happen because, yes, you guessed it, the IFTTT algorithm didn’t work, not even once. So instead of having a minimal amount of data to interpret, to represent the possible failings of learning analytics, we have an algorithm that doesn’t fire at all and returns no data. I guess this gives us a whole different learning experience and another aspect of algorithms to be critical of.

References

Yeung, K., 2017. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.

What questions should we be asking about algorithmic culture?

  • we must know more about the assumptions upon which they are based, the information about us upon which they act, the priorities they serve, and the ways in which they shape, distort, or tip the process (Gillespie 2017, p.64)
  • treating the world in which the algorithmic system operates as otherwise simple, untouched, and vulnerable to manipulation. (Gillespie 2017, p.64)

Just some thoughts to remember

References

Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication and Society, 20(1), pp.63–80.

Pinned to #MSCDE on Pinterest

Just Pinned to #MSCDE: What is a digital footprint, protecting your digital foot… – ThingLink http://ift.tt/2n73ynB

What kind of digital footprint do students have as a result of their attendance at university?

Swiping a digital identity card to access the school or library, logging into a virtual learning environment, submitting assessments online, using a cloud or network printer, charging that same identity card to pay for meals in the canteen or indeed for printing. When using the VLE provided, clicking links, navigation, patterns of use, and reading and writing habits can all be recorded. Connecting to campus wifi opens up students’ personal devices to the same potential for data mining – students are leaving behind them both passive and active digital footprints. But are we being honest with our students about the data that is being collected, or indeed that it is being collected at all? Have we explained why this information is being collected, how it will be used, and, essentially, how and for how long it will be stored? Most importantly, does the university have a duty of care to ensure students understand the concept of a digital footprint so that they can make informed choices about how and when they will participate?

Uses for the data being collected

There are two types of data which can be collected and used with the intent of improving the student experience at a university:

  1. Information about students’ activities on campus can help manage timetables, staffing, and equipment availability to reduce bottlenecks and improve services.
  2. Information about reading and writing habits, VLE use and online submissions can be used to better understand teaching and learning, and to personalise or adapt learning to the student’s needs (Siemens, 2013).

However, one option I’m hearing spoken about frequently on our course is the recommendations algorithm used by commercial-sector companies like Amazon and Tesco. Such an algorithm could take information about a student and make recommendations, for instance, “students who took your current course also found this course engaging”. As has been pointed out in our tweetorial this week, in many cases this could open up options which a student may not have considered otherwise, leading to a path of study and indeed career choices which may have been missed without such a recommendation algorithm. However, as much as I can see the benefits in this, and do enjoy the use of this kind of algorithm in my personal life, I can also see the opportunity for misuse by both the information provider and the student themselves.
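
To illustrate the kind of logic involved, here is a minimal sketch of “students who took this course also took…” as simple co-occurrence counting. The course names and enrolment data are invented, and this is not the actual algorithm used by Amazon, Tesco or any university, which would be far more sophisticated.

```python
from collections import Counter
from itertools import combinations

# Invented toy enrolment data: each set is one student's courses.
enrolments = [
    {"Digital Cultures", "Learning Analytics", "Course Design"},
    {"Digital Cultures", "Learning Analytics"},
    {"Learning Analytics", "Research Methods"},
    {"Digital Cultures", "Course Design"},
]

# Count how often each pair of courses appears together on a transcript.
co_counts = Counter()
for courses in enrolments:
    for a, b in combinations(sorted(courses), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(current_course, top_n=3):
    """Rank other courses by how often they co-occur with the current one."""
    scores = Counter({b: n for (a, b), n in co_counts.items() if a == current_course})
    return [course for course, _ in scores.most_common(top_n)]

print(recommend("Digital Cultures"))
# e.g. ['Learning Analytics', 'Course Design'] with the toy data above
```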

There has been extensive concern over the black-boxing of the algorithms being used: our lack of understanding of how they work, what information is being used and what the intent is. Yeung (2017) talks of algorithms of this nature relying on a mechanism of influence called the “nudge”, where essentially the intention is to gently nudge the consumer in the desired direction. Think of Tesco and the vouchers they send out for money off certain items, nudging customers to come back to the store, use those vouchers, and hopefully also spend more money on things they didn’t intend to buy until they entered the store. I can see a similar use of the recommendation algorithm in universities. After all, universities are businesses which need the income of students taking courses, so encouraging students to buy more courses would be beneficial.

There is another potential for misuse of this algorithm that doesn’t seem to have been addressed in our conversations this week, and that is from the student. Depending on the information and recommendations given, could a student choose courses based on the perception, drawn from student reviews and feedback, that one may be easier than another, much like the students who currently choose courses based on assessment criteria because they don’t like group work or don’t want to sit an exam? Does this press higher education into a consumer culture, against the premise of improving learning or understanding education, where students don’t choose studies based on things they want to learn but rather on the easiest route to attaining the big bit of paper with the university crest on it?

References

Siemens, G., 2013. Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), pp.1380–1400.

Yeung, K., 2017. “Hypernudge”: Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.

Tweet!

Helen’s tweet about the lecture this week raised a smile, as I was feeling just as excited about the opportunity to participate in a lecture. Even though I know there is a raging debate about the benefits and drawbacks of our lecture-based education system and how effective it may be, I thrive when there is an opportunity to listen instead of reading. It’s something there have not been a lot of opportunities for in the ODL offerings I’ve experienced, and so I was gleeful.

I wondered if Helen’s joy at a lecture was because it felt more like being an on-campus student and therefore a stronger connection to our assumptions about what it would be like to study at university, or if her joy was because like me she found listening or watching a better tool for her learning.

Whether or not you think learning styles are a real thing (which is a whole different educational conversation), people do have different strengths and weaknesses, different habits and different abilities. Reading and writing are so central to the education system that they are the backbone of almost every course.

I raise this as an opportunity, with my classmates studying Digital Education intending to move into a career in an educational setting, or indeed already working in one – a chance to ask you to think about your course design, how it brings out the best in your students and gives them the best opportunity to learn. And I wonder, how will a student with reading difficulties fare in your course? Is there an opportunity to use digital tools in a way that flips traditional teaching on its head, a way to level the opportunity for all the students on your course?

If you were designing a new course for the Digital Education programme, how would you do it? What tools would you use? What would you keep from your experiences and what would you change?

Just some random food for thought.


Playing with algorithms

My play with algorithms this week was emphatically dull. I am aware of ad changes connected with my surfing habits, particularly with Amazon, so I was expecting to see a lot more come from a controlled experiment, but alas it was a bit lacklustre, which I suspect is due to my online security habits. My previous job involved supporting people with their digital footprints and making them aware of their computer’s security and the potential risks, so out of habit I tend to keep things like cookies controlled. I suspect this is why I didn’t experience as much influence from the algorithms as I was expecting. However, I chose to leave my settings as they were and look at this from my real-life perspective.

I chose to look at how my actions on Amazon affect the ads I see elsewhere in my internet world. I am vaguely aware that Amazon shopping trips have resulted in corresponding ads on Facebook in the past, and also in Google from search words, so I aimed at deliberately spiking things to see what would happen. To do this, I had to ensure that my Amazon searches were for things I would not normally search for, so that I could be sure any results were down to this experiment.

At lunch break, over a cup of tea and a sarnie, I browsed for ballet slippers on Amazon (the idea came to me after chatting with Linzi, who is a dancer; I am most definitely not). The first results were unremarkable. I didn’t even see ballet slippers come up the next time I logged into Amazon. Epic fail.

MacBook: related searches on Amazon

Again I searched for ballet slippers, and this time I added pink satin to the description. I also changed my behaviour, and this time clicked on specific items that came up. This seemed to trigger the Amazon algorithm, which then shows related items against your previous history (previous history, is that a real thing?). So, result number one.


Surface Pro 4: Facebook ad for Amazon

My expectation was that I would now see this filter through, and at the very least see related advertising on things like Facebook. Did I? Well, a little bit of facebooking that evening and… nope. There were no changes to my standard side-bar advertising on Facebook, and even the featured ad for Amazon wasn’t related to my searches.


OK, so disappointing so far, but what about search engines? Surely the cookies stored on the computer would result in search engines picking up on my search; I know this happens, I’ve seen it on multiple occasions.

Nope

Surface Pro 4: Google search


By now I was about ready to quit. I’m certain I’ve seen searches spread across platforms before, so why wasn’t this working? I gave up for the night and decided to try again before work in the morning.

The next morning, sitting at my desk eating my Shreddies, it all clicked into place. The Google search bar instantly gave me pink ballet shoes in my search.

This is when the penny dropped. I was using one computer at home and a different computer at work; the algorithm seemed to be taking effect at work on my MacBook, but not at home on my Surface Pro 4. Cookies! As I mentioned previously, I lock down the cookies on my personal computer, but I am not in charge of the setup of my work computer, so there it may be slightly more open to cookies, hence why I was seeing ballet slippers appear in Google as well as Amazon. Still nothing on either machine for Facebook though, so it would appear that only items purchased or added to my wish list cross over into Facebook, but it would take more investigation to see whether this works across computers or only on the computer the purchase was made on, and I wasn’t buying ballet slippers to test that theory out. I’m now wondering about adding mobile devices to the test…

Algorithms produce worlds rather than objectively account for them
(Knox, 2015).


Yup, and in this instance the world it was creating couldn’t quite see the full picture. The algorithm knew I’d searched for ballet slippers when I was on the MacBook, because it could read the cookies stored there, but once I was home on a different computer, with no cookies to read, the algorithm didn’t recognise me as part of the world it was building around my shopping habits.
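
A rough sketch of why that is, purely for illustration: an ad or recommendation service can only recognise a browser by an identifier stored in that browser’s own cookie jar, so a machine that never sends the cookie looks like a stranger. The cookie name, identifiers and “interest store” below are invented and bear no relation to Amazon’s or Facebook’s real infrastructure.

```python
from http import cookies

# Illustration of the general mechanism only: the server can only tie this
# request to past behaviour if the browser sends back a recognisable cookie.

def handle_request(cookie_header, interest_store):
    """Return ads for whatever interests are tied to the cookie this browser sent."""
    jar = cookies.SimpleCookie(cookie_header or "")
    visitor_id = jar["visitor_id"].value if "visitor_id" in jar else None
    interests = interest_store.get(visitor_id, [])
    return interests or ["generic ad"]

# The work MacBook sends a cookie the store recognises...
store = {"abc123": ["ballet slippers", "pink satin"]}
print(handle_request("visitor_id=abc123", store))   # ['ballet slippers', 'pink satin']

# ...but the locked-down Surface Pro at home sends nothing, so the algorithm
# has no idea these are the same person.
print(handle_request(None, store))                  # ['generic ad']
```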

References

Knox, J., 2015. Algorithmic Cultures. Excerpt from Critical Education and Digital Cultures. In: M.A. Peters, ed. Encyclopedia of Educational Philosophy and Theory. DOI 10.1007/978-981-287-532-7_124-1.

What to share and what to withhold


If humans are programming algorithms, can we really expect human biases not to affect the algorithm? What about unconscious biases that we are not aware of ourselves: can we ensure we won’t influence the algorithm if we are not aware of our own bias? From another perspective, if we are using big data and algorithms to identify things deemed important in education, for instance the potential failure of a student, what steps do we need to take, if any, to ensure that the data doesn’t negatively influence the student or bias those receiving the data? The example in this week’s reading is the “Course Signals” system by Purdue University, one of the earliest and most-cited learning analytics systems.

Using an algorithm to mine data collected from a VLE, the purpose of Signals is to identify students at risk of academic failure in a specific course. It does this by sorting students into three main outcome types – those at high risk, moderate risk, and no risk of failing the course. These three outcomes are then represented as a traffic light (red, orange, and green respectively). The traffic lights serve to provide an early-warning “signal” to both instructor and student (Gašević, Dawson & Siemens, 2014). Few could argue that this is not a noble gesture; the opportunity to intervene before a student fails, and potentially change their path, is every educator’s dream. However, with only data and algorithms making these decisions, we run a very real risk of this knowledge adversely influencing the student. What of the student who, for one reason or another, is not in a frame of mind to be told they must work harder? Might being told they are at risk of failing the course nudge them in an unwanted direction, potentially one where they give up instead of trying harder? In this instance, surely human intervention to decide whether or not students see data about themselves is essential? Are the choices to use technology in this instance for the benefit of the student? Or is it a case of the “warm human and institutional choices that lie behind these cold mechanisms” (Gillespie, 2014), where good intentions are at the heart of the introduction of the technology, but the cold heart of administration and the economies of business are the driving force behind its use?
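
As a purely illustrative sketch of the traffic-light idea, the logic amounts to combining a few VLE-derived measures into a risk score and mapping it to a colour. The inputs, weights and thresholds below are invented for illustration; Purdue’s actual Student Success Algorithm is proprietary and considerably more elaborate.

```python
# Hypothetical sketch only: invented weights and thresholds, not Purdue's formula.

def risk_signal(grade_pct, logins_per_week, submissions_on_time_pct):
    """Combine a few VLE-derived measures into a score, then map it to a traffic light."""
    score = (0.5 * (100 - grade_pct)
             + 0.3 * max(0, 10 - logins_per_week) * 10
             + 0.2 * (100 - submissions_on_time_pct))
    if score >= 50:
        return "red"      # high risk of failing the course
    if score >= 25:
        return "orange"   # moderate risk
    return "green"        # not currently at risk

print(risk_signal(grade_pct=45, logins_per_week=1, submissions_on_time_pct=40))  # red
print(risk_signal(grade_pct=72, logins_per_week=4, submissions_on_time_pct=80))  # orange
print(risk_signal(grade_pct=85, logins_per_week=8, submissions_on_time_pct=95))  # green
```
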
However, the decision to withhold information also comes with its pitfalls in relation to bias. Could, for example, the knowledge that a student behaves in a particular way, or is weaker in some areas and stronger in others, influence how their teacher interacts with them? Would a teacher have different expectations of one new student over another if they had prior knowledge of that student’s strengths and weaknesses? This may not be solely about big data and algorithms, as this type of information can be known on a much smaller scale. But take it up a notch and say a student’s behaviour is on record and shows that the student is prone to anger, outbursts and potentially violence. If we choose not to share that information so as not to unduly bias interactions with that student, would the person who decided to withhold it then be responsible if that student attacked a teacher or another student, when we had knowledge which could potentially have prevented it?


References

Sclater, N., Peasgood, A., Mullan, J., 2016. Learning Analytics in Higher Education, JISC. Available at: https://www.jisc.ac.uk/sites/default/files/learning-analytics-in-he-v3.pdf.

Gašević, D., Dawson, S. & Siemens, G., 2014. Let’s not forget: Learning analytics are about learning. TechTrends, 59(1), pp.64–71. Available at: http://link.springer.com/article/10.1007/s11528-014-0822-x [Accessed December 7, 2016].

Gillespie, T., 2014. The relevance of algorithms. In: T. Gillespie, P. Boczkowski & K. Foot, eds. Media Technologies: Essays on Communication, Materiality, and Society. Cambridge, MA: MIT Press. Available at: https://books.google.com/books?hl=en&lr=&id=zeK2AgAAQBAJ&oi=fnd&pg=PA167&dq=Relevance+Algorithms+Gillespie&ots=GmoJNXY0we&sig=BwtHhKix2ITFbvDg5jrbdZ8zLWA.


Influencing with algorithms: Amazon, YouTube, Google and Facebook oh my….

Although I confess to knowing of the existence of algorithms, and even to seeing their impact on my net use, I’ve never really paid attention to it. My bad. So I am going to specifically play with four tools I use often – Amazon, Facebook, Google and YouTube – to see what impact each has on the others, or how joined up my web use is.

I will look at the impact of searching on Google and see if this permeates through to the other tools, and then systematically do the same for each.

Things to consider – I have an enormous digital footprint; therefore, for the purposes of this experiment, I will be specifically trying to influence it using items I would not normally search for.