What kind of digital footprint do students have as a result of their attendance at university?
Consider the traces left behind: swiping a digital identity card to access the school or library, logging into a virtual learning environment (VLE), submitting assessments online, using a cloud or network printer, or charging that same identity card to pay for meals in the canteen, or indeed for printing. Within the VLE, clicks on links, navigation, patterns of use, and reading and writing habits can all be recorded. Connecting to campus wifi opens up students' personal devices to the same potential for data mining. Students are leaving behind them both passive and active digital footprints. But are we being honest with our students about the data being collected, or indeed that it is being collected at all? Have we explained why this information is being collected, how it will be used, how it will be stored and for how long? Most importantly, does the university have a duty of care to ensure students understand the concept of a digital footprint, so that they can make informed choices about how and when to participate?
Uses for the data being collected
There are two types of data which can be collected and used with the intent of improving the student experience at a university:
- Collecting information about students' activities on campus can help manage timetables, staffing, and equipment availability to reduce bottlenecks and improve services.
- Information about reading and writing habits, VLE use, and online submissions can be used to better understand teaching and learning, and to personalise or adapt learning to the student's needs (Siemens, 2013).
However, one option I'm hearing spoken about frequently on our course is the recommendation algorithm used by commercial-sector companies like Amazon and Tesco. Such an algorithm can take information about a student and make recommendations, for instance: "students who took your current course also found this course engaging". As has been pointed out in our tweetorial this week, this could surface options a student might not otherwise have considered, leading to paths of study, and indeed career choices, which could have been missed without such a recommendation. However, as much as I can see benefits in this, and enjoy the use of this kind of algorithm in my personal life, I can also see the opportunity for misuse by both the information provider and the students themselves.
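At its simplest, the "students who took your course also took…" idea is just co-occurrence counting over enrolment records. A minimal sketch, assuming hypothetical course codes and enrolment data invented purely for illustration (real recommenders are far more sophisticated and opaque, which is part of the black-boxing concern):

```python
from collections import Counter

# Hypothetical enrolment records: student -> set of courses taken.
# All names and course codes here are invented for illustration.
enrolments = {
    "alice": {"EDU101", "STAT200"},
    "bob": {"EDU101", "STAT200", "PHIL150"},
    "carol": {"EDU101", "PHIL150"},
    "dave": {"STAT200", "CS100"},
}

def also_took(course, enrolments):
    """Rank other courses by how often they co-occur with `course`."""
    counts = Counter()
    for taken in enrolments.values():
        if course in taken:
            # Count every other course this student took alongside `course`.
            counts.update(taken - {course})
    return [c for c, _ in counts.most_common()]

# "Students who took EDU101 also took..."
print(also_took("EDU101", enrolments))
```

Even this toy version shows where the nudge comes from: the ranking reflects only what past students happened to do, not what a given student might actually want to learn.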
There has been extensive concern over the black-boxing of the algorithms being used: our lack of understanding of how they work, what information they use, and what the intent is. Yeung (2017) describes a mechanism of influence in algorithms of this nature called the "nudge", where the intention is essentially to push the consumer gently in the desired direction. Think of Tesco and the money-off vouchers they send out for certain items, nudging customers to come back to the store, use those vouchers, and hopefully also spend more money on things they didn't intend to buy until they entered the store. I can see a similar use of the recommendation algorithm in universities. After all, universities are businesses which need the income of students taking courses, so encouraging students to buy more courses would be beneficial.
There is another potential for misuse of this algorithm that doesn't seem to have been addressed in our conversations this week, and that is misuse by the student. Depending on the information and recommendations given, could a student choose courses based on the perception, drawn from student reviews and feedback, that one may be easier than another, much like the students who currently choose courses based on assessment criteria because they don't like group work or don't want to sit an exam? Does this push higher education further into a consumer culture, against the premise of improving learning, where students choose studies not based on what they want to learn but on the easiest route to attaining the big bit of paper with the university crest on it?
Siemens, G., 2013. Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), pp.1380–1400.
Yeung, K., 2017. 'Hypernudge': Big Data as a mode of regulation by design. Information, Communication and Society, 20(1), pp.118–136.
- uses determinism – technology is a transparent tool for the realisation of educational aims (this aligns with instrumentalism)
- technological determinism – concerning the effects of technology on the individual and society (aligns with essentialism), and
- social determinism – concerned with the effects of societal concepts in driving changes in, and uses of, technology
My thoughts this week have jumped around information, the sci-fi dystopia, and how it's portrayed. Together, these fit very nicely into the package of George Orwell's book "1984". It depicts a future dystopia where technology isn't the main feature, and so can almost go unnoticed, yet is ever present and used to control the citizens. Again, information is the key to controlling citizens, their behaviour and their thoughts: both the manipulated information given to them and, in return, the information gathered on the citizens themselves to ensure they remain docile subjects.
I've seen this worry about technology when discussing lecture capture: a worry that these tools will be used to spy on colleagues to ensure they are doing their jobs. So there is no specific type of technology which causes concern; even educational technology can be thought of as Big Brother.
Since we are supposedly in the information age, where information is currency, the control of information is highly important. It is easier than ever to find information; however, it is also easier to spread false information. During the Second World War, the British Government set up an entire government department dedicated to spreading the right messages so that its citizens behaved and believed appropriately. Today all it takes is social media.
This is the photo which started it all, posted online and shown by news outlets.
I wasn't there, so how do I know which one is true? The BBC is a trusted news provider, so do I just assume that because they ran with the story it is fact?
Just Pinned to #MSCDE: human-being-desing-white.jpg (300×400) http://ift.tt/2jbt19H
Given my thoughts on cyborgs over the last week, this graphic made me chuckle. Then I saw the bottom, where it says 100% organic, and I thought: what would a non-organic human be? Maybe a cyborg?
I'm still battling a little with the definition of cyborg in Miller (2011), where he suggests that we are indeed cyborgs when we use technology to "normalise", such as glasses to correct vision. I think my issue is based on my conditioning to think of cyborgs in terms of movie culture, where a cyborg is part machine and part human, the machine part usually being skeleton and computer technology. I find the idea of applying this label to something as simple as glasses or a prosthetic limb challenging. However, Miller (2011) also suggests, through Gray et al. (1995), that a cyborg is an organism made of both organic and inorganic materials. I find that term easier to relate to, I think because it refers to an organism; I don't think of cyborgs as human.
The big futuristic question (and one being debated right now) is: do I consider cyborgs to be beings?