Recollections of Miller, V (2011)

Image borrowed from http://thequestionconcerningtechnology.blogspot.co.uk/2014/04/ecotone-renegotiating-boundaries.html

“the fourth discontinuity is yet to be overcome and is the distinction between humans and machines”

“Norbert Wiener suggested that a pilot/aeroplane could be seen as a self-governing mechanism that continually processes and tries to respond to external stimuli under a complex, though ultimately predictable set of rules, in order to maintain homeostasis (that is, stability and control)” (Miller, V. 2011, Chapter 9, p. 211)

A number of interactions I had this morning, both with a learner using the VLE I manage and with the VLE itself, brought me back to thinking about this paper and Wiener’s idea of the man-machine self-governing mechanism.

Whilst out and about, my smart-watch alerted me to a Forum message from a VLE user.  I opened this on my phone to find out the details of the issue, which related to a duplicate account created in error after the learner had been locked out of an existing account.  I logged into the VLE, resolved the issue there and then, and messaged the learner back to let them know everything was sorted.  Whilst I was logged into the VLE, automated notifications alerted me to a couple of small housekeeping tasks that needed completing, and I dealt with those too.  A few minutes later a notification popped up on my smart-watch with a ‘thank you’ from the learner.  Normal service had been resumed.

As Miller proposes, the lines between human and machine in those interactions were certainly blurred. One could argue that an observer might find it difficult to determine whether the machines were serving me or vice versa; or, as Miller suggests, that the machines and I were ‘working together as a self-governing mechanism’.

My connections to my phone, my smart-watch and the remote VLE also reminded me of Donna Haraway’s ‘A manifesto for cyborgs’ and her proposition that “we are all chimeras, theorized and fabricated hybrids of machine and organism” (Haraway, 1991: 149-150), or, as Hand, M. (2008) states, “…what was previously visible as the hardware of technoculture and information culture is now increasingly invisible as the infrastructure of contemporary digital culture”.

References:

Miller, V. (2011) Chapter 9: The Body and Information Technology, in Understanding Digital Culture. London: Sage.

Haraway, D. (2007) ‘A cyborg manifesto’, in Bell, D. and Kennedy, B. M. (eds) The Cybercultures Reader. London: Routledge, pp. 34-65.

Hand, M. (2008) ‘Hardware to everywhere: narratives of promise and threat’, Chapter 1 of Making Digital Cultures: Access, Interactivity and Authenticity. Aldershot: Ashgate, pp. 15-42.

Tweetorial tweets

This page brings together all my tweets, replies and new followers that resulted from the tweet storm (those which I believe are relevant in this context).

Ben Williamson is now following me on Twitter! Bio: Digital data, ‘smart’ technology & education policy. Lecturer @StirUni (1621 followers) http://twitter.com/BenPatrickWill

Dr. GP Pulipaka is now following me on Twitter! Bio: Ganapathi Pulipaka | Founder and CEO @deepsingularity | Bestselling Author | #Bigdata | #IoT | #Startups | #SAP #MachineLearning #DeepLearning #DataScience. (19910 followers) http://twitter.com/gp_pulipaka

 

Michael J.D. Warner is now following me on Twitter! Bio: CEO @ThunderReach ⚡️ #socialmedia #marketing + VIP digital services ➡️ https://t.co/Rf6jA4EIEo • ig @mjdwarner • ✉️ceo@thunderreach.com ⚣ #gay 📍toronto • nyc (98298 followers) http://twitter.com/mjdwarner

 

Featured Heights is now following me on Twitter! Bio: Elevating your #brand with creative websites & engaging marketing. Sharing #marketing, #webDev, #design, #ux & #socialmedia resources. (2281 followers) http://twitter.com/featuredheights

Lumina Analytics is now following me on Twitter! Bio: We are a big data, predictive analytics firm providing insightful risk management & security intelligence to large, regulated corporations & government clients. (10786 followers) http://twitter.com/LuminaAnalytics

 

MuleSoft is now following me on Twitter! Bio: MuleSoft makes it easy to connect the world’s applications, data and devices. (59039 followers) http://twitter.com/MuleSoft

 

Pyramid Analytics is now following me on Twitter! Bio: Bridging the gap between business and IT user needs with a self-service Governed #Data Discovery platform available on any device. #BIOffice #BI #Analytics (6729 followers) http://twitter.com/PyramidAnalytic

 

Kevin Yu is now following me on Twitter! Bio: Co-founder & CTO @socedo transforms B2B marketing with social media by democratizing #CloudComputing & #BigData. Husband of 1, dad of 2, tech and sports junkie. (68521 followers) http://twitter.com/kevincyu

Cheese Lover is now following me on Twitter! Bio: Lover of #cheese and interested in #education (0 followers) http://twitter.com/CheeseLoverBot

Siemens (2013) mind map deconstruction and commentary

Mind-map deconstruction of Siemens, G. (2013) ‘Learning Analytics: The Emergence of a Discipline’.  Click to open a higher-resolution version in a new browser tab. A browser with zoom functionality will be needed to see the detail.

Learning analytics

My experience of learning analytics is at a fairly rudimentary level. The tools I have built into the Academy I manage enable me to look at data from a macro level through to individual learner completions.  I don’t have sophisticated tracking of learner navigation, in terms of click-through, time spent on task and so on, immediately to hand, although some of this information is being recorded, and looking at the subject in more detail this week has prompted me to look again at data that could provide valuable insights.
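Even at this rudimentary level, both the macro and the individual view can be derived from the same completion records. The sketch below is purely illustrative: the learner IDs, module names and record layout are invented assumptions, not my VLE’s actual export format.

```python
from collections import Counter

# Hypothetical completion records of the kind a VLE export might contain:
# (learner_id, module, completed). All IDs and module names are invented.
records = [
    ("l01", "induction", True),
    ("l01", "data-protection", True),
    ("l02", "induction", True),
    ("l02", "data-protection", False),
    ("l03", "induction", False),
]

# Macro level: completion rate per module.
totals, completed_counts = Counter(), Counter()
for learner, module, completed in records:
    totals[module] += 1
    if completed:
        completed_counts[module] += 1
rates = {m: completed_counts[m] / totals[m] for m in totals}

# Individual level: a single learner's record.
learner_record = [(m, done) for l, m, done in records if l == "l01"]

print(rates)
print(learner_record)
```

The same raw records serve both views; the analytics question is simply which way you roll them up.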

Siemens makes the point that “To be effective, holistic, and transferable, future analytics projects must afford the capacity to include additional data through observation and human manipulation of the existing data sets”.  The additional data and human observation I find most valuable in my own professional practice are the insights gained from the social tools I have built into the Learning Academy.  Discussion and blog comments augment and add colour to the otherwise ‘dry’ learning analytics data.  Together these two resources do enable me to “incorporate […] feedback into future design of learning content”, and in some cases into existing content.

I think the output of learning analytics alone, without the additional layer of human-created metadata, would not provide me with sufficient information to judge what learning had taken place or the effectiveness of the materials provided.  As Siemens suggests, “The learning process is creative, requiring the generation of new ideas, approaches, and concepts. Analytics, in contrast, is about identifying and revealing what already exists.” and “The learning process is essentially social and cannot be completely reduced to algorithms.”

Organisational capacity

“Prior to launching a project, organizations will benefit from taking stock of their capacity for analytics and willingness to have analytics have an impact on existing processes.”  I wonder how often this happens in reality.  The business I work for is both purpose and numbers driven, the strategy (hope) being that the former drives the latter.  There is certainly a willingness to react to analytics in all aspects of the business, whether that be customer satisfaction scores, unit sales or learning and development provision.  In my view there is also a danger in reacting immediately to analytics: strategy is a long-game activity, in which cultural and other changes can take months or even years to shift.

Privacy and scope

Siemens raises some important issues around privacy and scope. “Distributed and fragmented data present a significant challenge for analytics researchers. The data trails that learners generate are captured in different systems and databases. The experiences of learners interacting with content, each other, and software systems are not available as a coherent whole for analysis.”  I’ve attempted to combat this by integrating everything into one platform, with a single view of the learner.  Where this hasn’t been possible we’ve gone to great lengths to implement a single sign-on solution, which is not only easier and more convenient for the learner but also helps avoid some of the issues Siemens raises.

From a privacy perspective I’ve implemented as open a model as I’m able to with the data that is available.  I’d love to be able to do more to personalise learning for individual learners, but, as with all commercial operations, this comes back to the three levers of cost, time available and quality achievable.  However, our learners are able to interrogate their own learner record, and they have an achievements wall where all learning completed online is tracked, along with any external achievements the learner wishes to add manually.  They can also see how their achievements compare with those of their peers. In this respect learners can “see what the institution sees”.

All references are from

Siemens, G. (2013) ‘Learning Analytics: The Emergence of a Discipline’. American Behavioral Scientist, 57(10): 1380-1400.

Ben Williamson (2014) video lecture

Having watched Ben Williamson’s video lecture and listened to the Q&As, the final point raised in the Q&As left me reflecting on the nature of this course.  We’ve been actively encouraged to bring many data streams into these Lifestream blogs, such as Twitter, YouTube and Pinterest.  One could argue, as the questioner in the video lecture does, that many of these are essentially ‘non-academic’ resources, with a commercial imperative and built on who knows what theories of learning.  I’ve been conscious all the way through this course of guidance we were given right at the outset, when embarking upon IDEL: to give careful and critical consideration to what constitutes a legitimate academic source.  As such, when researching, I’ve tried to refer to University library resources as much as the social spaces we’ve been encouraged to explore.  I appreciate that use of the latter does perhaps illustrate the socialisation and ‘algorithmification’ of knowledge and learning rather better than the results of a university library search might do, and maybe that’s, at least in part, the point.

Williamson paints a view of the future use of data in the design of interactive books, greater personalisation and adaptive assessment techniques.  A couple of years on, some of these emerging trends have become more established.  For example, the language-learning apps Duolingo and Memrise use algorithms to drive input spacing and recall exercises that are specific to the individual.
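The spacing logic behind such apps is typically some variant of spaced repetition. The sketch below is a toy scheduler loosely modelled on the SM-2 family of algorithms; the function name, quality scale and constants are illustrative assumptions, not Duolingo’s or Memrise’s actual implementation.

```python
from datetime import date, timedelta

def next_interval(prev_interval_days, quality, ease=2.5):
    """Toy spaced-repetition scheduler, loosely modelled on the SM-2
    family of algorithms. `quality` is self-rated recall from 0 (forgot)
    to 5 (perfect). Returns (new_interval_days, new_ease)."""
    # Poor recall: reset the item to be reviewed again tomorrow.
    if quality < 3:
        return 1, ease
    # The ease factor drifts with recall quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * 0.08)
    # Each successful recall stretches the review gap by the ease factor.
    return max(1, round(prev_interval_days * ease)), ease

# An item last reviewed 4 days ago and recalled perfectly is pushed
# further into the future; a forgotten one comes back tomorrow.
interval, ease = next_interval(4, 5)
print("next review:", date.today() + timedelta(days=interval))
print("after a lapse, review in days:", next_interval(4, 1)[0])
```

The individual-specific behaviour comes from the ease factor: each learner-item pair carries its own history, so the same exercise is scheduled differently for different people.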

Another question I found particularly interesting was the potential for the very data social scientists would find invaluable to be inaccessible to them, due to their lacking the coding knowledge required to manipulate it.  The question of whether it is necessary to learn to code was debated at some length, and one questioner raised the point that eventually someone will build a tool that makes learning to do so unnecessary. However, I wonder whether this would raise further questions and concerns, similar to those relating to the agency of commercial organisations in social platforms, in that one might not be fully aware of what such a tool is doing, or of the results it is delivering, without knowledge of code.  The physicist in the lecture audience makes a very similar point regarding algorithms and ‘bought-in analytical packages’.

One theme that keeps returning for me, perhaps because big data, fake news and privacy are so often in the news (this for example, or this) is how we’re constantly being profiled.  Most of this would appear to be for commercial gain rather than for our personal benefit (unless the latter is a happy consequence of the former).  How long before data mining finds its way into our academic records, if it hasn’t already?

References:

Williamson, B. (2014) Calculating Academics: theorising the algorithmic organization of the digital university [video lecture].

 

Comments on Nigel’s EDC Lifestream Blog

Useful and detailed summary here Nigel – do try to stay within the 250-word limit for these, but yes, sometimes one needs to elaborate. It might be worth trying to extend some of these ideas in a separate post.

‘ To me this is clearly a linking theme and I see possibilities in exploring this in my final assignment for this course.’

Sounds good – so this would be oppositions between ‘pure’ humans and ‘distinct’ technologies? Or dystopian and utopian views of technology? I agree that these are strong themes that work through the blocks. It would be important to see the different ways this dualism comes about in the cyber-, community and algorithmic themes though.

I think you’re right to question some of the dystopian visions of algorithms, and I think we do need to trace the social systems they are embedded in. Some of your examples here are good in this respect: an algorithm that controls a prosthetic limb doesn’t make decisions that ‘plug in’ to the same social conditions as, say, a social media news feed. They are both ‘algorithms’, but the ‘algorithmic system’ involved is more complex?

from Comments for Nigel’s EDC blog http://ift.tt/2mFzMWl
via IFTTT

Week 8 Lifestream Summary

It’s been an interesting week experimenting with algorithms. I’ve enjoyed trying to ‘reverse engineer’ the Amazon recommendation algorithm and, ultimately, going some way toward disproving my own hypotheses.
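The kind of item-to-item logic I was probing can be approximated with simple co-purchase counting. The sketch below is a deliberately naive illustration with invented baskets; Amazon’s real system, which it has publicly described as item-to-item collaborative filtering, is far more sophisticated than this.

```python
from collections import defaultdict
from itertools import combinations

# Invented purchase histories: user -> set of items bought.
baskets = {
    "u1": {"kettle", "teapot", "mug"},
    "u2": {"kettle", "mug"},
    "u3": {"teapot", "mug"},
    "u4": {"kettle", "toaster"},
}

# Count how often each ordered pair of items shares a basket.
co_counts = defaultdict(int)
for items in baskets.values():
    for a, b in combinations(sorted(items), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(item, top_n=2):
    """Items most frequently bought alongside `item`."""
    scores = {b: n for (a, b), n in co_counts.items() if a == item}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("kettle"))  # most frequent co-purchase first
```

Even this toy version shows why ‘reverse engineering’ by observation is hard: the output depends entirely on purchase histories you cannot see, so a hypothesis about the rule is easily confounded by the data behind it.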

Reflecting on the cultural aspects of algorithms, I see dichotomies in the views people hold about them similar to those we saw documented in the literature relating to cyberculture and community culture.  To me this is clearly a linking theme, and I see possibilities in exploring it in my final assignment for this course.

As with many other topics, the views people hold are likely to be heavily influenced by the media and, just as all things ‘cyber’ are often painted as worthy of suspicion, this does seem to be a ‘go to’ stance for copywriters and producers when taking a position on algorithms.  The banking crisis is probably the biggest worldwide event that has contributed to this, and other stories, such as reliability issues with self-driving cars or the inaccuracy of ‘precision munitions’, add to the general feeling of unease around the use and purpose of algorithms.  I chose the latter two examples deliberately, as there is a moral as well as a technical aspect to both.

So the stories about algorithms that help people control prosthetic limbs more effectively, or to ‘see’ with retinal implants, or even driverless cars travelling tens of thousands of miles without incident, can be lost amongst more sensationalist stories of those same cars deciding ‘whose life is worth more’ when an accident is unavoidable.

As a result I wonder how much knowledge the general public has of the algorithms that make their day-to-day life a little easier, by better predicting the weather, ensuring traffic junctions cope better with rush-hour traffic, or even just helping people select a movie they’re likely to enjoy.

One could argue that this underlying distrust of algorithms is no bad thing, particularly if this can lead to unbiased critical appraisal of their use in a particular field, as highlighted by Knox, J. (2015) with regard to their use in education:

“Critical research, sometimes associated with the burgeoning field of Software Studies, has sought to examine and question such algorithms as guarantors of objectivity, authority and efficiency. This work is focused on highlighting the assumptions and rules already encoded into algorithmic operation, such that they are considered always political and always biased.”

This week has made me a little more uneasy about the way “algorithms not only censor educational content, but also work to construct learning subjects, academic practices, and institutional strategies” (Knox, 2015).  In my professional practice we do not have systems sophisticated enough to make this a concern, but our learners are exposed to and learn from other systems, and their apprehensions about how we might use their data will no doubt be coloured by their view of ‘big data’.  With that in mind, this is clearly a subject I should have on my radar.

Apologies for writing double the word limit for this summary and for including new content rather than summarising; it’s one of those subjects where, once you start writing, it’s difficult to stop!

References:

Knox, J. (2015) ‘Algorithmic Cultures’. Excerpt from Critical Education and Digital Cultures, in Peters, M. A. (ed.) Encyclopedia of Educational Philosophy and Theory. DOI: 10.1007/978-981-287-532-7_124-1

 

TWEET: Microsoft white paper on the future of education

Link to Microsoft’s white paper on the way education is changing.

“Learning technology is not a simple application of computer science to education or vice versa”

“Universities tend to be proactive in their approach to preparing undergraduates for the world of work. However, as the employment landscape becomes increasingly fluid, universities must constantly update their teaching practices to suit the demands of the jobs market.”

“This means that students and academics could be working on anything up to three internet-enabled digital devices in a single session: a laptop or desktop, a tablet and a smartphone. Students, like most modern employees, are working on the move, at any time of day, in almost any location as work and leisure hours become blurred by increasingly ‘mobile’ lives”  Yes, definitely reflects my life!

“The primary applications for artificially intelligent systems in HE will occur within marking and assessment. Automated systems designed to mark essays, for example, will reduce the time spent by academics on paperwork and increase their face-to-face time with students or their time spent engaging with research occurring outside of the institution.”  I hope this never happens.

“Work in the future will be more interconnected and network-oriented. Employees will be working across specialist knowledge boundaries as technologies and disciplines converge, requiring a blend of technical training and the ‘soft’ skills associated with collaboration.” I think we’re already starting to see this happening.

“Learner or predictive analytics […] can serve to both measure and shape a student’s progress. Universities will also unlock new insight into how students are engaging in digital and physical spaces.”  Very relevant to this algorithmic cultures block.

“Although mobile technology has permanently changed learning environments, all of our interviewees stressed the point that learning technology should be a tool and never the end goal. The ideal university education is still about improving a student’s ability to produce appropriate ideas, solve problems correctly, build on complex theories and make accurate inferences from the available information.” Hurrah!