Gauging the reaction from my fellow participants, the last few weeks have been a veritable sprint to the finish! And while this was a collective experience and learning event, there was, to my mind, an underlying sense of competitiveness about it.
The Twitter debates and questions in week 9 were the equivalent of an 800m dash, with users jockeying for position, looking for an insightful advantage and generally trying to beat out some of the other participants. I alluded to some of these elements in my previous post.
This should not come as a surprise, though. Humans are by their very nature competitive and will continuously seek out advantages, even as part of a harmonious and cooperative societal structure. That tiny bit of advantage gained, or that minor piece of recognition, has a very self-satisfying feel to it and for many is rather addictive.
However, it appears that competitiveness has been under attack in academia for some time, labelled an unhelpful by-product of the quest for performance. It is constantly being managed or curtailed in some way and wrapped up in sickly sweet quips like ‘You’re only competing against yourself’. Why are we fighting this? The ultimate contradiction must surely be gamification, a massive byword in digital education these days.
Knox alludes to this partly in his Community Cultures piece – the emphasis has turned from learning as an individual internalisation to one based primarily on a social construct.
So while this data has been very revealing about us and our community, what else is it saying about us as individuals and about what motivates us – the deeper layers?
Ultimately, learning analytics has been devised to improve the performance of each individual, but are we perhaps ignoring one of its biggest advantages? If we can develop sophisticated analytical and insightful measures to enhance performance in learning, can’t we also create ways to apply them to the fundamental human instinct of competition, while simultaneously breeding out its most distasteful parts?
In critiquing our EDC Week 9 Twitter discussion I was hoping to draw out some easy-to-read trends and findings. However, on closer inspection, it appears that our little study on the use of data has been anything but easy to understand. Without taking a statistical viewpoint, how well does this exercise really demonstrate the use of data as a means by which to judge performance, or even participation? Taking it further: if, as educators, we were to use the data as a form of assessment, could we be certain that we are indeed seeing the full picture? To my mind, the mini study of our activity is somewhat like an anamorph – ‘A distorted or monstrous projection or representation of an image on a plane or curved surface, which, when viewed from a certain point, or as reflected from a curved mirror or through a polyhedron, appears regular and in proportion; a deformation of an image’ (Source: anamorphosis.com).
I offer the following to validate this view:
Volume of Tweets
User phillip_downey’s top count of 70+ tweets put him well clear of even second place on the list. Was this part of an ulterior motive to ensure the highest number, or was there a genuine accompanying development or promotion of learning or capability? Without a demonstrable mechanism to determine whether the latter is the case, the volume achieved does not indicate anything other than a sort of ‘gaming’ of the process. This data, in the hands of the LA uninitiated, could be very misleading.
It is interesting to note that words 9 and 20 on the top-words list (I’m and I’ve) are both contractions built on the pronoun ‘I’. Users, it shows, are continuously internalising to understand all that is presented through the online, tweet-based discussion. But what is this saying about us as an online community that has been interacting at length these past 9 weeks? Where are the ‘us’, ‘we’ and ‘we’ll’? We are, it could be posited, still islands in the vast connected ocean of the web. Maybe we have become a chain of common, closer islands, but islands we remain. What does this say for the theory of a community of learning?
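The sort of word count behind that top-words list can be sketched in a few lines of Python. The tweets here are invented stand-ins for illustration, not the actual week-9 archive:

```python
from collections import Counter
import re

# Invented example tweets (not real data from the week-9 discussion).
tweets = [
    "I'm not sure I've grasped learning analytics yet",
    "I think we should compare notes as a group",
    "I've tweeted far more than I'm comfortable admitting",
]

# Tokenise on letters and apostrophes so contractions like "i'm" survive intact.
counts = Counter(w for t in tweets for w in re.findall(r"[a-z']+", t.lower()))

print(counts.most_common(5))
```

A tokeniser that split on apostrophes would fold ‘I’m’ and ‘I’ve’ back into ‘I’, which is worth remembering when reading any such leaderboard.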
Sources of Tweets
Tweetdeck was by far the most popular application of choice by which to receive, view and disseminate tweets. Although I have made use of it in the past, I didn’t on this occasion and was limited to the fourth most used medium, Twitter for Android. Does technical supremacy (a bigger gun?) provide a medium for greater tweet volumes? (Quick! Someone call CSI Miami to cross-reference Source with Volume of Tweets…) I think this points to a potential danger in the real use of LA: administrators assume all users are the same in terms of social status, wealth, culture and behaviour. Where is the social study of the data, and what would it reveal? Is LA only good for these people or those kinds of learners? These are the social and educational inequalities described by Eynon (2013). Discrimination, as we have learned, can be automated too.
Let’s be honest, the facilitators hold the centre and are critical to the success of this exercise, as the data demonstrates. Can we claim to be a high-functioning learner body with a maturity level to match? Personally, I’m not that confident we could have pulled off this exercise as well if Jeremy and James hadn’t led with the questions. But to be fair, that was the brief, so perhaps that is a bit harsh? What is positive, though, is that this exercise demonstrates to me just how important the modern teacher is and what an effect they have in guiding the development of thinking and learning on the web.
#mld2017? #immersivetechnologies? #totallybroke? I’m confused – was this part of my discussion stream? Confusing or unaccountable data that I can’t relate to reveals either that I have missed out on a large section of learning or an important experience, or that it is totally irrelevant – which is it? Having some inkling of what the data should reveal about activity is important, no? Isn’t that the point of LA? ‘Algorithmic cultures described a current phase in which automated computer operations process data in such a way as to significantly shape the contemporary categorising and privileging of knowledge, places and people’ (Knox, 2015).
From the graph provided, it appears one or two tweets from Crafty_AI delivered some major bang for the buck. How should this be considered in the greater context of our data? What if one insightful comment, an influential user’s action, or even a minor collective action could skew an entire reading of LA to the point where administrators or facilitators adjust course on a learning programme in response? Are we even comfortable potentially leaving this to more competent AIs in future, which could do the same?
I think this, and all of the above, points to the fact that we don’t really know enough about what we see and create in our own data (Knox’s Abstracting Learning Analytics (2014) – the abstract art angle personified). Just as the anamorph is distorted and misshapen from our current viewpoint, we still, perhaps, need to develop a method of assuming an oblique vision so that its true representation comes into view.
For obvious reasons, Google has a vested interest in reducing the time it takes to load websites and services. One method is reducing the file size of images on the internet, which it previously pulled off with the WebP format back in 2014, shrinking photos by 10 percent. Its latest development in this vein is Guetzli, an open-source algorithm that encodes JPEGs that are 35 percent smaller than currently produced images.
As Google points out in its blog post, this reduction method is similar to its Zopfli algorithm, which shrinks PNG and gzip files without needing to create a new format. RNN-based image compression, like WebP, on the other hand, requires both client and ecosystem to change to see gains at internet scale.
If you want to get technical, Guetzli (Swiss German for “cookie”) targets the quantization stage of image compression, wherein it trades visual quality for a smaller file size. Its particular psychovisual model (yes, that’s a thing) “approximates color perception and visual masking in a more thorough and detailed way than what is achievable” in current methods. The only tradeoff: Guetzli takes a little longer to run than compression options like libjpeg. Despite the increased time, Google’s post assures that human raters preferred the images churned out by Guetzli. Per the example below, the uncompressed image is on the left, libjpeg-shrunk in the center and Guetzli-treated on the right.
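To put that 35 percent figure in concrete terms, here is a back-of-the-envelope sketch; the file size is hypothetical, chosen purely for illustration:

```python
# Hypothetical example of what Guetzli's claimed ~35% saving over libjpeg
# means for a typical photo (the starting size is invented for illustration).
libjpeg_bytes = 250_000
guetzli_bytes = int(libjpeg_bytes * (1 - 0.35))

print(f"libjpeg: {libjpeg_bytes} B -> Guetzli: {guetzli_bytes} B")

# The encoder itself is a command-line tool, invoked roughly like:
#   guetzli --quality 84 input.png output.jpg
# (per the project's README, 84 is the lowest quality setting it accepts).
```

The quality floor reflects the design trade-off described above: Guetzli spends its gains in the quantization stage rather than by degrading the image to visibly lower quality.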