Ben Williamson (2014) video lecture

Having watched Ben Williamson’s video lecture and listened to the Q&As, the final point raised in the Q&As left me reflecting on the nature of this course.  We’ve been actively encouraged to bring many data streams into these Lifestream blogs, such as Twitter, YouTube and Pinterest.  One could argue, as the questioner in the video lecture does, that many of these are essentially ‘non-academic’ resources, with a commercial imperative and built on who knows what theories of learning.  Throughout this course I’ve been conscious of guidance we were given right at the outset of IDEL: to give careful and critical consideration to what constitutes a legitimate academic source.  As such, when researching, I’ve tried to refer to University library resources as much as the social spaces we’ve been encouraged to explore.  I appreciate that use of the latter perhaps illustrates the socialisation and ‘algorithmification’ of knowledge and learning rather better than the results of a university library search might, and maybe that is, at least in part, the point.

Williamson paints a picture of the future use of data in the design of interactive books, greater personalisation and adaptive assessment techniques.  A couple of years on, some of these emerging trends have become more established: the language-learning apps Duolingo and Memrise, for example, use algorithms to drive input spacing and recall exercises tailored to the individual.
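To make the idea of algorithmically driven input spacing concrete, here is a minimal sketch of a Leitner-style spaced-repetition scheduler. This is only an illustration of the general technique, not Duolingo’s or Memrise’s actual implementation; the box intervals and function names are my own assumptions.

```python
from datetime import date, timedelta

# Days between reviews for each Leitner box: items the learner recalls
# correctly move to a box with a longer interval; lapses reset the item.
INTERVALS = [1, 2, 4, 8, 16]

def review(box: int, correct: bool) -> int:
    """Return the item's new box after a recall attempt."""
    if correct:
        return min(box + 1, len(INTERVALS) - 1)  # promote, capped at the top box
    return 0  # a lapse sends the item back to daily review

def next_due(box: int, today: date) -> date:
    """Schedule the next recall exercise after the box's interval."""
    return today + timedelta(days=INTERVALS[box])
```

For example, an item in box 2 reviewed correctly on 1 January moves to box 3 and is next due four days later; get it wrong and it returns to box 0 for review the next day. Real apps layer per-learner statistics on top of this basic spacing idea.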

Another question I found particularly interesting concerned the potential for the very data social scientists would find invaluable to be inaccessible to them because they lack the coding knowledge required to manipulate it.  Whether it is necessary to learn to code was debated at some length, and one questioner raised the point that eventually someone will build a tool that makes doing so unnecessary.  However, I wonder whether this would raise further questions and concerns similar to those relating to the agency of commercial organisations in social platforms, in that without knowledge of code one might not be fully aware of what such a tool is doing, or how it arrives at the results it delivers.  The physicist in the lecture audience makes a very similar point regarding algorithms and ‘bought in analytical packages’.

One theme that keeps returning for me, perhaps because big data, fake news and privacy are so often in the news (this for example, or this), is how we’re constantly being profiled.  Most of this would appear to be for commercial gain rather than for our personal benefit (unless the latter is a happy consequence of the former).  How long before data mining finds its way into our academic records, if it hasn’t already?

References:

Williamson, B. (2014) Calculating Academics: theorising the algorithmic organization of the digital university [video lecture]
