The Internet of Toys is normalising surveillance in the child's playroom https://t.co/RKHKekiVWd
— Ben Williamson (@BenPatrickWill) January 28, 2017
This article caught my attention because of its focus on how the use of digital technologies changes social practices, in this case by normalising a culture of surveillance:
These technologies also normalise surveillance as a cultural and social practice, in the context of the parent–child relationship or children’s relationship with institutional and commercial actors. Children are monitored or encouraged to monitor their own activities (be it health, school performance and/or play practices).
Growing up in a culture where (self-)surveillance is normalised is likely to shape children’s future lives in ways that it is hard to predict.
The article’s focus on digital technologies’ influence on culture echoes Bayne’s (2015) challenge to the notion that technology is neutral and separate from social practices. Bayne, citing Hamilton & Friesen (2013), uses the perspective of science and technology studies to suggest that the narrative surrounding the use of digital technologies, and particularly technology within education, is overly simplistic and reductionist, framing technology as either instrumental or essentialist:
Where essentialism attributes to technology a set of ‘inalienable qualities’ immanent to the technological artefact, instrumentalism constructs technology as a set of neutral entities by which pre-existing goals (for example, ‘better’ learning) can be achieved.
The datafication of childhood experience that the article argues normalises a culture of surveillance isn’t only implicit in the SmartToys it discusses, however. Digital technologies have enabled schools to keep more in-depth records of student behaviour and performance, and this record keeping has a significant impact on the lives of both teachers and students. Supposedly, through better record keeping with tools such as ‘Target Tracker’, schools can improve student progress. However, such tools simultaneously work to normalise notions of progression and learning as definitively linear and uniform across students. Similarly, persistent behavioural records, transferred between schools and across levels of schooling, frequently establish behaviour as something in need of modification, without addressing underlying social circumstances. One example of the latter is ClassDojo. According to Williamson and Rutherford (2017):
ClassDojo reinforces the idea that it is the behavioural mindset of the individual that needs to be addressed. Many ClassDojo resources refer to ideas such as ‘character development’ and ‘growth mindset’ that emphasise developing individuals’ resilience in the face of difficulties, but this doesn’t address the social causes of many difficulties children encounter, such as stress about tests.
It is clear in these examples that technology is not neutral, but rather influences the culture and practice of both learning and teaching and, more generally, of childhood. Yet we are also not merely at the whim of technology – it is school (and at times government) policy, for example, which dictates whether applications such as Target Tracker and ClassDojo are used. The relatively widespread acceptance of their use (in my experience across multiple schools) suggests that, rightly or wrongly, we as a society already accept surveillance. Of course, tracking micro-levels of learning (though not terribly effective in my mind) is slightly less creepy than a doll asking a child where he/she lives and what mummy and daddy do. I’m interested in others’ thoughts – is there actually a difference between surveillance by SmartToys and surveillance by schools, or do we, as a society, need to reconsider the latter as well?
Williamson, B. & Rutherford, A. (2017). ClassDojo poses data protection concerns for parents [blog post]. Retrieved from http://blogs.lse.ac.uk/parenting4digitalfuture/2017/01/04/classdojo-poses-data-protection-concerns-for-parents/
Great article to bring in here – I really like Ben Williamson’s work. I was wondering how you were linking this to the ‘cybercultures’ themes? I can certainly see how it will link to our algorithmic cultures block later in the course – worth keeping it in mind for then.
Thinking about your post on ethics and AI (from Ghost in the Shell), there might be something in considering how this article points to other (and perhaps more critical!) ethical perspectives. Rather than thinking about the ‘human rights’ of the machine (the classic ‘cyberculture’ kind of position), we might think about the ethics of human/machine relations. The ones described here – the accounting of participation, the normalisation of linear progress, the persistent comparison of human behaviour to data sets – seem to be relations with significant ethical dimensions.
Where technology influences culture (and clearly vice versa), should we be talking more about ‘ethical relations’ than assigning ‘rights’?
Thanks for your comment, Jeremy. You are quite right, the link to cybercultures is tenuous at best! Having recently read Siân Bayne’s (2015) paper, I was more focused on the ‘what is wrong with technology’ criticism of TEL terminology (i.e. narratives which separate technology from social practice). Here the connection is, as you have highlighted, more directly linked to our block on algorithmic cultures. I guess, though, in my mind there is also a loose connection (not made in my post!) between these new ‘smart’ toys and those of J.F. Sebastian in Blade Runner. I find that world to be quite a lonely one, where human–human friendships are replaced by human–machine ones.
Also, there’s the idea of enhancement: if we extend the discussion beyond toys with voice recognition that collect data and ‘speak back’, to include the full range of digital technologies marketed at children, we encounter discourses which position these toys as necessary for children’s development of digital literacies and future life opportunities, and as enhancing their cognitive growth. As Bayne (2015, p. 11) highlights, such narratives of cognitive enhancement create ‘a discursive link with transhumanism’.
On this last point, a friend who is parent to an eleven-year-old and a seven-year-old in Hong Kong recently raised her concerns about the pressures placed on parents (largely, I would say, by corporate marketing, but also from within education and between parents) to future-proof their children by giving them access to digital tools and, for example, programmes which teach them to code. My friend argued that it was more important to focus on developing distinctly human qualities, such as empathy and imagination, but that this focus was at odds with the educational and cultural perspectives surrounding her. While I don’t disagree with the need to develop empathy and imagination, I also don’t see (digital) technology-rich environments as necessarily limiting them. There still seems to be an either/or tension at play – technological or human, scientific or creative – which reflects the human/cyborg opposition within much of cyberpunk. To me, there seems to be room for more non-binary thinking within discussions of who we (or our children) might be as humans in a future so enmeshed in digital technologies.
With regard to your point about ‘ethical relations’ rather than ‘rights’: I feel that the ‘human rights’ of the machine would become significant if we do reach the point of sentient machines. However, focusing on this now could be a distraction from more pressing ethical concerns. Thanks for highlighting this.