#mscedc The datafied child: The dataveillance of children and implications for their rights – Jan 23, 2017 https://t.co/HnpZVKitvh
— Renée Hann (@rennhann) February 1, 2017
Lifestream, Tweets
The Internet of Toys is normalising surveillance in the child's playroom https://t.co/RKHKekiVWd
— Ben Williamson (@BenPatrickWill) January 28, 2017
This article caught my attention because of its focus on how the use of digital technologies changes social practices, in this case by normalising a culture of surveillance:
These technologies also normalise surveillance as a cultural and social practice, in the context of the parent–child relationship or children’s relationship with institutional and commercial actors. Children are monitored or encouraged to monitor their own activities (be it health, school performance and/or play practices).
Growing up in a culture where (self-)surveillance is normalised is likely to shape children’s future lives in ways that it is hard to predict.
The article’s focus on digital technologies’ influence on culture echoes Bayne’s (2015) challenge to the notion of technology as neutral and separate from social practices. Bayne, citing Hamilton & Friesen (2013), uses the perspective of science and technology studies to suggest that the narrative surrounding the use of digital technologies, and particularly technology within education, is overly simplistic and reductionist, framing technology as either essentialist or instrumentalist:
Where essentialism attributes to technology a set of ‘inalienable qualities’ immanent to the technological artefact, instrumentalism constructs technology as a set of neutral entities by which pre-existing goals (for example, ‘better’ learning) can be achieved.
The datafication of childhood experience that the article argues normalises a culture of surveillance isn’t implicit only in the SmartToys it discusses, however. Digital technologies have enabled schools to keep more in-depth records of student behaviour and performance, and this record keeping has a significant impact on the lives of both teachers and students. Supposedly, through better record keeping with tools such as ‘Target Tracker’, schools can improve student progress. However, such tools simultaneously work to normalise notions of progression and learning as definitively linear and uniform across students. Similarly, persistent behavioural records, transferred between schools and across levels of schooling, frequently establish behaviour as something in need of modification, without addressing underlying social circumstances. One example of the latter is ClassDojo. According to Williamson and Rutherford (2017):
ClassDojo reinforces the idea that it is the behavioural mindset of the individual that needs to be addressed. Many ClassDojo resources refer to ideas such as ‘character development’ and ‘growth mindset’ that emphasise developing individuals’ resilience in the face of difficulties, but this doesn’t address the social causes of many difficulties children encounter, such as stress about tests.
It is clear in these examples that technology is not neutral; rather, it influences the culture and practice of both learning and teaching and, more generally, of childhood. Yet we are also not merely at the whim of technology: it is school (and at times government) policy which dictates whether applications such as Target Tracker and ClassDojo are used. The relatively widespread acceptance of their use (in my experience, within multiple schools) suggests that, rightly or wrongly, we already accept surveillance societally. Of course, tracking micro-levels of learning (though not terribly effective, in my mind) is slightly less creepy than a doll asking a child where he or she lives and what mummy and daddy do. I’m interested in others’ thoughts: is there actually a difference between surveillance by SmartToys and surveillance by schools, or do we, as a society, need to reconsider the latter as well?
Williamson, B. & Rutherford, A. (2017). ClassDojo poses data protection concerns for parents [blog post]. Retrieved from http://blogs.lse.ac.uk/parenting4digitalfuture/2017/01/04/classdojo-poses-data-protection-concerns-for-parents/
Lifestream, Tweets
@Tauraco v-interesting to see my own bias.Related conversation btw Obama, Joi Ito & Scott Dadich last year: https://t.co/BDwXMyJRll #mscedc https://t.co/48cOLvGRtP
— Renée Hann (@rennhann) January 25, 2017
Deciding that AI moral responsibility will be better than ours may be a lot trickier than we thought. https://t.co/KcNiBNRdV2 #mscedc
— Myles Thies (@Tauraco) January 25, 2017
Lifestream, Tweets
@jennymackness-thanks.Our focus is algorithmic cultures-so it's great to see initiative towards viewpoint diversity.Looking fwd to viewing
— Renée Hann (@rennhann) January 16, 2017
@jennymackness @Digeded fear of isolation.. but it's amplified by non-diverse social networks. ie.Sheehan's 'spiral of silence' (1/2)#mscedc
— Renée Hann (@rennhann) January 16, 2017
@jennymackness @Digeded (2/2)ppl are more likely to voice opinions they perceive to be minority in diverse community https://t.co/eLXSFk4A5n
— Renée Hann (@rennhann) January 16, 2017
@jennymackness @Digeded so for me it's not about how it starts but how it is amplified #mscedc (bad at this 140 character thing..!)
— Renée Hann (@rennhann) January 16, 2017
@jennymackness @mdvfunesof @Digeded #mscedc of course 🙂 but I'm thinking 'amplify' also in sense of tech&practices which shape values (1/2
— Renée Hann (@rennhann) January 16, 2017
@jennymackness @Digeded @mdvfunes eg.getting 'likes'/media socialisation.Course now is Education & Digital Culture- https://t.co/mmaBeEY7hG
— Renée Hann (@rennhann) January 16, 2017
Note: thanks to Jenny Mackness for joining the conversation, sharing more great resources and probing deeper – so much to unpack in block 3 of the course. Distracted by post-humanism this week, but the conversation is still ‘whirring’ in the background.
In particular, I’ve been thinking about changing values, and how changed (technology-driven) communication practices may contribute to those changes – for example, through different peer-affirmation practices and changes in the scale of friendship groups. The starting point for this thinking is a study by Uhls & Greenfield (2011), “The Rise of Fame: An Historical Content Analysis”, which documents changes in reported youth values coinciding with technological innovation – for example, the arrival of YouTube.
Lifestream, Tweet: reply to @Digeded
@Digeded True..but difference is a necessity (for meaning making) within complex systems. https://t.co/hXHXuhvaTZ #mscedc @jennymackness
— Renée Hann (@rennhann) January 16, 2017
Note: This post is more connected to our third block, algorithmic cultures.
Eli Pariser’s (2011) ‘filter bubble’ – the way that algorithms edit the content we are exposed to online, connecting us with similar views and exposing us to advertising and news we are likely to be interested in – came back into the spotlight in the build-up to the 2016 US presidential election*. However, danah boyd (in the article tweeted by Matthew, and also taken up in various other forms, such as boyd’s blog and Data & Society: Points) makes the point that people’s personal choices play a significant role in creating the silos for which personalisation algorithms are frequently blamed.
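To make the filtering mechanism concrete, here is a minimal sketch of similarity-based personalisation – my own illustration, not any platform’s actual code, and the tags and scoring are invented for the example. Items that most resemble a user’s past interests are ranked first, so the feed progressively narrows toward already-held views:

```python
# Minimal sketch of similarity-based feed personalisation.
# Illustrative only: real ranking systems are vastly more complex.

def similarity(user_profile, item_tags):
    """Jaccard overlap between a user's interest tags and an item's tags."""
    return len(user_profile & item_tags) / len(user_profile | item_tags)

def rank_feed(user_profile, items):
    """Order items so those most like past interests come first."""
    return sorted(items,
                  key=lambda item: similarity(user_profile, item["tags"]),
                  reverse=True)

user = {"politics-left", "climate", "education"}
items = [
    {"title": "Climate policy explainer", "tags": {"climate", "politics-left"}},
    {"title": "Opposing view op-ed",      "tags": {"politics-right"}},
    {"title": "EdTech review",            "tags": {"education"}},
]

for item in rank_feed(user, items):
    print(item["title"])
# The dissenting op-ed shares no tags with the profile, so it sinks to
# the bottom of the feed: the 'filter bubble' effect in miniature.
```

Note that nothing in the sketch is sinister in itself; the narrowing is simply a by-product of optimising for predicted interest, which is part of boyd’s point that our own choices feed the silo.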
Why does it matter? While boyd notes that more diverse teams are known to outperform more homogeneous ones, at the crux of her articles is a concern for democratic process: ‘If we want to develop a healthy democracy, we need a diverse and highly connected social fabric.’ For me, this links to Granovetter’s (1973) theory of strong and weak ties, which suggests that people within more diverse social networks (with weak ties between people of diverse backgrounds) are better able to read the opinion climate and are more likely to express opinions they believe to be in the minority.** In turn, a diverse network within which people speak out despite holding minority opinions helps to avoid what Sheehan (2015) refers to as a “spiral of silence”. Within a spiral of silence, some opinions become artificially dominant because minority-view holders are unwilling to speak out for fear of isolation – hence diversity is necessary.
Recently (January 12, 2017), Jenny Mackness offered me two other ways of understanding the necessity of diversity, in the blog post to which my reply to Matthew links. Firstly, rather than a spiral of silence, Mackness refers to “information cascades” or “cascade phenomena” (Downes, 2005, 2007), in which external information overrides individuals’ private information signals, despite possible contradictions between the two. This phenomenon offers further insight into the artificial dominance that some opinions and acts gain, regardless of an apparent lack of truth or of fit with proclaimed values. Secondly, Mackness raises Cilliers’ (2010) proposal that the world is aptly viewed as a complex adaptive system, in which meaning is generated by difference and distorted by a reduction in diversity. Both of these ideas will, I feel, enrich my understanding of the impact of algorithmic cultures in block 3 of #mscedc.
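As a toy illustration of the cascade idea – my own sketch, not drawn from Downes, with made-up parameters – the simulation below gives each agent a noisy private signal about the true state, but has them announce the majority view of earlier speakers once that majority outweighs their single signal. After a short run of identical announcements, later private signals are simply overridden:

```python
import random

# Toy information-cascade simulation (illustrative sketch only).
# truth: the true state (0 or 1); signal_accuracy: probability an
# agent's private signal matches the truth.

def run_cascade(n_agents=20, truth=1, signal_accuracy=0.7, seed=42):
    random.seed(seed)
    announcements = []
    for _ in range(n_agents):
        private = truth if random.random() < signal_accuracy else 1 - truth
        public_lead = announcements.count(1) - announcements.count(0)
        if public_lead > 1:        # public information outweighs the private signal
            choice = 1
        elif public_lead < -1:
            choice = 0
        else:                      # otherwise, follow the private signal
            choice = private
        announcements.append(choice)
    return announcements

print(run_cascade())
# Once the early announcements align, every later agent ignores their
# own signal: external information overrides private signals, and a
# possibly wrong consensus becomes artificially dominant.
```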
*For a demonstration of how conservative and liberal Facebook feeds tend to differ, see http://graphics.wsj.com/blue-feed-red-feed/#/, based on a study by Bakshy, Messing & Adamic (2015).
**I previously wrote (briefly) about weak and strong ties in the context of supporting open networked learning, not just for its value to individuals but also to society, here. The paper as a whole is on vulnerability within socially networked learning and the emergence of new student identity roles.