I tweeted this after seeing the video in my morning newspaper feed. Here it is. It’s a piece on a new design of radio that uses facial recognition to appraise your mood and play music to match it. At 4 minutes 17 seconds, I think it’s a fantastic summation of much of what I think this course is going to address. For example:
(a) AI designed to have human characteristics – the cheeky smile on the dial, the language of being a ‘buddy’, the offer of “self-care and well-being”. What’s the flip-side, the politics, of this kind of technology? Will the house-bound be offered this as a cheaper alternative to (say) state-funded face-to-face welfare provision, insofar as the latter still exists within the UK? Or, thinking in educational terms, a tutor-bot?
(b) The mystery of the algorithm. The video talks about the “tricks” within the algorithm, and seeks to demonstrate them via everyday graphics, but acknowledges that they’re “not flawless”. Humorously, the presenter talks about technology looking “inside my soul”, but the film also mentions Google’s face-recognition confusions, which offended certain “cultures” and “continents” (see here for a less euphemistic take on this matter). Clearly, technologies have contexts, and they assume and exclude contexts. Education needs to be aware of these positionalities. Back to the video and this radio: apparently monobrows, dimples, beards and make-up can distort outcomes. Perhaps the most chilling line in the piece is the comment that “we all need to look exactly the same for it to be perfect”. It drips and reeks with political connotations. Ironically, the radio is designed by a company called Uniform…
(c) The manipulation of the algorithm. The radio uses Spotify – but, as the video notes, not the whole of Spotify. It draws on just 300 songs, so that “people recognise” the music. Call me old-school, but I have about 80 CDs within reach as I type. Are 300 songs enough for one person, let alone for people in general, let alone for self-care and well-being? Presumably this capacity is highly adjustable, and my elderly relative need not suffer my music catalogue, nor I that of the person with the headset next to me on the last bus I travelled on. But the questions of the filter bubble lurk here. What are the differences, and connections, between music I like and music for people like me – a distinction the video acknowledges? This brings me to a final point:
(d) Technology changes everyday things, education included. The internet of things is an unspoken context for this video. But in all areas of life, education included, ‘smart things’ are both the same and not the same as the artifacts that preceded them. Complex changes occur in “construct[ing] learning subjects, academic practices, and institutional strategies” (J. Knox, ‘Critical Education and Digital Cultures’). So what are the implications for the production of music brought about by Spotify? At the very least: varied, uncertain, and riddled with foreseen and unforeseen changes. And, returning to this video and the radio it introduces, what is the function of music, or of a radio – to confirm a mood or to change it? The answers will, directly and indirectly, influence education. Presumably the algorithm can decide – although, as the video notes, there is still a performative agency for the humans standing before its eye, trying to put on a desired mood. Remember the passport-photo booth when you were a teenager?
There is a human response, at least in the first flush of novelty, to the use of such technologies, but – as the video notes, with a ‘like it or not’ fatalism – we are “increasingly monitored and manipulated” by algorithms. My overall take on the video’s tone is that it coos over this radio as if over a seemingly gifted baby or toddler. As with all radios, it’s the teenage years that will prove interesting.