“By the late twentieth century, our time, a mythic time, we are all chimeras, theorised and fabricated hybrids of machine and organism; in short, we are cyborgs. The cyborg is our ontology; it gives us our politics”
Haraway, D (2007) A cyborg manifesto
References:
Haraway, D. (2007) A Cyborg Manifesto, in Bell, D. and Kennedy, B.M. (eds) The Cybercultures Reader, pp. 34–65. London: Routledge.
This week the Miller chapter, along with the Film Festival chat, has firmed up a few of my emerging thoughts about the human relationship to tech, particularly around disembodiment and the importance of the voice.
The importance of the voice in digital learning materials
During the week I’ve been creating some learning materials and I wanted to include a voice-over to introduce each section. With this week’s reading at the back of my mind, but lacking the time to record someone ‘real’, I decided to use an online text-to-speech site, the output of which sounded almost indistinguishable from a ‘live’ actor, the emphasis being on almost. The subtle nuances and imperfections of natural speech were missing and, while the end result was a very close facsimile of the ‘real thing’, the automation was still evident.
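(As an aside, the mechanics of producing this kind of narration are now trivially scriptable. The snippet below is only a minimal sketch, not what I actually used: it assumes the gTTS Python library and some invented section scripts, and batch-generates one audio clip per section.)

```python
# Minimal sketch: batch-generating voice-over clips with text-to-speech.
# Assumes the gTTS package (pip install gTTS); the section scripts below are
# invented placeholders, not the actual learning materials.
from gtts import gTTS

sections = {
    "introduction": "Welcome to this week's learning materials.",
    "section_1": "In this section we look at the body and information technology.",
}

for name, script in sections.items():
    clip = gTTS(text=script, lang="en", slow=False)  # synthesise one narration clip
    clip.save(f"{name}.mp3")                         # write it to disk as MP3
    print(f"Saved {name}.mp3")
```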
So I decided to find out whether there is any research indicating a difference in learning when material is presented by a human voice versus a synthesised voice. Writing about the results of two experiments, Mayer, Sobko, and Mautone (2003) found that students performed better in a transfer test and rated the speaker more positively if the narrator had a standard accent rather than a foreign accent (Experiment 1), and if the voice was human rather than machine-synthesised (Experiment 2). So does this mean that learners will always respond better to an on-screen human tutor than to a computer-generated equivalent? There is research indicating that, given the right circumstances, people will treat computers in the same way as humans. Reeves and Nass (1996) found that people comply with social conventions and are polite to computers when asked to evaluate one directly, compared with evaluating it from a different computer (the equivalent of giving face-to-face feedback versus giving feedback about someone to a third party).
Moreno, Mayer, Spires, and Lester (2001) found that there was little difference in the test performance of students learning about botany principles from a cartoon on-screen tutor compared to an on-screen human tutor. They also found that students learned equally well even when there was no on-screen tutor at all, so long as they could hear the tutor’s voice. This suggests that voice quality and clarity are more important than whether the voice is human or not.
My own experience of being ‘fooled’ by automated telephone services suggests that it will not be long before AI is indistinguishable from a human agent. The more recent Mayer, Sobko, and Mautone (2003) experiments suggest that this could be beneficial to those producing digital learning materials, whereas the Moreno, Mayer, Spires, and Lester (2001) experiment indicates that it might not make much difference.
Visualising the concepts in Miller, V (2011)
I’m continuing to mindmap the set readings and other related texts I’ve researched. At this stage the maps are just my way of visualising the concepts and arguments so that I can see how they fit together; currently they don’t offer any critical examination of the texts.
This is my mindmap of the Miller text; it’s at a better resolution than the previous maps, which I will update when I revisit them.
References:
Miller, V. (2011) Chapter 9: The Body and Information Technology, in Understanding Digital Culture. London: Sage.
Mayer, R.E., Sobko, K., & Mautone, P.D. (2003). Social cues in multimedia learning: Role of speaker’s voice. Journal of Educational Psychology, 95, 419–425.
Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.
Moreno, R., Mayer, R.E., Spires, H., & Lester, J. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19, 177–214.
On one level I admire the idea of harnessing the combined efforts of millions of individuals to solve a problem. I’m aware that similar ‘crowdsourcing’ has been used to identify potentially habitable planets and to identify abnormal cells. In a similar vein, I tried (unsuccessfully) to get the company I work for involved in donating the processing power of our PCs to cancer research while the computers were not being used at night.
My only issue with this type of distributed / networked effort is when it’s done in a covert way. I’ve mentioned the ulterior motive of reCAPTCHA to a few friends and work colleagues, and none of them knew that it was being used to digitise books. As a result, their first reaction was a feeling of having been ‘used’, regardless of whether digitising the books in question would be for the greater good.
In my view this type of ‘covert’ activity, however well intended, risks adding to public fears about the misuse of data.
So far, these extracts from ‘What’s the matter with “technology-enhanced learning”?’ sum up some of the big questions for me:
“Yet after science and technology have worked over all human limitations […] the transhumanists claim that something essentially ‘human’ will still remain: ‘reason, intelligence, self-realization, egalitarianism’. Technology here simultaneously, and paradoxically, enables both the transcendence and the preservation of the human.”
“A critical posthumanist position on technology and education would see the human neither as dominating technology nor as being dominated by it. Rather it would see the subject of education itself as being performed through a coming together of the human and non-human, the material and the discursive. It would not see ‘enhancement’ as a feasible proposition, in that enhancement depends on maintaining a distinction between the subject/learner being enhanced and the object/technology ‘doing’ or ‘enabling’ the enhancement.”
“It is time to re-think our task as practitioners and researchers in digital education, not viewing ourselves as the brokers of ‘transformation’, or ‘harnessers’ of technological power, but rather as critical protagonists in wider debates on the new forms of education, subjectivity, society and culture worked-through by contemporary technological change.”
Bayne, S. (2015) What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. DOI: 10.1080/17439884.2014.915851
“Let the intellect alone, it has its usefulness in its proper sphere, but let it not interfere with the flowing of the life-stream.”
Daisetsu Teitaro Suzuki