The importance of the voice in digital learning materials

This week the Miller chapter, along with the Film Festival chat, has firmed up a few of my emerging thoughts about the human relationship to technology, particularly around disembodiment and the importance of the voice.

The importance of the voice in digital learning materials

During the week I’ve been creating some learning materials and I wanted to include a voiceover to introduce each section.  With this week’s reading at the back of my mind, but lacking the time to record someone ‘real’, I decided to use an online text-to-speech site. The output sounded almost indistinguishable from a ‘live’ actor, the emphasis being on almost.  The subtle nuances and imperfections of natural speech were missing and, while the end result was a very close facsimile of the ‘real thing’, the automation was still evident.

So I decided to find out whether there is any research indicating a difference in learning when material is presented by a human voice versus a synthesised voice.  Writing about the results of two experiments, Mayer, Sobko, and Mautone (2003) found that students performed better in a transfer test and rated the speaker more positively if the narrator had a standard accent rather than a foreign accent (Experiment 1) and if the voice was human rather than machine-synthesised (Experiment 2).  So does this mean that learners will always respond better to an on-screen human tutor than to a computer-generated equivalent? There is research indicating that, given the right circumstances, people will treat computers in the same way as humans.  Reeves and Nass (1996) found that people will comply with social conventions and be polite to computers when asked to evaluate them directly, as compared to evaluating one computer from a different one (the equivalent of giving face-to-face feedback versus giving feedback about someone to a third party).

Moreno, Mayer, Spires, and Lester (2001) found that there was little difference in the test performance of students learning about botany principles from a cartoon on-screen tutor compared to an on-screen human tutor.  They also found that students learned equally well even when there was no on-screen tutor at all, so long as they could hear the tutor’s voice. This suggests that voice quality and clarity are more important than whether the voice is human or not.

My own experience of being ‘fooled’ by automated telephone services suggests that it will not be long before AI is indistinguishable from a human agent.  The more recent Mayer, Sobko, and Mautone (2003) experiments suggest that this could be beneficial to those producing digital learning materials, whereas the Moreno, Mayer, Spires, and Lester (2001) experiment indicates that it might not make much difference.

Visualising the concepts in Miller, V (2011)

I’m continuing to mindmap the set readings and other related texts I’ve researched.  At this stage the maps are just my way of visualising the concepts and arguments so that I can see how they fit together; they don’t yet offer any critical examination of the texts.

This is my mindmap of the Miller text. It’s at a higher resolution than the previous maps, which I will update when I revisit them.

Mindmap of Miller, V. (2011) Chapter 9: The Body and Information Technology. Right click and select ‘open in new tab’ to view full screen and enable zooming.

Miller, V. (2011) Chapter 9: The Body and Information Technology, in Understanding Digital Culture. London: Sage.

Mayer, R.E., Sobko, K., & Mautone, P.D. (2003). Social cues in multimedia learning: Role of speaker’s voice. Journal of Educational Psychology, 95, 419–425.

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. New York: Cambridge University Press.

Moreno, R., Mayer, R.E., Spires, H., & Lester, J. (2001). The case for social agency in computer-based teaching: Do students learn more deeply when they interact with animated pedagogical agents? Cognition and Instruction, 19, 177–214.

Film festival and the strength of feeling about machines

There were some fairly strong emotions expressed in the text chat about this week’s films, of the ‘but it’s a machine’ variety.  I absolutely get the point being made, but I can’t help wondering whether these are learnt or societal prejudices, or whether they stem simply from the loss of innocence that comes with age and wisdom.

Child with Mickey Mouse

From my observations it would appear that very young children show little difference in behaviour whether they’re conversing with a robot, a 5ft mouse or a fellow human. They’re enthralled by TV programmes presented by a variety of creatures and will throw themselves at their favourite Disney character without a second thought.  Maybe it’s only later in life that we start to make a distinction between sentient and non-sentient beings.

As Miller states in ‘The Body and Information Technology’, “bodies are interpreted through the lens of culture and shaped by social forces”.

Week one Lifestream summary

Homer Simpson, too many options (to choose from)

Having arrived a week late to the course, I’m re-designating the halfway point of week two as the end of week one and summarising where I’ve got to so far.

I’m already finding both the course content and the course format fascinating and thought-provoking.  As well as interchanges with other course participants, it has prompted numerous discussions outside of the course.

There are so many topics that interest me, from mass-participation problem solving, through artificial intelligence and the privileging of certain information through algorithms, to more fundamental questions about what it means to be human or transhuman.  The challenge is proving to be staying on one topic long enough to gain a useful level of understanding, as well as remembering that it’s the educational aspects of these big questions that matter most, at least as regards this course.

I’m always looking for practical applications from this course and I can see parallels in my workplace.  Recently we have been using iPads in our practices in a way that allows our people to learn about our products and services alongside our customers, for example through interactive demonstrations augmented by questions and explanations from the colleague.  In this context, this human-machine combination provides an experience that could not be delivered as effectively by either alone.

Tomorrow is my first experience of the film festival. Having viewed the clips from the first ‘festival’ alone I’m looking forward to the shared experience and discussion.

*(Citation to be added)

TWEET: Chimeric bio-tech futures

via Twitter
January 26, 2017 at 09:14PM

Michael Specter, writing in National Geographic Magazine, raises the following question:

“The ability to quickly alter the code of life has given us unprecedented power over the natural world. Should we use it?”

Frightening statistics from the USA show that every ten minutes someone is added to the list of people requiring an organ transplant, and every day twenty-two people on that list die without receiving the organ they need.  If you’re on that list, or a family member or friend of someone on that list, your views about altering genetic code and human-pig chimeras might be influenced by the potential to grow the organ you need inside an animal from another species.

Considering that we already farm the same potential donor animals for food, the moral implications might not be so difficult for people to deal with, unless you’re already opposed to the way we treat other species.  Either way, the ethical considerations feel like an even bigger issue.  We know that random mutations already occur in all lifeforms and that they are the basis of evolution through natural selection.  But do we truly understand the implications of introducing into the human gene pool genetic information that has resulted from thousands of years of evolution of a different species? The effect could be more immediate than one might imagine – what if doing so were to hasten the development of drug-resistant bacteria?

At first sight none of this might seem relevant to digital cultures, so why have I linked this to my Lifestream?  Well initially it was just the serendipity of reading about chimeric bio-futures (a term I have to admit is new to me) on the day the announcement linked above was made.  But, having reflected further, I think a similar moral question can be put to the technologies we apply to education and learning too:

“Just because we can, should we?”

Is Amazon Alexa’s apparent inability to answer some questions actually an aid to learning?

The Amazon Echo Dot
The Amazon Echo Dot

Earlier this week I Tweeted a link to two conflicting views on reCaptcha and the ‘ulterior motive’ it has of assisting Google in digitising books.

This got me thinking about the motives other connected devices I use might have, in particular the Amazon Echo Dot, powered by Amazon’s AI ‘Alexa’.

Alexa often struggles to answer a question if it’s poorly phrased, whereas ‘OK Google’ and ‘Siri’ seem able to make a good go of interpreting even the most poorly articulated query.  But from an educational point of view, aren’t the latter two doing the user a disservice?  By forcing the user to articulate their question better, Alexa might (probably unintentionally) improve their questioning skills and maybe even their vocabulary.  In reality most will simply put Alexa’s inability to answer down to ‘her’ failings rather than their own, but it’s an interesting thought.

Similarly, I often use voice-to-text software for note taking, and this has come a long way since I was part of a pilot to test it in an open-plan office setting.  In the early days the user had to enunciate very clearly for the text produced on screen to bear any resemblance to what they had said.  Today, improvements in both software and hardware allow relatively sloppy diction to produce accurate results but, following the same line of thinking as above, is that always a good thing?

TWEET Why watching Westworld’s robots should make us question ourselves

This article neatly sums up a lot of the points we’ve debated in this cyberculture block and raises some valid questions about how we will relate to robots as they become more humanoid and at least appear to have their own thoughts and feelings.

“Even if robots are just tools, people will see them as more than that. […] It may be an innate tendency of our profoundly social human minds to see entities that act intelligently in this way. […] It may be difficult to persuade them to see otherwise, particularly if we continue to make robots more life-like. If so, we may have to adapt our ethical frameworks to take this into account. For instance, we might consider violence towards a robot as wrong, even though the suffering is imagined rather than real.” 

I watched this remake of Westworld before I started this course and, even without the added academic stimulus, it brought some interesting moral questions to mind and prompted discussion about how those I watched the series with would react in the same situation.

The status of the robots in the series as ‘tools’ was emphasised in a number of ways, not least in that whenever they were taken out of the park to be worked on they were left naked.  While the nudity is probably there for titillation and to attract viewers, it makes it easier to identify who is ‘real’ and who is a machine.  It also shows that the humans feel the machines do not need to be treated with any respect.

To me the point the author of the article linked in the above tweet makes about violence towards robots has one important dimension missing.  If a robot is very ‘life like’ but it is considered acceptable to abuse it in some way, how long before similar abuse toward other humans becomes acceptable?

Not long after watching the Westworld series I watched the film ‘Hidden Figures’ about the black women who were ‘human calculators’ for NASA during the early space race.  The film documents how they were treated and the segregation they faced in both their working and social lives.  I’ve never experienced people of a different skin colour or racial background being treated in this way, so the feelings of anger and revulsion I felt when watching the film were raw and painful.  For my twenty-seven-year-old son they were truly upsetting, and I have to say that this at least gives me hope for the future.  My reason for mentioning this film is that I think there are parallels with the cyberculture themes we have been studying.  The white people depicted in the film grew up in a society where it was acceptable to treat someone who looked a little different from them as inferior.

While it is clear that there is more to do towards racial equality, we have moved on considerably since the days of segregation.  I wonder whether we will see a similar course of events for humanoid artificial intelligence, or sentient androids, in future years.

via Twitter

January 25, 2017 at 10:33PM

RETWEET @lemurph

Certainly a thought-provoking line, but what does it mean?  Is the implication that technology has no place in education if it doesn’t properly inspire a student?  Or maybe there is one superfluous word and a missing comma; perhaps it should read “Technology is only a tool, it can be used properly to inspire a student”.
This isn’t the first quote I’ve come across this week that refers to technology as a tool.  Roboticist Rodney Brooks has told us all to relax about artificial intelligence, stating that “AI is just a tool, not a threat“.
via Twitter

January 25, 2017 at 09:47PM