Is Amazon Alexa’s apparent inability to answer some questions actually an aid to learning?

The Amazon Echo Dot

Earlier this week I tweeted a link to two conflicting views on reCAPTCHA and the ‘ulterior motive’ it has of assisting Google in digitising books.

This got me thinking about the motives other connected devices I use might have, in particular the Amazon Echo Dot, powered by Amazon’s AI assistant ‘Alexa’.

Alexa often struggles to answer a question if it’s poorly phrased, whereas ‘OK Google’ and ‘Siri’ seem able to make a good go of interpreting even the most poorly articulated query. But from an educational point of view, aren’t the latter two doing the user a disservice? By forcing the user to articulate their question more clearly, Alexa might (probably unintentionally) improve their questioning skills and maybe even their vocabulary. In reality, most people will simply put Alexa’s inability to answer down to ‘her’ failings rather than their own, but it’s an interesting thought.

Similarly, I often use voice-to-text software for note taking, and it has come a long way since I was part of a pilot to test it in an open-plan office setting. In the early days the user had to enunciate very clearly for the text produced on screen to bear any resemblance to what they had said. Today, improvements in both software and hardware allow relatively sloppy diction to produce accurate results, but, based on similar thinking to the above, is that always a good thing?
