TWEET: Chimeric bio-tech futures

via Twitter
January 26, 2017 at 09:14PM

Michael Specter, writing in National Geographic Magazine, raises the following question:

“The ability to quickly alter the code of life has given us unprecedented power over the natural world. Should we use it?”

Frightening statistics from the USA show that every ten minutes someone is added to the list of people requiring an organ transplant, and every day twenty-two people on that list die without receiving the organ they need. If you’re on that list, or a family member or friend of someone on that list, your views about altering genetic code and human-pig chimeras might be influenced by the potential to grow the organ you need inside an animal from another species.

Considering that we already farm the same potential donor animals for food, the moral implications might not be so difficult for people to deal with, unless you’re already opposed to the way we treat other species. Either way, the ethical considerations feel like an even bigger issue. We know that random mutations already occur in all lifeforms and that they are the basis of evolution through natural selection. But do we truly understand the implications of introducing into the human gene pool genetic information that has resulted from thousands of years of evolution of a different species? The effect could be more immediate than one might imagine – what if doing so were to hasten the development of drug-resistant bacteria?

At first sight none of this might seem relevant to digital cultures, so why have I linked this to my Lifestream? Well, initially it was just the serendipity of reading about chimeric bio-futures (a term I have to admit is new to me) on the day the announcement linked above was made. But, having reflected further, I think a similar moral question can be put to the technologies we apply to education and learning too:

“Just because we can, should we?”

Is Amazon Alexa’s apparent inability to answer some questions actually an aid to learning?

The Amazon Echo Dot

Earlier this week I tweeted a link to two conflicting views on reCAPTCHA and the ‘ulterior motive’ it has of assisting Google in digitising books.

This got me thinking about the motives other connected devices I use might have, in particular the Amazon Echo Dot, powered by Amazon’s AI assistant ‘Alexa’.

Alexa often struggles to answer a question if it’s poorly phrased, whereas ‘OK Google’ and ‘Siri’ seem able to make a good go of interpreting even the most poorly articulated query. But from an educational point of view, aren’t the latter two doing the user a disservice? By forcing the user to articulate their question better, Alexa might (probably unintentionally) improve their questioning skills and maybe even their vocabulary. In reality, most will simply put Alexa’s inability to answer down to ‘her’ failings rather than their own, but it’s an interesting thought.

Similarly, I often use voice-to-text software for note-taking, and this has come a long way since I was part of a pilot to test it in an open-plan office setting. In the early days the user had to enunciate very clearly for the text produced on screen to bear any resemblance to what they had said. Today, improvements in both software and hardware allow relatively sloppy diction to produce accurate results but, following similar thinking to the above, is that always a good thing?