TWEET: Why watching Westworld’s robots should make us question ourselves

This article neatly sums up a lot of the points we’ve debated in this cyberculture block and raises some valid questions about how we will relate to robots as they become more humanoid and at least appear to have their own thoughts and feelings.

“Even if robots are just tools, people will see them as more than that. […] It may be an innate tendency of our profoundly social human minds to see entities that act intelligently in this way. […] It may be difficult to persuade them to see otherwise, particularly if we continue to make robots more life-like. If so, we may have to adapt our ethical frameworks to take this into account. For instance, we might consider violence towards a robot as wrong, even though the suffering is imagined rather than real.” 

I watched this remake of Westworld before I started this course and, even without the added academic stimulus, it brought some interesting moral questions to mind, and prompted discussion with those I watched the series with about how they would react in the same situation.

The status of the robots in the series as ‘tools’ was emphasised in a number of ways, not least in that whenever they were taken out of the park to be worked on they were left naked.  While the nudity is probably there for titillation and to attract viewers, and it makes it easier to identify who is ‘real’ and who is a machine, it also shows that the humans feel the machines do not need to be treated with any respect.

To me, the point the author of the article linked in the above tweet makes about violence towards robots is missing one important dimension.  If a robot is very ‘life-like’ but it is considered acceptable to abuse it in some way, how long before similar abuse towards other humans becomes acceptable?

Not long after watching the Westworld series I watched the film ‘Hidden Figures’, about the black women who were ‘human calculators’ for NASA during the early space race.  The film documents how they were treated and the segregation they faced in both their working and social lives.  I’ve never experienced people of a different skin colour or racial background being treated in this way, so the feelings of anger and revulsion I felt when watching the film were raw and painful.  For my twenty-seven-year-old son they were truly upsetting, and I have to say that this at least gives me hope for the future.  My reason for mentioning this film is that I think there are parallels with the Cyberculture themes we have been studying.  The white people depicted in the film grew up in a society where it was acceptable to treat someone who looked a little different from them as inferior.

While it is clear that there is more to do towards racial equality, we have moved on considerably since the days of segregation.  I wonder whether we will see a similar course of events for humanoid artificial intelligence, or sentient androids, in future years.

via Twitter

January 25, 2017 at 10:33PM

RETWEET @lemurph

Certainly a thought-provoking line, but what does it mean?  Is the implication that technology has no place in education if it doesn’t properly inspire a student?  Or maybe there is one superfluous word and a comma missing; perhaps it should read “Technology is only a tool, it can be used properly to inspire a student”.
This isn’t the first quote I’ve come across this week that refers to technology as a tool.  Roboticist Rodney Brooks has told us all to relax about artificial intelligence, stating that “AI is just a tool, not a threat”.
via Twitter

January 25, 2017 at 09:47PM

TWEET: Massive-scale online collaboration

After re-purposing CAPTCHA so each human-typed response helps digitize books, Luis von Ahn wondered how else to use small contributions by many on the Internet for greater good. In this talk, he shares how his ambitious new project, Duolingo, will help millions learn a new language while translating the Web quickly and accurately — all for free.


via Twitter

January 25, 2017 at 08:57AM

On one level I admire the idea of harnessing the combined efforts of millions of individuals to solve a problem.  I’m aware that similar ‘crowd sourcing’ has been used to identify potentially habitable planets and to identify abnormal cells.  In a similar vein, I tried (unsuccessfully) to get the company I work for involved in using the processing power of our PCs for cancer research while the computers were not being used at night.

My only issue with this type of distributed / networked effort is when it’s done in a covert way.  I’ve mentioned the ulterior motive of reCAPTCHA to a few friends and work colleagues and none of them knew that it was being used to digitise books.  As a result, their first reaction was a feeling of having been ‘used’, regardless of whether digitising the books in question would be to the greater good.

In my view this type of ‘covert’ activity, however well intended, risks adding to public fears about the misuse of data.

TWEET: Two viewpoints on reCAPTCHA

Adapted from

Two viewpoints on reCAPTCHA – assisting Google to digitise books its AI can’t read.

via Twitter

January 25, 2017 at 08:51AM

New AI System Can Learn Like A Human, And Store Info Like A Computer


via Twitter

January 25, 2017 at 08:41AM

I’ve returned to this earlier post having completed several weeks of the course, as it feels very relevant to the algorithmic cultures block and the question of whether artificial intelligence is a substitute for human pedagogy.

I’ve grappled with this question in a later post.

Shared experience, shared concerns, shared aspirations

I’ve just spent some time skimming through my fellow students’ lifestream blogs, trawling for nuggets of information, such as how to better automate some of the data aggregation for this blog, and for reassurance that what I’m doing bears some resemblance to what they’re doing.

Helen Murphy’s thoughts in particular chime with my own and I recognise the overarching need to impose some order on the randomness of the format, as well as to make it look nice.  

At some level perhaps that desire to make data aesthetically pleasing is a peculiarly human trait.  Having read Sian’s paper, I’m reluctant to use the phrase “isn’t that what sets us apart from machines?”.  A machine may need its data to be in a format it can deal with, but to me there is a difference between that and rejoicing in data’s beauty, symmetry, asymmetry or some other aspect beyond the individual ones and zeros.