Lifestream, Tweets

I suspect this might be my last, or near-to-last, post before the final summary: it’s Friday afternoon, I’ll work on the summary tomorrow, and Sunday (the due date for the Lifestream) is always a crazy day at work for me. It feels really lovely to return to the beginning – back to cyber culture’s questions about what it means to be human – and at the same time to make connections to community (loosely) and to algorithms.

In the video, Ishiguro says that he started the project of building a robot in order to learn about humans. He asks what it really means to be human – which things that we do are automated? Which things could someone impersonating us do? When are we truly being ourselves, being creative? Erica (the robot) suggests that through the automation of the uninteresting and tedious aspects of life, robots can help us to focus on the creative aspects, and the other parts of life where we are truly ourselves. With AI being essentially the work of algorithms, this ties our first block (cyber cultures) to our third (algorithmic culture). Can algorithms allow us to be more human?

The video also asks: what is the structure that underlies human interaction? We can identify many ‘ingredients’ that play a role, but what does it take for a human to interact with a robot and feel that they are interacting with a being? It is from here that my – albeit loose – connections to community are drawn. Last week Antigonish 2.0 told 4-word stories about what community means (#Antigonish2 #4wordstory – there were over 700 responses). How will robots understand all these diverse and valuable ways of being together? Maybe they don’t need to – maybe we can, in the spirit of Shinto, accept them as having their own type of ‘soul’, and accept automation of the mundane.. if our economic system allows for and values our very *human* contributions.

Bring on Keynesian theory and the short working week..


Lifestream, Liked on YouTube: Not Enough AI | Daniela Rus

via YouTube

Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which is able to reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture in which the algorithm could mean the same money for cab drivers, but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers, and good for society – but like Watters I fear that within capitalism there is little incentive for commercial entities to choose to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work, but instead it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them..