Lifestream, Tweets

I suspect this might be my last, or near-to-last, post before the final summary: it’s Friday afternoon, I’ll work on the summary tomorrow, and Sunday (the due date for the Lifestream) is always a crazy day at work for me. It feels really lovely to return to the beginning – back to cyber culture’s questions about what it means to be human – and at the same time to make connections to community (loosely) and to algorithms.

In the video, Ishiguro says that he started the project of building a robot in order to learn about humans. He asks what it really means to be human – which of the things we do are automated? Which things could someone impersonating us do? When are we truly being ourselves, being creative? Erica (the robot) suggests that by automating the uninteresting and tedious aspects of life, robots can help us to focus on the creative aspects, and on the other parts of life where we are truly ourselves. With AI being essentially the work of algorithms, this ties our first block (cyber cultures) to our third (algorithmic cultures). Can algorithms allow us to be more human?

The video also asks: what is the structure that underlies human interaction? We can identify many ‘ingredients’ that play a role, but what does it take for a human to interact with a robot and feel that they are interacting with a being? This is where my – albeit loose – connections to community come in. Last week Antigonish 2.0 told 4-word stories about what community means (#Antigonish2 #4wordstory – there were over 700 responses). How will robots understand all these diverse and valuable ways of being together? Maybe they don’t need to – maybe we can, in the spirit of Shinto, accept them as having their own type of ‘soul’, and accept automation of the mundane… if our economic system allows for and values our very *human* contributions.

Bring on Keynesian theory and the short working week…


Lifestream, Liked on YouTube: Not Enough AI | Daniela Rus

via YouTube


Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which could reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture, in which the algorithm could mean the same money for cab drivers but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers and good for society – but, like Watters, I fear that within capitalism there is little incentive for commercial entities to choose to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work; instead, it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them…
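
Rus doesn’t walk through the mechanics in the talk, but the core idea – pool trips that leave around the same time and follow overlapping routes, so fewer cars cover the same demand – can be sketched in a few lines. To be clear, this toy Python sketch is entirely my own illustration, not her actual algorithm (her group’s work uses real NYC trip data and network optimisation); the Trip type, the compatibility thresholds, and the greedy pairing are all assumptions I’ve made up for the example:

```python
# A toy sketch of the ride-pooling idea: pair up trips that depart close
# together in time and whose routes overlap enough to share one car.
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Trip:
    pickup: tuple    # (x, y) pickup location
    dropoff: tuple   # (x, y) dropoff location
    t: float         # requested departure time, in minutes

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def shareable(a, b, max_wait=5.0, max_detour=2.0):
    """Two trips can share a car if they leave within max_wait minutes of
    each other and one pooled route adds at most max_detour distance over
    the longer solo trip (a crude stand-in for a real routing check)."""
    if abs(a.t - b.t) > max_wait:
        return False
    solo = max(dist(a.pickup, a.dropoff), dist(b.pickup, b.dropoff))
    pooled = (dist(a.pickup, b.pickup)       # pick up a, then b,
              + dist(b.pickup, a.dropoff)    # drop off a,
              + dist(a.dropoff, b.dropoff))  # then drop off b
    return pooled - solo <= max_detour

def fleet_size(trips):
    """Greedily pair shareable trips; every unpaired trip gets its own car.
    A real solver would compute a matching on a 'shareability network'."""
    free = set(range(len(trips)))
    cars = 0
    for i, j in combinations(range(len(trips)), 2):
        if i in free and j in free and shareable(trips[i], trips[j]):
            free -= {i, j}
            cars += 1                # one car now serves both trips
    return cars + len(free)          # leftover trips ride solo

trips = [Trip((0, 0), (5, 5), 0.0),      # two nearly identical trips...
         Trip((0.5, 0), (5, 4.5), 2.0),  # ...that can be pooled,
         Trip((9, 9), (1, 1), 1.0)]      # and one that can't
print(fleet_size(trips))                 # 2 cars instead of 3
```

Scaled up to millions of real trips, it’s this kind of compatibility-and-matching logic that lets the fleet shrink – and, as Rus points out, nothing in the algorithm itself decides whether the savings become layoffs or shorter shifts. That’s the economic model’s choice, not the machine’s.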


Lifestream, Pocket, ‘Future Visions’ anthology brings together science fiction – and science fact

Excerpt:

To the casual observer, the kind of technological breakthroughs Microsoft researchers make may seem to be out of this world.

via Pocket http://ift.tt/2nKVDcX


I came across this collection of short science fiction stories from Microsoft. I hate that I like it (I still haven’t forgiven Gates for 1995’s shenanigans with Netscape and others – & for, well, breaking the ethos of the Internet), but it seems like a ‘page-turner’. I’ve only read half of the first story, mind, as it is not available to me in iTunes locally, and Amazon suggests it does not deliver to my region despite it being an e-book. I could use a shop-and-ship address, but it’s kind of annoying that it isn’t just available as a PDF – combined with my & Bill’s ‘history’, that was enough to put me off for now.

One thing I did think about from the first half of the first story, in which translation and natural language processing have reached the point of being able to translate signing into spoken language and spoken language into text in real time, is that while we herald the benefits of technology for differently abled people, we also ignore what it could mean for communities like the Deaf community, and cultures like Deaf culture. I’m not really qualified to speak on it myself, but I’d be interested in hearing the perspectives of people from within the Deaf community.

Lifestream, Pocket, The Best Way to Predict the Future is to Issue a Press Release

Excerpt:

This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology. The slides are also available here. Thank you very much for inviting me here to speak today.

via Pocket http://ift.tt/2fF4PPI


I started out by trying to grab a few select quotes from this talk, which Watters delivered at Virginia Commonwealth University in November 2016, but it is pretty much all gold. She writes about how the stories we tell – or are told – about technology and educational technology direct the future, and asks how these stories affect decision-making within education:

Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

…

…to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

Watters’ interrogation of future stories – stories by Gartner, by the NMC Horizon Report, by Sebastian Thrun, and others – demonstrates that these stories tell us much more about the kind of future the story-tellers want than about the future per se. This matters, Watters suggests, because these stories are used to ‘define, disrupt, [and] destabilize’ our institutions:

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

It’s a powerful read – and connected to the idea I want to pursue in my final assignment. I’m interested in seeing whether different stories are being told to different segments of the population, and in trying to imagine what the consequences of those different imaginings might be.