— Renée Hann (@rennhann) April 9, 2017
— Renée Hann (@rennhann) April 7, 2017
I suspect this might be my last, or near to last, post before the final summary as it’s Friday afternoon, I’ll work on the summary tomorrow, and Sunday (the due date for the Lifestream) is always a crazy day at work for me. It feels really lovely to return to the beginning – back to cyber culture’s questions about what it means to be human, and at the same time to make connections to community (loosely) and to algorithms.
In the video, Ishiguro says that he started the project, building a robot, in order to learn about humans. He asks what it really means to be human – which things that we do are automated? Which things could someone impersonating us do? When are we truly being ourselves, being creative? Erica (the robot) suggests that through the automation of the uninteresting and tedious aspects of life, robots can help us to focus on the creative aspects, and other parts where we are truly ourselves. With AI being essentially the work of algorithms this ties our first block (cyber cultures) to our third (algorithmic culture). Can algorithms allow us to be more human?
In the video it is also asked, what is the structure that underlies human interaction? We can identify many ‘ingredients’ that play a role, but what does it take for a human to interact with a robot and feel that they are interacting with a being? This is where my – albeit loose – connections to community come from. Last week Antigonish 2.0 told 4-word stories about what community means (#Antigonish2 #4wordstory – there were over 700 responses). How will robots understand all these diverse and valuable ways of being together? Maybe they don’t need to – maybe we can, in the spirit of Shinto, accept them as having their own type of ‘soul’, and accept automation of the mundane… if our economic system allows for and values our very *human* contributions.
Bring on Keynesian theory and the short working week…
— Renée Hann (@rennhann) April 6, 2017
@c4miller If only I knew when 'getting' would become 'got' 😉
— Renée Hann (@rennhann) April 6, 2017
Another plus – with no water, no one had to cook dinner;)
— Renée Hann (@rennhann) March 28, 2017
Stephen Downes’ summary:
When I spoke at the London School of Economics a couple of years ago, part of my talk was an extended criticism of the use of models in learning design and analysis. “The real issue isn’t algorithms, it’s models. Models are what you get when you feed data to an algorithm and ask it to make predictions. As (Cathy) O’Neil puts it, ‘Models are opinions embedded in mathematics.'” This article is an extended discussion of the problem stated much more cogently than my presentation. “It’s E Pluribus Unum reversed: models make many out of one, pigeonholing each of us as members of groups about whom generalizations — often punitive ones (such as variable pricing) — can be made.”
My additions (i.e. from my reading of the article):
What are ‘weapons of math destruction’?
Statistical models that:
- are opaque to their subjects
- are harmful to subjects’ interests
- grow exponentially to run at large scale
What’s wrong with these models that leads to them being so destructive?
1. lack of feedback and tuning
2. the training data is biased. For example, the picture of a future successful Ivy League student or loan repayer is painted using data points from the admittedly biased history of the institutions
3. “the bias gets the credibility of seeming objectivity”
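Points 2 and 3 can be made concrete with a toy sketch (entirely my own construction, with invented data – not an example from the article): a “model” fit to biased historical admissions records simply reproduces the historical bias, but now dressed up as an objective-looking score.

```python
# Hypothetical historical records: (applicant_group, admitted).
# Group "B" applicants were historically rejected regardless of merit.
history = [
    ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False),
]

def train(records):
    """'Train' by computing the historical admission rate per group."""
    counts = {}
    for group, admitted in records:
        total, yes = counts.get(group, (0, 0))
        counts[group] = (total + 1, yes + int(admitted))
    return {g: yes / total for g, (total, yes) in counts.items()}

model = train(history)

def predict(group):
    """Admit if the group's historical rate clears 0.5 - the past
    bias now returns as a seemingly objective score."""
    return model[group] >= 0.5

print(predict("A"))  # True  - mirrors the historical pattern
print(predict("B"))  # False - the old bias, laundered as mathematics
```

Nothing in the arithmetic is wrong – which is exactly the point: without feedback and tuning (point 1), the model has no way to discover that its training data, not the applicants, is the problem.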
Why does it matter?
It’s a grim picture of the future: WMD makers and SEO experts locked in an endless arms-race to tweak their models to game one another, and all the rest of us being subjected to automated caprice or paying ransom to escape it (for now). In that future, we’re all the product, not the customer (much less the citizen).
Inside this picture, the cost of ‘cleaning up’ the negative externalities that result from sloppy statistical models exceeds the savings that companies make by maintaining those models. Yet we pay for the clean-up (individually, collectively), while those pushing the weak statistical models pocket the savings.
The other loss is, of course, the potential: algorithms could, with good statistical modelling, serve societal needs, and those in need within society.
The argument is hard to dispute – but one does have to ask: is ‘sloppy’ the right term? Is it just sloppiness? At what point does such ‘sloppiness’ become culpable – or malicious disregard?