Lifestream feeds this week were primarily concentrated on building community. I’ve been there for peers, offering to test IFTTT streams – strange, because generally I see myself as less technologically able. I do seem to be able to troubleshoot, mind – a core educational area in the press this week.
What is clear this week is that technology is not separate from culture. The influence runs both ways, and we need to be proactive in the decisions we make about which technologies to use in education. Always, we need to ask: is there a purpose? What are the consequences? No technology for technology’s sake.
So far this week I’ve been fairly focused on the ethical implications of technological advancement. I posted previously about the 1995 anime Ghost in the Shell, but I spoke in quite general terms about its themes. Today, let’s take a closer look at a couple of scenes:
Motoko & Batou pass judgement on the Garbage Collector (Ghost in the Shell, 1995)
This clip [1m12, from 23 minutes into the film] shows the capture of the Garbage Collector, and the reactions of Motoko and her second-in-command, Batou. Motoko seems almost to spit out her questions: “Can you remember your mother’s name or what she looks like? Or how about where you were born? Don’t you have any happy childhood memories? Do you even know who you are?” Her reference to memory as indicative of identity parallels Blade Runner – but it is the viciousness of her questioning which intrigues me from an ethics perspective. The Garbage Collector was human, but has had his ‘ghost’ (consciousness or soul) hacked. Yet somehow it is his fault: “Ghost hacked humans are so pathetic,” says Batou. It reads like an attack on a victim (What were you thinking, wearing that skirt? Drinking? Were you asking for it?).
In this clip [2m30, from 47m25 into the film] the Puppet Master, a non-human entity which has hacked a cyborg, asks for asylum in Section 9, raising questions about what it is that differentiates man and machine:
Puppet Master: What you are now witnessing is an action of my own free will. As a sentient life form, I hereby demand political asylum.
Chief Aramaki: Is this a joke?
Nakamura: Ridiculous! It’s programmed for self-preservation!
Puppet Master: It can also be argued that DNA is nothing more than a program designed to preserve itself. Life has become more complex in the overwhelming sea of information. And life, when organized into species, relies upon genes to be its memory system. So man is an individual only because of his intangible memory. But memory cannot be defined, yet it defines mankind. The advent of computers and the subsequent accumulation of incalculable data has given rise to a new system of memory and thought, parallel to your own. Humanity has underestimated the consequences of computerization.
Nakamura: Nonsense! This babble is no proof at all that you’re a living, thinking life form!
Puppet Master: And can you offer me proof of your existence? How can you, when neither modern science nor philosophy can explain what life is?
Chief Aramaki: Who the hell is this?
Nakamura: Even if you do have a Ghost, we don’t offer freedom to criminals! It’s the wrong place and time to defect.
Puppet Master: Time has been on my side, but by acquiring a body, I am now subject to the possibility of dying. Fortunately, there is no death sentence in this country.
Of course, Ghost in the Shell is fictional, and the notion that we could reach a state of technological advancement in which it is possible to upload human consciousness to a machine, or for a machine to develop consciousness (the singularity), remains questionable – even amid claims that the first head transplant could occur in the UK in 2017. Yet we are already at a stage where we need to think about the ethics involved in the decisions machines make, as the previous posts/feed on self-driving cars have indicated. Beyond that, what of the rights of machines, should they gain consciousness? On our screens we see their fates played out desperately (Westworld, for example), or alternatively our own fate is portrayed as under threat (by Stephen Hawking, for example).
If humans are to be accorded different, more privileged rights than machines, they need to actually behave differently from them. Perhaps the increasingly human likeness of some machines is a call to bolster our own humanity, through empathy and the like, so as to truly differentiate ourselves from machines. Thoughts?
SETI astronomer Shostak suggests that alien lifeforms are likely to be sentient machines. By extension, if humans were able to transfer their minds/other essential human components to machines (becoming sentient machines themselves), the potential to inhabit planets which do not support human biological requirements expands.
Looked at from a different angle, the notion of sentient machine extraterrestrials adds another layer of grey to the application of human rights. Of course, aliens, not being human, would not be entitled to ‘human’ rights. But if humans were to become sentient machines, our definition of ‘human’ would change. Would our rights then extend to all other sentient beings, regardless of their ‘shell’/housing/casing?
from Diigo http://ift.tt/2g5l0TZ
This video, made for a school project by Eva Oaks (a graduate in robotic facial design at the University of Twente), asks what the impact of “perfectly” formed female androids will be on women. Oaks highlights the body image anxiety which seems to be a cultural by-product of our time, and the increasingly young ages and high rates at which many women are undertaking plastic surgery to “improve” their appearance [this links to Miller’s (2011) assertion about the increasing plasticity of the body]. Fast forward to a time in which androids exist alongside humans: what is the impact of this on body anxiety? How will definitions of beauty be influenced?
Important questions, which lay bare the cultural complexity of ever-advancing technologies.
Miller, V. (2011). ‘The Body and Information Technology’, in Understanding Digital Culture. London: Sage, Chapter 9.
Kat Robb reports on a recent trip to Japan, where she spoke with Prof. Ishiguro, director of the Intelligent Robotics Laboratory in Osaka.
He argues that society itself is responsible for shaping humans; therefore, by using a combination of computers, motors, and sensors, he is able to create androids capable of mimicking humans. Synergistic androids are created which, with exposure to language and human–robot interaction (HRI), are able to develop a personality – making them as human as any other being that depends on exposure to language, society, others and interaction to shape who they are and who they become.
I wonder what this says about current society/culture, when you consider the fate of AI systems such as Microsoft’s chatbot Tay?
Robb also suggests:
Japanese citizens openly accept robots and autonomous systems into their society, so they don’t feel the need to distinguish between them and humans. Robots are considered beings, just like any other being, and take an active part in society in theatre productions, as caregivers, companions and shop assistants.
Robots are considered beings. “Beings” – not ‘human’ beings, but beings nonetheless. I wonder: what rights do these non-human beings have in Japan, then? Further investigations ahead…