Thanks for your comment, Jeremy. You are quite right, the link to cybercultures is tenuous at best! Having recently read Siân Bayne’s (2015) paper, I was more focused on the ‘what is wrong with technology’ criticism of TEL terminology (i.e. narratives which separate technology from social practice). Here the connection is, as you have highlighted, more directly linked to our block on algorithmic cultures. I guess, though, in my mind there is also a loose connection (not made in my post!) between these new ‘smart’ toys and those of J.F. Sebastian in Blade Runner. I find that world to be quite a lonely one, where human-human friendships are replaced by human-machine ones.
Also, there’s the idea of enhancement, in that if we extend the discussion beyond toys with voice recognition that collect data and ‘speak back’ to include the full range of digital technologies marketed at children, we encounter dialogues which position the said toys as necessary for children’s development of digital literacies and future life opportunities, and which will enhance their cognitive growth. As Bayne (2015, p. 11) highlights, such narratives of cognitive enhancement create ‘a discursive link with transhumanism’.
On this last point, a friend who is parent to an eleven-year-old and a seven-year-old in Hong Kong recently raised her concerns about the pressures placed on parents (I would say largely by corporate marketing but also from within education and between parents) to future-proof their children through giving them access to digital tools, and programmes which teach them to code, for example. My friend argued that it was more important to focus on developing distinctly human qualities, such as empathy and imagination, but that her focus on this was at odds with the educational and cultural perspectives surrounding her. While I don’t disagree with the need for development of empathy and imagination, I also don’t see (digital) technology-rich environments as necessarily limiting these. There still seems to be a this or that / technological or human / scientific or creative tension at play, which is reflective of the human / cyborg opposition within much of cyberpunk. To me, there seems to be room for more non-binary thinking within discussions of who we (or our children) might be as humans in the future/within an age so enmeshed in digital technologies.
from Comments for Renée’s EDC blog http://ift.tt/2kYivUf
I suppose this is an odd picture to include in my blog this week, but when I stopped to take in the beauty of it, it struck me as symbolic of our topic. The seahorse is an ancient and beautifully engineered natural creature. It is inquisitive, mobile, it grows, and it has a sense of survival. It also possesses an internal “clock” that regulates every activity it must engage in to survive. The watch represents the technology we use to govern almost every aspect of our lives. From eating to procreating to daily activities, we rely on technology to keep us on a proper bearing.
This week we are discussing the creation of new “life” in the form of robots, androids and other types of technology. Into these “lifeforms” we will put programming that allows inquiry and learning. Robots and androids will be mobile and have a sense of survival (antivirus, etc.). The only human activity a robot cannot duplicate at this time is procreation, although there are some models that seem to be able to engage in simulated sexual activity.
Also, notice that the watch face reflects part of the seahorse, but not all of it. Is this symbolic of what happens when we create AI that can duplicate us? Robots and other types of androids can resemble humans, but the image of ourselves we see in them is not quite complete. Philosophers can go nuts with this.
A question that comes to mind here, for me, is when do the natural processes that have allowed man to survive for thousands of years end, and new technologies begin? Or, in what way does technology have more governance of our lives than our innate abilities?
Using holograms, or HumaGrams, in the classroom is becoming one of the latest techniques to get information to students using technology that is engaging and enjoyable. The attached article describes how a Russian company has created holographic programs that allow students to interact with the curriculum. Specifically, one program brings molecular activity into view in such a way that students may manipulate the molecules and learn how they work. The video from YouTube looks at several recent technological methods of classroom instruction, especially holograms.
Great article to bring in here, I really like Ben Williamson’s work. I was wondering how you were linking this to the ‘cybercultures’ themes? I can certainly see how this will link to our algorithmic cultures block later in the course – worth keeping it in mind for then.
Thinking about your post on ethics and AI (from Ghost in the Shell), there might be something in thinking about how this article points to other (and more critical perhaps!) ethical perspectives. Rather than thinking about the ‘human rights’ of the machine (the classic ‘cyberculture’ kind of position), we might think about the ethics of human/machine relations. The ones described here – the accounting of participation, the normalisation of linear progress, the persistent comparison of human behaviour to data sets – seem to be relations with significant ethical dimensions.
Where technology influences culture (and clearly vice versa), should we be talking more about ‘ethical relations’ than assigning ‘rights’?
from Comments for Renée’s EDC blog http://ift.tt/2jZlCh5
Excellent to reflect on Ghost in the Shell here Renée (and in your other post), it’s a classic! The sequel is good too, but the original film is pretty hard to beat. I might go and see the Hollywood remake (supposedly this year), but it might spoil it!
Memory seems to be a key theme in this kind of SciFi, doesn’t it – Blade Runner, as you say, is another classic example here. I can’t help thinking it is a bit of an easy slippage to equate what we (humans) experience as memory with the kind of data storage we find in computers. I tweeted a panel talk from John Searle and Luciano Floridi earlier this week (https://t.co/VRUXpQi47V) which offers a useful critique of AI. Searle’s (rather classic) ‘Chinese room’ argument is that computers can only deal with syntax (the arrangement of symbols), whereas ‘us’ humans necessarily also deal with semantics (meanings behind and connected to symbols).
In that sense, machines are immensely powerful, but quite stupid. I wonder then, is there a more pressing need for ‘ethics’? Not for some imagined intelligent machine, but for the increasing use of rather stupid machines to make decisions on our behalf? Might be a good way into critiquing TEL there…
Also, I couldn’t get TubeChop to work, but I know the scene well. I hadn’t thought about it in particular before, but it is compelling, isn’t it? Is it perhaps that the hacked garbage collector had lost his ‘authenticity’ as a human because his memories had been replaced? In that sense he was seen as rather pathetic because he was no longer human, going by the general premise that memories make us human.
from Comments for Renée’s EDC blog http://ift.tt/2jZ0SX6
While there may be some benefits to our online security that come from this type of technology, overall, it is yet another example of enhanced analytics tracking, recording and profiling even more portions of our lives. To my mind, the question of the ethical repercussions of such action will be going on for some time and is not the real issue. The major problem I have is the laissez-faire attitude of consumers and future users of digital services who won’t mind a jot that this goes on. In fact many will actively encourage it and rejoice in its ability. I’m probably at risk of being cast as a Luddite, but the risk of this data being used for manipulation and subtle, unconscious coercion cannot be ruled out, and mostly I don’t trust human greed and the need for power to stay away from abusing it.
from Comments for Myles’s EDC blog http://ift.tt/2khPwM0
I would be interested in how technology was defined in the survey, or in the minds of the students when they answered. It’s such a broad term that it makes it impossible to actually figure out what is meant. Is a student using a laptop in a lecture using technology? A calculator? Looking at a projector? Where does the line get drawn?
I also came up with some other ways to critique the survey. First of all, it only draws upon American college students for its sample. Perhaps students in other nations feel differently?
Secondly, students did not identify technology usage as the attribute they thought was most likely to help them find a job.
“When asked to identify skills that make them attractive job seekers, students are more likely to cite their interpersonal skills (78 percent) than any other attribute, including grades/GPA (67 percent), a degree in a marketable field (67 percent) and internship experience (60 percent).”
So I don’t think this backs up the argument that students are demanding more tech use in the classroom.
Thirdly, it is always worth questioning whether the purpose of a degree is to make one ready for a career. When there’s so much debt attached to studying, and most people’s way out of debt is through labour, then it is hard to make this argument, but that doesn’t mean it is a given that degrees are for finding jobs.
from Comments for Angela’s EDC blog http://ift.tt/2kfJNqb