I wanted to write this down so I wouldn’t forget it, and to make it part of my Lifestream blog. I was thinking yesterday about video games I like to play, specifically the old “Asteroids” game, where you move a spaceship around the screen and blow up asteroids that seem to appear at random from around the game area.
I don’t know what algorithm determines when and where asteroids appear, or how, when hit by my blaster, they break up into smaller bits. Of course, as asteroids hit each other, are shot with a blaster, or collide with my ship, their trajectories and speeds change, again seemingly at random.
I was just thinking, as I played, that I seemed to be spending more time and concentration on detecting patterns in asteroid appearance and movement, and less on developing my skill as a spaceship pilot and gunner. I owe this shift in focus to what I’ve learned in this course about algorithms, or perhaps more to the point, what I think I have learned but really don’t know.
What I do know however, is that I will never look at or play another video game the same again.
I came across this article somewhat by accident. It reminded me of the replicator used in the Star Trek shows that dispensed food and drink to crew members. The article details how, through the use of electronic signals and sensors, the taste and color of lemonade can be transmitted from its source to a glass of water. I understand this may not be a direct application of AI as we have discussed it in this course, but it does connect in the sense that sensations such as taste and vision are being replicated and transmitted within an algorithmic framework that mimics real human sensations. This is just another facet of real humanity being replicated into artificial humanity.
A simple form of sensory illusion has been in place at Disneyland, for example, for years. On certain rides, the musty smells of old buildings are sprayed around for sensory effect. On one ride at California Adventure, as the carriage flies over California orchards, the scent of oranges and other citrus is present in the subtle vapors sprayed above the heads of the passengers. But these features are the result of simple chemical sprays and mists. The technology detailed in the article cited below goes a step further, mimicking the electronic signals the human body uses to transmit sensory information from one part of the body to another, or across space.
If this technology is ever perfected, I wonder how far we can take it. What about the classroom? Could we use such tools to bring past historical events alive for students? Events such as the Battle of Gettysburg: could we mimic the smell of gunpowder or the stench of a field hospital? In studies of the Middle Ages, could we bring to life the smells and colors of the roadhouse where travelers ate and rested on weary journeys? Could we taste what food tasted like 100, 200, or 500 years ago? And what of medicine? Could we use the smells of medications and diseases, and the real colors of tissue, to train our medical personnel more effectively? Of course, the medical value would be substantial in helping people with sensory deprivations regain what was lost through disease or injury. I have posted a couple of things on this blog related to the regeneration of tissue, drawing “inspiration” from Frankenstein’s monster. Using Frankenstein again: could we couple this technology of sight and taste with the potential re-animation of tissue, thus restoring senses lost? Or, in terms of post-human development, could we use these advances to create a new form of human, a cyborg for lack of a better term, equipped with all of the sensations a “normal” human would possess? Coupled with what we have already discussed about AI, the potential next step in human evolution is both frightening and exciting to contemplate.
And what of our schools? How much technology is too much? How far can, or should, we go in providing students the means to complete assignments, understand calculations, and contemplate the subjectivity of paintings and philosophy? I am reminded of the scientists in Jurassic Park who cloned dinosaurs but had no understanding of the basic genetic dispositions of the animals they thought so beautiful and majestic. As Dr. Malcolm told them, they simply built on the work of scientists who had gone before without trying to understand the actual work those prior minds had completed. Is that what we are doing to our students with all of the advanced technology we now place at their fingertips? They can accomplish great things now, but do students really understand HOW things work in the first place? What if they can put humans on Mars, yet when the power goes out cannot do long division or simple arithmetic on an abacus or slide rule? Perhaps the issue remains, as Dr. Malcolm put it (and I paraphrase): however far we push ourselves into the post-human world, it is not a matter of whether we can but whether we should.
To restate the point a little: the possibilities of this technology could be limitless. Yet, with any application that further stretches the evolutionary envelope from human to post-human, we must consider the ramifications. How far can we go? How far should we go? What is the positive potential as opposed to the negative? Is there the possibility of abuse, and if so, what is it and how great is the danger?
My own opinion is that our technological demands and accomplishments sometimes proceed much faster than our consideration of the ethics and morals those technologies raise. One must ask, for any technological advance in question: what is the rush? Is the need for this specific technology so dire that consideration of its ethical impact has to wait?
I don’t have the answers to many of these questions.
As the image implies, we have a connectedness that stretches beyond ourselves. Imagery such as this provides a decent visualization of how our brain uses algorithmic principles to function. I am wondering how, in the coming weeks, I will learn how this applies to the various topics we have discussed in this course. Another question is how, as the next image shows, computers can use our spoken and written words to create algorithms for use in mental health treatment and beyond (Pestian et al., 2017).
To branch away from the above: I have had several comments on my Ethnography, and more pointedly, on the poem I submitted as part of it. A couple of comments were from classmates in EDC17, and a few others from MOOC participants. I think this may be the sum and substance of the MOOC I studied, one in which I found myself immersed rather than remaining an outside “participant.”
The purpose of the MOOC, the REAL purpose, I now am starting to realize, goes beyond the stated objectives of the course, which were to share experiences and thoughts about Cascadia. As some have mentioned, my Ethnography drew them in and caused them to spend an unexpected amount of time looking through my collage of pictures and texts. It seems my Ethnography served a purpose beyond its stated objective as well. Rather than turning into a dry, sterile presentation, creating it drew from memories and experiences that have long been filed away in my brain. How we remember is a fascinating realm to dive into. Good and not-so-good memories: we can dredge them up, churn them up, recreate them, or remake them. It is interesting how present circumstances or perspective can cause us to see the same memory as good or bad.
Perhaps Weeks 8 through 10 will help me understand the algorithms at play in bringing past memories back to the forefront of consciousness, as I learn how different apps use those algorithms to help us create, express, and even sustain creativity outside of ourselves.
Pestian, J. P., Sorter, M., Connolly, B., Bretonnel Cohen, K., McCullumsmith, C., Gee, J. T., Morency, L.-P., Scherer, S., Rohlfs, L., and the STM Research Group (2017). A machine learning approach to identifying the thought markers of suicidal subjects: A prospective multicenter trial. Suicide and Life-Threatening Behavior, 47, 112–121. doi:10.1111/sltb.12312