2001 and Beyond . . .

(This post is a re-post of a comment I previously posted)

Of the many interpretations of “2001: A Space Odyssey” that have been presented, one that intrigues me is the movement of humanoids from being at the mercy of nature to a position of self-determination. At the beginning of the film we see pre-humans simply existing as nature provides. Subsequently, bones are found to be useful as tools and weapons, allowing one group to exert dominance over another. The film then moves into the future, where humans have evolved and created machines that aid them in just about every area of life. As we progress through the story line, HAL attempts to block human efforts to continue or alter the prescribed space mission to Jupiter, ultimately failing to stop Dave from pursuing his own personal mission. The end of the film seems to demonstrate the evolution of humans into a post-human existence as the “Space Child” appears, hovering over the planets.

The question I presented in my previous post looks at how pre-humans moved from being nature-driven to self-driven. Humans became active participants in their own development and at least partially determinative of their own destinies and futures. As we move into an era where machines are allowed, as HAL was allowed, to control our lives and be integral to whatever decisions we make, at what point do we move back into the position of being “nature driven” and lose any control over ourselves and our futures? At some level this question gives new meaning to the concept of “The Circle of Life.”

Can You See It and Taste It . . . For Real?

I came across this article somewhat by accident. It reminded me of the replicator used in the Star Trek shows that dispensed food and drink to crew members. The article details how, through the use of electronic signals and sensors, the taste and color of lemonade can be transmitted from its source to a glass of water. I understand this may not be a direct application of AI as we have discussed in this course, but it does connect in the sense that sensations such as taste and vision are being replicated and transmitted within an algorithmic framework that mimics real human sensation. This is just another facet of real humanity being replicated into artificial humanity.

A simple form of sensory illusion has been in place at Disneyland, for example, for years. On certain rides the smell of old buildings and musty odors is sprayed around for sensory effect. On one ride at California Adventure, when the carriage flies over California orchards, the smell of oranges and other citrus is released in subtle vapors sprayed above the heads of the passengers. But these features are the result of simple chemical sprays and mists. The technology detailed in the article cited below goes a step further, mimicking the electronic signals the human body uses to carry sensory information and transmitting them to other parts of the body, or across space.

If this technology ever gets perfected, I wonder how far we can take it. What about the classroom? Could we use such tools to bring past historical events alive for students? Take the Battle of Gettysburg: could we mimic the smell of gunpowder or the stench of a field hospital? In studies of the Middle Ages, could we bring to life the smells and colors of the roadhouse where travelers ate and rested on weary journeys? Could we taste what food may have tasted like 100, 200, or 500 years ago? And what of medicine? Could we use the smells of medications and diseases and the real colors of tissue to train our medical personnel more effectively? Of course, the medical value would be substantial in helping people with sensory deprivations to recover what may have been lost through disease or injury. I have posted a couple of things on this blog related to the regeneration of tissue, drawing “inspiration” from Frankenstein’s Monster. Using Frankenstein again, can we couple this technology of sight and taste with the potential re-animation of tissue, thus restoring lost senses? Or, in terms of post-human development, could we use these advances to create a new form of human, a cyborg for lack of a better term, equipped with all of the sensations a “normal” human would possess? Coupled with what we have already discussed about AI, the potential for the next step in human evolution is somewhat frightening and/or exciting to contemplate.

And what of our schools? How much technology is too much? How far can, or should, we go in providing students the means to complete assignments, understand calculations, and contemplate the subjectiveness of paintings and philosophy? I am reminded of the scientists in Jurassic Park who cloned dinosaurs but had no understanding of the basic genetic dispositions of the animals they thought were so beautiful and majestic. As Dr. Malcolm told them, they simply built on the work of scientists who had gone before yet did not try to understand the actual work those prior minds had completed. Is that what we are doing to our students with all of the advanced technology we now place at their fingertips? They can accomplish great things now, but do students really understand HOW things work in the first place? What if they can put humans on Mars, yet when the power goes out they cannot do long division or complete simple arithmetic on an abacus or slide rule? Perhaps the issue remains, as Dr. Malcolm put it (and I paraphrase): however far we push ourselves into the post-human world, it is not a matter of whether we can but whether we should.

To re-state perhaps a little, the possibilities of this technology could be limitless. Yet, with any application that further stretches the edge of the evolutionary envelope from human to post-human, we must consider its ramifications. How far can we go? How far should we go? What is the positive potential as opposed to the negative? Is there the possibility of abuse and, if so, what is it and how great is the danger?

My own opinion is that our technological demands and accomplishments sometimes proceed much faster than our consideration of the ethics and morals that surround them. One must ask, for any technological advance in question, what is the rush? Is there such a dire need for this specific technology that consideration of its ethical impact has to wait?

I don’t have the answers to many of these questions.

http://bit.ly/2nrRUkZ

#mscedc

Week 10 Summary: An Alternate Reality or An Impossible Dream?

This week we have looked at Learning Analytics, for which I have posted my analysis here http://bit.ly/2nD5DI3.  In that analysis I went somewhat off on a tangent and, in a nutshell, looked at the exercise as a statement of why we do analytics and not necessarily of how analytics reflect what we do.  I summarized the positions of Verbeek (2011, 2013) and Foucault (1997), who assert that we need to be affirmative actors not only in the use of technology but also in its creation and in the declaration of its purpose.

As usual, and characteristic of Blocks 1 and 2, the class engaged in discussion regarding the ethical and moral ramifications of technology, though this discussion was perhaps more prominent in those earlier blocks.  What I did see more of in Block 3 was a diverse range of analytics, of the types of data being measured, and of their ultimate uses.  Perhaps I missed some things, but I saw little reference to the initial programming of the applications themselves, e.g. how was the Google Search algorithm programmed, how could it be changed or modified, and by whom?  The Twitter algorithm, in my view, was not so much about quality as about quantity, unless of course you purposely measure quality by quantity itself.

My readings took me through Knox, Verbeek, and Foucault primarily, along with others such as Braidotti.  The issues I found myself circling back to are displayed in the following video clips.  In short, we have technology and we know what it can do.  The real questions we should now focus on are why we need these technologies and how we can be involved in establishing their purpose in the first place.

Are we seeking an alternate reality or an impossible dream?  Or neither?  Whether by choice, chance, or force, the algorithms we use to move into the next realm of our evolution can be influenced by our own sense of purpose.  And what happens if, and when, technology evolves its own sense of purpose or the ability to change its fundamental programming?  These are questions for future, ongoing discussions.

http://bit.ly/2nDh6XS  The prisoner resists the insertion of technology into his life in order to alleviate loneliness and give him a new sense of purpose.

http://bit.ly/2nDaqJg  The Man of La Mancha creates a purpose for himself and sees the impossibility of fulfilling it, yet he strives on in spite of it all.

Week 10: Learning Analytics Critique

This last week we looked at the summary of our Learning Analytics exercise and tried to decipher the meaning of the data collected.  While I had the most Tweets, there is no discernible information indicating the quality of those Tweets in relation to the questions asked by Jeremy, James, and others in the class.  The data was quantitative in that it measured the number of Tweets, the words used most often, who made any comments at all, and so on.  There was little, if anything that I saw, reflecting the quality or relevance of any comment to the stated discussion topics.

A simpler indication of the overall participation of class members would be the total number of Tweets by each member.  This would indicate a willingness to engage with each other on the Twitter platform but, again, would not necessarily represent the quality of the exchanges.  An example would be the number of posted cheese jokes (some of which I found very funny).  The data mined from the exercise does not reflect the number of cheese jokes or the reaction to them, unless you look at the number of times the words “cheese” or “cheesy” were mentioned.  And even that number might be misleading if the words were included in a post addressing a different issue.  What was of interest, however, was the resulting discussion among classmates about the quality of the overall data collection in relation to quantity and how the two may be co-reflective.  Some class members posted their own versions of the LA assessment and what the data meant to them.
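To make the quantity-versus-quality point concrete, here is a minimal sketch (with made-up tweets rather than the actual class data; the names are purely illustrative) of the kind of measures the exercise produced: tweets per member and word frequency.

```python
from collections import Counter
import re

# Hypothetical tweets standing in for the class data: (author, text) pairs.
tweets = [
    ("philip", "Cheddar is the answer to everything #mscedc"),
    ("philip", "Learning analytics capture quantity, not quality #mscedc"),
    ("renee",  "That cheese joke was very cheesy #mscedc"),
]

# Quantity measure 1: tweets per member.
tweets_per_member = Counter(author for author, _ in tweets)

# Quantity measure 2: most-used words across all tweets.
word_counts = Counter(
    word
    for _, text in tweets
    for word in re.findall(r"[a-z']+", text.lower())
)

print(tweets_per_member.most_common())
print(word_counts.most_common(5))
# A count of "cheese" or "cheesy" says nothing about whether the tweet was
# a joke or a substantive point - quantity, not quality.
```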

The comments rendered by my classmates did, however, bring to mind some interesting reading I had been doing on the ethics and morals of technology, which I think fits into what we as a class have been discussing.  I have been looking over a couple of articles by Verbeek and Foucault which assert that humans should not take an “outside position” when assessing technology but rather a “limit attitude” (Foucault 1997), whereby we do not focus on the ethics of having technology but rather on how the technology is designed and implemented.  In other words, we as humans are involved in the design and implementation of the technologies that govern, or steer, our lives (Verbeek 2013).  To clarify, an “outside” stance could be interpreted as oppositional to technology, whereas in the “limit attitude” the individual stands at the border of the technology’s application (but within its sphere of influence) and assesses its value from that point of view.  Braidotti (2013) quotes Verbeek as stating, “. . . technologies contribute actively to how humans do ethics” (Verbeek 2011). This statement implies to me that technology, including data mining or Learning Analytics, is meaningful only when humans use it as a means to revise their lives rather than relying on simple statistics that may not accurately portray real-life circumstances.  This ties in very well with the assertions of Foucault and Verbeek that we should be active participants in the gathering, analyzing, and application of data from the technologies we use.

This position may be wise in terms of our Learning Analytics exercise.  Rather than only asking whether the collected data is valid, we need to understand its purpose and how it is collected.  We will then be in a better position to take part in the creation of the application and in any revisions that may be necessary, and to make viable decisions about how the data is used in our lives, wherever that may be.  The ultimate objective, or at least one of them, is the use of that data in the assessment of our exercise or assignment and of how successful it was.

“In the context of technology this means that the frameworks from which one can criticize technology are technologically mediated themselves. We can never step out of these mediations. The most far we can get is: to the limits of the situation we are in. Standing at the borders, recognizing the technologically mediated character of our existence and our interpretations, we can investigate the nature and the quality of these mediations: where do they come from, what do they do, could they be different?” (Verbeek 2013).

Per Foucault and Verbeek, we need to look at the data from within, standing at the border of its application and becoming part of how that data is collected and eventually used.  This puts a more human element into the equation, placing a value on the assignment rather than just seeing it as a collection of sterile numbers and charts.  Verbeek (2013) asserts, along with Foucault (1997), that technology is a part of our lives.  I interpret this to mean that I should accept the presence of technologies, embrace them and work within their parameters, using them to enhance my life and work, rather than taking an outside stance and continuing to assess whether the technologies should be part of my life in the first place.

Finally, I will end this with a quote from Verbeek which, I believe, sums up what I am trying to express:

“While we cannot conceive of ourselves as autonomous beings anymore, because of the fundamentally mediated character of our lives, we can still develop a free relation to these mediations. Without being able to undo or ignore all of them, we can critically and creatively take up with them. Being a citizen in a technological society requires a form of ‘technological literacy’. Not in the sense that every citizen needs to understand all technical details of the devices around them, but in the sense that we develop a critical awareness of what technologies do in society” (Verbeek 2013).

References:

Braidotti, R. (2013). The Posthuman. Cambridge, UK; Malden, MA: Polity Press, p. 41.

Foucault, M. (1997). “What is Enlightenment?”. In: M. Foucault, Ethics: Subjectivity and Truth, edited by Paul Rabinow. New York: The New Press.

Verbeek, P.P. (2011). Moralizing Technology: Understanding and Designing the Morality of Things. Chicago, IL: University of Chicago Press, p. 5.

Verbeek, P.P. (2013). “Technology Design as Experimental Ethics”. In: S. van den Burg and Tsj. Swierstra (eds), Ethics on the Laboratory Floor. Basingstoke: Palgrave Macmillan, pp. 83-100.

Reply to Week 8 Summary Comment by James

James, you are right in stressing algorithms are not static but change as the focus of the inquiry changes. Perhaps I would have been more accurate by emphasizing that algorithms are not necessarily capable of initiating changes but only react to external changes as expressed by a change in search terms or other forms of original inquiry.

Response to James’ response to Week 9 Summary

“. . . even if all five of your high schools shared a curriculum and even the same assessment exercise, as long we ask students to ‘work digitally’ in the preparation of the work, the experience will be different.”

James, your sentiments on this are correct and I think they reflect the ongoing battle educators have with standardized testing. We seek to establish norms, or algorithms, that will indicate the level of proficiency students achieve in a certain content area, or that act as predictors of future success at university or in a chosen vocational field. Where the real frustration lies is in the unknown quantities and qualities of experience, motivation and innate drive. As our algorithms these past couple of weeks have shown, searches may be made “easier” when Google or Facebook, as two examples, can predict what or who we are looking for, but the algorithms cannot predict, or perhaps even decipher, the internal motivation to begin the search in the first place, or how the search may turn given the results received, or the lack of them.

As with algorithms, standardized testing should be used with caution. It is good to have established learning objectives, as we do in EDC17, for example. Those objectives, however, are not achieved by everyone in the class creating uniform essays or blogs or Tweets. As has been displayed over the course of the last few weeks, we each approach the ultimate objectives of the course in a different way, but we do manage to end up in relatively the same place at the end. With testing, it is wise to have specific goals and standards that need to be met. What is not usually accounted for is the fact that learners, even with the ultimate objective in mind, will reach it by diverse paths, and some perhaps not at all. In my opinion there is no algorithm that can adequately, or even fairly, account for that subjective nature.

Week 9 Summary: Changing Direction Can Give You Whiplash

This week we looked at algorithms and how they influence us not only in our private lives but in our vocational situations.  While much of our study of algorithms seemed to focus on the inclusion or exclusion of search results in Google, Facebook, and other social media or news platforms, I tried to look at another facet: how we determine what information is relevant to us as teachers and to our students, and how we use that information to refine our definitions of relevance.

For me this was most poignant as I use algorithms in some form almost every day.  One major concern I have is that, of the five high schools in my district, three use different curricula in any one content area than the other two.  Therefore, what is being tested in our common assessments may not match up with what is being taught or tested at any other school.  In general I believe this makes algorithm use in testing student proficiency invalid and basically a waste of time.  Is this fair to the students?  Is this fair to parents, who are under constant pressure to be more involved?  Some of our Tweets this week addressed these issues.

I began to fully realize the senselessness of our testing cycle after reading Gillespie:

“The algorithmic assessment of information then, represents a particular knowledge logic, one built on specific presumptions about what knowledge is and how one should identify its most relevant components.”

Also, I read several posts, and the underlying readings supporting them, which kept taking me back to our studies of digital cultures in Block 1.  I am now thinking more about how algorithms determine, or at least influence, the social and professional paradigms we adhere to, and how we go about trying to predict the outcomes of our day based upon preconceived perceptions of people, behavior and circumstances.

#mscedc

 

 

Pinned to Digital Education on Pinterest

Description: Deep learning is having a serious moment right now in the world of AI. And for good reason. Loosely based on the brain’s computing architecture, artificial neural networks have vastly outperformed their predecessors in a variety of tasks that had previously stumped our silicon-minded comrades. But as these algorithms continuously forge new grounds in machine …
By Philip Downey
Pinned to Digital Education on Pinterest
Found on: http://ift.tt/2mAFDsI

Google’s Algorithm Revision

I see Google is now going to use “direct teams” to flag what may be considered offensive search results.  What is determined to be “offensive” will be determined by Google.  Flagged content will not be removed from the results, but the results will be re-ordered so that less offensive items are listed higher.  This will, in effect, be a change or revision to the existing algorithm, one that could impede individual research of offensive content for legitimate reasons.

http://bit.ly/2n3ZRij
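As a toy illustration of that kind of demotion (this is in no way Google’s actual implementation, just a sketch of the general idea), flagged results can remain in the list while being sorted below unflagged ones:

```python
# Hypothetical search results: (relevance_score, flagged_offensive, title).
results = [
    (0.92, True,  "Result A"),
    (0.88, False, "Result B"),
    (0.75, False, "Result C"),
]

# Re-rank: nothing is removed from the list, but anything a reviewer has
# flagged as offensive sorts below the unflagged results.
reranked = sorted(results, key=lambda r: (r[1], -r[0]))

for score, flagged, title in reranked:
    print(f"{title}  score={score}" + ("  [flagged]" if flagged else ""))
```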

#mscedc

The Boomerang Effect: Algorithms to Learning Analytics Back to Artificial Intelligence

In the readings and discussion so far this week I have found some interesting information about learning analytics.  The basic concepts seem very familiar, as they are something I use every day in trying to determine student success, predict failure, and decide what I may be able to do in my own lesson planning to influence either outcome.  In my readings, I found this interesting exchange between two of our readings’ authors, George Siemens and Mike Sharkey.  Very generally, the discussion forum was focused on the variable definition of learning analytics and how whatever definition you choose could be applied.  I have included almost all of the discussion between Sharkey and Siemens, only editing out what I determined (if I am allowed to do so for this exercise) to be irrelevant at this point.

The reason I have used a large part of their discussion is that I wanted a record, in the same place, of the context of what Sharkey and Siemens were talking about.  I found it very applicable to how I and others in my field approach academics and how we define success or failure in the classroom.  This is a struggle I deal with throughout each school year as I move with the ebb and flow of students’ accomplishments during their assignments and assessments.

The following discussion took place in Learning Analytics Google Group Discussion, in August of 2010 (the exchange below is conveyed verbatim and has not been edited in terms of grammar, syntax or emoticon use):

Mike Sharkey:

I wanted to add a dimension to the discussion, specifically around 
defining success.  In the descriptions of learning analytics we talk 
about using data to “predict success”.  I’ve struggled with that as I 
pore over our databases.  I’ve come to realize there are different 
views/levels of success:

Academic: 
In its simplest form, academic success means getting a good/passing 
grade.  That works for a 15-week course since you can use the first 
few weeks of data to predict the remainder of the course.  However, I 
work in an environment where courses are 5, 6, or 9 weeks long (we 
teach courses one- or two-at-a-time in serial).  That prevents me from 
using data within a course to predict the outcome for that student. 
There’s a second part to this argument about whether good grades = 
success.  That’s a discussion we need to have over a beer so I’ll pass 
for now. 😉 

Another academic metric is learning outcomes.  Look at assessment 
data and use mastery of outcomes as a gauge of success.  If the 
institution does a good job measuring learning outcomes, this is a 
possibility. 

Progression: 
If we can’t measure success within a course, we might look at it 
across the student’s program.  From an academic standpoint, that means 
GPA.  That will just lead us to the same discussion about whether or 
not grades are a good measure of success.  From a practical 
standpoint, success may mean “is the student still attending”.  Are 
they progressing through the program in a timely fashion?  This isn’t 
a good qualitative measure, but the argument can be made that if the 
student is still attending, there’s a better chance they will succeed 
in the program (especially when you compare that to students who have 
stopped attending and have zero chance of graduating). 

“Are they attending” is aligned with engagement.  Is the student 
actively engaged in the course?  We can measure this by attendance 
(did they show up) or by some alternate engagement metric (e.g. number 
of actions in the course LMS).  We can even get more detailed on the 
progression metric and look at two dimensions: 
– Persistence (when is the last time we heard from the student) 
– Density (over the last x weeks, what percent of the time has the 
student been engaged) 

I have started to model metrics and I haven’t come to any solid 
conclusions yet.  It really boils down to who you are and how you 
define success.  Different parts of the institution will have 
different definitions. 
I hope to chat more with you at the conference in February. 

Mike 
Director of Academic Analytics 
University of Phoenix 

 George Siemens:

Hi Mike – thanks for contribution. Last year, I met someone from U of Phoenix (can’t remember how it was!) and they mentioned some of the current – and planned future – use of analytics at UoP. It was quite advanced from what I’ve seen at other institutions. Analytics require explication. Online courses, programs, and institutions are uniquely placed to be early trail-blazers of analytics.

Good question about success. Success has come up a few times already and, as you note, will be different in different situations and institutions. Or learners, for that matter. For some learners, simply passing a course could be defined as success. For others, only top grades would be seen as success. 

Your points about persistence and density form part of the research that needs to be done around analytics. What learners characteristics contribute to success (however it is defined)? Which signals or deviation from those characteristics can we observe early enough through analytics to intervene to ensure success? Some great areas of research and exploration!

Mike (and others from the perspective of their institutions) – would you mind sharing a bit more about how you use analytics at UoP? What is working well? How are learners responding? What technology are you using for data collection and analytics? What role does visualization play?

George
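To make Sharkey’s persistence and density metrics concrete, here is a minimal sketch using a hypothetical activity log; the six-week window, the dates, and the variable names are my own assumptions, not anything drawn from the exchange above.

```python
from datetime import date

# Hypothetical LMS log: dates on which a student did anything in the course.
activity_dates = [
    date(2017, 3, 1), date(2017, 3, 3), date(2017, 3, 10),
    date(2017, 3, 17), date(2017, 3, 24),
]
today = date(2017, 3, 27)

# Persistence: when did we last hear from the student?
persistence_days = (today - max(activity_dates)).days

# Density: over the last x weeks, what fraction of weeks show any activity?
x_weeks = 6
active_weeks = {
    (today - d).days // 7
    for d in activity_dates
    if (today - d).days < x_weeks * 7
}
density = len(active_weeks) / x_weeks

print(f"persistence: last activity {persistence_days} days ago")
print(f"density: active in {density:.0%} of the last {x_weeks} weeks")
```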

In conclusion, sort of: when Siemens mentioned that the types of analytics being discussed would work well for online courses, it reminded me of the evaluations we completed in the Course Design for Digital Environments course at the University of Edinburgh just last fall.  We had to consider various analytical frameworks to create operable and meaningful learning outcomes for the courses we designed.  Of course, these outcomes were both dependent on and determinant of the curriculum and activities we included in the course structure.  It is very easy to see, from my perspective, how difficult it is to create and implement a solid strand of outcomes while trying to address as many as possible of the different facets of learning and teaching that each teacher and student face every day.

#mscedc

Comment on EDC Week 8 (!) A weeks review of alogarithms.. https://t.co/ovTscw6OJO #mscedc by jlamb

Hello Myles, thanks for this review of your study of algorithms over the last week. And good to see you experimenting with another medium to convey your ideas, this time using Thinglink.

By coincidence, I’m writing this reply while in the background my son is watching his preferred dinosaur cartoon on Netflix. Even though we make use of the option for different profiles for each member of the household, I’m still amused by some of the films that are recommended for me: the algorithm is sophisticated but not flawless. Unless of course someone else is using my profile to watch comedy-action films…

Whilst accepting that it might be irritating for you, I was nevertheless amused by your mention that Futurelearn is now spamming you on account of your work around the micro-ethnography. An unintended consequence of the microethnography (combined with other influences) beyond the intention or control of those who designed the EDC course. It would be really interesting to see whether the subject of the advertised courses picked up on other of your online activity?

Within my own research something I’m interested in is how the experience of the marker might be affected by the algorithm. I’ve been thinking for instance how the experience of watching the same video assignment – and perhaps their interpretation – will alter depending on whether the student uploads their work to YouTube, Vimeo or MediaHopper? To apply this to my experience of your own artefact here, when I first looked at your Thinglink assignment my eye was temporarily drawn to the related images beneath: Chelsea Football Club (perhaps because earlier today I glanced at the sports news on the BBC website using this computer?); a crest for the city of Downey in California (because earlier this evening I had a Twitter exchange with Philip Downey from our EDC class?); suffragists and women pioneers (possibly because I had recently followed up your post about Ada Lovelace?). This would seem to be a really nice link into week 9 where we’re looking in particular at how algorithmic culture and learning analytics affect education: in this instance, my experience of viewing your work has been shaped by influences beyond what you intended as the author, and beyond my immediate control as the author. Fascinating stuff.

from Comments for Myles’s EDC blog http://ift.tt/2mKmOqj
via IFTTT

Response from MOOC professor

I recently sent an email to Professor Jared Leising, at Cascadia College, telling him I had participated in his Innovative Poetry of Cascadia MOOC and that I had completed an ethnography on his course.  I included a poem I wrote for the course as well.  I have attached below a copy of my letter as well as the response I received from Professor Leising.  It is not much of a reply in length, but he does express more than nominal appreciation for my participation in his group; enough to share my contact with his colleagues.  I thought it quite nice and a solid capstone for my effort.

Professor Leising:
I am a student at the University of Edinburgh, and am completing requirements for the Master of Science in Digital Education program.  I am also a Social Studies and Life Science teacher at a high school in Southern California.
As part of the coursework for my Education and Digital Culture course, we had to find a MOOC, enroll, and complete an ethnographic study of the course.  I chose The Innovative Poetry of Cascadia, as I used to live in Oregon and the topic was, honestly, outside my field of expertise.
I must say I thoroughly enjoyed working through the modules, although I was not able to participate in the course in real time.  What I did do was read many of the poems and thoughts of the other participants, as well as explore information about the Cascadia Poetry Festivals.
I have included with this email a link to the study I completed.  It is not presented in a typical research format but on a platform I believe allowed me to more fully express the color and spirit of the course and its themes.  Lastly, I thought I would try my hand at poetry as if I were an active participant in the course.  I hope you enjoy what I have included.
Please feel free to respond if you like, and if so, I look forward to hearing from you.  Thank you.
Philip Downey

(From Professor Leising):

Thank you, Philip.

I shared this with my co-teachers.  It was a lovely surprise!

Really thoughtful and interesting to see it presented in this way.

Jared

#mscedc

Week 8 Summary: Are We Really That Transparent?

This week we have been looking at algorithms: how they work and what effects they have on our lives.  As a teacher, I am always on the lookout for a better way of doing something, and for doing it more efficiently.  So, this week I looked more specifically at student academic behavior and at the algorithms we use not only to predict academic success but also to explain the lack of it.

I looked at a few algorithms based upon desired outcomes.  One was the solving of Rubik’s Cube: a simple algorithm but inherently infuriating to follow to success (at least for me).  The algorithm follows the straightforward precept that if you do this, then this will happen.  There is no human element involved, unless you count patience and perseverance.  http://bit.ly/2mfTMvU

Mubeen wrote an article which I have referenced in a previous post on my Lifestream.  A guest at a school was asked to analyze a student’s academic performance.  The guest’s analysis was incorrect because he failed to consider the human side of the student, such as socio-economic status or motivation to succeed.  I find this to be true in my own teaching experience.  A teacher cannot always predict what a student will or will not do by looking at a previously determined set of rules or at certain biases of one sort or another.

And then of course there are the algorithms used by Facebook, YouTube, Pinterest, and others, which offer me articles and visuals in the same genre as what I have viewed previously.  But again, algorithms cannot seem to consider the human element.  What if my interests change?  What if, while reading through offerings on a particular subject, I wish to see an opposing view?  An NPR study found that search algorithms are unable to adequately deal with that kind of deviation from what the algorithm predicts my behavior should be.  http://n.pr/2mfRgWp
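As a rough illustration of why this happens (not how Facebook or YouTube actually work, just a caricature of recommending from past behaviour alone), consider a recommender that ranks topics purely by how often they were viewed before:

```python
from collections import Counter

# Hypothetical viewing history, tagged by topic.
history = ["gardening", "gardening", "politics-left", "gardening", "cooking"]

catalogue = {
    "gardening": ["Pruning roses", "Raised beds 101"],
    "politics-left": ["A left-leaning op-ed"],
    "politics-right": ["A right-leaning op-ed"],
    "cooking": ["Knife skills"],
}

def recommend(history, n=3):
    """Rank topics purely by how often they were viewed in the past."""
    top_topics = [topic for topic, _ in Counter(history).most_common()]
    picks = []
    for topic in top_topics:
        picks.extend(catalogue.get(topic, []))
    return picks[:n]

print(recommend(history))
# "politics-right" is never surfaced: nothing in past behaviour points to it,
# so a changed interest, or a wish to see an opposing view, is invisible
# to this kind of model.
```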

Mubeen, J. (2016). Humanizing Education’s Algorithms. EdSurge News, June 10, 2016.

Is My Environment Really All That “Smart”?

This week we are looking at a vast variety of algorithms and how they determine, or reflect (or both), our past behavior patterns and act as predictors of future behavior.  We see algorithms choose what shows or movies we watch on Netflix and YouTube and make recommendations for reading on Amazon and Kindle.  The attached article is an interesting treatise on how algorithms, based upon personal interactions, can predict or determine how well we socialize and what constitutes effective social interaction between people.  The “smart environment” project had as one of its primary goals the gathering of data to determine how people can remain more independent, especially as they grow older, as well as to provide information on how employees can improve production through increased socialization at work.

http://bit.ly/2nn9Xrn

#mscedc

The Human Element: Necessary or an Add-On?

In Humanizing Education’s Algorithms, Mubeen (2016) begins by relating an instance where he analyzed a student’s academic performance patterns based upon a computer program (algorithm) designed for just that purpose.  What the algorithm did not take into account was the human element, which caused the analysis to be way off the mark.  The student’s excellent marks, especially in math, belied the fact that he was homeless and had access to a computer at the local library only twice a week.

Further, what has technology relegated teachers to become?  Are we lecturers and dispensers of information only, or do we still have the mandate to give our students the human touch, that element of community not created by machines or apps?  Mubeen goes on to say, “An algorithmic approach is not sufficient to serve our students. Joshua has met with success because his teachers are active agents in his learning journey.” (Mubeen 2016).  The article’s thrust is that while programs and such are fine and perhaps necessary, they alone are not enough to create and maintain an environment of learning whereby students may become, and remain, successful.  The teacher is there and must be aware of contingencies that computers and programs are not able to handle from a purely subjective point of view.

Basically, and at the risk of re-stating, the thrust of this article is that we can only really make sense of the learning process if we take into consideration the human element.  Algorithms are wonderful tools for what they do, but can an algorithm supply the human quality?  We have discussed this before during our time studying AI and robots, etc.  The process, or the machine, can mimic performance, but can it mimic intent? Apathy? Motivation?  These are questions that educators must address beyond the technological tools available.

Mubeen, J. (2016). Humanizing Education’s Algorithms. EdSurge News, June 10, 2016.

#mscedc

Week 7 Summary Revised

As James noted, my Week 7 Summary really did not address the themes of the week.  I realized this when I wrote the previous Summary, but wanted to present what took some of my time during the week.  On reflection I could have simply made a different entry and title but I didn’t so . . .

As for the theme of Block 2: Community, looking back again at the midpoint posting by James, I see how the themes of the prior weeks are moving, or have moved, together.  We started off the course trying to figure out this crazy thing called IFTTT, which some are still having issues with.  Then we moved into space, presence, and community.  A good portion of the feedback I received this past week from James was focused on my use of apps or tools to make my presence more pronounced, not only in my blog but on the net overall.  James encouraged me to use other forms of media and programs in order to expand my opportunities to connect with others.  I see the value in this and I have been working to set up more apps and platforms for that purpose.

So far, the tools I have used have been Twitter, Facebook, Pinterest, Lino, Instagram, Tumblr, and YouTube.  New apps I have looked at are Bitly, Padlet, Flickr and Reddit.  I am sure there will be more, but for now I am trying to work out how IFTTT fits them all together so they feed correctly into my Lifestream.  One aspect I still need to incorporate more fully is the use of video in my Lifestream feeds.  So far, I have inserted pictures, images, and some video from YouTube, but I will have to work on using other forms of media in order to fully experience the connection in community our themes have explored.

Comment on My microethnography: https://t.co/G08wdLn0f9 Stories of a MOOC #mscedc by Renee Furner

Renee Furner

Another really impressive and creative piece from you Anne – thank you. It’s a really emotive arrangement.

I really liked your comment:
“When MOOC members go beyond participation and become teachers, contributors and storytellers, the online community is enriched and strengthened.”

In a sense, the MOOC members are projecting themselves into the community – their experience, their feelings, their history, their knowledge. In this sense the location of what is valued/what can be learned from becomes ‘distributed’.

I also thought that one reason your MOOC might have been more participatory is the role of empathetic listening when dealing with such fraught subject matter. While we should listen empathetically more frequently, I doubt many do (certainly based on most of our peers’ experiences in their MOOCs). In contrast, one’s humanity prevents one from speaking over or ignoring sensitive subject matter, or those things very important to another (like in Philip’s MOOC). Maybe listening is the key (an idea which I must also credit to Linzi, through her posts on my blog).

Thanks again for sharing. Your artefact construction is inspirational!

from Comments for Anne’s EDC blog http://ift.tt/2mxWyA7
via IFTTT

Is it really all a matter of perspective?

Flow Chart

As the image implies, we have a connectedness that stretches beyond ourselves.  The use of imagery such as this provides a decent visualization of how our brain uses algorithmic principles to function. I am wondering how, in the coming weeks, I will learn how this applies to the various topics we have discussed in this course.  Another question would be how, as the next image shows, computers can use our spoken and written words to create algorithms for use in mental health treatment and beyond (Pestian et al., 2017).

To branch away from the aforementioned, I have had several comments on my ethnography, and more pointedly on the poem I submitted as part of it.  A couple of comments were from classmates in EDC17, and a few others from MOOC participants.  I think this may be the sum and substance of the MOOC I studied, one in which I found myself immersed rather than simply being an outside “participant.”

The purpose of the MOOC, the REAL purpose I am now starting to realize, goes beyond the stated objectives of the course, which were to share experiences and thoughts about Cascadia.  As some have mentioned, my ethnography drew them in and caused them to spend an unexpected amount of time looking through my collage of pictures and texts.  It seems my ethnography served a purpose beyond its stated objective as well.  Rather than turning into a dry, sterile presentation, I found that creating it drew from memories and experiences that have long been filed away in my brain.  How we remember is a fascinating realm to dive into.  Good and not-so-good memories: we can either dredge them up, churn them up, recreate them, or remake them.  It is interesting how present circumstances, or perspective, can cause us to see the same memory as good or bad.

Perhaps Weeks 8 through 10 will help me understand the algorithms at play in bringing past memories back to the forefront of consciousness, as I learn how different apps use those algorithms to help us create, express, and even sustain, creativity outside of ourselves.

Pestian, J. P., Sorter, M., Connolly, B., Bretonnel Cohen, K., McCullumsmith, C., Gee, J. T., Morency, L.-P., Scherer, S., Rohlfs, L. and the STM Research Group (2017), A Machine Learning Approach to Identifying the Thought Markers of Suicidal Subjects: A Prospective Multicenter Trial. Suicide Life Threat Behav, 47: 112–121. doi:10.1111/sltb.12312

#mscedc

Week 7 Summary: My Contribution to my MOOC (sit down before reading)

My Summary this week is to just reflect on my MOOC experience once again, perhaps a bit more specifically.  As I have mentioned before, I lived in Cascadia for a number of years and have done some extensive travel in the region.  It is just a wonderland of treasures from beaches to mountains to rivers and lakes to plains and canyons.

While I was completely mesmerized by my classmates’ ethnographies, I am especially proud of mine because I found, admittedly unwittingly, a MOOC that touched me in personal ways.  I truly felt again the meanings of space and presence in this course.  I realize I will also be going way over the word limit for a summary, so I will simply ask to be indulged in this instance.

In closing, and per James’ request, here is what I wrote, or rather scratched out, as my contribution to the Innovative Poetry of Cascadia MOOC.  I give this simple caveat: I am NOT a writer nor a poet.  But one thing I did learn from this MOOC was that it really doesn’t matter.  The participants in this MOOC and others like it simply express feelings as they are experienced and write them down.  So with that, I give you…..this….

Meanderings by Philip Downey

Looking down from the cliffs at the meandering Columbia

I wonder where such an amount of blue comes from.

To the East I see where the gorge narrows

Where each drop of water fights against the others

In its struggle to reach its Western ocean home.

 

In front of and below me the water meanders by

As it makes its way through a flat plain.

Today however, the wind has brought the surface

To a raging froth of foam and spray

Upon which a rainbow of color plays and moves,

Some against the wind and others riding the air currents

As they bounce and swirl among the waves.

 

I find myself feeling jealous of the journey

Of those countless drops of water.

The course they are in will take them to their home

In the distant depths of the sea

Where forces of nature will once again capture them

And deposit them perhaps in some faraway place

Where their journey will repeat and then repeat again.

 

I wonder, as sometimes I do about myself.

Where these drops came from and where they will go

On their endless journeys to places unknown.

Perhaps one day, some day, I will know the secret of their travels

Sharing in them as I move through eternity

On an endless journey of adventure and discovery.

 

“Technology is the exteriorization of our nervous system.”

When I saw this blog entry and watched the short video with it, it struck a chord with me.  As a life science teacher, one of the units I cover in class is, of course, the nervous system.  The network of nerves, to put it simply, connects all of the organ systems of the body and influences every action or reaction the body takes in growth, fighting disease, metabolism, reproduction, and so on.  I thought the comparison with technology was very insightful and, honestly, not one I had really considered.  The technology we use in this course, for example, connects us all, no matter where on the planet we find ourselves, whatever vocation we are in, whatever interests we have, and so forth.  I have seen examples of how we struggle with the technology and with getting it to work right, and of how, when it does work, it can be amazing.

The nervous system connects each cell to billions of other cells, and does so at an amazing pace every second of every day.  When a connection is broken, new pathways develop to restore it.  This really is an incredible image of how we, as humans, stay connected to the world, whether we want or need to, for good or bad.  When a connection we need or want is broken, we attempt to re-connect by building new pathways of communication.

Another interesting facet is how we are attempting to make Artificial Intelligence mimic actual humanity.  True, the trend has been to focus on more domestic tasks, but recent research has sought to extend AI into more cognitive and emotional aspects of humanity.  In Block 1 I mentioned several films that reflected those objectives, albeit in fantasy form.  I would say, however, that recent developments are pushing back on the fantasy as researchers get closer still to re-creating humans with cybernetic characteristics and abilities.  In the spirit of the aforementioned article and video, perhaps the more understanding we have of our own innate communication and networking capabilities as humans, the closer we get to achieving a form of AI that truly represents who we are.  In a theological sense we are turning the doctrine of creation on its head: rather than God creating man in His own image, we are trying to create beings in OUR image; to some that would seem a rather presumptuous undertaking.

I have included the url for the article and video here again for reference.

https://t.co/2tmt3kvJs5

#mscedc

Comments from msleeman

Jeremy, thank you for some wider positioning / reflecting on the metaphor I adopted – it’s very illuminating for me. The sense of borders and control coincided with me reflecting on previous work with David Delaney’s 2010 book ‘Nomospheric Investigations: The Spatial, The Legal and the Pragmatics of World Making’ (London, Routledge), which I also refer to in a Lifestream post entitled ‘@philip_downey Not been to law school…’ The MOOC as a nomosphere was a helpful background metaphor for me, and generative in framing it within the airport site.

from Comments for Matthew’s EDC blog http://ift.tt/2lxPFdk
via IFTTT