Tweet! For Colin it was just coffee

This conversation was one of those smiley moments where we connect with each other across the planet and recognise our shared human traits 🙂

We were just chatting about comforts, but it also reminded me of a comment Colin had made way back in the first block, where he needed another cup of coffee to get through the Donna Haraway paper. I wonder, did any of us find that easy?

Linked from Pocket: Elon Musk launches Neuralink, a venture to merge the human brain with AI

SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal.

from Pocket http://ift.tt/2nb2bBp
via IFTTT

My first thought on reading this was how sci-fi it sounded, and how well it matched some of the film clips we watched in our together tube sessions. However, as the article highlights, we are already implanting devices in the brain, the most successful being a device that can stop tremors in Parkinson’s sufferers. Perhaps because this is not more widespread, it still seems like science fiction, but Musk’s area of interest goes that little bit further: it’s about writing and saving information to and from the brain. It’s all about cognition, about improving ourselves through a technology link.

Reading this article, and reminding myself of the current extent of research in this field, took me back (in thought) to the beginning of our EDC journey, to the first few weeks where I battled to understand the concept of the cyborg, not in the sci-fi movie sense but as presented in some of our readings, like Miller (2011) and Hayles (1999), where I grappled with the concept of the cyborg being much closer to home. Miller (2011) explains the cyborg in terms of “the growing number of ways that technological apparatuses have been used to fix and alter the human body”, which still sounds “out there” but is actually talking about such mundane things as eyeglasses and prosthetic limbs, body modification and gym membership.

Hayles (1999), however, is probably much more relevant to Musk’s intention in this article. In her paper she talked about her amazement that any scientist could genuinely consider the idea that human consciousness could be separated from the body. Even reading this article and hearing that this is indeed the subject of research in 2017, I still find myself agreeing with Hayles on this one and thinking, come on, get real!

 

References

Hayles, N. Katherine (1999) ‘Towards embodied virtuality’, in How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, pp. 1-25, 293-297. Chicago, Ill.: University of Chicago Press.

Miller, V. (2011) Chapter 9: The Body and Information Technology, in Understanding Digital Culture. London: Sage.

Linked from Pocket: The music video that changes each time you click play

The algorithm automatically pulls in short clips from video-sharing sites like YouTube when you hit play on Shaking Chains’ Midnight Oil. The short bits of footage are shown back to back with the band’s track playing over the top.

from Pocket http://ift.tt/2onD3Ym
via IFTTT

After all our talk of algorithms and education, I found this a really nice “smiler” so thought I’d share. A music group using an algorithm to change the experience for viewers watching their music video. A nice change from algorithms pulling information out, instead, algorithms creating art?

Weekly round up – Week 10 already

With no set readings this week, it has been a thinking week: a week to go through our tweetorial with a fine-toothed comb and try to relate our learning on algorithms and learning analytics to a real-world scenario, to chat to classmates about their experiences of our learning and its relation to their world, and even the opportunity to consider how being a part of the MSCDE has become an important part of who I am.

As you can imagine, Twitter is still very prominent this week as we chat about the tweetorial and its outcomes, but the relevance fairy has also been delivering the perfect reading material, letting me think about learning analytics in relation to my job, and analytics both in big powerhouses like NASA and in our schools.

I’ve even found inspiration in some of my own photography this week as I think about digital education as a big picture and see the opportunities it creates for those who want the chance to set their own path as educators.

I’m looking forward to the next two weeks, where I can really get my teeth stuck into some extracurricular reading and start thinking about the direction I’d like to take for my final assignment. Do I want to push myself to develop my knowledge in an area where I feel it is weak, or is there a particular area I am interested in and would like to explore further? And that’s before I consider exactly how I will present it.

 

A read for later – comparing learning analytics with fitness tech

http://ift.tt/2nNUTXG

Tags: #mscedc
March 24, 2017 at 07:57PM

Further findings show that 80% of FE students would be happy to have their learning data collected if it improved their grades, and more than half would be happy to have their learning data collected if it stopped them from dropping out.

This block started with my concerns for students receiving “bad news” via learning analytics and how they might react. I was concerned about how stats may be delivered to students and the potential impact of this information.  I saw this reaction on a small scale this week when the tweetorial analytics were released where some of my classmates described shock, annoyance and even anger when seeing a top ten league table that they weren’t in. This, however, was a minor exercise which didn’t feed into any final assignment grades or directly affect the possible pass or fail of the course.

It was during this period that I came across this article about work being done between 50 education institutions to create an app for learning analytics which could be an aid to both students and teachers. The subheader grabbed my attention as it quoted that 80% of students wanted learning analytics to be carried out and wanted that information available to them, which seemed to go against my initial concerns. On closer inspection, however, the students seem to want the analytics only in all the positive ways: if it improved their grades, if it prevented them from dropping out. There is no thought in that subheader for the students who wouldn’t be receiving good news via the analytics app, so I am afraid I am still on the lower rungs of the cautious ladder when it comes to analytics, the information we provide to students and its purpose.

 

Tweet! Sometimes marketing is about too much bling

It’s always interesting how some TEL initiatives are pitched, sometimes we get it right, sometimes we get it wrong.

Lecture capture is one of those oops moments. It’s not new, and it’s not something we are suddenly going to start doing, but because it’s a part of teaching which has worked away quietly in the background for those who have chosen to use it, it feels like such a big deal now: so many staff across the institution have never been involved with, or aware of, the work that has been going on. The marketing pitch used to “sell” it hasn’t helped, making it out to be so much more than it is. The real big deal is the technical aspect of how we will make this available on such a large scale for those who choose to use it, how we will teach our students to make the best use of it as a learning tool, and how we as educators will make the best use of it.

If all we do is record lectures and make them available, we are wasting a valuable opportunity to create new learning opportunities and work on new methods.

If we want to control how a technology is being used (pushed on us), we have to take ownership of it, we have to test it, tweak it, find areas where it is a benefit and show areas where it doesn’t work as hoped.

 

Tweet! Just a random thought as I took photos

I try to spend my lunch break practising the photography skills I am learning on my MOOC course. Today I was taking photos of an empty lecture theatre at work (it was lunchtime, after all) and it made me consider the impact of digital education and the question of…

Is digital education an enhancement to current practices or is it the realisation of the MOOC hyperbole of 2012?

This came to mind as I know that there is an imbalance between some of the student body and some of the faculty of UoE, where students are asking that lectures are recorded to be used as study aids, and some faculty are reluctant to do this, with one reason being a fear that it would lead to a drop in numbers in the actual lecture.

I understand these worries; after all, if your lecture is consistently half empty, it could be mistakenly thought that your class is not popular. However, is this not a similar chain of thought to saying that students aren’t attending the lecture if they are not in the lecture hall as you deliver it? Instead, could we say that recorded lectures in fact extend the lecture period, that the learning can now happen well past the close of the live lecture and into time periods when the student can be more productive? That students may actually be more present in a lecture, and make better use of it, if they can participate at times when they know they will take the most on board?

Digital education is such a varied and huge topic, but I also believe it’s more than an enhancement of current methods; I believe it’s a philosophy of encompassing the whole. A chance to experiment and learn, to change for the better or discard what doesn’t work, a chance to make use of new tools and technologies where appropriate and, more importantly, an opportunity to raise the bar rather than follow a path.

Linked from Pocket: Effective learning analytics from Jisc

We’re working in collaboration to build a learning analytics service for the sector. There are over 50 universities and colleges signed up to the initial phases of the implementation.

from Pocket http://ift.tt/1HrcJkG
via IFTTT

This was a nice break from our critical analysis this week, a chance to dip my toes back into the world of learning analytics in the working environment. It’s always good to put my thoughts back into context when I’ve spent too long in book land; it helps me to understand my studies and put them into practice.

For this piece, I am optimistic about the fact that 50 institutions are joining together for this work. It means there will be a huge variety of angles evaluated, scenarios tested and eyes on the work, hopefully resulting in some really good tools and best practice to help make the absolute best of learning analytics in H.E., meaning students and teachers benefit.

 

Pinned to #MSCDE: A-level student finds a flaw in NASA data

Just Pinned to #MSCDE: The A-level student noticed something odd in radiation levels from the International Space Station. http://ift.tt/2npMjxS
This was a really fascinating story of an A-level student who, whilst analysing some data from NASA, found an error and reported it.
It was a fluff piece, a feel-good story, but it struck me in a couple of ways. Obviously, it ties in nicely with my focus this week on analysing data and the potential for inaccuracy caused by humans, but also in the fact that the school had encouraged this sort of participation to the level where the teenager had the confidence to stand by his work instead of assuming he must be wrong because NASA couldn’t possibly have made a mistake.
Well done that lad!

Tweetstorm: interpretation

After the fun and games of our tweetstorm, how has the recorded data of the event stood up to our memories of what happened, and what can it tell us about what took place?

Analysing the success of the event

Volume refers to the total number of tweets and shows clearly that the daily number of tweets rose dramatically, from fewer than 50 to 187 on Friday. From this data we can see that the tutorial had an influence on the frequency of tweeting on these days; we could even say that the tweetorial was a success. Or can we?

My interpretation of the statement found on the course website was that the tweet data we are analysing would be from the tweetorial. Closer inspection, however, shows that it actually covers a much longer period, recorded both before and after the tweetorial. This is key: I interpreted the instructions we were given in a specific way, and the data I have been given access to does not quite fit the purpose I thought it had.

This also brings into question my finding that the tweetorial influenced the frequency of tweets. I can only say that the number of tweets on Friday was different from other days in this period because I have been given data for a date range which included days outside the tweetorial dates. So for the data I received, yes, the data implies there was indeed influence. However, I cannot see the same volume data for other days, so it is impossible for me to say whether the tweetorial days showed higher tweet numbers than the rest of the course. If we had that data, we might discover that the tweetorial days actually had a lower number of tweets than other days and therefore don’t hold as much influence as we first thought. From the data we have, we can only state that there were a certain number of tweets recorded on a certain day, not that the tweetorial had any kind of influence on the tweet behaviours of the class, nor that it was successful in creating meaningful discussion around certain topics.
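Out of curiosity, the counting behind a volume chart like this is trivial to sketch; it’s the date range of the input that shapes any conclusion. A minimal sketch (the row layout, column name and sample dates below are my own assumptions, not the real archive’s; the function can only count the rows it is handed):

```python
from collections import Counter

def tweets_per_day(rows):
    """Count tweets per calendar day from archive rows.

    Each row is assumed to hold a 'created_at' timestamp like
    '2017-03-24 19:57' -- a hypothetical layout for illustration.
    """
    return dict(Counter(row["created_at"].split(" ")[0] for row in rows))

# A tiny made-up sample standing in for the real archive:
sample = [
    {"created_at": "2017-03-23 10:00"},
    {"created_at": "2017-03-24 09:15"},
    {"created_at": "2017-03-24 11:30"},
]
print(tweets_per_day(sample))  # {'2017-03-23': 1, '2017-03-24': 2}
```

Hand this function only tweetorial-week rows and Friday looks like a spike; hand it the whole course and Friday might be ordinary. Same code, different story.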

Who was present and engaged?

Over the course of the recorded data, Philip was the most prolific tweeter. This information is displayed as a league table, which created a bit of competitiveness amongst the class about who was in the top ten and who wasn’t. One of my classmates mentioned a paper this week which discusses exactly this and finds that displaying data in this fashion does indeed cause competitiveness (Cherry & Ellis, 2005); I apologise that I can’t remember who introduced me to this paper. An interesting point to consider, however, is that there was no such competitiveness about whose tweets showed the best understanding of the topic or the most learning, just about who was on the league of most prolific tweeters.

We could interpret this league table to say that Philip was engaged, even the most engaged, or that he was present, and in learning analytics this might be the way that this data would be used. We cannot say, however, that his tweets were of substance. He could have been retweeting the same message over and over in an attempt to influence the data rather than engaging with the debate, or the tweets he was publishing may not have been engaging with the topics at all; they may just have carried the #mscedc hashtag and therefore been counted. Think roller skates and cheese.

Social learning

This leads me nicely to social learning. As Siemens (2013) points out, “The learning process is essentially social and cannot be completely reduced to algorithms”. Although we may interpret the high volume of tweets to mean that discussions were taking place amongst peers on the topics given by Jeremy and James, the data doesn’t record the content of these tweets. It has, however, recorded a heat map of words used, and as we would expect, “data” and “algorithm” feature highly. I’m drawn, though, to the presence of other words: “I’m”, “I’ve” and “perhaps”. Does the use of these personal statements show conversation and interpersonal discussion? Does it show students taking the learning and evaluating it in personal terms for their own understanding? Does the high ranking of the word “perhaps” show uncertainty and a lack of confidence in the topic? We can and do assume that the use of a social media platform encourages conversation and indeed social learning, but how would we quantify this experience with data?
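For what it’s worth, a word heat map like this is probably built on a surface-level word count, something along these lines (the tokenising rule and sample tweets here are my own illustration, not the archive’s actual method). It shows neatly why “I’m” and “perhaps” can rank alongside “data” with no sense of the conversation around them:

```python
from collections import Counter
import re

def word_frequencies(tweets, top_n=5):
    """Tokenise tweets and return the most common surface forms.

    A crude stand-in for the archive's heat map: it counts words
    only, so personal markers like "i'm" and hedges like "perhaps"
    rank alongside topic words, with all context stripped away.
    """
    words = []
    for text in tweets:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    return Counter(words).most_common(top_n)

# Two made-up tweets standing in for the class archive:
sample = [
    "I'm not sure the algorithm understands the data",
    "Perhaps the data says more about the algorithm than us",
]
print(word_frequencies(sample))
```

Whether a high count for “perhaps” means uncertainty, politeness or just one chatty tweeter is exactly the interpretation the count itself can’t supply.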

Accuracy of data

Although I am pretty sure, from my personal experience, that almost all of the tweets in this timeframe were in English, the data says otherwise. Therefore I must be wrong; I obviously don’t remember the events as well as I had assumed.

Not necessarily.

I know the one Swedish record was, in fact, in English; I know this because it was one of my tweets, which Twitter then offered to translate from Swedish when it was actually in English. I also know that Colin and I both sent tweets in Scottish Gaelic (Gah-lick, not to be mistaken for Irish Gay-lick) after we saw this mistake. However, as you can see from the chart, there is no record of Scottish Gaelic appearing, even though the key to languages says Twitter would have recognised it if there had been.

This emphasises nicely that we cannot guarantee the accuracy of the data we are given, which we then interpret to make judgements. There has clearly been an influence at work which has told the Twitter archive algorithm to record languages a certain way, and this hasn’t fitted with what was going on. If my memory were infallible, I could say that there were only those two tweets which were not in English and that the data given by Twitter is therefore completely wrong. I cannot say that; what I can say is that the data is definitely not accurate, as I have detected at least one flaw, and therefore we must question the data set as a whole. We do not know how the algorithm recorded this data, so we cannot account for the flaws we have seen.

In conclusion, the problem is that we are being asked to analyse the data we have been given, meaning we should study it methodically to interpret its meaning, and that’s the flash bulb: “interpret”. I have deliberately and repeatedly spoken of my interpretation of the data and the task. One person will interpret the information they have in one way, and the next person may interpret it differently. External influences play a part in how we interpret data, in the meaning or importance we place on things. What’s the phrase? Like looking at the world through rose-tinted spectacles; well, mine are purple, so I’ll see it differently from you.

Interpreting the data is only part of the story, though. Once it has been interpreted, what is then to be done with those findings? For the sake of this example, if this information was being used as learning analytics, we could expect Philip’s tweet count to be associated with attendance or participation, and in the same light, if someone didn’t rank highly in the volume graph, would they then be marked lower for participation? As mentioned earlier, these stats don’t show the quality of the tweets, only their quantity; it is therefore impossible to say that Philip was more engaged or participated in the discussion more than anyone else. We also cannot say who was present and participating but not actively tweeting.

I’m going to end this thought by repeating the word interpret. I have deliberately used this instead of analyse, as there are associations attached to these words: analyse we associate with computing, and therefore with accuracy; interpret we see more as an art than a science, and therefore as holding the potential for human error. But as these few examples have shown, the computer can make mistakes, and at the end of the day it is a human who interprets/analyses the data, and they can only work with the data they have been given to try to ascertain the information they need. Before acting on any analytical data, we should ask ourselves how the information was gained, why it was recorded and how it was recorded, before then interpreting the data for our purpose, all the while remembering that what we have is interpretation, as Jeremy has said often over the last week, a proxy to help us interpret learning.

 

References

Cherry, T.L. & Ellis, L.V., 2005. Does Rank-Order Grading Improve Student Performance? International Review of Economics Education, 4(1), pp.9–19. Available at: http://www.sciencedirect.com/science/article/pii/S1477388015301407.

Siemens, G. (2013) Learning Analytics: the emergence of a discipline. American Behavioral Scientist, 57(10), pp. 1380-1400.