
@Eli_App_D @notwithabrush @fleurhills Brilliant! Thank you. Where is the 2000 mentioned? #mscedc

from http://twitter.com/helenwalker7
via IFTTT

“Twitter’s just-in-time design allowed students and instructors to engage in sharing, collaboration, brainstorming, problem-solving, and creating. Participants noted that using Twitter for socializing and learning purposes felt more “natural and immediate” than did using a formal learning management system.” (Dunlap & Lowenthal, n.d.) from https://onlinelearninginsights.wordpress.com/tag/community-of-inquiry-model/

 

Comment on Week 10: Learning Analytics Critique by hwalker

This is a fascinating analysis, Philip: I’m definitely going to read the Verbeek articles.

‘Rather than looking to see whether the collected data is valid or not, we need to understand its purpose and how that data is collected.’ Absolutely. As Knox states (http://ift.tt/2nlrcud): ‘I think we should focus less on the results of Learning Analytics, and whether they measure up to reality, and more on the processes that have gone into the analysis itself.’

from Comments for Philip’s EDC Blog http://ift.tt/2nqxfzt
via IFTTT

Lifestream summary: week 10

Following the frenetic interactions of the Tweetorial, this has felt like a much quieter week, as we retreated to assess the data captured via Tweet Archivist. Having looked at a number of responses to the task (see here, here and here), it appears that many of us have dismantled any notion that the data provides meaningful insights into our learning. What it has provided us with is an example of why algorithms, their outcomes and their analyses all have to be interrogated as active processes (we’re back to entanglements again) which are neither objective nor transparent. They are complex constructs which, as Helen’s work shows, can provide us with multiple fictions.

Our discussion in the Hangout touched on the impact which the use of analytics can have on learning and learners. Knowing that we were going to be subjected to an algorithmic assessment impacted on our behaviours in the Tweetorial and, in future, could affect how we approach tasks as we try to ‘beat the machine’.

The other key activity this week has been further development of my ideas for the final assignment. It’s been such a rich, thought-provoking course that I’m a little at sea about where to start and what tools to use. However, as suggested by James in his email to his tutees, I’m going to look for help and suggestions from the other students on the course…

@fleurhills @learntechstu @lemurph I third this. A brilliant demonstration that the process of analysis is key, not the data. #mscedc

from http://twitter.com/helenwalker7
via IFTTT

If Knox’s blog highlights that the notion of algorithmic objectivity is what Gillespie calls ‘a carefully crafted fiction’, Helen’s work demonstrates how we can use algorithmic outcomes to tell very different tales: https://padlet.com/lemurph/analytics.

Gillespie, T. 2012. The Relevance of Algorithms. In Media Technologies, ed. Tarleton Gillespie, Pablo Boczkowski, and Kirsten Foot. Cambridge, MA: MIT Press.

Is there a word limit for the final assignment? I’ve looked in the course guide and on the assignment page…can’t find anything? #mscedc

from http://twitter.com/helenwalker7
via IFTTT

As ever, I initially crumple when faced with freedom…

Fascinated by how neutral we appear to be. #mscedc https://t.co/WAn6hQmjcC

from http://twitter.com/helenwalker7
via IFTTT

I’m intrigued by the notion that Keyhole purports to be able to deliver information about sentiment. As a former English teacher, I was inherently sceptical about this: I used to spend much time with my students discussing connotations and the slipperiness of words…So, this was interesting:

Evaluation

‘The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. This is usually measured by precision and recall. However, according to research, human raters typically agree 79% of the time.

Thus, a 70% accurate program is doing nearly as well as humans, even though such accuracy may not sound impressive. If a program were “right” 100% of the time, humans would still disagree with it about 20% of the time, since they disagree that much about any answer. More sophisticated measures can be applied, but evaluation of sentiment analysis systems remains a complex matter. For sentiment analysis tasks returning a scale rather than a binary judgement, correlation is a better measure than precision because it takes into account how close the predicted value is to the target value.’

From: https://en.wikipedia.org/wiki/Sentiment_analysis
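To make that accuracy-versus-agreement point concrete, here is a minimal sketch in Python using entirely made-up ratings (not Keyhole’s actual method or data): it compares how often two hypothetical human raters agree on binary sentiment labels with how often a hypothetical classifier agrees with one of them, then uses correlation for scaled scores.

from statistics import correlation  # available in Python 3.10+

# Hypothetical binary sentiment labels (1 = positive, 0 = negative) for ten tweets.
rater_a    = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # first human rater
rater_b    = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]   # second human rater
classifier = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]   # automated classifier

def agreement(x, y):
    # Fraction of items on which two sets of binary labels agree.
    return sum(a == b for a, b in zip(x, y)) / len(x)

print("human vs human:  ", agreement(rater_a, rater_b))     # 0.8 with these labels
print("machine vs human:", agreement(classifier, rater_a))  # also 0.8 with these labels

# For scaled sentiment (-1 to 1), correlation rewards predictions that are
# close to the target value, which simple agreement cannot capture.
target    = [0.9, -0.2, 0.4, 0.7, -0.8, 0.5, -0.1, -0.6, 0.8, 0.3]
predicted = [0.8, -0.1, 0.2, 0.6, -0.7, 0.4, 0.1, -0.5, 0.7, 0.2]
print("correlation:     ", round(correlation(target, predicted), 2))

With toy numbers like these, the machine ‘agrees with a human’ about as often as two humans agree with each other, which is exactly the ceiling the Wikipedia passage describes.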