This is a really engaging read, Stuart – thank you!
The digital cacophony at the beginning was really disorienting – I can see why people may want to turn away from it when learning.
One of the points I thought of with regard to the scale of MOOCs (and mine was an infant compared to yours) was that in order to participate in forums, users need a sense of the history of that forum. Without this knowledge, the information can be overwhelming, and if enough people lack knowledge of the history, participation norms are difficult if not impossible to establish.
As one of the ‘steps to success’ in a MOOC, Cormier suggests that participants need to ‘cluster’, so that they can filter the noise/information and make it manageable.
It seems, though, that within your MOOC there was no opportunity to network and find those with shared interests (excepting Chenée) – and similarly I’ve seen scant evidence of this in our peers’ ethnographies. What kind of environment would have supported that, I wonder?
Really interesting observations – a pleasure to read.
from Comments for Stuart’s EDC blog http://ift.tt/2mQ93qm
The first week of our algorithmic cultures block seemed ‘noisy’ – perhaps because there is so much recent news on the impact of algorithms, and studies into the same, for peers to share through Twitter. Certainly, it has felt like our investigations are timely.
Finally, in a post illustrating my own algorithmic play, I showed that Google is selective in what it records from search history, that Google ads topics are hit and miss because the algorithm does not grasp the meaning attached to online actions (demonstrating Eynon’s (2013, p. 239) assertion about the need to understand meaning, rather than just track actions), and that there is a desire for validation when the self is presented back through data (following Gillespie’s (2012, p. 21) suggestion). For me, the findings of my play seemed trivial – but such a stance belies the potential for algorithms to have real (and negative) impact on people’s lives through profiling.
Tim Berners-Lee calls for greater algorithmic transparency and personal data control.
from Diigo http://ift.tt/2ncWlj9
I almost forgot to add some ‘meta-data’ to this one!
Who can believe the Internet is 28 years old? In this open letter, Tim Berners-Lee voices three concerns for the Internet, all connected to algorithms:
1) We’ve lost control of our personal data
2) It’s too easy for misinformation to spread on the web
3) Political advertising online needs transparency and understanding
In terms of (1), Berners-Lee calls for data to be placed back into the hands of web users and for greater algorithm transparency, while encouraging us to fight against excessive surveillance laws.
In terms of personal data control, I wonder what the potential of Finland’s proposed MyData system is.
Transparency of algorithms also applies to (2) – but I also think that web users have to be more proactive in questioning what they find (are given) on the web, and that, within the teaching of information and media literacy, schools need to place greater focus on questioning claims and information rather than sources per se. Berners-Lee additionally calls for greater pressure on major aggregators such as Google and Facebook to act as gatekeepers, with a responsibility to stop the spread of fake news, while warning against a singular, central arbiter of ‘truth’.

Where does responsibility lie for misleading information, clickbait and so on? While I agree that aggregators need to take responsibility, the problem seems to be connected to the underlying economic model: as long as there is money to be made from ‘clicks’, fraudulent and sensationalist ‘news’ will continue to be created, and the quality of journalism will be weakened. I don’t have any long-term solutions – but perhaps in the short term, taking personal responsibility for diversifying the channels through which we search for, receive (and distribute!) information is a start, along with simple actions towards protecting some of our data (logging out, using browsers like Tor, and not relying exclusively on Google, for example).