Lifestream, Comment on Micro-ethnography by Renée Furner

This is a really engaging read, Stuart – thank you!
The digital cacophony at the beginning was really disorienting – I can see why people may want to turn away from it when learning.

One of the points I thought of with regard to the scale of MOOCs (and mine was an infant compared to yours) was that in order to participate in forums, users need a sense of the history of that forum. Without this knowledge, the information can be overwhelming, and if enough people lack it, participation norms are difficult, if not impossible, to establish.

As one of the ‘steps to success’ in a MOOC, Cormier suggests that participants need to ‘cluster’ so that they can filter the noise/information and make it manageable.

It seems, though, that within your MOOC there was no opportunity to network and find those with shared interests (excepting Chenée) – and similarly I’ve seen scant evidence of this in our peers’ ethnographies. What kind of environment would have supported that, I wonder?

Really interesting observations – a pleasure to read.

Renée

from Comments for Stuart’s EDC blog http://ift.tt/2mQ93qm
via IFTTT

Week 8 Summary

A little tool I hoped to use to help with my summarising this week – still programming its algorithm, though; it’s not quite ready to select what is relevant.
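
For the curious, a toy ‘relevance’ algorithm of the kind I was joking about might look like the Python sketch below. It is entirely hypothetical – a naive frequency-based scorer of my own, not a real tool from the course – and real summarisers are, of course, far more sophisticated.

```python
import re
from collections import Counter

def summarise(text, k=3):
    """Pick the k sentences whose words are most frequent overall -
    a crude proxy for 'relevance'."""
    # Split into sentences on terminal punctuation followed by whitespace.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Count how often each word appears across the whole text.
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        tokens = re.findall(r'\w+', sentence.lower())
        # Average frequency of the sentence's words (guard against empties).
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    return sorted(sentences, key=score, reverse=True)[:k]

# Example: the 'algorithm' happily mistakes repetition for relevance.
print(summarise("Algorithms shape feeds. Feeds shape opinions. Cats are nice.", k=2))
```

Even this toy makes the week’s point: what the algorithm ‘selects as relevant’ is just whatever its scoring rule happens to reward.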

The first week of our algorithmic cultures block seemed ‘noisy’ – perhaps because there is so much recent news on the impact of algorithms, and studies into the same, for peers to share through Twitter. Certainly, it has felt like our investigations are timely.

My lifestream has also been busy, with 18 posts. Some of these (1, 2, 3) were focused on managing IFTTT, with week 8 seeing me introduce Pocket as an additional lifestream feed. Two posts related to digital art, which I suggested provides an alternative to the discourse of algorithmic impartiality and objectivity, and which explores sociomateriality by engaging humans and non-humans in the joint construction of artefacts.

Another theme which arose was the potential for algorithms to reinforce existing inequalities. I examined this in several contexts, including education, in response to Matt Reed’s post on stereotype threat within predictive analytics; bail hearings in New Jersey, where it is hoped algorithms will help overcome human bias; and the mathematical impossibility of creating an algorithm that is both equally predictive of future crime and equally wrong about defendants regardless of race. I also interrogated a proposal that a more diverse tech industry could prevent discriminatory algorithms.
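
That ‘equally predictive yet equally wrong’ tension can be sketched with a standard identity from the fairness literature (my addition here, not from the posts I linked; see e.g. Chouldechova, 2017). For a binary risk classifier applied to a group with base rate p of reoffending, the false positive rate (FPR), false negative rate (FNR) and positive predictive value (PPV) are tied together:

\[
\mathrm{FPR} = \frac{p}{1-p} \cdot \frac{1-\mathrm{PPV}}{\mathrm{PPV}} \cdot \left(1-\mathrm{FNR}\right)
\]

If two groups have different base rates p, then holding PPV equal across them (being ‘equally predictive’) forces FPR or FNR (being ‘equally wrong’) to differ between them – the two fairness criteria cannot be satisfied at once.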

The role of algorithms in information prioritisation was also attended to. I responded to a post by Mike Caulfield (2017) on the failure of Google’s algorithms to identify reliable information and to a video on Facebook’s turn to algorithms in place of human editors, and I included a Washington Post graphic which illustrates the different Facebook feeds seen by liberal and conservative supporters.

Finally, in a post illustrating my own algorithmic play, I showed that Google is selective in what it records from search history, and that its ad topics are hit and miss because the meaning attached to online actions is not understood (demonstrating Eynon’s [2013, p. 239] assertion about the need to understand meaning, rather than just track actions). I also noted the desire for validation when the self is presented back through data (following Gillespie’s [2012, p. 21] suggestion). For me, the findings of my play seemed trivial – but such a stance belies the potential for algorithms to have a real (and negative) impact on people’s lives through profiling.

Lifestream, Diigo: Three challenges for the web, according to its inventor – World Wide Web Foundation

Tim Berners-Lee calls for greater algorithmic transparency and personal data control.
from Diigo http://ift.tt/2ncWlj9
via IFTTT


I almost forgot to add some ‘meta-data’ to this one!

Who can believe the web is 28 years old? In this open letter, Tim Berners-Lee voices three concerns for the web, all connected to algorithms:

1) We’ve lost control of our personal data

2) It’s too easy for misinformation to spread on the web

3) Political advertising online needs transparency and understanding

In terms of (1), Berners-Lee calls for data to be placed back into the hands of web users and for greater algorithmic transparency, while encouraging us to fight against excessive surveillance laws.

In terms of personal data control, I wonder about the potential of Finland’s proposed MyData system:


MyData Nordic Model

Transparency of algorithms also applies to (2) – but I also think that web users have to be more proactive in questioning what they find (are given) on the web, and that within the teaching of information and media literacy, schools need to focus more on questioning claims and information rather than sources per se. Berners-Lee additionally calls for greater pressure to be placed on major aggregators such as Google and Facebook, as gatekeepers with a responsibility to stop the spread of fake news, while warning against any singular, central arbiter of ‘truth’.

Where does responsibility lie for misleading information, clickbait and so on? While I agree that aggregators need to take responsibility, the problem seems to be rooted in the underlying economic model: as long as there is money to be made from ‘clicks’, fraudulent and sensationalist ‘news’ will continue to be created, and the quality of journalism will be weakened.

I don’t have any long-term solutions – but perhaps in the short term, taking personal responsibility for diversifying the channels through which we search for, receive (and distribute!) information is a start, along with simple actions towards protecting some of our data (logging out, using browsers like Tor, not relying exclusively on Google, for example).