"It’s like there is little to no quality control on the algorithm at all."Google fail in info prioritisation https://t.co/jkglmSPPly #mscedc
— Renée Hann (@rennhann) March 10, 2017
Caulfield (2017) compares Google to the quiz show Family Feud, observing that the ‘snippets’ appearing at the top of searches are frequently inaccurate while ‘giving the user what appears to be the “one true answer.”’
In The Relevance of Algorithms, Gillespie (p. 14 of the linked PDF) writes:
“the providers of information algorithms must assert that their algorithm is impartial. The performance of algorithmic objectivity has become fundamental to the maintenance of these tools as legitimate brokers of relevant knowledge”
Many of Google’s ‘snippets’ suggest its algorithms are not legitimately brokering knowledge. As Caulfield (2017) highlights, they frequently fail on three counts:
- They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.
- They choose sites and authors who are in no position to know more about a subject than the average person.
- They choose people who often have real reasons to be untruthful: for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of traditional medicine who sell naturopathic remedies on their own sites.
Caulfield (2017) asks for more than a discourse of impartiality, objectivity and neutrality around algorithms, seeking instead algorithms that actually ‘emulate science in designing a process that privileges returning good information over bad’.
Is information about who is ‘in a position to know’ and ‘who can be relied on to accurately tell the truth’ so contested that it is impossible to integrate these factors into an algorithm? Or does it simply not make commercial sense? I’m not suggesting that Google (or anyone else, for that matter) should act as an arbiter of truth, promoting one true answer. But when Google does attempt to indicate what is reliable or widely believed through its ‘snippets’, then, like Caulfield, I think it should at least draw on more reliable sources.
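To make that question a little more concrete, here is a minimal, purely hypothetical sketch of what ‘privileging good information over bad’ could look like at the point where a snippet is chosen: each candidate source carries a reliability score alongside its relevance, there is a floor below which no snippet is shown at all, and the ranking combines the two rather than relying on relevance alone. The scores, names and threshold are all assumptions for illustration; nothing here describes how Google’s systems actually work.

```python
# Hypothetical sketch only: NOT Google's snippet algorithm, just an
# illustration of weighting relevance by source reliability.

from dataclasses import dataclass

@dataclass
class Candidate:
    url: str
    relevance: float    # query-document relevance, 0..1 (assumed precomputed)
    reliability: float  # 'position to know' score, 0..1 (assumed available)

def pick_snippet(candidates, min_reliability=0.6):
    """Return the best candidate whose source meets a reliability floor,
    or None if no source is reliable enough to present as a 'one true answer'."""
    trusted = [c for c in candidates if c.reliability >= min_reliability]
    if not trusted:
        return None  # better to show no snippet than an unreliable one
    # Rank by relevance weighted by reliability, not by relevance alone.
    return max(trusted, key=lambda c: c.relevance * c.reliability)

if __name__ == "__main__":
    results = [
        Candidate("https://fringe-blog.example", relevance=0.95, reliability=0.2),
        Candidate("https://public-health.example", relevance=0.80, reliability=0.9),
    ]
    best = pick_snippet(results)
    print(best.url if best else "No snippet shown")
```

The point of the sketch is the design choice Caulfield is asking for: building in the option of returning no ‘one true answer’ at all when no source is actually in a position to know.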