10,000 contractors told to flag ‘upsetting-offensive’ content after months of criticism over hate speech, misinformation and fake news in search results.
Google is using a 10,000-strong army of independent contractors to flag “offensive or upsetting” content, in order to ensure …
from Pocket http://ift.tt/2nIzpbb
via IFTTT
This appeared at the perfect time, as I was finishing reading “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem” by Gillespie (2017), where he discusses how easily the Google algorithm can be manipulated into boosting a preferred site for a given keyword search. The problem is that we blindly trust that there is no ulterior motive behind the results a Google search gives us. We incorrectly assume that the page ranking we are met with is genuinely a list of pages meeting our search criteria, in order of relevance, and that it has not been falsely inflated.
If this is the case for search engine optimisation, is it also something to consider in terms of research? We often choose which papers to read based on the order in which they appear in a search, again assuming this is a pure result. However, if search results can be affected, whether maliciously or simply as a by-product of user behaviour, should we be assessing our own behaviour?
For instance, how is the “relevance” of a research paper decided? Is it by how often that paper is cited, how often it is read or checked out of an electronic library system, how often it has been shared or added to external referencing and storage tools like Paperpile, or by the keywords the author or publisher has assigned to it? Thinking back to the idea that a user may choose papers based on the order in which their search query returns them, this could inappropriately inflate the rankings: papers which better match the search criteria end up listed further down, while those which have been read more often, and are therefore cited and shared more often simply through convenience, climb further up, which in turn restarts the cycle. This could then distort the view of a field, with certain papers or academics becoming more strongly associated with particular areas or ideas purely because their names are seen more often.
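The cycle described above is essentially a “rich get richer” feedback loop, and it is easy to demonstrate in miniature. The sketch below is purely hypothetical: the blended scoring formula, the weights, and the position-bias model are my own assumptions for illustration, not how any real search engine or library catalogue actually ranks results. It shows how one early burst of reads can keep a middling paper above more relevant ones indefinitely.

```python
import random

random.seed(1)

# Toy model (purely illustrative): ten papers with a fixed "true
# relevance" to some query; one mid-relevance paper starts with an
# early burst of reads (say, it happened to be shared widely once).
papers = [{"id": i, "relevance": i / 9, "reads": 0} for i in range(10)]
papers[4]["reads"] = 30  # head start for a mid-relevance paper

def ranking_score(paper, max_reads, popularity_weight=0.6):
    """Blend true relevance with accumulated popularity (assumed formula)."""
    return ((1 - popularity_weight) * paper["relevance"]
            + popularity_weight * paper["reads"] / max_reads)

def ranked(papers):
    max_reads = max(p["reads"] for p in papers) or 1
    return sorted(papers, key=lambda p: ranking_score(p, max_reads),
                  reverse=True)

# Simulate searchers who mostly read whatever is listed first
# (position bias), feeding reads back into the next ranking.
for _ in range(2000):
    for rank, paper in enumerate(ranked(papers)):
        if random.random() < 0.5 ** (rank + 1):
            paper["reads"] += 1

for p in ranked(papers):
    print(f"paper {p['id']}: relevance={p['relevance']:.2f}, "
          f"reads={p['reads']}")
```

In this toy model, the paper that was given the early head start ends up at the top of the final listing with far more reads than papers of higher true relevance, exactly the convenience-driven cycle described above: being seen first means being read more, which keeps it first.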
I wonder, for example, how many citations of Knox or Bayne could be attributed to students on the MSCDE versus students on the equivalent O.U. course? Are we falsely inflating the rank at which these papers are returned?
References
Gillespie, T., 2017. Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem. Information, Communication & Society, 20(1), pp.63–80.