In this video from September 2016, Mike Rugnetta responds to two concerns about Facebook that arose earlier that year:
May 2016: reports of Facebook suppressing conservative views
August 2016: editorial/news staff replaced with algorithm
He asks, primarily, why we expect Facebook to be unbiased at all, given that any news source is subject to editorial partiality. He also connects Facebook’s move to distance itself from its editorial role through the employment of algorithms to ‘mathwashing’ (Fred Benenson): the use of mathematical terms such as ‘algorithm’ to imply objectivity and impartiality, resting on the assumption that computers do not have bias – despite being programmed by humans with bias, and being reliant on data with bias.
Facebook’s sacking of its human team and its shift to reliance on algorithms demonstrates one of Gillespie’s assertions, except that in Facebook’s case a reputation for neutrality was sought through the reputation of algorithms in general:
The careful articulation of an algorithm as impartial (even when that characterization is more obfuscation than explanation) certifies it as a reliable sociotechnical actor, lends its results relevance and credibility, and maintains the provider’s apparent neutrality in the face of the millions of evaluations it makes.
In the video, Rugnetta suggests there’s a need to abandon the myth of algorithmic neutrality. True – but we also need greater transparency. With so much information available we need some kind of sorting mechanism, and we also need to know (and be able to tweak) the criteria if we are to be in control of our civic participation.
Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms.
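The recipe analogy can be made concrete with a tiny sketch – my own illustrative example in Python, not taken from the article – of an algorithm as an explicit step-by-step procedure:

```python
def find_max(numbers):
    """A simple algorithm: step through a list and keep the largest value seen."""
    if not numbers:
        raise ValueError("cannot take the max of an empty list")
    largest = numbers[0]
    for n in numbers[1:]:
        if n > largest:
            largest = n
    return largest

print(find_max([3, 1, 4, 1, 5, 9, 2, 6]))  # → 9
```

Like a recipe, every step is explicit and the outcome depends entirely on what the steps say and what ingredients (data) go in.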
The scholars set out to address this question: Since blacks are rearrested more often than whites, is it possible to create a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions?
…they realized that the problem was not resolvable. A risk score, they found, could either be equally predictive or equally wrong for all races — but not both.
The reason was the difference in the frequency with which blacks and whites were charged with new crimes. “If you have two populations that have unequal base rates,” Kleinberg said, “then you can’t satisfy both definitions of fairness at the same time.”
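Kleinberg’s point can be sketched numerically. A standard confusion-matrix identity (Chouldechova’s formulation) links a group’s false positive rate to its base rate p, the score’s precision (PPV) and its false negative rate (FNR): FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR). If the score is equally predictive for both groups (same PPV, same FNR) but the base rates differ, the false positive rates are forced apart. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
def implied_fpr(base_rate, ppv, fnr):
    """False positive rate forced by a given base rate, precision (PPV)
    and false negative rate (FNR): FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    p = base_rate
    return (p / (1 - p)) * ((1 - ppv) / ppv) * (1 - fnr)

# Equal score quality for both groups (PPV = 0.6, FNR = 0.3),
# but different base rates of rearrest (hypothetical figures):
fpr_a = implied_fpr(0.5, ppv=0.6, fnr=0.3)  # ≈ 0.47
fpr_b = implied_fpr(0.3, ppv=0.6, fnr=0.3)  # = 0.20

print(fpr_a, fpr_b)
```

The group with the higher base rate ends up with more than double the false positive rate – i.e. it is wrongly flagged as high risk more often – even though the score is ‘equally predictive’ for both. To equalise the error rates instead, you would have to give up equal predictiveness: hence ‘not both’.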
The currently used formula inaccurately identifies black defendants as future criminals more frequently than white defendants – reinforcing existing inequalities.
Guidelines for how judges set bail vary across the country, but generally use a combination of a bail schedule, which prices out fees for specific offenses, and their own assessment of whether the defendant will appear at their hearing or commit a crime before their trial.
An interesting use of algorithms in an attempt to overcome the bias of human decisions. However, the article makes the point that the algorithm is reliant on the data it has, and that data (on arrests and convictions, for example) reflects the biases of ‘the status quo’. Breaking cycles of inequality and discrimination clearly takes more than good intent.
The article provides an exposé of some of the ways in which biases have made their way into algorithms, from fewer Pokémon Go locations in neighbourhoods with a black majority, to LinkedIn advertisements for high-paying jobs appearing more frequently for men, to prejudice in loan approval through postcode profiling. Its main argument is that one way to reduce bias in algorithms is to diversify tech: if more minority voices are involved in producing algorithms, potential biases are more likely to be noticed and avoided. In addition, we need to make sure our datasets are more inclusive, so that they are more representative of the whole world.
Both points seem straightforward and beyond argument – but I’m not sure the article goes far enough in its calls for diversity. When the information we receive – and the communication from our social circles – is personalised, or filtered before it reaches us, we tend to encounter fewer voices that are different from our own. This, in turn, can stop us from voicing views we perceive to be in the minority, creating a ‘spiral of silence’ (Sheehan, 2015). So yes, we do need to ensure those who design algorithms are diverse, but we also need to be able to elect not to have our information stream filtered, or to control how it is filtered, so as to actively manage the diversity of the networks we are part of. Diversity is good for democracy, and such diversity should not be controlled by commercial interests or those with ‘financial authority’.