Lifestream, Liked on YouTube: Bias? In My Algorithms? A Facebook News Story

via YouTube

In this video from September 2016, Mike Rugnetta responds to two concerns about Facebook that arose earlier that year:

  1. May 2016: reports of Facebook suppressing conservative views
  2. August 2016: editorial/news staff replaced with an algorithm

He asks, primarily, why we expect Facebook to be unbiased, given that any news source is subject to editorial partiality. He then connects Facebook’s attempt to separate itself from its editorial role through the employment of algorithms to ‘mathwashing’ (Fred Benenson): the use of mathematical terms such as ‘algorithm’ to imply objectivity and impartiality, trading on the assumption that computers do not have bias, despite being programmed by humans with bias and being reliant on data that has bias of its own.

Facebook’s sacking of its human team and shift to reliance on algorithms demonstrates one of Gillespie’s assertions, except that in Facebook’s case a reputation for neutrality was sought through the reputation of algorithms in general:

The careful articulation of an algorithm as impartial (even when that characterization is more obfuscation than explanation) certifies it as a reliable sociotechnical actor, lends its results relevance and credibility, and maintains the provider’s apparent neutrality in the face of the millions of evaluations it makes.

(Gillespie, 2012, p. 13 of the linked PDF)

In the video, Rugnetta suggests there’s a need to abandon the myth of algorithmic neutrality. True – but we also need greater transparency. With so much information available, we need some kind of sorting mechanism, and we also need to know (and be able to tweak) the sorting criteria if we are to be in control of our civic participation.
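To make ‘know and tweak the criteria’ concrete, here is a minimal sketch (in Python, with invented post attributes and weights, not any platform’s actual ranking) of a feed-ranker whose sorting criteria are visible to the user and adjustable by them:

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        recency: float           # 0..1, newer is higher
        popularity: float        # 0..1, likes/shares relative to peers
        source_diversity: float  # 0..1, how unlike my usual sources

    def rank_feed(posts, weights):
        """Score each post as a weighted sum of user-visible criteria."""
        def score(p):
            return (weights["recency"] * p.recency
                    + weights["popularity"] * p.popularity
                    + weights["diversity"] * p.source_diversity)
        return sorted(posts, key=score, reverse=True)

    posts = [
        Post("Viral story", recency=0.9, popularity=0.95, source_diversity=0.1),
        Post("Unfamiliar outlet", recency=0.6, popularity=0.2, source_diversity=0.9),
    ]

    # A platform default that optimises for engagement buries the unfamiliar voice...
    print(rank_feed(posts, {"recency": 0.3, "popularity": 0.7, "diversity": 0.0})[0].title)
    # ...while a user who raises the diversity weight surfaces it.
    print(rank_feed(posts, {"recency": 0.2, "popularity": 0.2, "diversity": 0.6})[0].title)

The point is not the arithmetic but who holds the dials: exposing the weights turns an opaque editorial decision into a negotiable one.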

Lifestream, Pocket, Code-Dependent: Pros and Cons of the Algorithm Age

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms.

via Pocket http://ift.tt/2kn8m3T
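The excerpt’s point that everyday procedures are algorithms translates directly into code; a trivial, hypothetical illustration:

    def make_tea(water_ml: int) -> list[str]:
        """A recipe is an algorithm: a fixed sequence of steps from input to result."""
        return [
            f"Boil {water_ml} ml of water",
            "Put a tea bag in the cup",
            "Pour in the water and steep for 3 minutes",
            "Remove the tea bag",
        ]

    for step in make_tea(250):
        print(step)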

The Pew Research Center and Elon University’s Imagining the Internet Center asked ‘technology experts, scholars, corporate practitioners and government leaders’ to respond to this question:

Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?

The responses are organised around seven core themes, which are explored in greater detail in the report (published 8 February 2017).

Lifestream, Pocket, Racial Bias in Criminal Risk Scores Is Mathematically Inevitable

Excerpt:

An analysis of bias against black defendants in criminal risk scores has prompted research showing that the disparity can be addressed — if the algorithms focus on the fairness of outcomes.

via Pocket http://ift.tt/2lX34At

Houston, we have a (mathematical) problem:

The scholars set out to address this question: Since blacks are rearrested more often than whites, is it possible to create a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions?

…they realized that the problem was not resolvable. A risk score, they found, could either be equally predictive or equally wrong for all races — but not both.

The reason was the difference in the frequency with which blacks and whites were charged with new crimes. “If you have two populations that have unequal base rates,” Kleinberg said, “then you can’t satisfy both definitions of fairness at the same time.”

The formula currently in use falsely identifies black defendants as future criminals more often than it does white defendants, reinforcing existing inequalities.
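The arithmetic behind ‘equally predictive or equally wrong, but not both’ can be made concrete. The sketch below (invented numbers, not the article’s data) uses the confusion-matrix identity FPR = p/(1-p) * (1-PPV)/PPV * recall: hold predictiveness (PPV) and recall equal for two groups with different base rates p, and their false-positive rates are forced apart.

    def false_positive_rate(base_rate: float, ppv: float, recall: float) -> float:
        """FPR = p/(1-p) * (1-PPV)/PPV * recall, which follows directly
        from the definitions of PPV (precision) and recall."""
        return base_rate / (1 - base_rate) * (1 - ppv) / ppv * recall

    # Hypothetical: both groups get an equally predictive score (60% of those
    # flagged as high risk do reoffend) and equal recall (70% of reoffenders
    # are flagged), but their base rates differ.
    for group, p in [("base rate 50%", 0.5), ("base rate 30%", 0.3)]:
        print(group, f"-> false-positive rate = {false_positive_rate(p, 0.6, 0.7):.0%}")
    # 47% vs 20%: people who will NOT reoffend are wrongly flagged more than
    # twice as often in the higher-base-rate group, despite 'equal' predictiveness.

Equalising the false-positive rates instead would force the predictive values apart, which is exactly the ‘not both’ in the quotation above.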

Lifestream, Pocket, One State is Replacing Bail Hearings With…An Algorithm

Excerpt:

Guidelines for how judges set bail vary across the country, but generally use a combination of a bail schedule, which prices out fees for specific offenses, and their own assessment of whether the defendant will appear at their hearing or commit a crime before their trial.

via Pocket http://ift.tt/2mwwQfm

An interesting use of algorithms in an attempt to overcome the bias of human decisions. However, the article makes the point that the algorithm is only as good as the data it relies on, and that data (on arrests and convictions, for example) reflects the biases of the status quo. Breaking cycles of inequality and discrimination clearly takes more than good intentions.

As one respondent to the Pew Research Center’s survey on the future of algorithms noted,

  • ‘If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities.’
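A minimal simulation of that feedback loop (hypothetical numbers, not from the survey) shows how a score trained on arrest records can ‘learn’ the patrol pattern rather than the offence pattern, so the initial inequality is reproduced indefinitely:

    # Two neighbourhoods with identical true offence rates; B starts out
    # patrolled twice as heavily, so it generates twice the recorded arrests.
    TRUE_OFFENCE_RATE = 0.1
    patrol_share = {"A": 1 / 3, "B": 2 / 3}

    for year in range(5):
        # Recorded arrests track patrol presence, not underlying offending.
        arrests = {n: TRUE_OFFENCE_RATE * share for n, share in patrol_share.items()}
        total = sum(arrests.values())
        # The 'risk score' is the arrest share; next year's patrols follow it.
        patrol_share = {n: a / total for n, a in arrests.items()}
        print(year, {n: round(s, 2) for n, s in patrol_share.items()})
    # The allocation never self-corrects: B keeps two-thirds of the patrols
    # (and of the arrests) although both neighbourhoods offend at the same rate.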

Lifestream, Pocket, Bias in machine learning, and how to stop it

Excerpt:

As AI becomes increasingly interwoven into our lives—fueling our experiences at home, work, and even on the road—it is imperative that we question how and why our machines do what they do.

via Pocket http://ift.tt/2g3DTIX

The article provides an exposé of some of the ways in which biases have made their way into algorithms, from fewer Pokémon Go locations in majority-black neighbourhoods, to LinkedIn advertisements for high-paying jobs being shown more frequently to men, to prejudice in loan approvals through postcode profiling. Its main argument is that one way to reduce bias in algorithms is to diversify tech: if more minority voices are involved in producing algorithms, potential biases are more likely to be noticed and avoided. In addition, we need to make our datasets more inclusive, so that they are more representative of the whole world.
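A toy example of the dataset point (invented figures): when one group dominates the training data, a model can post a high headline accuracy while failing the minority group almost completely.

    from collections import Counter

    # 90 samples from group A (label 1) and 10 from group B (label 0).
    data = [("A", 1)] * 90 + [("B", 0)] * 10

    # A naive model that always predicts the overall majority label.
    majority_label = Counter(label for _, label in data).most_common(1)[0][0]

    for group in ("A", "B"):
        labels = [label for g, label in data if g == group]
        accuracy = sum(label == majority_label for label in labels) / len(labels)
        print(group, f"accuracy = {accuracy:.0%}")
    # Overall accuracy is 90%, yet group B is misclassified 100% of the time,
    # which an aggregate metric alone would never reveal.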

Both of the article’s points seem straightforward and beyond argument, but I’m not sure the article goes far enough in its call for diversity. When the information we receive, including communication from our social circles, is personalised, that is, filtered before it reaches us, we tend to encounter fewer voices that are different from our own. This, in turn, can stop us from voicing views we perceive to be in the minority, creating a ‘spiral of silence’ (Sheehan, 2015). So yes, we need to ensure that those who design algorithms are diverse, but we also need to be able to elect not to have our information streams filtered, or to control how they are filtered, so that we can actively manage the diversity of the networks we are part of. Diversity is good for democracy, and it should not be controlled by commercial interests or those who hold ‘financial authority’.