Lifestream, Pocket, Bias in machine learning, and how to stop it


As AI becomes increasingly interwoven into our lives—fueling our experiences at home, work, and even on the road—it is imperative that we question how and why our machines do what they do.

via Pocket

The article provides an exposé of some of the ways in which biases have made their way into algorithms: fewer Pokémon Go locations in majority-Black neighbourhoods, LinkedIn advertisements for high-paying jobs appearing more frequently for men, and prejudice in loan approval through postcode profiling. Its main argument is that one way to reduce bias in algorithms is to diversify tech: if more minority voices are involved in producing algorithms, potential biases are more likely to be noticed and avoided. In addition, we need to make our datasets more inclusive, so that they are more representative of the whole world.
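To make the dataset point concrete, here is a minimal sketch – with entirely hypothetical groups, data, and threshold – of how one might audit a dataset for under-represented groups by comparing its make-up against an assumed reference population:

```python
# Hypothetical audit: flag groups whose share of the dataset falls short
# of their share in a reference population. Group labels, data, and the
# tolerance value are illustrative assumptions, not from the article.

from collections import Counter

def representation_gaps(dataset_labels, population_shares, tolerance=0.05):
    """Return groups under-represented in the dataset by more than `tolerance`,
    mapped to the size of the shortfall."""
    counts = Counter(dataset_labels)
    total = len(dataset_labels)
    gaps = {}
    for group, pop_share in population_shares.items():
        data_share = counts.get(group, 0) / total
        if pop_share - data_share > tolerance:
            gaps[group] = round(pop_share - data_share, 3)
    return gaps

# Hypothetical example: group "b" is 40% of the population but only 20% of the data.
sample = ["a"] * 8 + ["b"] * 2
population = {"a": 0.6, "b": 0.4}
print(representation_gaps(sample, population))  # {'b': 0.2}
```

An audit like this only catches the biases you already know to look for, which is precisely why the article's other point – diverse teams who notice what the checklist misses – still matters.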

Both points seem straightforward and beyond argument – but I’m not sure the article goes far enough in its calls for diversity. When the information we receive, including communication from our social circles, is personalised – that is, filtered before it reaches us – we tend to encounter fewer voices that differ from our own. This, in turn, can stop us from voicing views we perceive to be in the minority, creating a ‘spiral of silence’ (Sheehan, 2015). So yes, we do need to ensure that those who design algorithms are diverse, but we also need to be able to opt out of having our information stream filtered, or to control how it is filtered, so that we can actively manage the diversity of the networks we are part of. Diversity is good for democracy, and it should not be controlled by commercial interests or those with ‘financial authority’.