Excerpt:
Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI.
via Pocket http://ift.tt/2nwHZcF
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
…With AI this type of discrimination can be masked in a black box of algorithms
Crawford’s comments, and those of the article’s author, Olivia Solon, align with Ben Williamson’s assertion (based on Seaver, 2014) that claims of algorithmic objectivity and impartiality ignore the reality that little black boxes are actually massive networked ones, with hundreds of hands reaching into them, tweaking and tuning.
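To make Crawford’s point about training data concrete, here is a minimal sketch (not from the article; the hiring dataset, group labels, and scoring rule are all hypothetical) of how a model estimated from biased historical decisions reproduces that bias for equally qualified candidates, while the discrimination stays hidden inside the model’s scores:

```python
# A minimal, hypothetical sketch of bias in human-generated training data
# being reproduced by a model "trained" on it.

from collections import defaultdict

# Hypothetical historical hiring decisions: (group, qualified, hired).
# Group "B" candidates were hired less often even when qualified.
training_data = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("A", True, True), ("B", True, False), ("B", True, True),
    ("B", False, False), ("B", True, False),
]

# "Training": estimate P(hired | group, qualified) from the data.
counts = defaultdict(lambda: [0, 0])  # (group, qualified) -> [hired, total]
for group, qualified, hired in training_data:
    counts[(group, qualified)][0] += int(hired)
    counts[(group, qualified)][1] += 1

def predict_hire_rate(group: str, qualified: bool) -> float:
    """Score a candidate using the historical rates the model has learned."""
    hired, total = counts[(group, qualified)]
    return hired / total if total else 0.0

# Two equally qualified candidates receive different scores purely
# because the historical data treated their groups differently.
print(predict_hire_rate("A", True))  # 1.0
print(predict_hire_rate("B", True))  # ~0.33
```

Nothing in the scoring function mentions the group explicitly as a policy; the disparity is simply inherited from the data, which is the sense in which discrimination can be “masked in a black box of algorithms.”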
Crawford goes further, however, in identifying the potential for algorithms and AI to be used by authoritarian regimes to target specific populations and centralise authority. Her concerns are similar to those of Tim Berners-Lee, which were included in my Lifestream last week. Where Berners-Lee calls for greater (individual, personal) control of our data and more transparency in political advertising online, Crawford calls for greater transparency and accountability within AI systems. However, both are responding to the same key point: algorithms and AI are not just social products; they also produce social effects. The same point is taken up by Knox (2015),
“…algorithms produce worlds rather than objectively account for them, and are considered as manifestations of power. Questions around what kind of individuals and societies are advantaged or excluded through algorithms become crucial here (Knox, 2015).”
and by Williamson (2014, referring to Kitchin & Dodge, 2011).