Excerpt:
MIT Media Lab director Joi Ito recently published a thoughtful essay titled “Society-in-the-Loop Artificial Intelligence,” and has kindly credited me with coining the term.
via Pocket http://ift.tt/2b2VVH5
I came across this short blog post when I was still thinking about the need for some kind of collective agency or reflexivity in our interactions with algorithms, rather than just individualised agency and disconnected acts (in relation to Matias’ 2017 experiment with /r/worldnews – mentioned here and here in my Lifestream blog).
…”society in the loop” is a scaled up version of an old idea that puts the “human in the loop” (HITL) of automated systems…
What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions of self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).
While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes.
(Rahwan, 2016)
Rahwan alludes to the co-evolution of values and technology, an important point we keep returning to in #mscedc: we are not simply done unto by technology, nor do we simply do unto it. Going forward (and this is a point Rahwan makes), it seems to me imperative that we develop ways of articulating human values that machines can understand, and systems for evaluating algorithmic behaviour against those articulated values. On a global scale this is clearly going to be tricky: to whom is an algorithmic contract accountable, and how is it to be enforced outside the boundaries of established governance (across countries, for example)? Or, acting ethically on a smaller scale (for instance, within institutional adoption of learning analytics), is it simply the responsibility of those who employ algorithms to be accountable to the society they affect?
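To make "evaluating algorithmic behaviour against articulated values" a little more concrete, here is a minimal, purely hypothetical Python sketch: values are written down as testable predicates, and an audit reports how often an algorithm's decisions respect them. The names (ValueStatement, audit_decisions) and the toy data are my own illustration, not anything Rahwan specifies, and real SITL evaluation would involve contested, negotiated values rather than neat predicates.

```python
# Hypothetical sketch: "articulated values" as machine-readable checks,
# applied to the logged decisions of some algorithmic system.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class ValueStatement:
    """A societally agreed value, expressed as a testable predicate."""
    name: str
    check: Callable[[Dict], bool]  # True if a decision respects the value


def audit_decisions(decisions: List[Dict],
                    values: List[ValueStatement]) -> Dict[str, float]:
    """For each articulated value, report the share of decisions that respect it."""
    report = {}
    for value in values:
        satisfied = sum(1 for d in decisions if value.check(d))
        report[value.name] = satisfied / len(decisions) if decisions else 1.0
    return report


if __name__ == "__main__":
    # Toy example: decisions logged by a resource-allocation algorithm.
    decisions = [
        {"group": "A", "allocated": True, "explained": True},
        {"group": "B", "allocated": False, "explained": False},
    ]
    values = [
        ValueStatement("an explanation is always given", lambda d: d["explained"]),
    ]
    print(audit_decisions(decisions, values))
```

Even this toy version makes the hard part obvious: the difficulty is not the audit loop but deciding, collectively, what goes into the list of values and who gets to write the predicates.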