Lifestream, Tweets

Hmm – I blogged about the article here. To add to that discussion, how do we separate human from algorithmic agency? Here’s my attempt:

  • Human agency: choosing to put a sticky comment on a post; choosing to comment with links; choosing to upvote or downvote (even if only doing so out of “reactance”); Reddit’s decision to change the algorithm.
  • Algorithmic agency: within the code there is a (secret) decision about what to count when calculating a post’s ‘score’; based on this score, the algorithm promotes or does not promote posts, influencing what material is read (a toy sketch of such a scoring function appears after these lists).

However,

  • Humans did not act as expected in response to encouragement to downvote genuinely unreliable material.
  • The (updated) algorithm did not act as expected: it did not increase posts’ scores in response to the changed commenting behaviour the stickies prompted. Or rather, it did not act as an external observer (Matias) expected; perhaps the update performs exactly as Reddit intended.
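
For a sense of what this kind of scoring decision looks like in code, here is a minimal Python sketch based on the ‘hot’ ranking Reddit open-sourced years ago – to be clear, this is not the updated algorithm in Matias’ experiment, whose internals stay hidden:

```python
from datetime import datetime, timezone
from math import log10

# A sketch of the 'hot' ranking Reddit once open-sourced -- not the
# (secret) updated algorithm Matias studied. Constants are from that
# published code.

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)

def hot(ups: int, downs: int, posted: datetime) -> float:
    """Score a post: log-damped vote balance traded off against age."""
    s = ups - downs
    order = log10(max(abs(s), 1))  # each 10x in net votes adds 1 to the score
    sign = 1 if s > 0 else -1 if s < 0 else 0
    # 45000 seconds = 12.5 hours of recency per unit of score
    seconds = (posted - EPOCH).total_seconds() - 1134028003
    return round(sign * order + seconds / 45000, 7)

posted = datetime(2017, 2, 1, 12, 0, tzinfo=timezone.utc)
print(hot(150, 50, posted))  # newer posts with the same votes score higher
```

Notably, commenting activity appears nowhere in this formula; whatever signals the update now counts, and how, is exactly the part an external observer cannot see.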

I’m not happy with this breakdown, however – there needs to be something about collective agency, which the algorithm seems to negate.

Lifestream, Pocket, Society-in-the-Loop

Excerpt:

MIT Media Lab director Joi Ito recently published a thoughtful essay titled “Society-in-the-Loop Artificial Intelligence,” and has kindly credited me with coining the term.

via Pocket http://ift.tt/2b2VVH5


I came across this short blog post when I was still thinking about the need for some kind of collective agency or reflexivity in our interactions with algorithms, rather than just individualised agency and disconnected acts (in relation to Matias’ 2017 experiment with /r/worldnews – mentioned here and here in my Lifestream blog).

…“society in the loop” is a scaled up version of an old idea that puts the “human in the loop” (HITL) of automated systems…

What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions of self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).

While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes.

(Rahwan, 2016)

[Figure: Putting society in the loop of algorithmic governance (Rahwan, 2016)]

Rahwan alludes to the co-evolution of values and technology – an important point that we keep returning to in #mscedc: we are not simply done unto by technology, nor do we simply do unto it. Going forward (and this is a point Rahwan makes), it seems to me imperative that we develop ways of articulating human values that machines can understand, and systems for evaluating algorithmic behaviours against those articulated values (a toy sketch of what such an evaluation might look like follows below). On a global scale this is clearly going to be tricky: to whom is an algorithmic contract accountable, and how is it to be enforced outside the boundaries of established governance (across countries, for example)? Or, acting ethically (for instance, within institutional adoption of learning analytics), is it simply the responsibility of those who employ algorithms to be accountable to the society they affect?
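
To make “systems for evaluating algorithmic behaviours against articulated human values” slightly more concrete, here is a toy Python sketch. Every name in it is hypothetical: values are expressed as machine-checkable constraints, and an audit scores a logged stream of an algorithm’s decisions against each one.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

# A toy illustration of the SITL idea, not an existing framework:
# a "value" articulated as a machine-checkable constraint over an
# algorithm's observable decisions.

@dataclass
class Value:
    name: str
    check: Callable[[dict], bool]  # True if a decision respects the value

def audit(decisions: Iterable[dict], values: list[Value]) -> dict[str, float]:
    """Fraction of logged decisions that satisfy each articulated value."""
    decisions = list(decisions)
    return {
        v.name: sum(v.check(d) for d in decisions) / len(decisions)
        for v in values
    }

# Example: two hypothetical 'societal values' for a news-feed ranker.
values = [
    Value("source_diversity", lambda d: len(set(d["sources"])) >= 3),
    Value("no_unreliable_promotion", lambda d: not d["promoted_unreliable"]),
]

log = [
    {"sources": ["a", "b", "c"], "promoted_unreliable": False},
    {"sources": ["a", "a", "b"], "promoted_unreliable": True},
]

print(audit(log, values))
# {'source_diversity': 0.5, 'no_unreliable_promotion': 0.5}
```

The sketch assumes away the hard parts, of course: who articulates the values, who writes the checks, and who gets access to the decision log – which is precisely where the accountability questions above begin.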