Lifestream, Tweets

Hmm – I blogged about the article here. To add to that discussion, how do we separate human from algorithmic agency here? Here’s my attempt:

  • Human agency: choosing to put a sticky comment on a post; choosing to comment with links; choosing to upvote or downvote (even if only doing so out of “reactance”); Reddit’s decision to change the algorithm.
  • Algorithmic agency: within the code there is a (secret) decision about what to count when calculating a post’s ‘score’; based on this score, the algorithm promotes or does not promote posts, influencing what material is read (a toy sketch of this scoring-and-promotion logic appears at the end of this post).

However,

  • Humans did not act as expected in response to encouragement to downvote genuinely unreliable material.
  • The (updated) algorithm did not act as expected, in that it did not increase posts’ scores in line with the changed commenting behaviour prompted by the stickies. Or rather, it did not act as an external observer (Matias) expected; perhaps the update performs exactly as Reddit intended.

I’m not happy with this breakdown, however – there needs to be something about collective agency, which the algorithm seems to negate.
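To pin down what I mean by algorithmic agency, here is a toy sketch of the scoring-and-promotion logic described above – purely illustrative, since the ‘secret’ part is precisely that Reddit’s real weights and inputs aren’t public; every name and number below is invented:

```python
# Invented sketch: a hidden choice about what to count when scoring a post,
# which then decides what gets promoted (and therefore read).
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int
    comments: int

def score(post: Post) -> float:
    # The 'secret' decision lives in these weights: whether comments count
    # at all determines whether sticky-driven commenting changes visibility.
    VOTE_WEIGHT = 1.0
    COMMENT_WEIGHT = 0.5  # set to 0.0 and changed commenting behaviour stops mattering
    return VOTE_WEIGHT * (post.upvotes - post.downvotes) + COMMENT_WEIGHT * post.comments

def front_page(posts, slots=3):
    # Promotion: the top-scoring posts fill the visible slots.
    return sorted(posts, key=score, reverse=True)[:slots]
```

Rewriting `score` – as Reddit apparently did – silently changes what everyone’s votes and comments add up to, which is part of why individual agency alone feels like the wrong unit of analysis.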

Lifestream, Comment on Tweetorial analysis – Where is Angela? by Renee Furner

Great reflection on what is missing in the data, Dirk. Thanks for sharing.

Regarding my mentions-per-tweet ranking, here’s some data from outside the ‘window’: I’m pretty sure the cause was simply my ‘early’ response to tweets, which was influenced by ‘cultural-based time zone factors’. In my region Friday is a weekend day (meaning I could respond quickly to James’ tweets on Friday morning without work interrupting), and I’m GMT+3, so I wasn’t yet asleep when the later tweets arrived.

‘Is any of the presented data and data analysis relevant at all? Does it say anything about quality?’

Wondering, do you have any ideas about the kind of analysis (and method) that would (or rather ‘might’) produce a relevant, meaningful interpretation? For example, if you were interviewed about the experience and asked what you found most useful or meaningful, which of the tweets (either questions or responses) prompted the most thought on your part, or what you felt or thought at the time, would we get closer to ‘relevant’? And if the interviews were repeated with all participants? It would be time-consuming, yes, but would it reveal something worth uncovering?

What if (for a touch of the creepy) your webcam had filmed you while tweeting and captured signs of your mood, algorithmically interpreted? Or if there were measurements of the delay between reading a tweet and responding to it?
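(For concreteness, a toy sketch of that second measurement – the timestamps, and the idea that ‘read’ events could be tracked at all, are invented; nothing like this was captured in the tweetorial:)

```python
# Hypothetical: delay between (supposedly tracked) reading of a tweet and replying.
from datetime import datetime

def response_delay_seconds(read_at: str, replied_at: str) -> float:
    fmt = "%Y-%m-%d %H:%M:%S"
    return (datetime.strptime(replied_at, fmt) - datetime.strptime(read_at, fmt)).total_seconds()

# 95 seconds of 'thinking time' -- but is that reflection, distraction,
# or a slow connection? The number alone can't say.
print(response_delay_seconds("2017-03-17 09:00:05", "2017-03-17 09:01:40"))  # 95.0
```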

What, in your mind, is missing from the data that would make it meaningful?

Thanks again,

Renée

from Comments for Argonauts of the Western Pathetic http://ift.tt/2nDebPk
via IFTTT

Lifestream, Pocket, Society-in-the-Loop

Excerpt:

MIT Media Lab director Joi Ito recently published a thoughtful essay titled “Society-in-the-Loop Artificial Intelligence,” and has kindly credited me with coining the term.

via Pocket http://ift.tt/2b2VVH5


I came across this short blog post when I was still thinking about the need for some kind of collective agency or reflexivity in our interactions with algorithms, rather than just individualised agency and disconnected acts (in relation to Matias’ 2017 experiment with /r/worldnews – mentioned here and here in my Lifestream blog).

…“society in the loop” is a scaled-up version of an old idea that puts the “human in the loop” (HITL) of automated systems…

What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions of self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).

While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes.

(Rahwan, 2016)

Image: Putting society in the loop of algorithmic governance (Rahwan, 2016)

Rahwan alludes to the co-evolution of values and technology – an important point that we keep returning to in #mscedc: we are not simply done unto by technology, nor do we simply do unto it. Going forward (a point Rahwan also makes), it seems to me imperative that we develop ways of articulating human values that machines can understand, and systems for evaluating algorithmic behaviours against those articulated values. On a global scale this is clearly going to be tricky, though – to whom is an algorithmic contract accountable, and how is it to be enforced outside the boundaries of established governance (across countries, for example)? Or, to act ethically (for instance, within institutional adoption of learning analytics), is it simply the responsibility of those who employ algorithms to be accountable to the society they affect?
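What might a system for checking algorithmic behaviour against an articulated value actually look like? A minimal sketch – the ‘value’, the threshold, and the audit data are all invented for illustration:

```python
# Invented example of one machine-checkable 'articulated value':
# no group's share of promoted content may differ from any other's
# by more than some agreed gap.
def within_articulated_value(exposure_by_group: dict, max_gap: float = 0.2) -> bool:
    shares = list(exposure_by_group.values())
    return max(shares) - min(shares) <= max_gap

# Auditing a (made-up) news-filtering algorithm's output:
audit = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}
print(within_articulated_value(audit))  # False -- a breach, but who is told, and who enforces?
```

Even in this toy form, the hard questions above remain: who writes the constraint, who chooses the threshold, and to whom is a breach reported?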