Lifestream, Pocket, Analytics isn’t a thing

Excerpt:

Software is usually classified based on the problems it solves. Need software to help track customers? CRM. Need software to manage what happens in the classroom? LMS. Need software to handle your core business functions? ERP.

via Pocket http://ift.tt/2mBQtA5

I like the tack taken here:

Don’t say that you’re looking to buy an analytics product. Talk about the problems you want to solve and the goals you want to achieve. Once you zero in on that end goal, then you can talk about how information and access to data will help get you there.

Institutions take note! (yes, mine too 😉)

Lifestream, Pocket, Code-Dependent: Pros and Cons of the Algorithm Age

Excerpt:

Algorithms are instructions for solving a problem or completing a task. Recipes are algorithms, as are math equations. Computer code is algorithmic. The internet runs on algorithms and all online searching is accomplished through them. Email knows where to go thanks to algorithms.

via Pocket http://ift.tt/2kn8m3T

The Pew Research Center and Elon University’s Imagining the Internet Center asked ‘technology experts, scholars, corporate practitioners and government leaders’ to respond to this question:

Will the net overall effect of algorithms be positive for individuals and society or negative for individuals and society?

The responses are organised around 7 core themes, which are explored in greater detail in the report (published February 8, 2017).

Lifestream, Pocket, Racial Bias in Criminal Risk Scores Is Mathematically Inevitable

Excerpt:

An analysis of bias against black defendants in criminal risk scores has prompted research showing that the disparity can be addressed — if the algorithms focus on the fairness of outcomes.

via Pocket http://ift.tt/2lX34At

Houston, we have a (mathematical) problem:

The scholars set out to address this question: Since blacks are rearrested more often than whites, is it possible to create a formula that is equally predictive for all races without disparities in who suffers the harm of incorrect predictions?

…they realized that the problem was not resolvable. A risk score, they found, could either be equally predictive or equally wrong for all races — but not both.

The reason was the difference in the frequency with which blacks and whites were charged with new crimes. “If you have two populations that have unequal base rates,” Kleinberg said, “then you can’t satisfy both definitions of fairness at the same time.”

The formula currently in use falsely flags black defendants as future criminals more often than it does white defendants – reinforcing existing inequalities.
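To see why the two definitions of fairness collide, here is a back-of-the-envelope sketch with entirely made-up numbers (not the COMPAS figures): if a risk score is equally predictive for two groups (same precision) and catches the same share of reoffenders (same recall), the group with the higher re-arrest base rate necessarily ends up with a higher false-positive rate.

```python
# Minimal, hypothetical sketch of the trade-off described above: if two groups
# have different base rates of re-arrest, a score that is equally predictive
# (same precision) and equally sensitive (same recall) for both groups cannot
# also produce the same false-positive rate for both. Numbers are invented for
# illustration only - they are not drawn from the COMPAS/ProPublica data.

def implied_false_positive_rate(n, base_rate, recall, precision):
    """False-positive rate implied by a given recall and precision
    for a group of size n with the given re-arrest base rate."""
    reoffenders = n * base_rate
    non_reoffenders = n - reoffenders
    true_positives = recall * reoffenders        # reoffenders correctly flagged
    total_flagged = true_positives / precision   # total flagged, given precision
    false_positives = total_flagged - true_positives
    return false_positives / non_reoffenders

# Same score quality for both groups (precision 0.8, recall 0.6), different base rates.
fpr_high = implied_false_positive_rate(n=1000, base_rate=0.5, recall=0.6, precision=0.8)
fpr_low = implied_false_positive_rate(n=1000, base_rate=0.2, recall=0.6, precision=0.8)

print(f"Group with 50% base rate: {fpr_high:.1%} false positives")  # 15.0%
print(f"Group with 20% base rate: {fpr_low:.1%} false positives")   # 3.8%
```

Equalising the false-positive rates instead would force the score to become less predictive for one of the groups – which is exactly the trade-off Kleinberg describes.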

Lifestream, Pocket, One State is Replacing Bail Hearings With…An Algorithm

Excerpt:

Guidelines for how judges set bail vary across the country, but generally use a combination of a bail schedule, which prices out fees for specific offenses, and their own assessment of whether the defendant will appear at their hearing or commit a crime before their trial.

via Pocket http://ift.tt/2mwwQfm

This is an interesting use of algorithms in an attempt to overcome the bias of human decision-making. However, the article makes the point that the algorithm is only as good as the data it has, and that data (on arrests and convictions, for example) reflects the biases of ‘the status quo’. Breaking cycles of inequality and discrimination clearly takes more than good intentions; a toy sketch of how this feedback loop plays out follows the quotation below.

As one respondent to the Pew Research Center’s survey on the future of algorithms noted,

  • “If you start at a place of inequality and you use algorithms to decide what is a likely outcome for a person/system, you inevitably reinforce inequalities.”
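To make that ‘reinforcement’ concrete, here is a toy feedback-loop simulation of my own (not taken from the article): two neighbourhoods with identical underlying offence rates, but an arrest record historically skewed towards one of them. If enforcement attention simply follows the recorded data, and new arrests can only be recorded where that attention goes, the recorded gap keeps growing even though behaviour in the two neighbourhoods is the same.

```python
# Toy feedback-loop sketch (my own illustration, not from the article):
# two neighbourhoods with the SAME true offence rate, but historical arrest
# data skewed towards neighbourhood A. If patrols are simply sent where the
# data says crime is highest, and new arrests are only recorded where patrols
# go, the recorded gap keeps widening even though behaviour is identical.

true_offence_rate = 0.1                   # identical in both neighbourhoods
recorded_arrests = {"A": 60, "B": 40}     # biased starting data: A was over-policed
patrols_per_round = 100

for round_number in range(1, 6):
    # Allocation driven by the historical record: patrol the "high-crime" area.
    target = max(recorded_arrests, key=recorded_arrests.get)
    recorded_arrests[target] += patrols_per_round * true_offence_rate

    share_a = recorded_arrests["A"] / sum(recorded_arrests.values())
    print(f"Round {round_number}: A's share of recorded arrests = {share_a:.1%}")
```

Under this (deliberately crude) allocation rule the data never gets the chance to correct itself: the model’s output determines where future data is collected.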

Lifestream, Pocket, Bias in machine learning, and how to stop it

Excerpt:

As AI becomes increasingly interwoven into our lives—fueling our experiences at home, work, and even on the road—it is imperative that we question how and why our machines do what they do.

via Pocket http://ift.tt/2g3DTIX

The article provides an exposé of some of the ways in which biases have made their way into algorithms, from fewer Pokémon Go locations in neighbourhoods with a black majority, to LinkedIn advertisements for high-paying jobs appearing more frequently for men, to prejudice in loan approval through postcode profiling. Its main argument is that one way to reduce bias in algorithms is to diversify tech: if more minority voices are involved in producing algorithms, potential biases are more likely to be noticed and avoided. In addition, we need to make sure our datasets are more inclusive, so that they are more representative of the whole world.

Both points seem straightforward and beyond argument – but I’m not sure the article goes far enough in its call for diversity. When the information we receive, and the communication from our social circles, is personalised – filtered before it reaches us – we tend to encounter fewer voices that differ from our own. This, in turn, can stop us from voicing views of our own that we perceive to be in the minority, creating a ‘spiral of silence’ (Sheehan, 2015). So yes, we do need to ensure those who design algorithms are diverse, but we also need to be able to elect not to have our information stream filtered, or to control how it is filtered, so that we can actively manage the diversity of the networks we are part of. Diversity is good for democracy, and such diversity should not be controlled by commercial interests or those who have ‘financial authority’.

Lifestream, Pocket, Blue Feed, Red Feed

Graphics from ‘Blue Feed, Red Feed’ by The Wall Street Journal

Excerpt:

Recent posts from sources where the majority of shared articles aligned “very liberal” (blue, on the left) and “very conservative” (red, on the right) in a large Facebook study.

via Pocket http://ift.tt/1V9bFKG

These graphics, which illustrate how Facebook feeds can differ according to political preference, show how algorithms can contribute to political polarisation. This connects with Eli Pariser’s notion of the ‘filter bubble’.

Lifestream, Pocket, Corrupt Personalization

Excerpt:

In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention — and that this is important. In this post I’ll respond to a student question.

Sandvig (2014) defines ‘corrupt personalisation’ as ‘the process by which your attention is drawn to interests that are not your own’ (emphasis in original), and suggests it manifests in three ways:
  1. Things that are not necessarily commercial become commercial because of the organization of the system. (Merton called this “pseudo-gemeinschaft,” Habermas called it “colonization of the lifeworld.”)
  2. Money is used as a proxy for “best” and it does not work. That is, those with the most money to spend can prevail over those with the most useful information. The creation of a salable audience takes priority over your authentic interests. (Smythe called this the “audience commodity,” it is Baker’s “market filter.”)
  3. Over time, if people are offered things that are not aligned with their interests often enough, they can be taught what to want. That is, they may come to wrongly believe that these are their authentic interests, and it may be difficult to see the world any other way. (Similar to Chomsky and Herman’s [not Lippman’s] arguments about “manufacturing consent.”)
He makes the point that the problem is not inherent to algorithmic technologies, but rather that the ‘economic organisation of the system’ produces corrupt personalisation. Like Sandvig, I can see the squandered potential of algorithmic culture: instead of serving our authentic interests, these systems seem to exploit them for commercial ends (Sandvig, 2014). Dare we imagine a different system, one which serves users rather than corporations?
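Sandvig’s second point is perhaps the easiest to picture. As a crude toy sketch of my own (not Sandvig’s example): rank the same items once by how relevant they are to the user and once by a bid-weighted score of the sort an advertising-funded platform might use, and the deepest-pocketed item rises to the top despite being the least relevant.

```python
# Toy illustration of 'money as a proxy for best' (my example, not Sandvig's):
# the same items ranked by relevance to the user versus by a bid-weighted
# score of the sort an ad-funded platform might use. The highest bidder wins
# the top slot despite being the least relevant item.

items = [
    {"name": "local repair cooperative", "relevance": 0.9, "bid": 0.10},
    {"name": "independent review site",  "relevance": 0.7, "bid": 0.50},
    {"name": "sponsored megabrand ad",   "relevance": 0.2, "bid": 5.00},
]

by_relevance = sorted(items, key=lambda item: item["relevance"], reverse=True)
by_revenue = sorted(items, key=lambda item: item["bid"] * item["relevance"], reverse=True)

print("Ranked by the user's interests:", [item["name"] for item in by_relevance])
print("Ranked by expected revenue:    ", [item["name"] for item in by_revenue])
```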