Recent posts from sources where the majority of shared articles aligned “very liberal” (blue, on the left) and “very conservative” (red, on the right) in a large Facebook study.
These graphics, illustrating how Facebook feeds can differ according to political preference, show how algorithms can contribute to political polarisation. This connects with Eli Pariser’s notion of the filter bubble.
In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention — and that this is important. In this post I’ll respond to a student question.
Sandvig (2014) defines ‘corrupt personalisation’ as ‘the process by which your attention is drawn to interests that are not your own’ (emphasis in original), and suggests it manifests in three ways:
1. Things that are not necessarily commercial become commercial because of the organization of the system. (Merton called this “pseudo-gemeinschaft,” Habermas called it “colonization of the lifeworld.”)
2. Money is used as a proxy for “best” and it does not work. That is, those with the most money to spend can prevail over those with the most useful information. The creation of a salable audience takes priority over your authentic interests. (Smythe called this the “audience commodity,” it is Baker’s “market filter.”)
3. Over time, if people are offered things that are not aligned with their interests often enough, they can be taught what to want. That is, they may come to wrongly believe that these are their authentic interests, and it may be difficult to see the world any other way. (Similar to Chomsky and Herman’s [not Lippman’s] arguments about “manufacturing consent.”)
He makes the point that the problem is not inherent to algorithmic technologies, but rather that the ‘economic organisation of the system’ produces corrupt personalisation. Like Sandvig, I can see the squandered potential of algorithmic culture: instead of supporting our authentic interests, those interests seem to be exploited for commercial gain (Sandvig, 2014). Dare we imagine a different system, one which serves users rather than corporations?
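Sandvig’s second point can be made concrete with a toy sketch. The code below is purely illustrative and does not represent any real platform’s ranking system; the items, interest scores, and field names are all invented. It contrasts a feed ordered by advertiser spend (money standing in as a proxy for ‘best’) with one ordered by the user’s declared interests.

```python
# Toy sketch (hypothetical data, not a real platform's algorithm)
# contrasting two ways of ranking the same feed items.

items = [
    {"title": "Local news report",   "topic": "news",     "ad_spend": 0},
    {"title": "Sponsored gadget ad", "topic": "shopping", "ad_spend": 50},
    {"title": "Science explainer",   "topic": "science",  "ad_spend": 5},
]

# What this (imaginary) user actually cares about, on a 0-1 scale.
user_interests = {"science": 1.0, "news": 0.8, "shopping": 0.1}

# 'Corrupt' ranking: the highest-paying item wins, regardless of relevance.
by_money = sorted(items, key=lambda i: i["ad_spend"], reverse=True)

# Interest-aligned ranking: items ordered by match to the user's interests.
by_interest = sorted(items, key=lambda i: user_interests[i["topic"]], reverse=True)

print([i["title"] for i in by_money])
print([i["title"] for i in by_interest])
```

The same three items produce two different feeds: the commercially ranked one leads with the sponsored ad, while the interest-aligned one leads with the science explainer. The gap between the two orderings is, in miniature, what Sandvig means by attention being drawn to interests that are not your own.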
Over the last week I’ve come across quite a few examples of algorithmic art, and I’m struck by the beauty of much of what I’ve seen. It somehow seems at odds with the cold (impartial, neutral), scientific image of algorithms which is frequently articulated. Gillespie (2012) refers to these articulations as the ‘discursive work’ of the algorithm – could these alternative articulations, which demonstrate the selective programming and manipulation of algorithms to artistic ends, help to create a more balanced view of algorithms? Or, at least, challenge a singular view?