Excerpt:
In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention — and that this is important. In this post I’ll respond to a student question.
Sandvig (2014) defines ‘corrupt personalisation’ as ‘the process by which your attention is drawn to interests that are not your own’ (emphasis in original), and suggests it manifests in three ways:
1. Things that are not necessarily commercial become commercial because of the organization of the system. (Merton called this “pseudo-gemeinschaft,” Habermas called it “colonization of the lifeworld.”)
2. Money is used as a proxy for “best” and it does not work. That is, those with the most money to spend can prevail over those with the most useful information. The creation of a salable audience takes priority over your authentic interests. (Smythe called this the “audience commodity,” it is Baker’s “market filter.”)
3. Over time, if people are offered things that are not aligned with their interests often enough, they can be taught what to want. That is, they may come to wrongly believe that these are their authentic interests, and it may be difficult to see the world any other way. (Similar to Chomsky and Herman’s [not Lippman’s] arguments about “manufacturing consent.”)
He makes the point that the problem is not inherent to algorithmic technologies, but rather that the ‘economic organisation of the system’ produces corrupt personalisation. Like Sandvig, I can see the squandered potential of algorithmic culture: instead of supporting our authentic interests, these systems exploit them for commercial gain (Sandvig, 2014). Dare we imagine a different system, one that serves users rather than corporations?
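Sandvig’s second point, that money functions as a proxy for ‘best’, lends itself to a small illustration. The sketch below is mine, not Sandvig’s: the feed items, scores, and the `rank_by_revenue` scoring rule are all invented, a toy under stated assumptions rather than any real platform’s ranking, to show how folding advertiser spend into a relevance score lets the best-funded content displace the most useful, and how trivially a system could rank by interest instead.

```python
# Toy sketch (not Sandvig's model, not a real platform): rank the same
# items two ways to show how money-as-proxy-for-"best" displaces relevance.

from dataclasses import dataclass


@dataclass
class Item:
    name: str
    relevance: float  # hypothetical fit with the user's authentic interests (0-1)
    bid: float        # hypothetical advertiser spend behind the item


feed = [
    Item("local news report", relevance=0.9, bid=0.0),
    Item("friend's photo", relevance=0.8, bid=0.0),
    Item("sponsored product", relevance=0.3, bid=5.0),
    Item("viral ad campaign", relevance=0.2, bid=9.0),
]


def rank_by_interest(items):
    """The imagined alternative: attention follows the user's own interests."""
    return sorted(items, key=lambda i: i.relevance, reverse=True)


def rank_by_revenue(items, weight=1.0):
    """Corrupt personalisation: paid reach is folded into the score,
    so deep pockets can outrank the most useful information."""
    return sorted(items, key=lambda i: i.relevance + weight * i.bid, reverse=True)


if __name__ == "__main__":
    print([i.name for i in rank_by_interest(feed)])  # local news report first
    print([i.name for i in rank_by_revenue(feed)])   # viral ad campaign first
```

The point of the toy is that the ‘corruption’ lives in a single line of the scoring rule, not in the sorting machinery, which is consistent with Sandvig’s claim that the fault lies in the economic organisation of the system rather than in algorithmic technology itself.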