Lifestream, blog post: algorithm play

This week your task is to play with some algorithms and document the results.

(EDC site instructions)

My initial investigations into how algorithms affect what is advertised to me, and how my search results are shaped by algorithms, haven’t been terribly revealing thus far – which could, of course, be said to be revealing in itself.

I tried the ‘puppy dog’ search suggested in Christian Sandvig’s blog post Show and Tell: Algorithmic Culture:

Sandvig’s ‘puppy dog’ + space search results
My ‘puppy dog’ + space results

I wondered (accepting that different languages collocate differently) what a similar search would look like in Spanish. I went for just ‘puppy’, as the collocations would complicate things: it’s ojos de cachorro (literally ‘eyes of puppy’) rather than ‘puppy dog eyes’, for example.

Search for ‘puppy’
Search for ‘cachorro’

None of these suggestions reflect my browsing history – I’ve looked for rescue Spanish water dog puppies (in both languages), and I’ve researched how long you can leave puppies alone when they are small (too long for the number of hours I’m at work). I did, however, search for song lyrics during the week – tracks that had nothing to do with puppies – so perhaps that influenced the first set of results.
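(As an aside, the same comparison can be repeated programmatically. Below is a quick sketch against Google’s unofficial suggest endpoint – it is undocumented, so the parameter names and response format here are assumptions that may change without notice.)

```python
# Quick sketch: fetch autocomplete suggestions from Google's unofficial
# suggest endpoint (undocumented; 'client=firefox' is assumed to return JSON
# and 'hl' is assumed to switch the interface language).
import json
import requests

def google_suggestions(query: str, language: str = "en") -> list[str]:
    resp = requests.get(
        "https://suggestqueries.google.com/complete/search",
        params={"client": "firefox", "q": query, "hl": language},
        timeout=10,
    )
    resp.raise_for_status()
    return json.loads(resp.text)[1]  # second element holds the suggestion list

print(google_suggestions("puppy dog "))       # English, trailing space as in Sandvig's post
print(google_suggestions("cachorro ", "es"))  # Spanish
```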

Google Ads’ information about me is a little hit and miss:

What Google thinks I like

I do like sci-fi, documentaries and action & adventure films… dogs… world news… travel… but make-up and cosmetics? Business productivity software? Shopping? Pop music? There’s quite a bit that is ‘off’, and with such broad categories it’s hard to even know what some of them refer to.

In thinking about where this information might ‘come from’, it’s occurred to me that the algorithms have no idea why I visit sites – whether it is to get white noise to block out the noisy neighbours, or to find articles that the students I teach might relate to. This point is taken up by Eynon (2013, p. 239):

whilst being able to measure what people actually do is of great value, this is not sufficient for all kinds of social science research. We also need to understand the meanings of that behaviour which cannot be inferred simply from tracking specific patterns.

It’s also occurred to me that what I might actually be interested in is possibly getting drowned in the flow of the things I’m not. These algorithms are certainly not ‘exact’ in their calculations.

Still, I decided to change Google’s understanding of me… do I seek validation – for the algorithm(s) collecting my data to confirm my sense of self through a presentation of myself back to me – as Gillespie suggests we might (2012, p. 21 of the linked PDF)?

Update: After deleting some YouTube videos from my viewing history, my ‘interests’ initially reverted to zero, but a day later updated to this.

‘Ads’. What does it matter anyway?

Lots of the ads I ‘receive’ seem to be targeted according to where I live rather than my browsing history:

Ad for… the bank? The Electricity and Water Authority? Unclear, since I don’t read Arabic. Appeared on a DIY website.
Fly Dubai ads appear frequently. Never used them, mind. Ad from bbc.co.uk
I have no interest in buying a car, and I am not in Saudi Arabia – though I can see Saudi from my compound. I do like palindromic numbers, though ;) Ad from bbc.co.uk
The nearest restaurant advertising to me? In another country (the UAE)? Is there no good food to be had closer to home?

There are exceptions, however:

The advertising at The Guardian knows I’ve been looking for accommodation in Barcelona (and presumably that I was searching in Spanish):

Facebook has also flagged this, as well as cottoning on to my need for long skirts in this region (not sure ‘Bohemian’ would go down well at work, mind):

Amazon gave me mixed results.

Amazon.co.uk is basing its recommendations on books I bought in 2009:


Meanwhile, Amazon.com has linked my search for EU/US shoe size equivalency to my husband’s account, and is recommending some pretty ugly shoes to him while he is logged in:

How are these ‘related’?

To be honest, the impact of all this on me seems minimal. I’ve already booked the accommodation I want in Barcelona (you’re too late, FB), and I’m not looking for anything else on offer. Will I consider Fly Dubai next time I’m going there? Sure, but I would have anyway. However, as Turow (2012) highlighted, in treating the impact of algorithms (‘just a few ads’) as trivial, we ignore the scale of their potential for prejudice:

In broader and broader ways, computer-generated conclusions about who we are affect the media content – the streams of commercial messages, discount offers, information, news, and entertainment – each of us confronts. Over the next few decades the business logic that drives these tailored activities will transform the ways we see ourselves, those around us, and the world at large. Governments too may be able to use marketers’ technology and data to influence what we see and hear.

From this vantage point, the rhetoric of consumer power begins to lose credibility. In its place is a rhetoric of esoteric technological and statistical knowledge that supports the practice of social discrimination through profiling. We may note its outcomes only once in a while, and we may shrug when we do because it seems trivial — just a few ads, after all. But unless we try to understand how this profiling or reputation-making process works and what it means for the long term, our children and grandchildren will bear the full brunt of its prejudicial force.

Joseph Turow, 2012

Source: https://www.theatlantic.com/technology/archive/2012/02/a-guide-to-the-digital-advertising-industry-thats-watching-your-every-click/252667/

My larger concern is with Google’s sorting and prioritising of the search results I get. I can consciously choose to ignore advertising, but how do I know what information is available if Google selects for me, based on its ideas about me?

Lifestream, Tweets

Caulfield (2017) compares Google to the quiz show Family Feud through his observations about the frequent inaccuracy of the ‘snippets’ that appear at the top of searches, ‘giving the user what appears to be the “one true answer.”’

In The Relevance of Algorithms, Gillespie (p. 14 of the linked PDF) writes:

“the providers of information algorithms must assert that their algorithm is impartial. The performance of algorithmic objectivity has become fundamental to the maintenance of these tools as legitimate brokers of relevant knowledge”

Many of Google’s ‘snippets’ suggest their algorithms are not legitimately brokering knowledge. As Caulfield (2017) highlights, they frequently fail on three counts:

  • They foreground information that is either disputed or for which the expert consensus is the exact opposite of what is claimed.

  • They choose sites and authors who are in no position to know more about a subject than the average person.

  • They choose people who often have real reasons to be untruthful — for example, right-wing blogs supported by fracking billionaires, white supremacist coverage of “black-on-white” crime, or critics of traditional medicine that sell naturopathic remedies on site.

Caulfield (2017) asks for more than a discourse of impartiality, objectivity and neutrality for algorithms, seeking instead algorithms that actually ’emulate science in designing a process that privileges returning good information over bad’.

Is information about who is ‘in a position to know’ and ‘who can be relied on to tell the truth accurately’ so contested that it’s not possible to integrate these factors into an algorithm? Or does it just not make commercial sense? I’m not suggesting that Google (or anyone, for that matter) should attempt to act as an arbiter of truth, promoting one true answer – but when they do attempt to indicate what’s reliable or widely believed, through ‘snippets’, then, like Caulfield, I think they should at least refer to more reliable sources.

 

Lifestream, Diigo: Predictions, Probabilities, and Placebos | Confessions of a Community College Dean

Concerns about predictive analytics – do they introduce ‘stereotype threat’, in which learning that “people like me aren’t good at x” has an affective impact on performance? Steele, quoted in the article, suggests that awareness of negative stereotypes diverts cognitive resources. In this sense, the author (Matt Reed) contends that predictive analytics have the potential to recreate existing economic gaps.

I would say it works from the other side too: teachers who know a student has a bad behavioural or ‘performance’ record often treat them differently, as though they are already a problem.

Reed proposes that we may have ‘a positive duty to withhold data that would do active harm’. That sounds fair on the one hand – but the option of administering a ‘statistical placebo’ makes me uncomfortable. We don’t all respond to information in the same manner; perhaps for some students the negative predictions would be valuable. Should students have a right to the predictions?

In a follow-up article, Inside Digital Learning asked the leaders of predictive analytics companies for a response. Key points/quotes included:

  • It’s not about what the information is, it’s about how you deliver it (i.e. support [which, as Eynon, 2013, p. 238 notes, has financial implications for providers], talking about a student’s options);
  • The type of data you share matters: “It’s not a matter of whether you should share predictive data with students or not, it’s a matter of sharing data they can act on,” says Dave Jarrat from Inside Track (i.e. being told that you’re likely to fail or discontinue your studies isn’t useful – you need to know what students who were in your position and succeeded did);
  • Individual responses to data need to be taken into consideration.

Whether or not the responses led me to a settled personal stance, they speak very loudly of the ‘learnification’ of education.

from Diigo http://ift.tt/2klcMw2
via IFTTT

 

Lifestream, Pocket, Blue Feed, Red Feed

Graphics from ‘Blue Feed, Red Feed’ by The Wall Street Journal

Excerpt:

Recent posts from sources where the majority of shared articles aligned “very liberal” (blue, on the left) and “very conservative” (red, on the right) in a large Facebook study.

via Pocket http://ift.tt/1V9bFKG

These graphics, illustrating how Facebook feeds can differ according to political preference, show how algorithms can contribute to political polarisation. This connects with Eli Pariser’s notion of the ‘filter bubble’.

Lifestream, Pocket, Corrupt Personalization

Excerpt:
In my last two posts I’ve been writing about my attempt to convince a group of sophomores with no background in my field that there has been a shift to the algorithmic allocation of attention — and that this is important. In this post I’ll respond to a student question.
Sandvig (2014) defines ‘corrupt personalisation’ as ‘the process by which your attention is drawn to interests that are not your own’ (emphasis in original), and suggests it manifests in three ways:
  1. Things that are not necessarily commercial become commercial because of the organization of the system. (Merton called this “pseudo-gemeinschaft,” Habermas called it “colonization of the lifeworld.”)
  2. Money is used as a proxy for “best” and it does not work. That is, those with the most money to spend can prevail over those with the most useful information. The creation of a salable audience takes priority over your authentic interests. (Smythe called this the “audience commodity,” it is Baker’s “market filter.”)
  3. Over time, if people are offered things that are not aligned with their interests often enough, they can be taught what to want. That is, they may come to wrongly believe that these are their authentic interests, and it may be difficult to see the world any other way. (Similar to Chomsky and Herman’s [not Lippman’s] arguments about “manufacturing consent.”)
He makes the point that the problem is not inherent to algorithmic technologies, but rather that the ‘economic organisation of the system’ produces corrupt personalisation. Like Sandvig, I can see the squandered potential of algorithmic culture: instead of supporting our authentic interests, these systems seem to exploit them for commercial ends (Sandvig, 2014). Dare we imagine a different system, one which serves users rather than corporations?

Pinned to #mscedc on Pinterest

Just Pinned to #mscedc:
x’ = sin(a * y) - cos(b * x)
y’ = sin(c * x) - cos(d * y)
http://ift.tt/2mWsQ8v https://www.pinterest.com/pin/359654720228590520/
Over the last week I’ve come across quite a few examples of algorithmic art, and I’m struck by the beauty of much of what I’ve seen. It somehow seems at odds with the cold, scientific image of algorithms (impartial, neutral) which is so frequently articulated. Gillespie (2012) refers to these articulations as the ‘discursive work’ of the algorithm – could these alternative articulations, which demonstrate the selective programming and manipulation of algorithms to an artistic end, help to create a more balanced view of algorithms? Or, at least, challenge a singular view?
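For anyone curious how images like this are generated: the pinned equations match the form of a Peter de Jong-style attractor, which is simply iterated point by point and plotted. A minimal sketch in Python, on that assumption – the constants a, b, c and d below are illustrative, not taken from the original artwork:

```python
# Minimal sketch: iterate the pinned equations as a de Jong-style attractor
# and plot the visited points. Parameter values are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

a, b, c, d = -2.0, -2.0, -1.2, 2.0  # illustrative constants
n = 200_000                         # number of iterations to plot

xs, ys = np.empty(n), np.empty(n)
x, y = 0.1, 0.1                     # arbitrary starting point
for i in range(n):
    x, y = np.sin(a * y) - np.cos(b * x), np.sin(c * x) - np.cos(d * y)
    xs[i], ys[i] = x, y

plt.figure(figsize=(6, 6))
plt.scatter(xs, ys, s=0.1, alpha=0.2, color="black")
plt.axis("off")
plt.show()
```

Changing the four constants even slightly produces a very different image – a reminder that the ‘artistic end’ here lies in the selection of parameters as much as in the algorithm itself.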

Lifestream, Tweets

In some ways it feels uncomfortable to consciously give IFTTT permission to track my movements across different platforms. Yet we do this anyway just by going online and logging in, donating data to Google, FB, Twitter and so on.

My tweet, though, is about the indirectness of creating an IFTTT algorithm which tracks my interactions with third-party service providers (so far Twitter, Pinterest, Diigo, YouTube, Evernote, and – new this week – Flickr and Pocket) in order to post to WordPress, when I could ‘ask’ WordPress to post for me directly through the plug-in/Chrome extension ‘Press This’. Using IFTTT does, however, have the advantage of raising awareness of ‘being tracked’, and of giving agency to the algorithms: the posts appear when the IFTTT applet runs, there can be a delay, and the code doesn’t always ‘pick up’ what I intended (the Pinterest applet, for example).
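For the record, each applet boils down to a simple trigger-action rule: something captured on one service triggers a post on this WordPress blog. A rough sketch of that pattern, using the standard WordPress REST API with placeholder site details – the real applets, of course, rely on each service’s own integrations rather than hand-rolled code like this:

```python
# Rough sketch of the 'if this then that' pattern behind the applets:
# an item captured on another service becomes a post on a WordPress blog.
# The site URL, credentials and captured item are placeholders.
import requests

WP_SITE = "https://example-lifestream.blog"     # placeholder site URL
WP_AUTH = ("username", "application-password")  # placeholder credentials

def post_to_wordpress(title: str, body: str) -> None:
    """Create a published post via the WordPress REST API."""
    resp = requests.post(
        f"{WP_SITE}/wp-json/wp/v2/posts",
        auth=WP_AUTH,
        json={"title": title, "content": body, "status": "publish"},
        timeout=30,
    )
    resp.raise_for_status()

# 'This': an item captured elsewhere (e.g. a bookmark, pin or tweet)...
captured_item = {"title": "Lifestream, Diigo: example bookmark", "url": "http://example.com"}
# ...'That': repost it to the lifestream blog.
post_to_wordpress(captured_item["title"], f"via IFTTT: {captured_item['url']}")
```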

Lifestream, Diigo: Digital materiality? How artifacts without matter, matter | Leonardi | First Monday

Leonardi (2010) provides clear and well-illustrated descriptions of materiality (i.e. relevant to ‘digital materiality’) using 3 different definitions of material:
(1) Material as related to physical substance
(2) Material as the practical instantiation of theory
(3) Material as ‘significant’

Through these ways, and particularly the latter two definitions, of viewing materiality, researchers can gain a way of framing and understanding the role of digital technologies-in-practice.
” These alternative, relational definitions move materiality ‘out of the artifact’ and into the space of interaction between people and artifacts. No matter whether those artifacts are physical or digital, their ‘materiality’ is determined, to a substantial degree, by when, how, and why they are used. These definitions imply that materiality is not a property of artifacts, but a product of the relationships between artifacts and the people who produce and consume them.”

from Diigo http://ift.tt/28PkUdt
via IFTTT