Liked on YouTube: GoGo Penguin – One Percent (Live)

GoGo Penguin – One Percent (Live)
‘One Percent – live at the Union Chapel, London
Composed by GoGo Penguin (Chris Illingworth, Nick Blacka, Rob Turner)
Directed and Edited by Liam Bream and Paul Bryan
Recorded and Audio Mixed by Joe Reiser
Concert Lighting by Lewis Howell
A SixMoreThanForty Production
http://ift.tt/2jrpatm

http://ift.tt/2j3NY6X

http://ift.tt/1XF3ecB
http://ift.tt/2j3S8M4

http://vevo.ly/PLTWoN
via YouTube https://youtu.be/aTwUaZb_xYM

Liked on YouTube: “Fridays TV Show” (1980) [Show F-10] Devo – “Uncontrollable Urge” (Live) [10 of 10]

“Fridays TV Show” (1980) [Show F-10] Devo – “Uncontrollable Urge” (Live) [10 of 10]
Devo performs live, “Uncontrollable Urge”, in their second ‘Fridays’ appearance. Lead in with Maryedith Burrell.

** Check out this excellent article written by Dennis Perrin: “Fridays: The SNL Ripoff That Nearly Surpassed the Original” (January 31st, 2012) –

http://ift.tt/1U6QSXY

via YouTube https://youtu.be/AUyiMSEwRaI

Neat definition of Cyberculture

Frow and Morris (2000: 316), define culture neatly as ‘a network of embedded practices and representations (texts, images, talk, codes of behavior, and the narrative structures organizing these) that shapes every aspect of social life’. Cyberculture therefore refers here to ways of life in cyberspace, or ways of life shaped by cyberspace, where cyberspace is a matrix of embedded practices and representations.

http://www.ids-uva.nl/wordpress/wp-content/uploads/2011/08/Cyberculture-Theorists-Manuel-Castells-and-Donna-Haraway.pdf

 

The Relevance of Algorithms – NOTES

The Relevance of Algorithms

Tarleton Gillespie

http://culturedigitally.org/2012/11/the-relevance-of-algorithms/

 

 

Algorithms are key actors in deciding what information is relevant to us. They have the power to assign meaningfulness. They are not always software.

If human culture is now expressed mainly through computational tools then we are applying a certain epistemology to all culture. Algorithms are contesting the position of previous cultural arbiters: experts, scientific methods, common sense, word of god.

6 potential political effects of algorithms:

  1. Patterns of inclusion – What counts?
  2. Cycles of anticipation – Can the algo predict what you need? Should it?
  3. Evaluation of relevance – How is this measured? Are the metrics kept secret?
  4. The promise of objectivity – The algo is portrayed as impartial. How?
  5. Entanglement with practice – Users change their behaviour to fit the algo.
  6. Production of calculated publics – The algo presents a public, thus shaping their sense of self. Who benefits?

 

Remember: Algos are not fixed entities. They are being tinkered with and negotiated at all times. They are designed by humans and represent certain institutional values.

PATTERNS OF INCLUSION

When studying an algo you need to ask what database it’s drawing on. Don’t confuse the algo and the database as one thing.

3 stages:

  1. Collection – Who decides what data is acceptable to collect? Is this universal?
  2. Preparation for the algorithm – Categorising asserts certain ontologies. E.g. which books on Amazon are “adult” and thus excluded from the front-page sales rank?
  3. Exclusion and demotion – Who decides what should be presented? This practice is hidden in algos.

 

CYCLES OF ANTICIPATION

Search algorithms change from user input.

To give “better” results that are more relevant to you, search algorithms need info about you. How does the algo determine what it is you want? What data does it draw upon? How can algo providers get you to offer up more info? How accurate can the user’s “data shadow” be? How much info is enough to make predictions?

 

EVALUATION OF RELEVANCE

Relevance is a fluid concept. There is no set metric. What metrics are used to approximate it? Since no unbiased measure is available, there will always be elements of bias.

  1. Criteria used – Often kept private by the companies that make them. They don’t want users gaming the system.
  2. Commercial aims – Are there financial incentives for the algo producers to promote certain info? Can social data and commercial data really be separated so easily? E.g. what if you follow a product-endorsing social media influencer? Is their info an ad or social data?
  3. Epistemological premises – Assumptions are made: is what is popular more relevant? Is the language used more relevant? Are commercial or public sites more relevant? The algo is a cognitive artefact in which the expertise and judgement of those who made it is hidden and automated.

 

THE PROMISE OF ALGORITHMIC OBJECTIVITY

This gives the algo credibility. How the creators of the algo portray this is important. Methods include repeated claims, obscuring how the algo works, and the language used (“best results”).

Parallels with journalism – public interest and objective, “fair” reporting. These are ideals with attached rituals rather than something that is actually achieved.

 

ENTANGLEMENT WITH PRACTICE

Particularly true for businesses that deliver information as a commodity (Google). If people don’t use it in practice, the algo has failed. Users can shape their practice to better fit algorithms, e.g. Twitter hashtags.

Algos give power to those who know their workings.

As new algos become part of public life, users may interact with them in unexpected ways. But because how they work is kept secret, it is harder for them to be truly public goods.

They shape how people think about things – knowledge logics. Facebook encourages us to “participate and share” more, inculcating a certain idea of privacy. Google: popular is more relevant.

PRODUCTION OF CALCULATED PUBLICS

Algos shape how digital publics/communities are formed, shaped and dissipate. Creation of echo chambers and filter bubbles.

Algos can create groups, e.g. Facebook’s “friends of friends” privacy setting – only the algo can calculate the membership.
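A toy sketch of why only the platform can compute such a group: the “friends of friends” audience is a two-hop traversal of the whole friendship graph, which no single user can see. The graph data and function below are invented for illustration, not Facebook’s actual implementation.

```python
# Invented example friendship graph (undirected: each edge listed both ways).
friends = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
    "dave": {"bob"},
}

def friends_of_friends(user):
    """Everyone within two hops of `user`, excluding the user themselves."""
    audience = set(friends[user])          # direct friends
    for f in friends[user]:
        audience |= friends.get(f, set())  # each friend's friends
    audience.discard(user)
    return audience

print(sorted(friends_of_friends("alice")))  # ['bob', 'carol', 'dave']
```

Alice never friended Dave, yet Dave is in her “friends of friends” audience; only whoever holds the full graph can calculate that membership.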

Social science based on big data often draws upon public data that has already been filtered through social media algos. Theories and knowledge based on this data tell us something about a public that has been calculated.

In creating these notes I have, in some ways, had to grapple with the same questions about informational relevance that are automated in algorithms. I have condensed a 32-page essay into a few pages of notes: that is inclusion and exclusion. To do this I am trying to anticipate what I will need to know in the future, particularly what texts I might write on algorithms for the DigCul course. What are the key words that can tell me what is in the essay? If I get this right I can look at my notes and then decide whether I need to read part of the essay again. To a certain extent my note-taking is entangled with practice: the content may not end up being relevant, but the discipline of writing précis and making judgements on what’s relevant is vital for studying. I’m still trying to improve these skills.

 

Lifestreaming and mental health

So from the lifelogging link suggested in the course handbook I ended up looking at the Quantified Self conference, where this guy presented his deeply wrongheaded and creepy software:

http://quantifiedself.com/projects/1038

From the off, I oppose the ultimate assumption he makes: that life is about maximising your happiness. This is overly utilitarian and needlessly devalues the worth of other emotional experiences that are essential to being human. Putting the futility of his overall aim aside, I have three other criticisms, to wit:

First, it is worth questioning whether happiness can be measured in the ways he suggests. At several points in his presentation he mentions in passing that the measurements can be adjusted to the individual depending on the weighting they give each metric. So the usefulness of his software relies on the mind-boggling assumption that a person can know exactly what makes them happy, and by how much, in terms of opportunity cost.
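To see what that assumption demands, here is a minimal sketch of the kind of weighted “happiness score” the presentation implies. The metric names and weights are invented for illustration, not taken from the actual software.

```python
# Hypothetical weights the user is assumed to know exactly about themselves.
weights = {"sleep": 0.5, "travel": 0.3, "social_time": 0.2}

def happiness_score(day):
    """Weighted sum of self-tracked metrics, each normalised to 0..1."""
    return sum(weights[m] * day.get(m, 0.0) for m in weights)

# Shift even 0.1 of weight from sleep to travel and the same tracked day
# scores differently -- the model is only as good as these guessed weights.
day = {"sleep": 0.8, "travel": 0.2, "social_time": 0.5}
print(round(happiness_score(day), 2))  # 0.56
```

The number looks precise, but every digit of it is downstream of weights the user simply asserted.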

Second, the measurements totally ignore personal context. For example, travel is assumed to increase your mental well-being but what if you have to travel across the country to attend the funeral of a relative? This would increase your score but would it really make you happier?

Third, the measurements also ignore societal context. By relentlessly focusing on personal data, Milburn ignores the fact that humans are social animals and that our mental health can be affected by our position in wider society. Status anxiety exacerbated by societal income inequality can be a major factor in mental health (https://www.equalitytrust.org.uk/mental-health), yet this is not reflected in his metrics.

What this boils down to is that Milburn confuses correlation with causation, and then focuses solely on the factors that tracking technology makes easiest to quantify.

As ever, this leads me back to music. A soundtrack:

“Karma Police arrest this man, he talks in maths, he buzzes like a fridge, he’s like a detuned radio”