Lifestream, Diigo: The Future is Now – Sep 30, 2009

from Diigo http://ift.tt/2oL6xiZ
via IFTTT

The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development

Social Studies of Science. Vol 40, Issue 1, 2010


Another exploration of the idea of ‘imaginaries’ and of how these fictions play a generative role in the culture of technology – and specifically (my interest rather than the article’s) in education.

ABSTRACT: Scholarship in the history and sociology of technology has convincingly demonstrated that technological development is not inevitable, pre-destined or linear. In this paper I show how the creators of popular films including science consultants construct cinematic representations of technological possibilities as a means by which to overcome these obstacles and stimulate a desire in audiences to see potential technologies become realities. This paper focuses specifically on the production process in order to show how entertainment producers construct cinematic scenarios with an eye towards generating real-world funding opportunities and the ability to construct real-life prototypes. I introduce the term ‘diegetic prototypes’ to account for the ways in which cinematic depictions of future technologies demonstrate to large public audiences a technology’s need, viability and benevolence. Entertainment producers create diegetic prototypes by influencing dialogue, plot rationalizations, character interactions and narrative structure. These technologies only exist in the fictional world – what film scholars call the diegesis – but they exist as fully functioning objects in that world. The essay builds upon previous work on the notion of prototypes as ‘performative artefacts’. The performative aspects of prototypes are especially evident in diegetic prototypes because a film’s narrative structure contextualizes technologies within the social sphere. Technological objects in cinema are at once both completely artificial – all aspects of their depiction are controlled in production – and normalized within the text as practical objects that function properly and which people actually use as everyday objects.


At this point, I have to be totally honest and admit I haven’t got round to reading this yet. It looks as though it could shed light on the intricacies of how fictions influence reality, of how imaginaries can work as construction tools. I hope to get time to read it more closely this week – but it’s a busy, busy week..

Lifestream, Pinned to #mscedc on Pinterest

Just Pinned to #mscedc:
Culture Digitally the Podcast Episode 3: Conversations on Algorithms and Cultural Production – Culture Digitally
http://ift.tt/2ogye6r
http://ift.tt/2oowvcI

I’m a little annoyed with myself that I’ve lost track of my path to finding ‘Culture Digitally’. It first appears in my work computer’s Chrome history on 24 March (though not this link – that was to The Relevance of Algorithms). Nothing was posted to the Lifestream on that day.. and the reference to Gillespie’s work on algorithms came at least two weeks earlier. I like being able to picture my trails through the internet – little strings connecting and tying together some wanderings, starting new trails for others – but I’m lost on this one. In retrospect it seems remarkably remiss, as I really enjoyed the podcast, and the source I got the link from (though it was probably to a different page on Culture Digitally) likely has links to other things I might like. This could be becoming an obsession, though – and perhaps the volume of information has simply exceeded my mental capacity – so I shall let it lie for now.

Right – the podcast. It is the type of transmission that produces several pages of sprawling notes, dotted with food if one happens to be cooking dinner, and multiple moments of vocalised agreement despite there being no chance the ‘author’ might engage. It’s from 2012, though, which means I’m only now engaging in serious thought about topics Tarleton Gillespie was articulating with great finesse a good five years ago. Not to inflate my own insightfulness, but how was I not concerned about the influence of algorithms five years ago? And how is it that this concern is not more widely felt today? I’ve digressed: the podcast.

The ‘conversation’ features Ted Striphas and Tarleton Gillespie, interviewed by Hector Postigo. Key things which came out of the discussion for me were:

  1. the role of the algorithm in knowledge generation
  2. the role of the algorithm in the creation of publics
  3. the cultural life of the algorithm
  4. materiality of the algorithm
  5. ethics of the algorithm

In terms of (1), Gillespie noted in the recording how dependent we are on algorithms for ‘knowing general things’, such as which ideas are trending, and how this informs our understanding of the publics we exist in (2). Yet this reliance is afforded despite algorithms being ‘malleable’ (4): how an algorithm decides what is relevant reflects the operator’s business interests, and can be changed to accommodate them, casting doubt on the supposed legitimacy of the ‘knowledge’ produced (see the sketch below). This leads into ‘the cultural life of the algorithm’: what we expect it to do, what it does, and what its role is in business (3).
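To make that ‘malleability’ point concrete, here is a minimal, entirely hypothetical sketch in Python – none of these signals, weights or field names come from any real platform; they are invented stand-ins. The point is only that ‘relevance’ is a tunable score, and the tuning is invisible to anyone reading the ranking.

```python
# Toy 'trending' ranker: relevance is a weighted score, and the operator
# can quietly change the weights to serve business interests.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    shares: int              # raw popularity signal
    advertiser_owned: bool   # hypothetical business-interest signal

def trending_score(item: Item, business_boost: float = 0.0) -> float:
    """Score an item; business_boost lets the operator tilt the ranking."""
    score = float(item.shares)
    if item.advertiser_owned:
        score *= 1.0 + business_boost  # invisible to users of the ranking
    return score

items = [
    Item("Community fundraiser", shares=900, advertiser_owned=False),
    Item("Sponsored gadget story", shares=700, advertiser_owned=True),
]

# The same data, ranked 'objectively' and then with a quiet business tilt:
for boost in (0.0, 0.5):
    ranked = sorted(items, key=lambda i: trending_score(i, boost), reverse=True)
    print(f"boost={boost}:", [i.title for i in ranked])
```

Both runs produce a confident-looking ‘trending’ list; only the operator knows the weights changed.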

With such obvious conflicts of interest, concern about the ethics of algorithms (5) rises to the fore. Gillespie makes a really interesting comparison with journalism and its ethics, separating (per a reference given in the recording) the promise of journalistic objectivity from journalistic practices. To the former Gillespie links the view of the algorithm as cold and objective. Of the latter – which he notes are messy in both fields – Gillespie asks, “What are the practices beneath the algorithm?” and “Where is the ombudsman of algorithms?” If we have no central authority – and the culture of Silicon Valley is resistant to one – how can there be assurances that the tech companies employing algorithms are committed to their end users? Why would there be? And what are the consequences of the reification of informationalised worlds? What new dispositions are created? What new habits?

Very thought-provoking questions – and ones that ought to be asked far more widely now.


Lifestream, Diigo: What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook : Computational Culture

from Diigo http://ift.tt/1GHBZTO
via IFTTT

Benjamin Grosser, 9th November 2014

Excerpt:

“The Facebook interface is filled with numbers that count users’ friends, comments, and “likes.” By combining theories of agency in artworks and materials with a software studies analysis of quantifications in the Facebook interface, this paper examines how these metrics prescribe sociality within the site’s online social network.”


More on the complexities of interwoven agency, and further ‘proof’ that digital technologies are not separate from social practices.

Lifestream, Pocket, Algorithms in the news–why digital media literacy matters

Excerpt:

Much of our work on Code Acts in Education over the past few years has focused on the work that algorithms do (and what they are made to do and by who) in relation to learning, policy and practice. But the work of algorithms extends far beyond education of course.

Ben Williamson

via Pocket http://ift.tt/2hLkgE5

Ben Williamson, while acknowledging the influence of algorithms on his own search results, performed inurl: searches for algorithms within major UK news websites (a rough sketch of how such queries can be constructed follows the list). The short-form results (all quoted):

  • The Guardian‘s editorial line is to treat the algorithm as a governor;
  • The Telegraph treats the algorithm as a useful scientist whose expertise is helping society;
  • The Sun is largely disinterested in algorithms in terms of newsworthiness;
  • the editorial line of The Mirror is to treat algorithms in terms of brainy expertise;
  • Algorithms as problem-solvers might be one way of categorizing its [The Daily Mail‘s] editorial line*

*Based on an initial search. An hour later Williamson repeated the search, and received different results. “The Daily Mail is certainly not disinterested in algorithms–the result returns are pretty high compared to the tabloids, and the Mail does frequently re-post scientific content from sources like The Conversation–but by no means does it adopt the kind of critical line found in The Guardian.”
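For anyone wanting to repeat the exercise, here is a rough sketch of how such queries might be constructed – my own construction, not Williamson’s actual method, and the exact query form (combining site: with inurl:) is an assumption. As Williamson found, results are unstable between runs, so any single pass is only a snapshot.

```python
# Build Google queries that look for 'algorithm' in page URLs,
# scoped to each UK news site in turn.
from urllib.parse import quote_plus

NEWS_SITES = [
    "theguardian.com",
    "telegraph.co.uk",
    "thesun.co.uk",
    "mirror.co.uk",
    "dailymail.co.uk",
]

def build_query_url(site: str, term: str = "algorithm") -> str:
    """Return a Google search URL for e.g. 'site:theguardian.com inurl:algorithm'."""
    query = f"site:{site} inurl:{term}"
    return "https://www.google.com/search?q=" + quote_plus(query)

for site in NEWS_SITES:
    print(build_query_url(site))
```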


My concerns about algorithms are related to governance, and I read The Guardian.. Do I read The Guardian because it (more than the other publications listed) matches my worldview, or do I think the way I do because of the publications (like The Guardian) that I read? Or was I initially attracted to The Guardian because of its similarity to my worldview, while now my worldview is influenced by the fact that I read it – its initial similarity perhaps allowing some things to slip beneath my ‘truth’ radar?

Fascinating work – it makes me wonder: is there a website that presents diverse viewpoints on topics and events using inurl: searches? That is, one that monitors news sites, feeds in content from diverse sources organised by topic or event, and uses humans to add new topics/events as they occur – with an editorial team to summarise the editorial positions of the publications represented on specific topics? Would such a site help combat political polarisation and divisiveness?

Also.. how can we teach ‘algorithmic literacy’? Can we? When do we start? Would a site which unpacked what this could look like, and offered teaching ideas and a place for discussion, be of use? [Assignment ideas..]

Lifestream, Pocket, Imaginaries and materialities of education data science

Excerpt:

Ben Williamson

This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.

Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.

via Pocket http://ift.tt/2n8CT5W


I found this post after reading Knox’s (2014) post on interpreting analytics in the same blog space. What would we call that? Searching laterally? Something which was, at the time, really frustrating in DEGC was that we were always given links to journal home pages rather than to the specific article we were reading. While I seem to recall this being connected to copyright and appropriate practice, it was frustrating because none of the links opened in a new window/tab by default, so unless one right-clicked to open one, one had to go back to the original page to find the issue one was looking for.. But I’ve subsequently reflected (repeatedly!) on how it made me much more aware of the types of ‘publications’ and their respective content, and perhaps as a result my ‘lateral searching’ has increased. It’s not a new practice, of course, but it is an addictive one, and it’s always good to find a ‘treasure trove’ of good reads.

I’m getting tangential, though – what caught my eye about this post in particular was the focus on ‘imaginaries’, and the ways in which such ‘imaginaries’, or fictions, play a role in the creation of future reality. Williamson writes,

..what I’m trying to suggest here is that new ways of imagining education through big data appear to mean that such practices of algorithmic governance could emerge, with various actions of schools, teachers and students all subjected to data-based forms of surveillance acted upon via computer systems.

Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.

The post has a lot more in it, focusing on how the imaginaries of ‘education data science’, combined with affective computing and cognitive computing, are leading to a new kind of ‘algorithmic governance’ within education. Frightening stuff, to be frank.

What I’m really interested in, though, is the role of these ‘imaginaries’: how do fictions – frequently corporate fictions – work their influence? Which previous imaginaries, captured in science fiction, can we trace – along with their reception over time – to present-day materialities?

And why are ‘the people’ so passive? Why isn’t there shouting about imaginaries being presented as inevitable? Why isn’t there protest? A rant: “Uh – you want to put a camera on my kid’s head, to tell me how she’s feeling? Have you thought about asking her? You want to produce data for parents? How about, as a society, ‘just’ recognising the value of non-working lives and giving people enough time to spend with their kids while they’re trying to pay rent or a mortgage?”

It would make an interesting study – perhaps too large for our EDC final assignment, but I’m wondering how it could be scaled back.


Lifestream, Tweets

Stephen Downes’ summary:

When I spoke at the London School of Economics a couple years ago, part of my talk was an extended criticism of the use of models in learning design and analysis. “The real issue isn’t algorithms, it’s models. Models are what you get when you feed data to an algorithm and ask it to make predictions. As (Cathy) O’Neil puts it, ‘Models are opinions embedded in mathematics.’” This article is an extended discussion of the problem stated much more cogently than my presentation. “It’s E Pluribus Unum reversed: models make many out of one, pigeonholing each of us as members of groups about whom generalizations — often punitive ones (such as variable pricing) — can be made.”


My additions (i.e. from my reading of the article):

What are ‘weapons of math destruction’?

Statistical models that:

  1. are opaque to their subjects
  2. are harmful to subjects’ interests
  3. grow exponentially to run at large scale

What’s wrong with these models that leads to them being so destructive?

  1. a lack of feedback and tuning;
  2. biased training data (see the sketch below) – for example, “The picture of a future successful Ivy League student or loan repayer is painted using data-points from the admittedly biased history of the institutions”;
  3. “the bias gets the credibility of seeming objectivity”.
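To see how (1) and (2) interact, here is a toy sketch – my own illustration, not O’Neil’s or Doctorow’s, and the data, groups and threshold are all invented. A ‘model’ fitted to biased historical decisions simply replays them as predictions, and with no feedback loop the bias is never corrected – yet the numeric output looks objective.

```python
from collections import Counter

# Hypothetical admissions history: (group, admitted). Group "A" was
# systematically favoured; group "B" systematically rejected.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def fit(records):
    """'Train' by memorising the historical admit rate per group."""
    admits, totals = Counter(), Counter()
    for group, admitted in records:
        totals[group] += 1
        admits[group] += admitted          # bool counts as 0/1
    return {g: admits[g] / totals[g] for g in totals}

model = fit(history)

def predict(group, threshold=0.5):
    # Looks like an objective score, but it is only the institution's
    # past behaviour fed back as a prediction (no feedback, no tuning).
    return model[group] >= threshold

print(model)                        # {'A': 0.8, 'B': 0.2}
print(predict("A"), predict("B"))   # True False
```

The printed rates are nothing more than the institution’s past behaviour, but once they come out of a model they acquire the “credibility of seeming objectivity” (3).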

Why does it matter?

It’s a grim picture of the future: WMD makers and SEO experts locked in an endless arms-race to tweak their models to game one another, and all the rest of us being subjected to automated caprice or paying ransom to escape it (for now). In that future, we’re all the product, not the customer (much less the citizen).

Inside this picture, the cost of ‘cleaning up’ the negative externalities produced by sloppy statistical models exceeds the savings companies make by maintaining those models. Yet we pay for the cleaning up (individually and collectively), while those pushing the weak statistical models pocket the savings.

The other loss is, of course, the potential: algorithms could, with good statistical modelling, serve societal needs, and those in need within society.

The line of argument is hard to dispute – but one does have to ask: is ‘sloppy’ the right term? Is it just sloppiness? At what point does such ‘sloppiness’ become culpable? Or malicious disregard?