Lifestream, Liked on YouTube: Not Enough AI | Daniela Rus

via YouTube


Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which is able to reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture in which the algorithm could mean the same money for cab drivers but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers and good for society – but like Watters I fear that within capitalism there is little incentive for commercial entities to choose to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work, but instead it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them.

 

 

Lifestream, Diigo: The need for algorithmic literacy, transparency and oversight grows | Pew Research Center

from Diigo http://ift.tt/2loZnPJ
via IFTTT


I posted a link to the complete Pew Research Report (Code-Dependent: Pros and Cons of the Algorithm Age) a few weeks back (March 11). This week, while thinking about my final assignment for Education and Digital Cultures, I returned to Theme 7: The need grows for algorithmic literacy, transparency and oversight.

While the respondents make a number of interesting and important points about concerns that need to be addressed at a societal level – for example, managing the accountability (or the dissolution thereof) and transparency of algorithms, and avoiding the centralised execution of bureaucratic reason, or at least including checks and balances within the centralisation that algorithms enable – there were also points raised that need to be addressed at an educational level. Specifically, Justin Reich from the MIT Teaching Systems Lab suggests that ‘those who design algorithms should be trained in ethics’, and Glen Ricart argues that there is a need for people to understand how algorithms affect them and to be able to personalise the algorithms they use.

In the longer term, Reich’s point doesn’t seem limited to those studying computer science subjects: if, as predicted elsewhere in the same report (theme 1), algorithms continue to spread, more individuals will presumably be involved in their creation as a routine part of their profession, rather than their creation being reserved for computer scientists and programmers. Also, as computer science is ‘rolled out’ in primary and secondary schools, it makes sense that the study of (related) ethics ought to be part of the curriculum at those levels too. Further, Ricart implies, in the first instance, that algorithmic literacy needs to be integrated into more general literacy/digital literacy instruction, and in the second, that all students will need to develop computational thinking and the ability to modify algorithms through code – unless black-boxed tool kits are provided to enable people to do this without coding per se, in the same way that Weebly enables people to build websites without writing code.
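Ricart’s second point is easier to picture with a toy example. The sketch below (Python; every name, weight and post is invented purely for illustration, not drawn from any real platform) shows one way ‘personalising an algorithm’ might work in practice: a feed ranker whose criteria are exposed as editable weights, so that adjusting the algorithm is a matter of changing a few numbers rather than rewriting code – the kind of thing a ‘black-boxed tool kit’ could surface without requiring users to programme.

```python
# A minimal, hypothetical sketch of a 'personalisable' algorithm: the ranking
# criteria are exposed as user-editable weights rather than hidden in code.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    recency: float      # 0.0 (old) to 1.0 (just posted)
    popularity: float    # 0.0 to 1.0, e.g. a normalised like count
    from_friend: bool

# Default weights chosen by the platform -- the 'black box' a user never sees.
DEFAULT_WEIGHTS = {"recency": 0.2, "popularity": 0.7, "friend": 0.1}

def score(post: Post, weights=DEFAULT_WEIGHTS) -> float:
    """Combine the post's features according to the supplied weights."""
    return (weights["recency"] * post.recency
            + weights["popularity"] * post.popularity
            + weights["friend"] * (1.0 if post.from_friend else 0.0))

def rank(posts, weights=DEFAULT_WEIGHTS):
    """Order posts by their weighted score, highest first."""
    return sorted(posts, key=lambda p: score(p, weights), reverse=True)

posts = [
    Post("Viral clickbait", recency=0.3, popularity=0.9, from_friend=False),
    Post("Friend's update", recency=0.8, popularity=0.1, from_friend=True),
]

# The platform's defaults privilege popularity...
print([p.title for p in rank(posts)])
# ...but a user who can 'personalise the algorithm' might re-weight it
# towards friends and recency, and see a different feed.
my_weights = {"recency": 0.4, "popularity": 0.1, "friend": 0.5}
print([p.title for p in rank(posts, my_weights)])
```

Real recommendation systems are obviously far more complicated than this, but the principle – that the criteria could be exposed and adjusted rather than fixed and hidden – is the same.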

 

Lifestream, Diigo: eLearnit

from Diigo http://ift.tt/2oPaLWF
via IFTTT

 


I’ve been a little distracted the last couple of days, as I’m presenting the paper I wrote for my final assignment for Digital Education in Global Contexts (Semester B, 2015-16) at a conference today. Admittedly, a lot of the conference seems focused on the promise technology is perceived to hold for education (I’m thinking of Siân Bayne’s 2015 inaugural lecture, The Trouble with Digital Education, 8:20), and I’m not certain that my paper will be of a great deal of interest to the audience, but it is, nonetheless, a little nerve-wracking. As a consequence of overthinking it, no doubt I’ll also be summarising week 11’s lifestream and adding metadata later tonight.

Snip from the conference programme

Lifestream, Diigo: The Future is Now – Sep 30, 2009

from Diigo http://ift.tt/2oL6xiZ
via IFTTT

The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development

Social Studies of Science. Vol 40, Issue 1, 2010


Another exploration in the pursuit of the idea of ‘imaginaries’ and how these fictions play a generative role in the culture of technology – and specifically (my interest rather than that of the article) in education.

ABSTRACT: Scholarship in the history and sociology of technology has convincingly demonstrated that technological development is not inevitable, pre-destined or linear. In this paper I show how the creators of popular films including science consultants construct cinematic representations of technological possibilities as a means by which to overcome these obstacles and stimulate a desire in audiences to see potential technologies become realities. This paper focuses specifically on the production process in order to show how entertainment producers construct cinematic scenarios with an eye towards generating real-world funding opportunities and the ability to construct real-life prototypes. I introduce the term ‘diegetic prototypes’ to account for the ways in which cinematic depictions of future technologies demonstrate to large public audiences a technology’s need, viability and benevolence. Entertainment producers create diegetic prototypes by influencing dialogue, plot rationalizations, character interactions and narrative structure. These technologies only exist in the fictional world – what film scholars call the diegesis – but they exist as fully functioning objects in that world. The essay builds upon previous work on the notion of prototypes as ‘performative artefacts’. The performative aspects of prototypes are especially evident in diegetic prototypes because a film’s narrative structure contextualizes technologies within the social sphere. Technological objects in cinema are at once both completely artificial – all aspects of their depiction are controlled in production – and normalized within the text as practical objects that function properly and which people actually use as everyday objects.


At this point, I have to be totally honest and admit I haven’t got round to reading this yet. It looks as though it could shed light on the intricacies of how fictions influence reality, of how imaginaries can work as construction tools. I hope to get time to read it more closely this week – but it’s a busy, busy week.

Lifestream, Liked on YouTube: Automating Education and Teaching Machines | Audrey Watters

via YouTube

I was alerted to this excellent talk by Audrey Watters by peers who were tweeting while watching the live stream (thanks Colin, and others). Of course, it is also part of James Lamb’s ‘Lifestream’, in that he featured on screen 😉

I think, as I listened to/watched this talk, my focus was still on ‘imaginaries’:

  • the way machine learning is used as a metaphor for human learning (Watters asks, ‘Do humans even learn in this way?’), and the consequences that holding such an understanding will have on education;
  • the giving of agency to robots (‘Robots are coming for our jobs’) when, as Watters says, robots do not have agency, and the decision to replace humans is one of owners opting for automation, choosing profit over people – how does framing this as a ‘technological’ inevitability naturalise the loss of human labour?
  • the ‘Uber’ model, and ‘uberization’: how is the ‘driverless’ story of algorithmic governance sold, so that surveillance, the removal of human decision-makers with human values, and personalisation all become naturalised… and even seen as goals?

There is so much in this talk that I valued. I won’t go through it all – the basic premise is that we need to resist the imaginaries and the reliance on data, and we need to recognise the motivations driving the imaginaries while valuing the human in education. It links to my idea for the final assignment about unpacking ‘imaginaries’ further, but also to my ideas about making a site based around developing ‘algorithmic literacy’:

“You’ve got to be the driver. You’re not in charge when the algorithm is driving” [38:30]

Really worth the watch.

Lifestream, Diigo: What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook : Computational Culture

from Diigo http://ift.tt/1GHBZTO
via IFTTT

Benjamin Grosser, 9th November 2014

Excerpt:

“The Facebook interface is filled with numbers that count users’ friends, comments, and “likes.” By combining theories of agency in artworks and materials with a software studies analysis of quantifications in the Facebook interface, this paper examines how these metrics prescribe sociality within the site’s online social network.”


More on the complexities of interwoven agency, and further ‘proof’ that digital technologies are not separate from social practices.

Lifestream, Diigo: Undermining ‘data’: A critical examination of a core term in scientific inquiry | Markham | First Monday

“The term ‘data’ functions as a powerful frame for discourse about how knowledge is derived and privileges certain ways of knowing over others. Through its ambiguity, the term can foster a self–perpetuating sensibility that ‘data’ is incontrovertible, something to question the meaning or the veracity of, but not the existence of. This article critically examines the concept of ‘data’ within larger questions of research method and frameworks for scientific inquiry. The current dominance of the term ‘data’ and ‘big data’ in discussions of scientific inquiry as well as everyday advertising focuses our attention on only certain aspects of the research process. The author suggests deliberately decentering the term, to explore nuanced frames for describing the materials, processes, and goals of inquiry.”
from Diigo http://ift.tt/2mOzpW3
via IFTTT


Another great read this week – Markham (2013) suggests ‘data’ acts as a frame through which we interpret and make sense of our social world. However, she adds, “the interesting thing about frames, as social psychologist Goffman (1974) noted, is that they draw our attention to certain things and obscure other things.” Through persistent framings, particular ways of interpreting the world are naturalised, and the frame itself becomes invisible. Such is the case with ‘data’, a frame which Markham views as having transformed our sense of what it means to be in the 21st century, when experience is digitalised and “collapsed into collectable data points”. These data points are, however, abstractions, which can be reductive, obscuring rather than revealing:

“From a qualitative perspective, ‘data’ poorly capture the sensation of a conversation or a moment in context.”

Certainly, this is reflected in my experience of the Tweet Archivist data analysis of our tweetorial last week. As such, I particularly enjoyed Markham’s call to embrace complexity, and to reframe the practice of inquiry as one of “sense–making rather than discovering or finding or attempting to classify in a reductionist sense.”

“the complexity of twenty–first century culture requires finding perspectives that challenge taken for granted methods for studying the social in a digital epoch. Contributing to an infrastructure of knowledge that does not reduce or simplify experience requires us to acknowledge and scrutinize, as part of our methods, the ways in which data is being generated (we are generating data) in ways we may not notice. Changing the frame from one that is overly–focused on ‘data’ can help us explore the ways our research exists as a continual, dialogic, messy, entangled, and inventive process when it occurs outside the walls of the academy, the covers of books, and the written word.” 

Markham also writes of another strategy for reframing research, which is as a generative process achieved through collaborative remix. Here, the focus is on interpretation and sense-making rather than on findings per se:

“Using remix as a lens for thinking about research is intended to destabilize both the process and products of inquiry, but not toward the end of chaos or “anything goes.” The idea of remix simply refocuses energy toward meaning versus method; engagement versus objectivity; interpretation versus findings; argument versus explanation. In all of this, data is certainly available, present, and important, but it takes a secondary role to sense–making.”

I thought it was apt to include comment on that part of Markham’s paper here, owing to remix’s position within our last block in relation to notions of community cultures, but also because in a sense it speaks to ‘new’, more experimental forms of authorship, which have been a focus in the course.

Lifestream, Diigo: A critical reflection on Big Data: Considering APIs, researchers and tools as data makers | Vis | First Monday

“This paper looks at how data is ‘made’, by whom and how. Rather than assuming data already exists ‘out there’, waiting to simply be recovered and turned into findings, the paper examines how data is co–produced through dynamic research intersections. A particular focus is the intersections between the application programming interface (API), the researcher collecting the data as well as the tools used to process it. In light of this, this paper offers three new ways to define and think about Big Data and proposes a series of practical suggestions for making data.”
from Diigo http://ift.tt/2aFY3FC
via IFTTT


A few points from this paper seem relevant this week.

  1. The tools we use when researching ‘limit the possibilities of the data that can be seen and made. Tools then take on a kind of data-making agency.’ I wonder what the influence of the Tweet Archivist API is on my sensemaking of our data (see the sketch after this list).
  2. Data are always selected in particular ways: some data are made more visible than others, and the most visible doesn’t necessarily align with or take into account what was most valued by and meaningful to users. ‘It is important to remember that what you see is framed by what you are able to see or indeed want to see from within a specific ideological framework.’ What did we value most in our tweetorial (obviously different things for different folks)? We still need to construct research questions that focus on those things most important to us, even if the data are less readily available.
  3. ‘Visibility can be instrumentalised in different ways, depending on the interests of those seeking to make something visible. Visibility can be useful as a means of control, it can be commercially exploited, or it can be sold to others who can exploit it in turn.’ How are we exploiting visibility in education?
  4. The monetisation – or making valuable in other ways – of data makes the data itself unreliable. Helen suggests this in her blog post, where she muses that perhaps if she’d known what aspects of our behaviour in the tweetorial were being analysed, she would have ‘gamed it’.
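On point 1: I don’t know what parameters the Tweet Archivist API actually exposes, but a purely hypothetical sketch (Python; the function, its defaults and the sample ‘tweets’ are all invented) helps me see what Vis means by data-making agency – every default in a collection tool is a decision about what gets to count as data, made before any ‘analysis’ begins.

```python
# A hypothetical sketch (not the real Tweet Archivist API) of how the choices
# baked into a collection tool quietly 'make' the data a researcher analyses.

# Invented sample standing in for whatever an archive would return.
RAW_ARCHIVE = [
    {"user": "a", "text": "#tweetorial is great", "likes": 2, "is_retweet": False},
    {"user": "b", "text": "RT: #tweetorial is great", "likes": 0, "is_retweet": True},
    {"user": "c", "text": "lurking and reading this week", "likes": 5, "is_retweet": False},
]

def collect(query, max_results=100, include_retweets=False):
    """Stand-in for an archiving API call. Every default here is a decision:
    the cap on results, the exclusion of retweets, the fields that are kept."""
    tweets = [t for t in RAW_ARCHIVE
              if query in t["text"] and (include_retweets or not t["is_retweet"])]
    # Only a few fields survive collection; the conversation around each tweet,
    # and everyone who read without posting, are already invisible here.
    return [{"user": t["user"], "likes": t["likes"]} for t in tweets[:max_results]]

# Two 'datasets' of the same event, differing only in collection parameters.
print(collect("#tweetorial"))                          # retweets silently dropped
print(collect("#tweetorial", include_retweets=True))   # a different picture
```

Run twice with different parameters, the ‘same’ event produces two different datasets – and the person who read without tweeting never appears in either.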

Lifestream, Diigo: Three challenges for the web, according to its inventor – World Wide Web Foundation

Tim Berners-Lee calls for greater algorithmic transparency and personal data control.
from Diigo http://ift.tt/2ncWlj9
via IFTTT

 

I almost forgot to add some ‘meta-data’ to this one!

Who can believe the web is 28 years old? In this open letter, Tim Berners-Lee voices three concerns for the web, all connected to algorithms:

  1. We’ve lost control of our personal data
  2. It’s too easy for misinformation to spread on the web
  3. Political advertising online needs transparency and understanding

In terms of (1), Berners-Lee calls for data to be placed back into the hands of web users and for greater algorithm transparency, while encouraging us to fight against excessive surveillance laws.

In terms of personal data control, I wonder what the potential of Finland’s proposed MyData system is:

 

MyData Nordic Model

Transparency of algorithms also applies to (2) – but I also think that web users have to be more proactive in questioning what they find (are given) on the web, and that within the teaching of information and media literacy there needs to be greater focus in schools on questioning claims and information rather than sources per se. Berners-Lee additionally calls for greater pressure to be placed on major aggregators such as Google and Facebook, as gatekeepers, to take responsibility for stopping the spread of fake news, while warning against any singular, central arbiter of ‘truth’. Where does responsibility lie for misleading information, clickbait and so on? While I agree that aggregators need to take responsibility, the problem seems to be connected to the underlying economic model: as long as there is money to be made from ‘clicks’, fraudulent and sensationalist ‘news’ will continue to be created, and the quality of journalism will be weakened. I don’t have any long-term solutions – but perhaps in the short term taking personal responsibility for diversifying the channels through which we search for, receive (and distribute!) information is a start, along with simple actions towards protecting some of our data (logging out, using browsers like Tor, not relying exclusively on Google, for example).

Lifestream, Liked on YouTube: Bias? In My Algorithms? A Facebook News Story

via YouTube

In this video from September 2016, Mike Rugnetta responds to concerns about Facebook which arose in 2016:

  1. May 2016: reports of Facebook suppressing conservative views
  2. August 2016: editorial/news staff replaced with an algorithm

He asks, primarily, why we expect Facebook to be unbiased, given that any news source is subject to editorial partiality. He connects Facebook’s move to separate itself from its editorial role through the employment of algorithms to ‘mathwashing’ (Fred Benenson): the use of mathematical terms such as ‘algorithm’ to imply objectivity and impartiality, and the assumption that computers do not have bias, despite being programmed by humans with bias and being reliant on data… with bias.
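A tiny, invented illustration of what mathwashing conceals (Python; the headlines and numbers below are made up): the ranking code contains no opinions at all – it just sorts by a count – yet its output depends entirely on which signal was chosen and on engagement data that already carries human bias.

```python
# An invented illustration of 'mathwashing': the ranking below contains no
# explicit opinions -- it just sorts by a count -- yet its output inherits
# whatever biases shaped those counts (who is on the platform, what gets shared).

# Invented engagement counts; in reality these come from user activity that is
# itself skewed by the platform's existing audience and incentives.
stories = [
    {"headline": "Outrage-bait rumour", "shares": 9400},
    {"headline": "Careful fact-check of the rumour", "shares": 310},
    {"headline": "Local council budget report", "shares": 45},
]

def trending(items, top_n=2):
    """'Neutral' maths: sort by shares, keep the top N. No editor in sight --
    but choosing shares as the signal, and N as the cut-off, are editorial
    decisions, and the share counts themselves carry the audience's biases."""
    return sorted(items, key=lambda s: s["shares"], reverse=True)[:top_n]

for story in trending(stories):
    print(story["headline"])
# The rumour 'wins' not because anyone chose it, but because the metric did.
```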

Facebook’s sacking of its human team and its move to rely on algorithms is demonstrative of one of Gillespie’s assertions, except that in Facebook’s case a reputation for neutrality was sought through the reputation of algorithms in general:

The careful articulation of an algorithm as impartial (even when that characterization is more obfuscation than explanation) certifies it as a reliable sociotechnical actor, lends its results relevance and credibility, and maintains the provider’s apparent neutrality in the face of the millions of evaluations it makes.

(2012, p. 13 of the linked-to PDF)

In the video, Rugnetta suggests there’s a need to abandon the myth of algorithmic neutrality. True – but we also need greater transparency. With so much information available, we need some kind of sorting mechanism, and we also need to know (and be able to tweak) the criteria if we are to be in control of our civic participation.