Lifestream, Diigo: The need for algorithmic literacy, transparency and oversight grows | Pew Research Center

from Diigo http://ift.tt/2loZnPJ
via IFTTT


I posted a link to the complete Pew Research Report (Code-Dependent: Pros and Cons of the Algorithm Age) a few weeks back (March 11). This week, while thinking about my final assignment for Education and Digital Cultures, I returned to Theme 7: The need grows for algorithmic literacy, transparency and oversight.

While the respondents make a number of interesting and important points about concerns that need to be addressed at a societal level – for example, managing the accountability (or the dissolution thereof) and transparency of algorithms, and building checks and balances into the centralised execution of bureaucratic reason that algorithms enable – there were also points raised that need to be addressed at an educational level. Specifically, Justin Reich of the MIT Teaching Systems Lab suggests that ‘those who design algorithms should be trained in ethics’, and Glenn Ricart argues that people need to understand how algorithms affect them, and to be able to personalize the algorithms they use.

In the longer term, Reich’s point doesn’t seem limited to those studying computer science: if, as predicted elsewhere in the same report (theme 1), algorithms continue to spread, more individuals will presumably be involved in creating them as a routine part of their profession, rather than their creation being reserved for computer scientists and programmers. Moreover, as computer science is ‘rolled out’ in primary and secondary schools, it makes sense for the study of related ethics to be part of the curriculum at those levels too.

Ricart’s argument implies, in the first instance, that algorithmic literacy needs to be integrated into more general literacy/digital literacy instruction, and in the second, that all students will need to develop computational thinking and the ability to modify algorithms through code – unless black-boxed toolkits enable people to do this without coding per se, in the same way that Weebly enables people to build websites without writing code. A toy sketch of what such parameter-level personalization could look like follows below.
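To make Ricart’s second point concrete, here is a minimal, entirely hypothetical sketch (in Python) of what exposing an algorithm’s ‘dials’ to its users might look like. The toy feed-ranking function, its features and its weights are my own inventions for illustration, not any platform’s actual method:

```python
from dataclasses import dataclass

@dataclass
class Post:
    recency: float       # 0..1, newer posts score higher
    friend_score: float  # 0..1, closeness to the post's author
    popularity: float    # 0..1, likes/shares, normalised

# Default weights chosen by the (hypothetical) platform: the 'black box'.
DEFAULT_WEIGHTS = {"recency": 0.2, "friend_score": 0.3, "popularity": 0.5}

def rank_feed(posts, weights=DEFAULT_WEIGHTS):
    """Order posts by a weighted sum of their features."""
    def score(p):
        return (weights["recency"] * p.recency
                + weights["friend_score"] * p.friend_score
                + weights["popularity"] * p.popularity)
    return sorted(posts, key=score, reverse=True)

# A user who distrusts popularity metrics could 'personalize' the same
# algorithm by re-weighting it - if the platform exposed the dials:
my_weights = {"recency": 0.6, "friend_score": 0.4, "popularity": 0.0}
posts = [Post(recency=0.9, friend_score=0.1, popularity=0.8),
         Post(recency=0.4, friend_score=0.9, popularity=0.2)]
print(rank_feed(posts, my_weights))
```

Whether that personalization happens through code or through a Weebly-style interface built on top of it, the literacy requirement is the same: knowing the dials exist, and what turning them does.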

 

Lifestream, Diigo: eLearnit

from Diigo http://ift.tt/2oPaLWF
via IFTTT

 


I’ve been a little distracted the last couple of days, as I’m presenting the paper I wrote for my final assignment for Digital Education in Global Contexts (Semester B, 2015-16) at a conference today. To be fair, much of the conference seems focused on the promise technology is perceived to hold for education (I’m thinking of Siân Bayne’s 2015 inaugural lecture, The Trouble with Digital Education, 8:20), and I’m not certain that my paper will be of great interest to the audience – but it is, nonetheless, a little nerve-wracking. Having spent the day overthinking it, no doubt I’ll be summarising week 11’s lifestream and adding metadata later tonight.

Snip from the conference programme

Lifestream, Diigo: The Future is Now – Sep 30, 2009

from Diigo http://ift.tt/2oL6xiZ
via IFTTT

The Future is Now: Diegetic Prototypes and the Role of Popular Films in Generating Real-world Technological Development

Social Studies of Science. Vol 40, Issue 1, 2010


Another exploration in pursuit of the idea of ‘imaginaries’ and of how these fictions play a generative role in the culture of technology – and specifically (my interest, rather than the article’s) in education.

ABSTRACT: Scholarship in the history and sociology of technology has convincingly demonstrated that technological development is not inevitable, pre-destined or linear. In this paper I show how the creators of popular films – including science consultants – construct cinematic representations of technological possibilities as a means by which to overcome these obstacles and stimulate a desire in audiences to see potential technologies become realities. This paper focuses specifically on the production process in order to show how entertainment producers construct cinematic scenarios with an eye towards generating real-world funding opportunities and the ability to construct real-life prototypes. I introduce the term ‘diegetic prototypes’ to account for the ways in which cinematic depictions of future technologies demonstrate to large public audiences a technology’s need, viability and benevolence. Entertainment producers create diegetic prototypes by influencing dialogue, plot rationalizations, character interactions and narrative structure. These technologies only exist in the fictional world – what film scholars call the diegesis – but they exist as fully functioning objects in that world. The essay builds upon previous work on the notion of prototypes as ‘performative artefacts’. The performative aspects of prototypes are especially evident in diegetic prototypes because a film’s narrative structure contextualizes technologies within the social sphere. Technological objects in cinema are at once both completely artificial – all aspects of their depiction are controlled in production – and normalized within the text as practical objects that function properly and which people actually use as everyday objects.


At this point, I have to be totally honest and admit I haven’t got round to reading this yet. It looks as though it could shed light on the intricacies of how fictions influence reality – of how imaginaries can work as construction tools. I hope to find time to read it more closely this week, but it’s a busy, busy week…

Lifestream, Diigo: What Do Metrics Want? How Quantification Prescribes Social Interaction on Facebook : Computational Culture

from Diigo http://ift.tt/1GHBZTO
via IFTTT

Benjamin Grosser, 9th November 2014

Excerpt:

“The Facebook interface is filled with numbers that count users’ friends, comments, and “likes.” By combining theories of agency in artworks and materials with a software studies analysis of quantifications in the Facebook interface, this paper examines how these metrics prescribe sociality within the site’s online social network.”


More on the complexities of interwoven agency, and further ‘proof’ that digital technologies are not separate from social practices.

Lifestream, Diigo: Undermining ‘data’: A critical examination of a core term in scientific inquiry | Markham | First Monday

“The term ‘data’ functions as a powerful frame for discourse about how knowledge is derived and privileges certain ways of knowing over others. Through its ambiguity, the term can foster a self–perpetuating sensibility that ‘data’ is incontrovertible, something to question the meaning or the veracity of, but not the existence of. This article critically examines the concept of ‘data’ within larger questions of research method and frameworks for scientific inquiry. The current dominance of the term ‘data’ and ‘big data’ in discussions of scientific inquiry as well as everyday advertising focuses our attention on only certain aspects of the research process. The author suggests deliberately decentering the term, to explore nuanced frames for describing the materials, processes, and goals of inquiry.”
from Diigo http://ift.tt/2mOzpW3
via IFTTT


Another great read this week – Markham (2013) suggests ‘data’ acts as a frame through which we interpret and make sense of our social world. However, she adds, “the interesting thing about frames, as social psychologist Goffman (1974) noted, is that they draw our attention to certain things and obscure other things.” Through persistent framings, particular ways of interpreting the world are naturalised, and the frame itself becomes invisible. Such is the case with ‘data’, a frame Markham views as having transformed our sense of what it means to be in the 21st century, when experience is digitalised and “collapsed into collectable data points”. These data points are, however, abstractions, which can be reductive, obscuring rather than revealing:

“From a qualitative perspective, ‘data’ poorly capture the sensation of a conversation or a moment in context.”

Certainly, this is reflected in my experience of the Tweet Archivist data analysis of our tweetorial last week. Accordingly, I particularly enjoyed Markham’s call to embrace complexity, and to reframe the practice of inquiry as one of “sense–making rather than discovering or finding or attempting to classify in a reductionist sense.”

“the complexity of twenty–first century culture requires finding perspectives that challenge taken for granted methods for studying the social in a digital epoch. Contributing to an infrastructure of knowledge that does not reduce or simplify experience requires us to acknowledge and scrutinize, as part of our methods, the ways in which data is being generated (we are generating data) in ways we may not notice. Changing the frame from one that is overly–focused on ‘data’ can help us explore the ways our research exists as a continual, dialogic, messy, entangled, and inventive process when it occurs outside the walls of the academy, the covers of books, and the written word.” 

Markham also writes of another strategy for reframing research: as a generative process achieved through collaborative remix. Here, the focus is on interpretation and sense-making rather than on findings per se:

“Using remix as a lens for thinking about research is intended to destabilize both the process and products of inquiry, but not toward the end of chaos or “anything goes.” The idea of remix simply refocuses energy toward meaning versus method; engagement versus objectivity; interpretation versus findings; argument versus explanation. In all of this, data is certainly available, present, and important, but it takes a secondary role to sense–making.”

I thought it apt to include comment on that part of Markham’s paper here, both because of remix’s position within our last block, in relation to notions of community cultures, and because it speaks, in a sense, to ‘new’, more experimental forms of authorship, which have been a focus of the course.

Lifestream, Diigo: A critical reflection on Big Data: Considering APIs, researchers and tools as data makers | Vis | First Monday

“This paper looks at how data is ‘made’, by whom and how. Rather than assuming data already exists ‘out there’, waiting to simply be recovered and turned into findings, the paper examines how data is co–produced through dynamic research intersections. A particular focus is the intersections between the application programming interface (API), the researcher collecting the data as well as the tools used to process it. In light of this, this paper offers three new ways to define and think about Big Data and proposes a series of practical suggestions for making data.”
from Diigo http://ift.tt/2aFY3FC
via IFTTT


Several points from this paper seem relevant this week.

  1. The tools we use when researching ‘limit the possibilities of the data that can be seen and made. Tools then take on a kind of data-making agency.’ I wonder what influence the Tweet Archivist API has on my sense-making of our data (see the sketch after this list).
  2. ‘Data are always selected in particular ways’: some data are made more visible than others, and the most visible doesn’t necessarily align with, or take into account, what was most valued by and meaningful to users. ‘It is important to remember that what you see is framed by what you are able to see or indeed want to see from within a specific ideological framework.’ What did we value most in our tweetorial (obviously different things for different folks)? We still need to construct research questions that focus on the things most important to us, even if the data are less readily available.
  3. ‘Visibility can be instrumentalised in different ways, depending on the interests of those seeking to make something visible. Visibility can be useful as a means of control, it can be commercially exploited, or it can be sold to others who can exploit it in turn.’ How are we exploiting visibility in education?
  4. The monetisation of data – or its being made valuable in other ways – makes the data itself unreliable. Helen suggests this in her blog post, where she muses that, had she known which aspects of our behaviour in the tweetorial were being analysed, she might have ‘gamed it’.
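To illustrate point 1: the shape of ‘the data’ is fixed at the moment of the API call. The endpoint, parameters and field names below are hypothetical – this is not Tweet Archivist’s actual interface – but the principle holds for any API:

```python
import requests

# Hypothetical archive endpoint - illustrative only, NOT the real
# Tweet Archivist API.
ARCHIVE_URL = "https://api.example.com/archive/tweets"

params = {
    "q": "#tweetorial",                   # placeholder hashtag
    "fields": "text,user,retweet_count",  # only these fields come back
}
response = requests.get(ARCHIVE_URL, params=params, timeout=10)
tweets = response.json()

# Whatever isn't in 'fields' - deleted tweets, reply chains, the people
# who read but never posted - simply does not exist in this dataset.
# The query, not the tweetorial itself, defines 'the data'.
most_retweeted = sorted(tweets, key=lambda t: t["retweet_count"],
                        reverse=True)
```

The tool’s data-making agency sits right there in the `fields` parameter: change it, and you have made different data.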

Lifestream, Diigo: Three challenges for the web, according to its inventor – World Wide Web Foundation

Tim Berners-Lee calls for greater algorithmic transparency and personal data control.
from Diigo http://ift.tt/2ncWlj9
via IFTTT

 

I almost forgot to add some ‘meta-data’ to this one!

Who can believe the web is 28 years old? In this open letter, Tim Berners-Lee voices three concerns for the web, all connected to algorithms:

1) We’ve lost control of our personal data
2) It’s too easy for misinformation to spread on the web
3) Political advertising online needs transparency and understanding

In terms of (1), Berners-Lee calls for data to be placed back into the hands of web users and for greater algorithmic transparency, while encouraging us to fight against excessive surveillance laws.

In terms of personal data control, I wonder what the potential of Finland’s proposed MyData system is:

MyData Nordic Model

Transparency of algorithms also applies to (2) – but web users also have to be more proactive in questioning what they find (or are given) on the web, and schools need to place greater focus on questioning claims and information, rather than sources per se, within the teaching of information and media literacy. Berners-Lee additionally calls for greater pressure to be placed on major aggregators such as Google and Facebook, as gatekeepers with a responsibility to stop the spread of fake news, while warning against any singular, central arbiter of ‘truth’.

Where does responsibility lie for misleading information, clickbait and so on? While I agree that aggregators need to take responsibility, the problem seems rooted in the underlying economic model: as long as there is money to be made from ‘clicks’, fraudulent and sensationalist ‘news’ will continue to be created, and the quality of journalism will be weakened. I don’t have any long-term solutions – but perhaps in the short term taking personal responsibility for diversifying the channels through which we search for, receive (and distribute!) information is a start, along with simple actions to protect some of our data (logging out, using browsers like Tor, and not relying exclusively on Google, for example).

Lifestream, Diigo: Predictions, Probabilities, and Placebos | Confessions of a Community College Dean

Concerns about predictive analytics – do they introduce ‘stereotype threat’, in which learning that “people like me aren’t good at x” has an affective impact on performance? Steele, quoted in the article, suggests that awareness of negative stereotypes diverts cognitive resources. In this sense, the author (Matt Reed) contends that predictive analytics have the potential to recreate existing economic gaps.

I would say it works from the other side too: teachers who know a student has a bad behavioural or ‘performance’ record often treat them differently, as though they are already a problem.

Reed proposes that we may have ‘a positive duty to withhold data that would do active harm’. That sounds fair on the one hand – but the option of administering a ‘statistical placebo’ leaves me uncomfortable. We don’t all respond to information in the same manner; for some students the negative predictions might be valuable. Should students have a right to the predictions?

In a follow-up article, Inside Digital Learning asked the leaders of predictive analytics companies for responses. Key points/quotes included:

  • It’s not about what the information is, it’s about how you deliver it (i.e. with support [which, as Eynon, 2013, p. 238 notes, has financial implications for providers], and by talking through a student’s options);
  • The type of data you share matters: “It’s not a matter of whether you should share predictive data with students or not, it’s a matter of sharing data they can act on,” says Dave Jarrat of InsideTrack (i.e. being told that you’re likely to fail or discontinue your studies isn’t useful; you need to know what students who were in your position, and succeeded, did) – the toy sketch after this list contrasts the two framings;
  • Individual responses to data need to be taken into consideration.
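Jarrat’s distinction is, at bottom, a design decision about framing. Below is a hypothetical sketch of the contrast; the thresholds, messages and ‘peer actions’ are all invented for illustration and reflect no vendor’s actual system:

```python
def raw_prediction(risk: float) -> str:
    """The framing Jarrat warns against: a bare probability."""
    return f"Our model predicts a {risk:.0%} chance you will not complete this course."

def actionable_message(risk: float, peer_actions: list[str]) -> str:
    """A possible reframing: pair the prediction with what similar
    students who went on to succeed actually did."""
    if risk < 0.3:  # arbitrary illustrative threshold
        return "You're on track - keep doing what you're doing."
    steps = "; ".join(peer_actions)
    return ("Students who started where you are and succeeded "
            f"typically did the following: {steps}.")

print(raw_prediction(0.7))
print(actionable_message(0.7, ["met a tutor in week 2",
                               "joined a study group",
                               "switched to a part-time load"]))
```

The prediction is identical in both cases; only its delivery changes – which is rather the point.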

Whether or not the responses led me to a personal stance, they speak very loudly of the learnification of education.

from Diigo http://ift.tt/2klcMw2
via IFTTT

 

Lifestream, Diigo: Digital materiality? How artifacts without matter, matter | Leonardi | First Monday

Leonardi (2010) provides clear, well-illustrated descriptions of materiality (relevant to ‘digital materiality’) using three different definitions of ‘material’:
(1) Material as related to physical substance
(2) Material as the practical instantiation of theory
(3) Material as ‘significant’

Through these ways of viewing materiality, and particularly through the latter two definitions, researchers gain a way of framing and understanding the role of digital technologies-in-practice.
“These alternative, relational definitions move materiality ‘out of the artifact’ and into the space of interaction between people and artifacts. No matter whether those artifacts are physical or digital, their ‘materiality’ is determined, to a substantial degree, by when, how, and why they are used. These definitions imply that materiality is not a property of artifacts, but a product of the relationships between artifacts and the people who produce and consume them.”

from Diigo http://ift.tt/28PkUdt
via IFTTT