Lifestream, Pocket, ‘Future Visions’ anthology brings together science fiction – and science fact

Excerpt:

To the casual observer, the kind of technological breakthroughs Microsoft researchers make may seem to be out of this world.

via Pocket http://ift.tt/2nKVDcX


I came across this collection of short science fiction stories from Microsoft. I hate that I like it (I still haven’t forgiven Gates for 1995’s shenanigans with Netscape and others – and for, well, breaking the ethos of the Internet), but it seems like a ‘page turner’. I’ve only read half of the first story, mind, as the book is not available to me in iTunes locally, and Amazon suggests it does not deliver to my region despite it being an e-book. I could use a shop-and-ship address, but it’s kind of annoying that it isn’t simply available as a PDF – that, combined with my and Bill’s ‘history’, was enough to put me off for now.

One thing I did think about from the first half of the first story, in which translation and natural language processing have reached the point of being able to translate signing into spoken language and spoken language to text in real time, is that while we herald the benefits of technology for differently abled people, we also ignore what it could mean for communities like the Deaf community, and cultures like Deaf culture. I’m not really qualified to speak on it myself, but I’d be interested in hearing the perspectives of people from within the Deaf community.

Lifestream, Pocket, The Best Way to Predict the Future is to Issue a Press Release

Excerpt:

This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology. The slides are also available here. Thank you very much for inviting me here to speak today.

via Pocket http://ift.tt/2fF4PPI


I started out by trying to grab a few select quotes from this talk that Watters delivered at Virginia Commonwealth University in November 2016, but it is pretty much all gold. She writes about how the stories we tell – or are told – about technology and educational technology direct the future, and she asks how these stories affect decision-making within education:

Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

..

..to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

Watters’ interrogation of future stories – stories by Gartner, by the Horizon Report, by Sebastian Thrun, and others – demonstrates that these stories tell us much more about the kind of future the story-tellers want than about the future per se. This matters, Watters suggests, because these stories are used to ‘define, disrupt, [and] destabilize’ our institutions:

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

It’s a powerful read – and connected to the idea I want to pursue in my final assignment. I’m interested in seeing if there are different stories being told to different segments of the population, and trying to imagine what the consequences of that different imagining might be.

Lifestream, Pocket, Algorithms in the news–why digital media literacy matters

Excerpt:

Much of our work on Code Acts in Education over the past few years has focused on the work that algorithms do (and what they are made to do and by who) in relation to learning, policy and practice. But the work of algorithms extends far beyond education of course.

Ben Williamson

via Pocket http://ift.tt/2hLkgE5

Ben Williamson, while acknowledging the influence of algorithms on his own search results, performed inurl: searches for ‘algorithms’ within major UK news websites (example queries of the form he describes are sketched after the list below). The short-form results (all quoted):

  • The Guardian’s editorial line is to treat the algorithm as a governor;
  • The Telegraph treats the algorithm as a useful scientist whose expertise is helping society;
  • The Sun is largely disinterested in algorithms in terms of newsworthiness;
  • the editorial line of The Mirror is to treat algorithms in terms of brainy expertise;
  • Algorithms as problem-solvers might be one way of categorizing its [The Daily Mail’s] editorial line*

*Based on an initial search. An hour later Williamson repeated the search, and received different results. “The Daily Mail is certainly not disinterested in algorithms–the result returns are pretty high compared to the tabloids, and the Mail does frequently re-post scientific content from sources like The Conversation–but by no means does it adopt the kind of critical line found in The Guardian.”
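To make the method concrete, these are the kind of inurl: queries involved. The query strings and outlet domains below are my own illustration of the approach, not taken from Williamson’s post:

```python
# Illustrative only: building inurl: queries of the kind Williamson describes,
# scoping a search for "algorithm" to each outlet's URLs. The domains and the
# exact query form are my assumptions, not taken from his post.
outlets = [
    "theguardian.com",
    "telegraph.co.uk",
    "thesun.co.uk",
    "mirror.co.uk",
    "dailymail.co.uk",
]

queries = [f"algorithm inurl:{domain}" for domain in outlets]

for q in queries:
    print(q)  # each query is then run by hand in the search engine
```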


My concerns about algorithms are related to governance, and I read The Guardian.. Do I read The Guardian because it (more than the other publications given) matches my worldview, or do I think the way I do because of the publications (like The Guardian) that I read? Or was I initially attracted to The Guardian because of its similarity to my worldview, while now my worldview is influenced by the fact that I read The Guardian – its initial similarity to my worldview perhaps allowing some things to slip beneath my questioning ‘truth’ radar?

Fascinating work – it makes me wonder, is there a website that presents diverse viewpoints on topics and events using inurl: searches? That is, one that monitors news sites, feeds in content from diverse sources organised by topic or event, uses humans to add new topics/events as they occur, and has an editorial team to summarise the editorial positions of the publications represented on specific topics? Would such a site help combat political polarisation and divisiveness?

Also.. how can we teach ‘algorithmic literacy’? Can we? When do we start? Would a site which unpacked what this could look like, and offered teaching ideas and a place for discussion be of use? [Assignment ideas..]

Lifestream, Pocket, Imaginaries and materialities of education data science

Excerpt:

Ben Williamson

This is a talk I presented at the Nordic Educational Research Association conference at Aalborg University, Copenhagen, on 23 March 2017.

Education is currently being reimagined for the future. In 2016, the online educational technology magazine Bright featured a series of artistic visions of the future of education. One of them, by the artist Tim Beckhardt, imagined a vast new ‘Ocunet’ system.

via Pocket http://ift.tt/2n8CT5W


I found this post after reading Knox’s (2014) post on interpreting analytics in the same blog space. What would we call that? Searching laterally? Something which was, at the time, really frustrating in DEGC was that we were always given links to journal home pages rather than to the specific article we were reading. While I seem to recall this being connected to copyright and appropriate practice, it was frustrating because none of the links were set to open in a new window/tab by default; unless one right-clicked to open one, one then had to go back to the original page to find out which issue one was looking for.. I’ve subsequently reflected (repeatedly!) on how it made me much more aware of the types of ‘publications’ and their respective content, and perhaps as a result my ‘lateral searching’ has increased. It’s not a new practice, of course, but an addictive one nonetheless, and it’s always good to find a ‘treasure trove’ of good reads.

I’m getting tangential, though – what caught my eye about this post, in particular, was the focus on ‘imaginaries’, and the ways in which such ‘imaginaries’, or fictions, play a role in the creation of future reality. Williamson writes,

..what I’m trying to suggest here is that new ways of imagining education through big data appear to mean that such practices of algorithmic governance could emerge, with various actions of schools, teachers and students all subjected to data-based forms of surveillance acted upon via computer systems.

Importantly too, imaginaries don’t always remain imaginary. Sheila Jasanoff has described ‘sociotechnical imaginaries’ as models of the social and technical future that might be realized and materialized through technical invention. Imaginaries can originate in the visions of single individuals or small groups, she argues, but gather momentum through exercises of power to enter into the material conditions and practices of social life. So in this sense, sociotechnical imaginaries can be understood as catalysts for the material conditions in which we may live and learn.

The post has a lot more in it, focusing on how the imaginaries of ‘education data science’, combined with affective computing and cognitive computing, are leading to a new kind of ‘algorithmic governance’ within education. Frightening stuff, to be frank.

What I’m really interested in is the role of these ‘imaginaries’, though: how do fictions, frequently corporate fictions, work their influence? Which previous imaginaries, captured in science fiction, can we trace – along with their reception over time – to present-day materialities?

And why are ‘the people’ so passive? Why isn’t there shouting about imaginaries being presented as inevitable? Why isn’t there protest? A rant: “Uh – you want to put a camera on my kid’s head, to tell me how she’s feeling? Have you thought about asking her? You want to produce data for parents? How about, as a society, ‘just’ recognising the value of non-working lives and giving people enough time to spend with their kids while they’re trying to pay rent or a mortgage?”

It would make an interesting study – perhaps too large for our EDC final assignment, but I’m wondering how it could be scaled back.

 

 

Lifestream, Pocket, Society-in-the-Loop

Excerpt:

MIT Media Lab director Joi Ito recently published a thoughtful essay titled “Society-in-the-Loop Artificial Intelligence,” and has kindly credited me with coining the term.

via Pocket http://ift.tt/2b2VVH5


I came across this short blog post when I was still thinking about the need for some kind of collective agency or reflexivity in our interactions with algorithms, rather than just individualised agency and disconnected acts (in relation to Matias’ 2017 experiment with /r/worldnews – mentioned here and here in my Lifestream blog).

…‘society in the loop’ is a scaled up version of an old idea that puts the ‘human in the loop’ (HITL) of automated systems…

What happens when an AI system does not serve a narrow, well-defined function, but a broad function with wide societal implications? Consider an AI algorithm that controls billions of self-driving cars; or a set of news filtering algorithms that influence the political beliefs and preferences of billions of citizens; or algorithms that mediate the allocation of resources and labor in an entire economy. What is the HITL equivalent of these governance algorithms? This is where we make the qualitative shift from HITL to society in the loop (SITL).

While HITL AI is about embedding the judgment of individual humans or groups in the optimization of narrowly defined AI systems, SITL is about embedding the judgment of society, as a whole, in the algorithmic governance of societal outcomes.

(Rahwan, 2016)

Putting society in the loop of algorithmic governance (Rahwan, 2016)

Rahwan alludes to the co-evolution of values and technology – an important point that we keep returning to in #mscedc: we are not simply done unto by technology, nor do we simply do unto it. Going forward (and this is a point Rahwan makes), it seems imperative that we develop ways of articulating human values that machines can understand, and systems for evaluating algorithmic behaviours against those articulated values. On a global scale this is clearly going to be tricky: to whom is an algorithmic contract accountable, and how is it to be enforced outside the boundaries of established governance (across countries, for example)? Or, in acting ethically (for instance, within institutional adoption of learning analytics), is it simply the responsibility of those who employ algorithms to be accountable to the society they affect?

Lifestream, Pocket, Persuading Algorithms With an AI Nudge

Excerpt:

Readers of /r/worldnews on reddit often report tabloid news to the volunteer moderators, asking them to ban tabloids for their sensationalized articles. Embellished stories catch people’s eyes, attract controversy, and get noticed by reddit’s ranking algorithms, which spread them even further.

via Pocket http://ift.tt/2k0DN3H

Full results


In this experiment, tabloid news articles posted to /r/worldnews were randomly assigned one of the following:

  1. no sticky comment (control)
  2. sticky comment encouraging fact-checking
  3. sticky comment encouraging fact-checking and downvoting of unreliable articles.
Figure 1: sticky comment encouraging fact-checking. Matias (2017).
Figure 2: sticky comment encouraging fact checking and downvoting. Matias (2017).
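As a minimal sketch of the design (my own illustration, not Matias’ actual code – the condition names and the simple assignment mechanism here are assumptions), each incoming tabloid submission is randomised into one of the three arms:

```python
import random

# Illustrative sketch of the randomised design, not Matias' actual code:
# each new tabloid submission is assigned to one of three experimental arms.
CONDITIONS = [
    "control",                     # no sticky comment
    "sticky_fact_check",           # sticky comment encouraging fact-checking
    "sticky_fact_check_downvote",  # sticky encouraging fact-checking + downvoting
]

def assign_condition(submission_id: str) -> str:
    """Randomly assign a tabloid submission to one experimental condition."""
    return random.choice(CONDITIONS)

# Example: assignments for a handful of hypothetical submission IDs.
for sid in ["t3_abc1", "t3_abc2", "t3_abc3"]:
    print(sid, assign_condition(sid))
```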

Results

Changes in human behaviour

Both sticky comments increased the chance that comments on an article contained links (comments were 1.28% more likely to include at least one link under the sticky encouraging scepticism, and 1.47% more likely under the sticky encouraging scepticism and downvoting). These figures reflect the effect on individual comments – the increase in evidence-bearing comments per post is much higher:

“Within discussions of tabloid submissions on r/worldnews, encouraging skeptical links increases the incidence rate of link-bearing comments by 201% on average, and the sticky encouraging skepticism and discerning downvotes increases the incidence rate by 203% on average.”

Changes in algorithmic behaviour

Reddit posts receive an algorithmic ‘score’, which influences whether the post is promoted or not.

“On average, sticky comments encouraging fact-checking caused tabloid submissions to receive [scores] 50.9% lower than submissions with no sticky comment, an effect that is statistically-significant. Where sticky comments include an added encouragement to downvote, I did not find a statistically-significant effect.”
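For a sense of what this ‘score’ involves, below is a simplified sketch based on the ranking formula Reddit open-sourced: net votes are log-scaled and combined with a time bonus, so newer and more-upvoted posts rank higher. This is only indicative – the live ranking differs, and, as discussed below, Reddit changed its algorithm mid-experiment.

```python
from datetime import datetime, timezone
from math import log10

# Simplified sketch of the "hot" ranking Reddit has open-sourced; the production
# algorithm differs and was changed during the experiment described here.
REDDIT_EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot_score(ups: int, downs: int, posted: datetime) -> float:
    """Combine net votes (log-scaled) with a time bonus favouring newer posts."""
    s = ups - downs
    order = log10(max(abs(s), 1))
    sign = 1 if s > 0 else -1 if s < 0 else 0
    seconds = (posted - REDDIT_EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)

# Example: a post's score grows with net upvotes and with recency.
print(hot_score(100, 10, datetime.now(timezone.utc)))
```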

Why does this matter? And what does it have to do with learning analytics?

The experiment illustrates a complex entanglement of human and material agency. The author of the study had predicted that the sticky encouraging fact-checking would increase the algorithmic score of associated posts, on the basis that the Reddit score and HOT algorithm would respond to the changed commenting activity, or that other behaviours which do influence the score would shift along with the commenting behaviour. He also predicted that adding the encouragement to downvote would dampen this effect. However, mid-experiment Reddit updated their algorithm.

“Before the algorithm change, the effect of our sticky comments was exactly as we initially expected: encouraging fact-checking caused a 1111.6% increase in the score of a tabloid submission compared to no sticky comment. Furthermore, encouraging downvoting did dampen that effect, with the second sticky causing only a 453.26% increase in the score of a comment after 13,000 minutes.”

The observed outcomes show the difficulty of predicting both human and algorithmic responses, the dramatic impact that changes to an algorithm can have on outcomes, and the need to monitor those outcomes to ensure desired effects are maintained.

“Overall, this finding reminds us that in complex socio-technical systems like platforms, algorithms and behavior can change in ways that completely overturn patterns of behavior that have been established experimentally.”

Connecting this to learning analytics rather than algorithms more generally: when we use algorithms to ‘enhance’ education, particularly through ‘nudges’ aimed at improving student success, we need to be cognisant that behaviours don’t always change in the ways expected, and that the outcomes of behavioural changes can be ‘overwritten’ or cancelled out by algorithmic design.

 

Lifestream, Pocket, Abstracting Learning Analytics

Excerpt:

By Jeremy Knox

In his presentation at the second Code Acts seminar, Simon Buckingham-Shum raised important critical questions about Learning Analytics.

via Pocket http://ift.tt/2o1mtxh

In this blog post, Knox (2014) uses a trope from art to encourage a stepping away from representational logic in our critique of learning analytics. He contends that our attachment to such logic assumes that ‘a good learning analytics is a transparent one’, and obscures ‘the processes that have gone into the analysis itself’.

If we strive for learning analytics to be transparent, to depict with precise fidelity the real behaviours of our students, then we are working to hide the processes inherent to analysis itself.

In using the Russian propaganda poster, Knox comments, ‘The question is not whether Stalin lifted the child (the reality behind the image) but how and why the image itself was produced.’ I found this a really effective use of image and metaphor, so the post was useful both from the perspective of interrogating learning analytics and from that of thinking about how to integrate non-verbal modes into academic presentation. A great read.

Lifestream, Pocket, LGBT community anger over YouTube restrictions which make their videos invisible

Excerpt:

YouTube has responded to accusations of discrimination from high-profile members of its LGBT community, who have reported their videos being hidden by the platform.

via Pocket http://ift.tt/2mjlyvH


Another reminder that algorithms are driven by human decisions about how to categorise content, similar to the 2009 removal of all books with gay and lesbian themes from Amazon’s ranking lists. In this case, the biases of those managing the algorithms seem to be reinforcing existing prejudice, or, as Ellis puts it, reinforcing the sexualisation of gay and trans people and the rhetoric of sexual perversion to which they are subjected. One has to ask: in whose interests?

Lifestream, Pocket, Informing Pedagogical Action

Excerpt:

Informing Pedagogical Action

Aligning Learning Analytics With Learning Design

First published March 12, 2013

This article considers the developing field of learning analytics and argues that to move from small-scale practice to broad scale applicability, there is a need to establish a contextual framework that helps teachers interpret the information that analytics provides. The article presents learning design as a form of documentation of pedagogical intent that can provide the context for making sense of diverse sets of analytic data. We investigate one example of learning design to explore how broad categories of analytics—which we call checkpoint and process analytics—can inform the interpretation of outcomes from a learning design and facilitate pedagogical action.

via Pocket http://ift.tt/2nh0g1k


The basic premise of the article is:

Why do we need this framework?

To date, learning analytics studies have tended to focus on broad learning measures such as of student attrition (Arnold, 2010), sense of community and achievement (Fritz, 2011), and overall return on investment of implemented technologies (Norris, Baer, Leonard, Pugliese, & Lefrere, 2008). However, learning analytics also provides additional and more sophisticated measures of the student learning process that can assist teachers in designing, implementing, and revising courses (p. 1441)

Within learning design, research approaches such as focus group interviews are often used to inform the redesign of courses and learning activities. The authors suggest that using analytics overcomes the data inaccuracy that can be associated with focus-group-style research, which relies on self-reporting and participants’ accurate recollection of details. However, they note that interpreting LA data against pedagogical intention is challenging, and they propose a framework – ‘checkpoint and process analytics’ – for evaluating learning design.

Checkpoint and Process Analytics

In the proposed framework, two types of analytics (illustrated in the diagram above by circles and crosses in the final column) are utilised:

  1. Checkpoint analytics: “the snapshot data that indicate a student has met the prerequisites for learning by accessing the relevant resources of the learning design” (p. 1448). This type of data can be used during course delivery to ascertain whether learners have accessed the required materials and are progressing through the intended learning sequence, and to prompt ‘just in time’ support (reminders, encouragement) when learners have not engaged with a required step.
  2. Process analytics: “These data and analyses provide direct insight into learner information processing and knowledge application (Elias, 2011) within the tasks that the student completes as part of a learning design.” (p. 1448)

Again, these data could support interventions, for example during group work, if patterns of interaction diverge from those intended (unequal participation surfaced through social network visualisation, for instance).
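As a toy illustration of the checkpoint side of the framework (my own sketch, not from the article – the data structures and field names are hypothetical), a teacher-facing script might simply flag which required resources each student has not yet opened, so a ‘just in time’ reminder can be sent:

```python
# Hypothetical sketch of checkpoint analytics: flag students who have not yet
# accessed required resources so a 'just in time' reminder can be sent.
# Field names and data structures are illustrative, not from the article.
from dataclasses import dataclass

@dataclass
class AccessEvent:
    student_id: str
    resource_id: str

def missing_checkpoints(roster, required, events):
    """Return, per student, the required resources they have not yet opened."""
    opened = {student_id: set() for student_id in roster}
    for event in events:
        opened.setdefault(event.student_id, set()).add(event.resource_id)
    return {sid: required - seen for sid, seen in opened.items() if required - seen}

# Example usage with made-up data: 'bo' has opened nothing and would be nudged.
roster = ["alice", "bo"]
required = {"week1_reading", "week1_video"}
events = [AccessEvent("alice", "week1_reading"), AccessEvent("alice", "week1_video")]
print(missing_checkpoints(roster, required, events))
```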

My Reactions

On the one hand, this application of LA interests me because it puts LA into the work I do as a teacher, rather than at an institutional level. It feels more ‘real’ in that its focus is on pedagogy rather than the broad strokes of ‘student experience’. The institutional use of LA can sometimes seem to frame teachers as service providers and to reflect the commodification of education. In contrast, this application seems like a teaching tool (with the caveat that checkpoint analytics may be seen to adopt a transactional view of learning).

However, I’m cautious, because any plan to monitor and direct patterns of interaction is underpinned by assumptions about what effective learning looks like, and the ability to automate such monitoring and intervention through LA could enable blind adherence to a particular view of learning. Of course, even without LA we use such assumptions in our teaching: in the face-to-face classroom a teacher monitors group work and intervenes when students seem off task or are not communicating with each other as intended. Such interaction and engagement with tasks can (as the authors note) be more difficult to monitor in online learning, so LA could be a helpful tool for teachers online, informing task setup and the choice of technological tools.

In this sense, I would be very interested in utilising the analytics approach outlined. I would be much less interested in it being used as an evaluative tool of my teaching if, for example, it were based on a departmental ‘ruling’ about the types of interaction deemed to support learning, and much less interested again in it being used as part of student assessment, with students expected to conform to particular models of interaction in order to be ‘successful’ (see, for example, MacFarlane, 2015). As with all analytics and algorithms, the danger seems to lie in the application.

Lifestream, Pocket, Artificial intelligence is ripe for abuse, tech researcher warns: ‘a fascist’s dream’

Excerpt:

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI.

via Pocket http://ift.tt/2nwHZcF

 

“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

…With AI this type of discrimination can be masked in a black box of algorithms

Crawford’s comments, and those of the article’s author, Olivia Solon, echo Ben Williamson’s assertion (based on Seaver, 2014) that claims of objectivity and impartiality about algorithms ignore the reality that little black boxes are actually massive networked ones, with hundreds of hands reaching into them, tweaking and tuning.

Slide from Ben Williamson’s lecture, Calculating Academics: theorising the algorithmic organization of the digital university (2014)

 

Crawford goes further, however, in identifying the potential for algorithms and AI to be used by authoritarian regimes to target specific populations and centralise authority. Her concerns are similar to those of Tim Berners-Lee, which were included in my Lifestream last week. Where Berners-Lee calls for greater (individual, personal) control of our data and more transparency in online political advertising, Crawford calls for greater transparency and accountability within AI systems. Both, however, are responding to the same key point: algorithms and AI are not just social products; they also produce social effects. The same point is taken up by Knox (2015),

“..algorithms produce worlds rather than objectively account for them, and are considered as manifestations of power. Questions around what kind of individuals and societies are advantaged or excluded through algorithms become crucial here (Knox, 2015).”

and Williamson (2014, referring to Kitchin & Dodge, 2011):

Slide from Ben Williamson’s lecture, Calculating Academics: theorising the algorithmic organization of the digital university (2014)