End of Course Summary



The time has come to unplug from IFTTT and pool my Lifestream somewhere other than here. Before shunting the flow, a few moments of reflection.

While the Lifestream follows course content and its associated themes, it also reflects my own thinking and research practices, together with the influence of the ‘assignment’ on those practices. To clarify: because the assignment required feeds from diverse sources, my usual pathways through the Interwebs were diverted to Pinterest, YouTube and Flickr, sites that are less frequently part of my academic repertoire. Additionally, my usual ‘hold all’, Evernote, fed awkwardly into the Lifestream, so I found myself using different ‘tools’, Pocket and Diigo, for articles with and without images respectively. My activity on Twitter was also more prolific than usual. Each of these changes in my practices is, in a sense, representative of the assignment’s agency over me, an agency that derives its power from the salience of the student role for me, but in which I am nonetheless a co-agent. There were times, however, when the Lifestream was less representative of my engagement, when solitary acts went unrecorded. Inherent in ‘capturing’ learning in this way, or through learning analytics, is a tension: do we privilege the behaviours we see as most valuable, or does our ability to record particular behaviours result in an undervaluing of unobserved – but valuable – behaviours?


Although the course was divided into blocks – cybercultures, which included an exploration of transhumanist politics and an interrogation of what it means to be human; community cultures, which focused on ethnography, participatory literacies, community policing and digital labour; and algorithmic cultures, which unveiled concerns around bias and the need for accountability and transparency – a key theme persisted throughout all three. This was the blurring of boundaries, or entanglement, whether between dualisms such as mind/body and human/machine, between digital technologies and social practices, or between human and non-human agency.


Exploring the blurring of the human/machine dualism in cybercultures involved both acknowledging the increased plasticity and hyperreality of the body enabled by technology (Williams & Bendelow, 1998), and recognising that the body has always been a site of cultural activity and of the quest for social distinction (Bourdieu, 1984). As such, while the materiality of the posthuman body is constructed (Hayles, 1999), it always has been constructed – what is new is that digital technologies are now also co-creators of culture.

Similarly, in community cultures the influence of technological infrastructure (coupled with human agency such as pedagogical intent) on dialogue between participants became apparent. This is not to say that technology determines social practice, but rather that the two are co-constituent, each exerting pressure, along with the practices of commerce.


Algorithms too co-create, through a complex entanglement of human and non-human agency (Matias, 2017). At this point, the challenge seems to be to maintain and clarify human agency, both at an individual level, and at a societal level so that human values are not subsumed by myths of algorithmic objectivity and service to commercial gain.

Of course, these are simplifications, lacking many of the links to education that were evident in the Lifestream. However, perhaps what is needed in considering the relationship between digital education and culture is an understanding of the same entanglement and co-constituency – and a similar effort to deconstruct myths amid questions of agency and value, and predatory commercial interests.


Thanks Jeremy, James and peers – a tremendous 12 weeks indeed!


Lifestream, Tweets

I suspect this might be my last, or near to last, post before the final summary: it’s Friday afternoon, I’ll work on the summary tomorrow, and Sunday (the due date for the Lifestream) is always a crazy day at work for me. It feels really lovely to return to the beginning – back to cyberculture’s questions about what it means to be human – and at the same time to make connections to community (loosely) and to algorithms.

In the video, Ishiguro says that he started the project of building a robot in order to learn about humans. He asks what it really means to be human: which of the things we do are automated? Which could someone impersonating us do? When are we truly being ourselves, being creative? Erica (the robot) suggests that by automating the uninteresting and tedious aspects of life, robots can help us to focus on the creative aspects, and the other parts of life where we are truly ourselves. With AI being essentially the work of algorithms, this ties our first block (cyber cultures) to our third (algorithmic culture). Can algorithms allow us to be more human?

The video also asks: what is the structure that underlies human interaction? We can identify many ‘ingredients’ that play a role, but what does it take for a human to interact with a robot and feel that they are interacting with a being? This is where my – albeit loose – connections to community are drawn from. Last week Antigonish 2.0 told 4-word stories about what community means (#Antigonish2 #4wordstory – there were over 700 responses). How will robots understand all these diverse and valuable ways of being together? Maybe they don’t need to – maybe we can, in the spirit of Shinto, accept them as having their own type of ‘soul’, and accept automation of the mundane… if our economic system allows for and values our very *human* contributions.

Bring on Keynesian theory and the short working week…


Lifestream, Liked on YouTube: Not Enough AI | Daniela Rus

via YouTube


Daniela Rus’ presentation was interesting to watch in the context of having recently watched Audrey Watters’ presentation at Edinburgh on the automation of education. Rus doesn’t have the cynicism which Watters (justifiably) has. For example, she identifies an algorithm which is able to reduce the number of taxis required in New York City by 10,000 by redirecting drivers (if the public agrees to ride-share). While this could mean 10,000 job losses, Rus says that, with a new economic model, it doesn’t have to. She describes a different picture in which the algorithm could mean the same money for cab drivers but shorter shifts, with 10,000 fewer cars on the road producing less pollution. It’s a solution which is good for taxi drivers and good for society – but, like Watters, I fear that within capitalism there is little incentive for commercial entities to choose to value people or the environment over profits. Automation should, as Rus suggests in the presentation, take away the uninteresting and repetitive parts of jobs and enable a focus on the more ‘human’ aspects of work; instead, it can be used to deskill professions and push down wages. Her key takeaway is that machines, like humans, are neither necessarily good nor bad. For machines, it just depends on how we use them.


Lifestream, Pocket, ‘Future Visions’ anthology brings together science fiction – and science fact

Excerpt:

To the casual observer, the kind of technological breakthroughs Microsoft researchers make may seem to be out of this world.

via Pocket http://ift.tt/2nKVDcX


I came across this collection of short science fiction stories from Microsoft. I hate that I like it (I still haven’t forgiven Gates for 1995’s shenanigans with Netscape and others – and for, well, breaking the ethos of the Internet), but it seems like a ‘page turner’. I’ve only read half of the first story, mind, as the collection is not available to me in iTunes locally, and Amazon suggests it cannot be delivered to my region, despite it being an e-book. I could use a shop-and-ship address, but it’s kind of annoying that it isn’t simply available as a PDF – combined with my and Bill’s ‘history’, that was enough to put me off for now.

One thing I did think about, from the first half of the first story, in which translation and natural language processing have reached the point of being able to translate signing into spoken language and spoken language to text in real time, is that while we herald the benefits of technology for differently abled people, we also ignore what it could mean for communities like the Deaf community, and cultures like Deaf culture. I’m not really qualified to speak on it myself, but I’d be interested in hearing the perspectives of people from within the Deaf community.

Lifestream, Pocket, The Best Way to Predict the Future is to Issue a Press Release

Excerpt:

This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology. The slides are also available here. Thank you very much for inviting me here to speak today.

via Pocket http://ift.tt/2fF4PPI


I started out by trying to grab a few select quotes from this talk, which Watters delivered at Virginia Commonwealth University in November 2016, but it is pretty much all gold. She writes about how the stories we tell – or are told – about technology and educational technology direct the future, and asks how these stories affect decision making within education:

Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.

…

…to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.

Watters’ interrogation of future stories – stories by Gartner, by the Horizon Report, by Sebastian Thrun, and others – demonstrates that these stories tell us much more about the kind of future the storytellers want than about the future per se. This matters, Watters suggests, because these stories are used to ‘define, disrupt, [and] destabilize’ our institutions:

I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.

It’s a powerful read – and connected to the idea I want to pursue in my final assignment. I’m interested in seeing whether different stories are being told to different segments of the population, and in trying to imagine what the consequences of those different stories might be.