@j_k_knox Thanks @j_k_knox – did you feel that massive sigh of collective relief? It hit 3 to 4 continents, I think! #mscedc
— Renée Hann (@rennhann) April 9, 2017
Lifestream, Tweets
@philip_downey @j_k_knox @james858499 err-IDEL,DEGC,CDDE:midnight=night of due date.Oxford dictionary also suggests it is night of the day.. Think I'm going to throw up! #mesced pic.twitter.com/vVPETqSSHe
— Renée Hann (@rennhann) April 9, 2017
Lifestream, Tweets
@philip_downey WHAT?! OMG me too. @j_k_knox @james858499 is this a mistake? Midnight Sunday surely means tonight? #mscedc
— Renée Hann (@rennhann) April 9, 2017
Lifestream, Tweets
@philip_downey @Digeded 'beyond human'=being human(instead of automatic)more often.Question:is it sustainable if always full self, creative,not auto?#mscedc
— Renée Hann (@rennhann) April 7, 2017
Lifestream, Tweets
Ishiguro: "What is the minimal definition of humans?" #mscedc https://t.co/tTYjYEggYm
— Renée Hann (@rennhann) April 7, 2017
I suspect this might be my last, or near to last, post before the final summary as it’s Friday afternoon, I’ll work on the summary tomorrow, and Sunday (the due date for the Lifestream) is always a crazy day at work for me. It feels really lovely to return to the beginning – back to cyber culture’s questions about what it means to be human, and at the same time to make connections to community (loosely) and to algorithms.
In the video, Ishiguro says that he started the project of building a robot in order to learn about humans. He asks what it really means to be human – which of the things we do are automated? Which could someone impersonating us do? When are we truly being ourselves, being creative? Erica (the robot) suggests that by automating the uninteresting and tedious aspects of life, robots can help us focus on the creative aspects, and the other parts of life where we are truly ourselves. With AI being essentially the work of algorithms, this ties our first block (cyber cultures) to our third (algorithmic culture). Can algorithms allow us to be more human?
The video also asks: what is the structure that underlies human interaction? We can identify many ‘ingredients’ that play a role, but what does it take for a human to interact with a robot and feel that they are interacting with a being? This is where my – albeit loose – connections to community are drawn from. Last week Antigonish 2.0 told 4-word stories about what community means (#Antigonish2 #4wordstory – there were over 700 responses). How will robots understand all these diverse and valuable ways of being together? Maybe they don’t need to – maybe we can, in the spirit of Shinto, accept them as having their own type of ‘soul’, and accept automation of the mundane… if our economic system allows for and values our very *human* contributions.
Bring on Keynesian theory and the short working week…
Lifestream, Tweets
That moment when,after massive work wk,you arrive home planning to crack on w/ #mscedc bt..surprise!may need beverage pic.twitter.com/vKwgpAwZJc
— Renée Hann (@rennhann) April 6, 2017
@c4miller If only I knew knew when 'getting' would become 'got' 😉
— Renée Hann (@rennhann) April 6, 2017
Another plus – with no water, no one had to cook dinner;)
Lifestream, Pocket, ‘Future Visions’ anthology brings together science fiction – and science fact
Excerpt:
To the casual observer, the kind of technological breakthroughs Microsoft researchers make may seem to be out of this world.
via Pocket http://ift.tt/2nKVDcX
I came across this collection of short science fiction stories from Microsoft. I hate that I like it (I still haven’t forgiven Gates for 1995’s shenanigans with Netscape and others – & for, well, breaking the ethos of the Internet), but it seems like a ‘page turner’. I’ve only read half the first story, mind, as it is not available to me in iTunes locally, and Amazon suggests it does not deliver to my region despite it being an e-book. I could use a shop-and-ship address, but it’s kind of annoying that it isn’t just available as a PDF – combined with my & Bill’s ‘history’, that was enough to put me off for now.
One thing I did think about, from the first half of the first story – in which translation and natural language processing have reached the point of translating signing into spoken language, and spoken language into text, in real time – is that while we herald the benefits of technology for differently abled people, we also ignore what it could mean for communities like the Deaf community and cultures like Deaf culture. I’m not really qualified to speak on it myself, but I’d be interested in hearing the perspectives of people from within the Deaf community.
Lifestream, Pocket, The Best Way to Predict the Future is to Issue a Press Release
Excerpt:
This talk was delivered at Virginia Commonwealth University today as part of a seminar co-sponsored by the Departments of English and Sociology. The slides are also available here. Thank you very much for inviting me here to speak today.
via Pocket http://ift.tt/2fF4PPI
I started out by trying to grab a few select quotes from this talk, which Watters delivered at Virginia Commonwealth University in November 2016, but it is pretty much all gold. She writes about how the stories we tell – or are told – about technology and educational technology direct the future, and asks how these stories affect decision making within education:
Here’s my “take home” point: if you repeat this fantasy, these predictions often enough, if you repeat it in front of powerful investors, university administrators, politicians, journalists, then the fantasy becomes factualized. (Not factual. Not true. But “truthy,” to borrow from Stephen Colbert’s notion of “truthiness.”) So you repeat the fantasy in order to direct and to control the future. Because this is key: the fantasy then becomes the basis for decision-making.
…

…to predict the future is to control it – to attempt to control the story, to attempt to control what comes to pass.
Watters’ interrogation of future stories – stories by Gartner, by the Horizon Report, by Sebastian Thrun, and others – demonstrates that these stories tell us much more about the kind of future the story-tellers want than about the future per se. This matters, Watters suggests, because these stories are used to ‘define, disrupt, [and] destabilize’ our institutions:
I pay attention to this story, as someone who studies education and education technology, because I think these sorts of predictions, these assessments about the present and the future, frequently serve to define, disrupt, destabilize our institutions. This is particularly pertinent to our schools which are already caught between a boundedness to the past – replicating scholarship, cultural capital, for example – and the demands they bend to the future – preparing students for civic, economic, social relations yet to be determined.
It’s a powerful read – and connected to the idea I want to pursue in my final assignment. I’m interested in seeing if there are different stories being told to different segments of the population, and trying to imagine what the consequences of that different imagining might be.
Lifestream, Diigo: The need for algorithmic literacy, transparency and oversight grows | Pew Research Center
from Diigo http://ift.tt/2loZnPJ
via IFTTT
I posted a link to the complete Pew Research Report (Code-Dependent: Pros and Cons of the Algorithm Age) a few weeks back (March 11). This week, while thinking about my final assignment for Education and Digital Cultures, I returned to Theme 7: The need grows for algorithmic literacy, transparency and oversight.
While the respondents make a number of interesting and important points about concerns that need to be addressed at a societal level – for example, managing the accountability (or dissolution thereof) and transparency of algorithms, and avoiding the centralised execution of bureaucratic reason by building checks and balances into the centralisation that algorithms enable – other points raised need to be addressed at an educational level. Specifically, Justin Reich of the MIT Teaching Systems Lab suggests that ‘those who design algorithms should be trained in ethics’, and Glenn Ricart argues that people need to understand how algorithms affect them, and to be able to personalise the algorithms they use.

In the longer term, Reich’s point doesn’t seem limited to those studying computer science: if, as predicted elsewhere in the same report (Theme 1), algorithms continue to spread, more individuals will presumably be involved in creating them as a routine part of their profession, rather than their creation being reserved for computer scientists and programmers. And as computer science is ‘rolled out’ in primary and secondary schools, it makes sense for the study of related ethics to be part of the curriculum at those levels too. Further, Ricart implies, first, that algorithmic literacy needs to be integrated into more general literacy/digital literacy instruction, and second, that all students will need to develop computational thinking and the ability to modify algorithms through code (unless black-boxed tool kits enable people to do this without coding per se, in the same way that Weebly enables people to build websites without writing code).
Lifestream, Diigo: eLearnit
from Diigo http://ift.tt/2oPaLWF
via IFTTT
I’ve been a little distracted the last couple of days, as I’m presenting the paper I wrote for my final assignment for Digital Education in Global Contexts (Semester B, 2015–16) at a conference today. To be fair, a lot of the conference seems focused on the promise technology is perceived to hold for education (I’m thinking of Siân Bayne’s 2015 inaugural lecture, The Trouble with Digital Education, 8:20), and I’m not certain that my paper will be of a great deal of interest to the audience, but it is, nonetheless, a little nerve-wracking. As a consequence of overthinking it, no doubt, I’ll also be summarising week 11’s lifestream and adding metadata later tonight.