Automating Education and Teaching Machines: Audrey Watters
“Can computers replace teachers?” The Atlantic recently asked. “Can AI replace student testing?” another publication queried. These sorts of headlines are appearing with increasing frequency. But do they reflect technological advances in “artificial intelligence”? Or are they instead reflections of cultural and political desires to see education automated?
This talk will explore the history of “teaching machines” — a history that certainly pre-dates the latest hype about artificial intelligence. It will also examine the ideological (and technical) underpinnings of Silicon Valley’s recent push to automate — or as it calls it, “personalize” — education.
I liked this on YouTube to fix it to my lifestream so that I can watch it later.
http://ift.tt/2ooD4vW
via YouTube https://youtu.be/jJShaktigoo
Hoaxy Tutorial
Welcome to Hoaxy! Hoaxy is a public tool for visualizing the spread of fact-checking and claims on social media. You can use it a bit like Google.
Hoaxy is part of the Observatory on Social Media (http://ift.tt/1WUSKDw), a project of the Indiana University Network Science Institute (http://iuni.iu.edu/) and the Center for Complex Networks and Systems Research (http://ift.tt/1ozQbHR) at the Indiana University School of Informatics and Computing (http://ift.tt/2fuU67n).
I watched a demonstration of this automated essay-improving software on YouTube and desperately want to try it out to see if it works. I thought it interesting that, in these days of hypermedia, one of the aims of OpenEssayist was to ensure student essays followed the traditional beginning, middle and end, showing how our linear narrative literacies have not been challenged here.
Databite No. 70: Tarleton Gillespie
Tarleton Gillespie (@tarletonG) presents #Trendingistrending: When Algorithms Become Culture:
I liked this on YouTube and have left the default IFTTT and YouTube information to feed into my lifestream, adding only the bold emphasis to the last sentence.
Algorithms may now be our most important knowledge technologies, “the scientific instruments of a society at large,” and they are increasingly vital to how we organize human social interaction, produce authoritative knowledge, and choreograph our participation in public life. Search engines, recommendation systems, and edge algorithms on social networking sites: these not only help us find information, they provide a means to know what there is to know and to participate in social and political discourse.
If not as pervasive and structurally central as search and recommendation, trending has emerged as an increasingly common feature of such interfaces and seems to be growing in cultural importance. It represents a fundamentally different logic for how to algorithmically navigate social media: besides identifying and highlighting what might be relevant to “you” specifically, trending algorithms identify what is popular with “us” more broadly.
But while the techniques may be new, the instinct is not: what today might be identified as “trending” is the latest instantiation of the instinct to map public attention and interest, be it surveys and polling, audience metrics, market research, forecasting, and trendspotting. Understanding the calculations and motivations behind the production of these “calculated publics,” in this historical context, helps highlight how these algorithms are relevant to our collective efforts to know and be known.
Rather than discuss the effect of trending algorithms, I want to ask what it means that they have become a meaningful element of public culture. Algorithms, particularly those involved in the movement of culture, are both mechanisms of distribution and valuation, part of the process by which knowledge institutions circulate and evaluate information, the process by which new media industries provide and sort culture. This essay examines the way these algorithmic techniques themselves become cultural objects, get taken up in our thinking about culture and the public to which it is addressed, and get contested both for what they do and what they reveal. We should ask not just how algorithms shape culture, but how they become culture.
Blackboard Learning Analytics Report for students (Blackboard Learning Analytics)
This short video will show you how to access your student Learning Analytics report from within Blackboard. Part of the ‘Blackboard Learning Analytics’ series.
This video is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs License (http://ift.tt/1hexVwJ)
This video is pretty scary. Others may well be different, but I wouldn’t find it motivating to compare myself to the class average for number of logins or social interactions. I am deeply sceptical about whether this would be helpful or useful to a student unless, perhaps, they were doing well; and even if they were, is any meaningful information or reinforcement to be found here? A student ‘at risk of failing’ would not rush to read their report, nor find any help if they did.
It is interesting to compare this video to the Blackboard promotional video here.
How Blackboard thinks about Analytics
There’s value in data. It’s our job to extract that value by transforming that raw data into helpful information. Dennis Witte (VP of Administration, Concordia University – Chicago), Kendall St. Hilaire (Virtual Campus Administrative Director, Indian River State College), and John Fritz (Asst VP for Instructional Technology, University of Maryland, Baltimore County) talk about how support from Blackboard Analytics has helped to improve the human decision-making process.
MORE INFORMATION: http://ift.tt/2mtQRj8 via YouTube
This promotional video advertises some of the perceived benefits of an ‘off the shelf’ LA solution; it is interesting to watch it and compare it to this video. The benefits claimed include the ability to:
Learn about what’s happening in online classes as it is happening
See what works and what doesn’t and alter course design
Enable student-driven decisions
Drive up retention and student numbers in online courses
I liked this video on YouTube because technologies work to keep us on their sites as long as possible, gathering data gleaned from our likes and views, chats and shares. This is the data that algorithms like to feed on.
A poetic short film by Max Stossel & Sander van Dijk:
In the Attention Economy, technology and media are designed to maximize our screen-time. But what if they were designed to help us live by our values? www.timewellspent.io
What if news & media companies were creating content that enriched our lives instead of catering to our most base instincts for clicks?
As technology gets more and more engaging, and as AI and VR become more and more prevalent in our day-to-day lives, we need to take a look at how we’re structuring our future.
Time Well Spent is a movement to align technology with our humanity: www.timewellspent.io
I liked this on YouTube because it illustrates both our endeavour to understand the human brain and the progress we are making, while at the same time emphasising its ineffable complexity.
A Dennett Adagio
Functional MRI scan of the brilliant brain of DANIEL DENNETT, December 2015. This is a rest state (default mode) image series captured at the Olin Neuropsychiatry Research Center in Connecticut. After standard preprocessing, I reduced the dimensionality of the data with Independent Component Analysis, extracting eight synchronized networks as shown. Each was assigned a tone; the more rapidly varying regions are higher in pitch. This part of the experiment, resting without any stimulus or task, lasted six minutes. It’s accelerated here x2.
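Out of interest in the process described above, here is a minimal sketch of the kind of pipeline it suggests: Independent Component Analysis to extract eight component time courses, then a pitch assigned to each according to how rapidly it varies. The synthetic data, the scan timing and the pitch mapping are illustrative assumptions, not the actual analysis code used for the Dennett scan.
```python
# Sketch only: ICA on (timepoints x voxels) resting-state data, then a pitch
# per component, with faster-varying components mapped to higher pitches.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_components = 180, 5000, 8  # ~6 minutes at a 2 s TR (assumed)

# Stand-in for preprocessed resting-state data; real data would replace this noise.
data = rng.standard_normal((n_timepoints, n_voxels))

# Reduce dimensionality with ICA, extracting eight component time courses.
ica = FastICA(n_components=n_components, random_state=0, max_iter=500)
timecourses = ica.fit_transform(data)  # shape: (timepoints, components)

# How rapidly each component varies: mean absolute change between timepoints.
variation = np.abs(np.diff(timecourses, axis=0)).mean(axis=0)

# Assign higher pitches to the more rapidly varying components
# (spread over one octave above a 220 Hz base; both choices are assumptions).
rank = variation.argsort().argsort()  # 0 = slowest, 7 = fastest
pitches_hz = 220.0 * 2 ** (rank / (n_components - 1))

for i, (v, hz) in enumerate(zip(variation, pitches_hz)):
    print(f"component {i}: variation {v:.3f} -> pitch {hz:.1f} Hz")
```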
The “Door” Study
This video shows footage from a 1998 study by Daniel Simons and Daniel Levin in which a participant fails to notice when the person he is talking to is replaced by someone else. The study was among the first to demonstrate that the phenomenon of “change blindness” can occur outside the laboratory. This was the first of many studies by Simons, Levin, and colleagues to explore how change blindness can occur in the real world.
via YouTube
This video featured on the MOOC I’m following. Change blindness reveals how little of the whole picture it is possible to see, whilst we remain certain that we see it all. Is ethnography a useful methodology for studying communities, and might it help avoid single-perspective blindness? Perhaps ethnography helps us to perceive where the subject’s blindnesses lie? How can we reveal this in the knowledge of our own imperfect vision? Ha ha, I’m getting sucked down a vortex!