I delivered this talk today at the NMC Summer 2017 conference in Cambridge, Massachusetts
Thank you very much for inviting me to your conference. I know there have been lots of murmurs about what it means that someone who’s been quite critical of the Horizon Report project would be invited to speak, let alone to get to offer the closing remarks.
So I’ll say at the outset that I’m not here to offer solutions or resolutions or absolutions. The latter’s the job of your priest, and none of these the job of your keynote speaker. I will not be assigning penance today – although as a scholar of history and culture, I do want you (all of us, really) to think about what we’ve done; to think about what we’ve said; to think about the stories we tell about the future of technology and education.
That is the purpose of the Horizon Report, of course: it’s a story about the future. It’s a story designed to share, one you can tell others; and like certain genres of storytelling, it’s one particularly well-suited for urging people to behave in certain ways. It’s one that aspires to shape the future in a certain direction. Or in the seasonally inappropriate words of John Frederick Coots and Haven Gillespie: You better watch out, you better not cry / you better not pout / I’m telling you why / artificial intelligence is four to five years on the horizon.
I spend a lot of time talking about what I call “the history of the future” of education technology. I’m interested in the stories we tell and the stories we have long told about the shape of things to come. (That is to say, the shape of things we believe, we hope, we imagine, we worry, and we predict will come.)
I am interested in how technology functions in those stories as a motif, a symbol, a theme, and sometimes even a protagonist in its own right. I’m interested in how technology functions in those stories as a set of imagined practices, as a reflection of a certain mindset – a mindset that, no matter the sweeping sagas, is bound to and bound by its teller’s contemporaneity. I’m interested in what we believe technology will do. I’m interested in why we believe technology will work, and in why technology is featured so prominently in stories about the future. Why and where.
I realize this is an education conference, but I’m going to shift the “where” of my focus today to stories about the future of technology that take place outside of the school and the classroom. I want to talk about the history of the future of technologies of the home. My rationale is severalfold:
First, education technology is boring; or at least its stories, repetitive. You’ve sat here through a couple of days’ worth of presentations on ed-tech, and perhaps you’re a little tired of it too. (Or perhaps I’m projecting.) To borrow from “Norman’s Law of eLearning Tool Convergence,” no matter the stories we tell about innovation, no matter the predictions we make about disruption, in time everything in ed-tech becomes indistinguishable from the learning management system. I do not want to talk about the LMS – not today, not ever to be perfectly frank; not as a portal, not as a “personalized learning environment,” not as a “next generation learning environment,” not as infrastructure, not as ideology, not as a conduit for our failed imagination.
Second, I want to talk about the future of the home because I want us to think about the history of consumer products. Although in many ways, education technology has been more closely associated with what some people call “enterprise technology” – that is, the kinds of mostly administrative software and services sold to large organizations (corporations, governments, K–12 school districts, universities) – education technology is deeply intertwined with consumer tech and trends. I’m not sure those in education technology always want to talk about this consumer framework – we like to pretend we use technology because it will “improve teaching and learning,” not because we’ve been heavily marketed certain products and certain stories about the necessity of our technology consumption. We prefer to think of ourselves as professors or pedagogues or scholars or students, not as consumers or users.
No doubt, today’s technology companies view students and schools as a largely untapped market. But that’s not new. Technology companies – particularly those hawking aspirational, education-related products – have long viewed parents in a similar way. But now “software is eating the world,” as venture capitalist Marc Andreessen wants us all to believe. That is to say, in my mind at least, Silicon Valley ideology – libertarian, individualist, consumerist, capitalist – seeks to mediate all relationships: social, professional, civic, familial.
So I want to consider the history of technologies of the home – the social and the economic history. What do we expect this technology to do? How does this technology actually function? Who does it benefit? What does it signal? Whose values, whose imagination does it reflect? Who builds it? Who buys it? Whose home is this technological imaginary that we are apt to tout?
Sidenote: Someone from the Clayton Christensen Institute recently invoked the history of household appliances in an op-ed for Edsurge, asking “Is Your Edtech Product a Refrigerator or Washing Machine?” These two appliances are meant to serve in the article as an analogy for ed-tech adoption – something about how quickly we embrace products that fit into the home as-is as compared to ones that require we restructure entire rooms and lay new pipes – “incrementalism” versus “transformation,” I suppose. “Reform” versus “revolution.” The historical timeline in the op-ed’s a bit off, historian Jonathan Rees has pointed out, noting that many of us still get by just fine without having a washing machine at home. New technology replacing and displacing and disrupting older technology is not inevitable, no matter how often those from the Clayton Christensen Institute like to tell that story.
Sidenote to the sidenote: A press release from early May pronounced that “Global Innovation Guru Clay Christensen Predicts Disruption in the Domain of Parenting.”
Pay attention to these stories. Pay attention to these storytellers. But pay critical attention. Pay attention critically. Ask better questions about why they’re inventing these histories and predicting these futures.
The third reason why I want to talk about technology and the home: I want us to think specifically about technology and labor, about sites of production and reproduction – yes in a Marxist sense – particularly the production and reproduction of knowledge and culture; and I want us to think about love and care. Affective labor. Emotional labor. Who do we imagine is doing this work? Do we value it?
My aim here is to “defamiliarize” a discussion of education technology, shifting the focus so that we can perceive it differently. As I explore with you some technologies of child-rearing (new and old), I want you to think, at every turn, about how these technologies and these practices are prescribed for the home and for the schoolhouse – or at least for some homes and some classrooms.
In January of this year, at the annual Consumer Electronics Show in Las Vegas, Mattel (or rather, its subsidiary Nabi) unveiled Aristotle, a “smart baby monitor” – what it claimed was the world’s first. Companies always hope they’ll be able to make headlines at CES, and Aristotle received a fair amount of attention this year. There were stories in the usual tech publications – Engadget, PC World, CNET – as well as in the mainstream and tabloid press – USA Today, ABC News, Fox News, The Daily Mail. Bloomberg heralded the device as “Baby’s First Virtual Assistant.” And here’s how Fast Company described the voice-activated speaker/monitor, which is set to launch some time next month (the release day keeps getting postponed):
Aristotle is built to live in a child’s room – and answer a child’s questions. In this most intimate of spaces, Aristotle is designed to be far more specific than the generic voice assistants of today: a nanny, friend, and tutor, equally able to soothe a newborn and aid a tween with foreign-language homework. It’s an AI to help raise your child.
Now that’s obviously a series of sentences that situates the device among its competitors today (those “generic voice assistants”), but that also serves as a very imaginative marketing of a technological future (one where a machine can “aid a tween with foreign-language homework”). It is not a list of actual technical specifications. Indeed, since CES the specifications for Aristotle have changed substantially. Mattel has cancelled its integration with Amazon Alexa, for example, which was supposed to power the speaker and facilitate the parts of “parent mode” that involved shopping for baby supplies.
Here’s how the Mattel website, where you can pre-order the device, now describes Aristotle’s features:
Aristotle™ combines multiple nursery devices into one convenient, hands-free system. It’s a smart baby monitor, multi-color LED nightlight, WiFi HD camera, Bluetooth® speaker and sound machine, all in one!
The convenient Aristotle™ App lets you keep a close eye and ear on your baby from your smart device via WiFi internet connection. Easily track and store your baby’s feeding, changing and sleeping patterns, and receive notifications to alert you of important reminders in real time. You can even find out if your little one is fussy with the cry detector!
With the App’s “Do this When” tool, you can create customized actions that respond automatically to your baby. For example, you can program Aristotle™ to respond to your baby’s cries with a personalized soothing light and sound combination.
There is a lot packed into that marketing material, not just about the specifics of the device for sale but about the cultural and commercial expectations of parenting. It’s also full of buzzwords that will be familiar to those who work in education technology: personalization, analytics, real-time notifications, convenience.
But gone from the Mattel website are the boasts made at CES about what one of its executives said was “the fundamental problem of most baby products, which is they don’t grow with you.” Aristotle was couched in much of the CES coverage as a virtual assistant that would offer, if not “lifelong learning” explicitly, then at least an AI that would learn about the child and teach her as she grew into a teen. All those promises that this $350 device would be something parents would keep in their child’s room long after the supposed need has passed for a “smart baby monitor” – they’re now nowhere to be found. What remains is some fairly boilerplate language about an Internet-connected device.
What happened? Was this a matter of promising too much about a technology? Or did the marketing actually create fear and uncertainty rather than excitement?
(Let’s be clear: these gulfs between marketing’s promises and technologies’ capabilities and consumers’ interests and desires appear regularly. Think the repeated failures of VR or AI to live up to the hype.)
To give you a flavor of what company executives, and in turn technology reporters, gushed about at CES, here’s more from Fast Company, which I apologize for quoting at length, but it’s amazing how swept up in the story about the future of high-tech parenting that the publication seemed to be:
…It’s the child-to-Aristotle connection that makes the device such an interesting entrant in the rapidly commoditized voice-assistant market. …
Key to that is Aristotle’s ability to understand young voices. “It was one of the core things we tried to resolve from the get-go,” says [one executive]. “Our audience often says words completely differently [even from one another].” To deal with that complication, Mattel partnered with PullString, a San Francisco–based company that focuses on AI conversation and speech recognition. Embedded with PullString’s platform, Aristotle will mature alongside its young listeners, constantly improving its recognition capabilities as children get older. For toddlers, Aristotle will turn its LED various colors and ask the listener to identify them; older kids can ask Aristotle factoids like, “Who was the 16th president of the United States?” or request to play a game.
All of this points at Aristotle’s greater intent: It’s built for play. Mattel is, after all, a toy company with lots of intellectual property. “Imagine what happens with Hot Wheels and Thomas the Tank Engine when you have this connected hub,” says [a Mattel executive] of Aristotle’s future ecosystem. “Do you hear sound effects? Can you have greater interactions?” Mattel imagines that even cheap, simplistic die-cast cars can be loaded with low-cost chips to connect to Aristotle. Meanwhile, the device’s camera will use object recognition to identify flash cards, or even a toy without any special electronics, essentially adding interactions to make it feel more dynamic. The company is aiming to roll out these features early next year.
I mean, I guess we’ll see about that – if any of this particular techno-fantasy ever materializes from Mattel, let alone “early next year.” We, the reader and consumer, are asked to believe a lot of bullshit in that passage: that the device works, that the AI “learns,” that quizzing children on factoids is a technological and pedagogical breakthrough, that this is the future of play.
Mattel is already selling an Internet-connected Barbie – Hello Barbie – and an Internet-connected Barbie Dreamhouse, much to the consternation of privacy and information security advocates who caution that these devices are incredibly insecure, that the microphone and the stored audio files are readily accessible to hackers. Incidentally, these two Barbie toys use the same voice-recognition technology as the Aristotle: ToyTalk, now rebranded as PullString.
Perhaps we might recognize, as we wait to see if Mattel’s or Clayton Christensen’s predictions about the future come true, that this fantasy of the robot companion or caretaker has its own, long history – stories that elicit fear as often as comfort. There’s Olympia in E. T. A. Hoffmann’s 1816 short story “The Sandman,” for example, which Sigmund Freud used as the basis for his analysis of “the uncanny” – that unsettling feeling of something strangely, frighteningly familiar. “Unheimlich,” Freud observed, is a German word that contains in it an ambivalence: “heimlich” – meaning the home, something familiar, and also something hidden – and its reverse and its pair, “unheimlich” – the unspoken, the repressed. The robot, or rather a seemingly living automaton in “The Sandman,” veers towards “das Unheimliche.” Making the familiar unfamiliar. The basis for many horror stories.
And yet at CES and elsewhere, technologists insist this is what we will want in the home. (The liberal arts matter, technologists, I promise you.)
Now, the difference between the PR at CES in January and the marketing on the Mattel website in June might be striking, but it’s not really surprising. The point of CES, after all, is not so much to showcase what technology can do but to suggest what it might be able to do. Each and every year, the event is full of promises and vaporware – prototypes that never make it into production, products that never make it onto store shelves. CES truly encapsulates what I’ve argued elsewhere: that “the best way to predict the future is to issue a press release.” One tells powerful stories about what’s “on the horizon” in order to help shape imaginations and markets. Imaginations and markets.
What stories, what forces helped shape the market for baby monitors? Baby monitors have a history, of course – a social history and a history of the technology itself. We did not “need” baby monitors until quite recently, in no small part because our current system of sleeping – adults in one room, children each in their own – did not exist before the late nineteenth century. The idea that babies should sleep alone is even newer, reinforced by the rise of the disciplines of psychology and pediatrics in the early 20th century and by the market for parenting books and child-rearing products that developed alongside the “science.”
The first baby monitor – the “Radio Nurse” – was built by Zenith Radio Corporation in 1937. Zenith’s president, Eugene F. McDonald Jr., had cobbled together his own experimental system for his yacht using what was already a popular and accessible medium of the time: radio broadcasting. Zenith engineers polished McDonald’s prototype into a two-piece set: the “Guardian Ear,” which was plugged in next to the baby’s crib, transmitted sounds; and the “Radio Nurse,” which was plugged in next to the listening caregiver, received them. Isamu Noguchi, a well-known Japanese-American sculptor, was commissioned to design the latter, something he made out of Bakelite – which, according to the curator of the Henry Ford Museum, was “an impressive abstract form that managed to capture the essence of a benign, yet no-nonsense nurse.”
“The essence” of a nurse. A curved plastic box. “Das Unheimliche.”
The Radio Nurse was never a commercial success; the monitor picked up all sorts of other radio broadcasts, not just those from the baby’s room. Nevertheless, the baby monitor has since become a consumer product that parents are expected to own, often justified as a medical precaution, even though there’s no evidence that these devices prevent or even reduce the risk of sudden infant death syndrome.
Interestingly, infant mortality was not the inspiration for the Radio Nurse – or so the story goes. Zenith’s president felt compelled to build a monitor for his own child following the kidnapping of the Lindbergh baby in 1932.
The “crime of the century” and its trial were covered extensively by newsreels, and the kidnapping of the Lindbergh baby shaped Americans’ imagination. It prompted the passage of several laws relating to abduction. Now, I don’t want to overstate the importance of this particular crime in fostering the notion that babies need more monitoring, particularly in light of the various reform efforts made in the early twentieth century to protect children’s safety and well-being in general. But we can see in the Radio Nurse, I think, a technological intervention to that end – the embrace of a popular story that children are in danger, that they need to be surveilled when they are out of sight for their own protection; and it’s an early embrace too of a story that parenting can and should be mechanized. For the sake of “progress,” the twentieth century demanded it.
I would be remiss if I neglected to talk at an education technology conference about one of the most controversial “parenting machines” of the twentieth century: the “air crib” designed by behavioral psychologist B. F. Skinner, the infamous trainer of pigeons and inventor of teaching machines. First called the “baby tender” and then – and I kid you not – the “heir conditioner,” the device was meant to replace the crib, the bassinet, and the playpen. (There are echoes of this “efficiency” in Mattel’s Aristotle – “multiple nursery devices” in “one convenient, hands-free system.”)
Skinner fabricated the climate-controlled environment for his second child in 1944. Writing in Ladies Home Journal the following year, Skinner said,
When we decided to have another child, my wife and I felt that it was time to apply a little labor-saving invention and design to the problems of the nursery. We began by going over the disheartening schedule of the young mother, step by step. We asked only one question: Is this practice important for the physical and psychological health of the baby? When it was not, we marked it for elimination. Then the “gadgeteering” began.
The crib Skinner “gadgeteered” for his daughter was made of metal, larger than a typical crib, and higher off the ground – labor-saving, in part, through less bending over, Skinner argued. It had three solid walls, a roof, and a safety-glass pane at the front which could be lowered to move the baby in and out. Canvas was stretched across the bottom to create a floor, and the bedding was stored on a spool outside the crib, to be rolled in to replace soiled linen. It was soundproof and “dirt proof,” Skinner said, but its key feature was that the crib was temperature-controlled; so, save the diaper, the baby was kept unclothed and unbundled. Skinner argued that clothing created unnecessary laundry and inhibited the baby’s movement and thus the baby’s exploration of her world.
As a labor-saving machine, Skinner boasted, the air crib meant it would take only “about one and one-half hours each day to feed, change, and otherwise care for the baby.” Skinner insisted that his daughter, who stayed in the crib for the first two years of her life, was not “socially starved and robbed of affection and mother love.” He wrote in Ladies Home Journal that
The compartment does not ostracize the baby. The large window is no more of a social barrier than the bars of a crib. The baby follows what is going on in the room, smiles at passers-by, plays “peek-a-boo” games, and obviously delights in company. And she is handled, talked to, and played with whenever she is changed or fed, and each afternoon during a play period, which is becoming longer as she grows older.
Much like the Radio Nurse, the air crib did not catch on, quite possibly because of that very Ladies Home Journal article. Its title – “Baby in a Box” – connected the crib to the “Skinner Box,” the operant conditioning chamber that Skinner had designed for his experiments on rats and pigeons, thus associating the crib with the rewards and pellets that Skinner used to modify these animals’ behavior in his laboratory. Indeed, the article described the crib’s design and the practices he and his wife developed for their infant daughter as an “experiment” – a word that Skinner probably didn’t really mean in a scientific sense but that possibly suggested to readers that this was a piece of lab equipment, not a piece of furniture suited for a baby or for the home. The article also opened with the phrase “in that brave new world which science is preparing for the housewife of the future,” and many readers would have likely been familiar with Aldous Huxley’s 1932 novel Brave New World, thus making the connection between the air crib and Huxley’s dystopia in which reproduction and child-rearing were engineered and controlled by a techno-scientific authoritarian government. But most damning, perhaps, was the photo that accompanied the article: the Skinner baby enclosed in the crib, with her face and hands pressed up against the glass.
The article helped foster an urban legend of sorts about Deborah Skinner – that being raised in the crib had caused grave psychological trauma, that she’d gone mad, that she’d committed suicide. None of these are true. “I was not a lab rat,” she wrote in an op-ed in The Guardian in 2004. But that’s the story that gets told nonetheless. That’s the popular perception of what this particular piece of parenting technology might do: deprive the child of love and socialization.
The air crib, psychologists Ludy Benjamin and Elizabeth Nielsen-Gamman argue, was viewed at the time as a “technology of displacement” – “a device that interferes with the usual modes of contact for human beings, in this case, parent and child; that is, it displaces the parent.” It’s a similar problem, those two scholars contend, to that faced by one of Skinner’s other inventions, the teaching machine – a concept he came up with in 1953 after visiting Deborah’s fourth-grade classroom. These technologies both failed to achieve widespread adoption, according to Benjamin and Nielsen-Gamman, because they were seen as subverting valuable human relationships – relationships necessary to child development.
Now arguably, the most significant (and in some circles, alarming) parenting technology of the twentieth century was neither the baby monitor nor the air crib; it was the television. Children in post-war America were increasingly left alone while their parents were at work, some feared, without adequate adult supervision. (Children being left alone, of course, wasn’t new. But white, middle-class fears about “unaccompanied minors” were heightened for a number of reasons – and no doubt connected to changing cultural expectations and socio-economic pressures regarding working mothers as well as the social construction of a category of young people – “the youth.”) Subsequently (or ostensibly) children were being “raised,” educated, entertained by television – again, a technology that people worried might serve to undermine healthy childhood development by displacing parental authority, by exposing them to “inappropriate content” and to commercials.
Some of that moral panic has extended these days to other “screens,” even though American children do still watch a phenomenal amount of television – 19 hours a week for those age 2 to 11, according to the latest figures from Nielsen – much of it “unsupervised.” But one of the promises of new screens and new parenting technologies: unlike the television, these can watch children back. Again, I give you the marketing materials from Mattel: “The convenient Aristotle App lets you keep a close eye and ear on your baby from your smart device.” You can monitor the sounds the child makes through the microphone; you can monitor the movements the child makes through the camera; you can monitor all activity – physical and digital – through the computer’s activity logs. You can monitor them wherever they go without you: in their bedroom, in their classroom.
These new parenting devices try very hard to convince us that they are not a “technology of displacement,” but rather one of enhancement. They insist they do not interfere with parental relationships but enable them and extend their reach, even in a parent’s physical absence. This is not a matter of replacing parents with machines, but rather augmenting parenting with machines. As Stirling University’s Ben Williamson describes Mattel’s Aristotle, the “smart baby monitor” purports to be “the algorithmic solution to many parents’ problems – the automated in-loco-parentis figure that possesses endless energy, requires no sleep, does the shopping, and keeps the baby entertained and educated in ways that exceed human capacity.”
This argument should be quite familiar to those of us in ed-tech. This is the story we hear and we tell about computers, about algorithmic systems like adaptive learning, predictive analytics, personalization. Enhance, not replace. It’s the story B. F. Skinner told some sixty years ago about teaching machines too. “Will machines replace teachers?” he asked. “On the contrary,” he said,
they are capital equipment to be used by teachers to save time and labor. In assigning certain mechanizable functions to machines, the teacher emerges in his proper role as an indispensable human being. He may teach more students than heretofore – this is probably inevitable if the world-wide demand for education is to be satisfied – but he will do so in fewer hours and with fewer burdensome chores.
“Chores” – an interesting word choice, one that posits the work of the classroom alongside the work of the home. It’s not really clear in this passage by Skinner what these tasks might be. What are “mechanizable functions” and what, by extension, are not? In the case of Mattel’s Aristotle, these functions seem to include not only monitoring a sleeping child, alerting a parent to her cries, but playing with the child, comforting the child, talking and singing and reading to the child.
Raising a child, this story suggests, can be mechanized. Interacting with a child can be mechanized. Caring for a child can be mechanized. That’s quite an unsettling story, I think. “Das Unheimliche.” But Fast Company likes it. And perhaps if people tell us the story often enough, they’ll change the way in which we all think. Maybe they’ll change how we think about robots. Maybe they’ll change how we think about parenting.
Indeed, last week I was on stage with someone from Singularity University, a Silicon Valley think tank co-founded by Ray Kurzweil, who insisted that this would be our future: we will love and be loved by robots. We will be raised by robots. (She cited Mattel’s Aristotle as an example.) We will be taught by robots. We will age and we will die with robot caretakers.
But robots don’t love. Robots don’t care. They don’t now; they never will – no matter the stories futurists tell us. “I think eventually [robots will] be able to act just like they are falling in love,” Google AI expert Peter Norvig told The Daily Beast in 2013 in response to the Spike Jonze movie Her. But is being programmed to act like love the same as love?
This is a philosophical question, to be sure. But it’s a political one as well, I’d contend, and maybe a pedagogical one too. And it’s a question we must ask, particularly as companies try to extend their reach with their products and their promises of thinking machines. How might programmatic, algorithmic child-raising technologies change our notions of love, of care, of humanity? How might they already be doing precisely that?
Through their design and their implementation, through the way in which they incentivize certain activities, technologies shape and reshape our practices and our relationships. They shape our imaginations, and technologies in turn are shaped by the imaginative stories we tell and we hear, by our beliefs and our practices.
Will a robot raise your child? Sixty years ago, when B. F. Skinner was trying to convince families and schools to buy air cribs and teaching machines, the answer from parents and teachers was overwhelmingly “No.” But now?
I’m not sure we are as resistant to the language of engineering and optimization, even in our most intimate spaces and relationships. It’s not that the technology is better either. Mostly, it’s not. New technologies, and the ideologies that underpin them, have brought the language of efficiency and productivity out of the workplace and into the classroom and into the home – into the realm of reproductive labor. Everything becomes a data-point to be tracked and quantified and analyzed and adjusted as (someone deems) necessary. Everything must be made perfectly observable, even when no human is there to watch.
And so: the quantified parent. The quantified baby. The quantified child. The quantified family. The quantified bedroom. The quantified bathroom. The quantified laundry room. The quantified kitchen. Quantified feedings. Quantified diaper changes. Quantified nap times. Quantified gurgles. Quantified smiles. Quantified word use. Quantified play.
All of this will be facilitated by “smart devices” in our “smart homes” under the guise of engineering (and that is the operative word) “smart children.” New, networked systems will optimize parenting and child development algorithmically. Or so we’re told.
It seems quite likely that the ways in which a white child from an affluent two-parent family would experience these parenting and education technologies would be quite different from the way in which a brown child with a poor single mom would. (There are no people of color in any of the images I used today. This science fiction imaginary. Did you notice?) A brave new world indeed.
We’re supposed to be thrilled about this “enhancement.” Or so I gather from the marketing for parenting and education technologies. So we’re told by CES. So we’re told by the Horizon Report.
Somewhere along the way, I think, we have confused surveillance for care. This is not necessarily a recent or emergent phenomenon – we can trace it back, at the very least, to the Radio Nurse and this compulsion to monitor our babies. This confusion – surveillance for care – has profound implications for how we raise children, no doubt. It has profound implications for how we teach and learn. It has profound implications for how we trust and respect one another.
Love and care and respect for one another – I’m an idealist, yes – that must be the work of all humans. That is the work of parenting (even for non-parents). That is the work of teaching too. I truly, truly hope we never convince ourselves that this can, that this should be the work of a machine.
from Hack Education http://ift.tt/2sggoAe