Tweets: IFTTT, Twitter, and that binary again

It’s an emotional moment. (OK, not really.) The IFTTT strings are cut forever between Twitter and this blog. Between everything and this blog. It’s a good time, I think, to reflect on my use of Twitter and IFTTT throughout this course. I get the impression that the way I’ve used Twitter differs from a lot of my cohort. Earlier today, for example, the other Helen asked why we might’ve used Twitter more than previous course cohorts, and I was interested in this answer, given by Clare:

By comparison, my use of Twitter has been largely terse, laconic and unsustained – though the platform might be partially responsible for at least the first two. Looking back over my lifestream, I can see how rarely I’ve started or entered into conversations on Twitter, how aphoristic my tweets have been. They’ve been purely functional, one-offs to prove that I’m engaging with this or that, that I’m following the social and educational rules of this course.

At the beginning of the course, I wrote a post where I said I thought I’d find it weird to contaminate my social media presence with academic ideas. This turned out to be either a pretty accurate foretelling, or a self-fulfilling prophecy. Perhaps I should’ve set up a new Twitter handle, specifically for this. Or perhaps this would have simply masked my behaviour.

I write this not to excuse the pithiness and infrequency of my tweets, convenient though it might be to do that as well. Instead, I write this because reflecting on my use of Twitter is revealing to me the potentially polymorphic, inconstant forms of agency that technology enacts and performs. It’s a kind of unexpected technological determinism, one which is misaligned with the ‘goals’ of the platform. Twitter might be designed to facilitate communication, but the massive and complex sociotechnical network within which it exists actually worked to silence me.

IFTTT presents a different sort of challenge. It was part of the assessment criteria to use it – to bring in diverse and differently moded content from wherever we are, whatever we’re looking at, and however we’re doing so. We were instructed to be instrumental about it, to use IFTTT in this very ‘black boxed’ sort of way. But of course we didn’t. IFTTT failed to meet the aesthetic standards of many of us, including me. So we’ve let IFTTT do its thing, and then gone into the blog to make it better. It’s instrumentalism, still, but again, kind of misdirected. Maybe we could call it transtechnologism.
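For anyone who hasn’t peered inside it, the plumbing IFTTT black-boxes is roughly this: a trigger on one platform, an action on another, and a little glue in between. Below is a minimal, hypothetical sketch of that shape in Python – the blog URL, the credentials and the fetch_new_tweets stub are all invented for illustration, and a real applet would call the Twitter and WordPress APIs properly rather than return canned data.

```python
# A minimal sketch of an "if this, then that" applet: new tweet in -> blog post out.
# Assumptions (not the course's actual setup): the blog is a WordPress site exposing
# the standard REST API, and authentication uses an application password.
import requests

BLOG_API = "https://example.com/wp-json/wp/v2/posts"    # hypothetical blog URL
AUTH = ("lifestream-bot", "application-password-here")  # hypothetical credentials


def fetch_new_tweets():
    """Stand-in for the 'this' trigger. A real applet would poll the Twitter API;
    here we return one canned example so the sketch is runnable."""
    return [{"text": "Reading Hayles on the posthuman #mscedc",
             "url": "https://twitter.com/example/status/1"}]


def post_to_blog(tweet):
    """The 'that' action: push the tweet into the blog via the WordPress REST API."""
    payload = {
        "title": "Tweet",
        "content": f"{tweet['text']}\n\nSource: {tweet['url']}",
        "status": "publish",  # published verbatim; the tidying-up happens later, by hand
    }
    response = requests.post(BLOG_API, json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    for tweet in fetch_new_tweets():
        post_to_blog(tweet)
```

The tension described above lives in that last “status” field: the applet publishes verbatim, and the aesthetic repair work comes afterwards, done by a human.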

What these two (flawed, I’m sure) observations do is underline something fundamental about the themes of this course that I think, until now, I’d missed. Consider the two approaches to technology often implicit in online education, according to Hamilton and Friesen (2013):

the first, which we call “essentialist”, takes technologies to be embodiments of abstract pedagogical principles. Here technologies are depicted as independent forces for the realisation of pedagogical aims, that are intrinsic to them prior to any actual use.

the second, which we call “instrumentalist”, depicts technologies as tools to be interpreted in light of this or that pedagogical framework or principle, and measured against how well they correspond in practice to that framework or principle. Here, technologies are seen as neutral means employed for ends determined independently by their users.

These ideas have permeated this whole course. Don’t fall into these traps, into this lazy binary. And yet there’s nothing here that rules out determinism, essentialism or instrumentalism. Calling out the binary tells us to think critically about the use of technology in education: it doesn’t make the two edges of that binary fundamentally false or impossible. We ought not to start from the assumption; but once we’ve set it aside, what we might have assumed could still turn out to be true.

References

Hamilton, E. C., & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2), https://www.cjlt.ca/index.php/cjlt/article/view/26315

Goodreads and algorithms, part the definite last

Good recommendation algorithms are really (really!) difficult to do right. We built Goodreads so that you could find new books based on what your friends are reading, and now we want to take the next step to make that process even more fruitful.

This quotation is from the Goodreads blog, a post written by Otis Chandler, the Goodreads CEO. The “next step” to which he refers is Goodreads’ acquisition of the small start-up Discovereads, which was developing algorithms for book recommendations. Discovereads used multiple algorithms, built on book ratings from millions of users and tracking patterns in how people read, how they rate, the choices they make, and what might influence them.

It’s roughly based on the sorts of algorithms that drive Netflix, though there’s an obvious difference between the two platforms, and it’s not the type of content. Goodreads is neither a publisher nor a producer of its own content; it isn’t promoting its own creations but rather can influence the user to spend money in a way that Netflix, which works to a different economic model, may not. Chandler admits this: one of the goals in adopting the Discovereads algorithm is to improve marketing strategies, ensuring that sponsored content (books promoted to users) is more up users’ street.

Given this, then, it’s possible to say that the way recommendations work in Goodreads is based on at least three things (a rough sketch of how they might combine follows the list):

  1. The ratings provided by an individual at the point they sign up – part of the process of getting a Goodreads account is adding genres you’re interested in, and “rating” a (computer-generated) series of books
  2. The algorithms at play, which monitor human patterns of reading and rating and, presumably, the analytics and big data collected on what might encourage a person to add a recommended book to their lists (and perhaps, too, to their shopping basket)
  3. The Amazon connection: the fact that Goodreads isn’t providing its own content, and that it’s owned by Amazon, creates a particular sort of economic link. Not only does it incentivise Goodreads to promote specific commercial content, but it means that Goodreads can influence how and where consumers’ money is spent. Presumably, analytics on how often Goodreads’ recommendations lead to a purchase are fed back into the recommendation system to improve it.
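To make those three signals concrete, here is a toy sketch (in Python, with NumPy) of how a recommender might combine them. It is emphatically not Goodreads’ or Discovereads’ actual system: the ratings matrix, the conversion-rate feedback and the way the two are weighted together are all invented for illustration.

```python
# Toy item-based collaborative filtering plus a purchase-feedback weight.
# All data and the conversion_rate nudge are invented; this is a sketch, not
# Goodreads' or Discovereads' real pipeline.
import numpy as np

# 1. Ratings matrix: rows are users, columns are books, 0 = unrated.
#    A new user's signup ratings simply become another row.
ratings = np.array([
    [5, 4, 0, 0, 1],
    [4, 0, 0, 2, 1],
    [0, 5, 4, 0, 0],
    [1, 0, 5, 4, 0],
], dtype=float)

# 3. Feedback from the Amazon connection: fraction of times recommending each
#    book led to a purchase (entirely made-up numbers).
conversion_rate = np.array([0.10, 0.05, 0.20, 0.15, 0.02])


def item_similarity(r):
    """Cosine similarity between books, computed over their rating columns."""
    norms = np.linalg.norm(r, axis=0)
    norms[norms == 0] = 1.0           # avoid division by zero for unrated books
    normalised = r / norms
    return normalised.T @ normalised


def recommend(user_ratings, r, conversions, top_n=2):
    """2. Score unrated books for one user from the monitored rating patterns,
    then nudge the scores with the purchase-conversion feedback."""
    sim = item_similarity(r)
    scores = sim @ user_ratings          # weight similar books by the user's own ratings
    scores *= (1.0 + conversions)        # small push towards books that convert to sales
    scores[user_ratings > 0] = -np.inf   # don't recommend what's already been rated
    return np.argsort(scores)[::-1][:top_n]


# A new user who rated the first two books at signup.
new_user = np.array([5, 3, 0, 0, 0], dtype=float)
print(recommend(new_user, ratings, conversion_rate))
```

Even in a toy like this, the human and non-human are entangled: the similarity scores only exist because of human ratings, and the purchase feedback only exists because recommendations were shown to, and acted on by, people.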

Knox (2015) suggests that actor-network theory might account for the “layers of activity involved” in the complex, often hidden, and often automated ways in which humans and non-humans interact in the development and deployment of algorithms. One of the principal benefits of this approach (and there are many) is that it inherently assumes that the human and non-human are working together. This is not always self-evident, and the quotation at the top of this post suggests that the two are seen to be in opposition. The incorporation of the Discovereads algorithm, it is implied, will lead to a fundamentally different way of generating recommendations. It signals a move from human-generated recommendations (what your friends are reading) to computer-generated ones, based on this algorithm.

The responses to the blog post written by Chandler suggest that this binary is presupposed by Goodreads users as well. The posts below, for example, clearly espouse the benefits of both ‘routes’ to recommendations. But they suggest that recommendations are either human- or computer-generated: there’s no indication of non-human interference in extant friend-generated recommendations, nor of any human influence in the computer-generated ones. It’s a code-based version of the binary we’ve encountered repeatedly in the past eight weeks: the perception that technological instrumentalism and technological determinism are the only options.

The reality, of course, is that it’s a false binary. It’s not a choice of human or non-human but – as Knox outlines – both are present. The difference, then, to which Chandler refers, the change heralded by the acquisition of Discovereads, isn’t necessarily in the source of the content, but in the perception of that source. It’s in the perceived transparency or hiddenness of the algorithm.

References

Chandler, O. (2011). Recommendations And Discovering Good Reads. Retrieved 11 March 2017, from http://www.goodreads.com/blog/show/271-recommendations-and-discovering-good-reads
Knox, J. (2015). Critical Education and Digital Cultures. In M. Peters (Ed.), Encyclopedia of Educational Philosophy and Theory (pp. 1–6). Singapore: Springer Singapore. https://doi.org/10.1007/978-981-287-532-7_124-1

Lifestreams and (academic) themes – Week 2

This week the lifestream reflects my conscious attempt to grapple with some of the academic and philosophical themes in the block reading. I’ve been trying on a posthumanist hat. It fits a lot better than it did on Monday.

I’ve used the lifestream this week to draw together definitions and, since then, to test my nascent understanding of these definitions. I found some of the secondary readings particularly impenetrable in places, and I think that is reflected in the speculative tone I’ve been adopting all week. A main theme is binaries: my interpretation of Bayne (2015) concluded with an assessment of her opposition to the abbreviation of complex assemblages. I picked up binaries again in a longer post about some of the secondary readings, a sort of meandering through some of the key ideas I’ve been encountering, and a brief sojourn in what this may mean for educational philosophy and pedagogy.

Another recurring theme in what I’ve written and produced this week has been the postness of posthumanism and its necessary relativity to the dominant ideas that preceded it and caused it. There’s an innate sense of the disruptiveness, the fracturing and splintering of ideas and identities, even the combativeness with which posthumanism takes on its humanistic, anthropocentric predecessors. This sits in contrast with the view expressed in a Desert Island Discs interview with the choreographer Wayne McGregor. He argues in favour of a continuum between technology and the body, approbative rather than antagonistic.

So it’s been quite a theoretical week, in many ways, and as we enter Week 3, I’m hoping to switch my attention to concrete examples of the implications of cybercultures for educational practice.


Boundaries, binaries, and posthumanism

In The Manifesto for Cyborgs, Haraway (2007) argues that “we are all chimeras, theorized and fabricated hybrids of machines and organisms” (p. 35). Haraway uses the cyborg as the metaphor for the post-war blurring of boundaries, for the disruption of the categories by which we organise: human and machine, physical and non-physical, etc.

In the excerpt in our reading by Hayles (1999), she takes on some of these ideas, encapsulating them in how she defines the ‘posthuman’. It “privileges informational pattern over material instantiation” (p. 2); it treats consciousness as “an evolutionary upstart” (p. 3), and it considers the body “the original prosthesis”. It’s an even more radical blurring of boundaries, a fracturing of identities and categories we use. The subject is now inescapably hybrid, embodied virtuality:

there are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals (p. 3)

So far, so good. Boundaries blurred, binaries overcome. We are all hybrids. With the philosophy in mind, I tried (and struggled) to connect this to education and pedagogy, and I found a really useful article by Gourlay (2012). Drawing on the work of Haraway and particularly of Hayles, she points to the relationship between the lecture and the VLE as an example of the blurring of virtual and embodied boundaries in education:

the binary is blurred in the context between face-to-face and online engagement, as the context increasingly allows simultaneous engagement with networks of communities and sources of information beyond the physical walls of the university (p. 208)

Gourlay argues that the VLE displaces the lecturer’s biological body, shifting it to the side, while the lecturer’s voice is relativised by the effects of the displacement. The voice becomes one among many as the new setting of the lecture destabilises authority and singularity. What the lecturer says may be questioned, instantly, by the information to which the student has access (although this didn’t feel particularly ‘new’ to me). For the student, the relationship between the lecture and the VLE allows for greater hybridity, which Gourlay describes as “cyborg ontologies” (p. 208).

Gourlay’s focus on voice provides a way to explore sound and the extent to which sound(s) are embodied or not; this reframes, to an extent, Sterne’s chapter in Critical Cyberculture Studies where he bemoans the sidelining of sound studies in cybercultures research.

Yet Sterne’s main point is that we can use sound as a way to trouble any certainty we may have developed in our understanding of what cyberculture ‘is’. He uses sound to upset the status quo, to keep us from complacency, and to act as a barrier to essentialism. He uses it in a way which is relative to the ‘dominant’ approach as it attempts to disrupt it. And that got me thinking about the way we conceive of, and write about, posthumanism. We’re still speaking of posthumanism in relation to humanism; we’re still referring to the boundaries and the binaries even as we theorise overcoming them. We’re still thinking in terms of human and machine, face-to-face or online, virtual and embodied, lecture and VLE. To an extent, this is inescapable: hybridity is relative and subjective. But are there ways in which we can account for this in our educational practice?


References:

Gourlay, L. (2012). Cyborg ontologies and the lecturer’s voice: a posthuman reading of the ‘face-to-face’. Learning, Media and Technology, 37(2), 198–211. https://doi.org/10.1080/17439884.2012.671773
Haraway, D. (2007). A cyborg manifesto. In D. Bell & B. M. Kennedy (Eds.), The cybercultures reader (2nd ed., pp. 34–65). London; New York: Routledge.
Hayles, K. (1999). How we became posthuman: virtual bodies in cybernetics, literature, and informatics. Chicago, Ill.: University of Chicago Press. Retrieved from http://hdl.handle.net/2027/heb.05711
Sterne, J. (2006). The historiography of cyberculture. In D. Silver & A. Massanari (Eds.), Critical cyberculture studies (pp. 17–28). New York: New York University Press.

Blade Runner 2049

Replicants are like any other machine. They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.

A benefit or a hazard, says Deckard. One or the other. Not both, not neither. The binary nature of this didn’t fully hit me at the beginning of the course, but now I see it – now I’m onto you, Blade Runner 2049. But it’s here, and it’s unmistakeable, the perfect microcosmic example of why the critical ideas surrounding digital cultures are so necessary… over-simplistic sci-fi and bad action films may never be the same again.