The life(stream) pursuit – (final) Week 12 summary

quote from earlier post

This was my summary at the end of Week 1. There’s a rather sweet prescience to the quote above, especially about randomness. Back then, I was apologetic; by comparison, last week I wrote about the uncertainty and variety of lifestream content, about the artificiality of imposing themes onto its heterogeneity.

The use of ‘extension’ back in Week 1 cemented a sense of otherness. It carries an inherent implication of being added on, attached but not part of the original structure. I’m over here, engaging; the lifestream is over there, blinking, nudging. This has changed too. We haven’t quite hybridised, but as the flexibility of the lifestream, and the mobility of its boundaries, have become apparent, so has its centrality as a pedagogical apparatus for representing my confrontation with course themes.

Striated and smooth space come to mind as I consider the shifting role of the lifestream: its smoothness has become more evident to me. It has come to represent a mooring for the contestation of ideas.

a local integration moving from part to part and constituting smooth space in an infinite succession of linkages and changes in direction. It is an absolute that is one with becoming itself, with process (Deleuze & Guattari, 1988, p. 494).

Both of these observations – the heterogeneity of lifestream content and its integration in practice – speak, I think, to the nature of digital culture as a ‘subject’. They speak to its fluidity, its infiltration, its rhizomatic nature. With that they speak to the struggle of forcing it into the recognisable mould of a subject. It’s like shoving a sleeping bag into a briefcase.

Here there is little conflict between content and ‘subject’: the content too is multifaceted, multimodal and diverse. As I read through it, I’m struck by how much of it I have not written, how much is the work of others, passively gathered in, reappropriated. This points to the shifting tectonic plates under our definition of ownership of digital content.

There is, however, evidence of my attempt to engage actively with course themes. In particular, I have tried to layer ideas of digital culture on top of my professional practice. But the vulnerability of that practice is clear to me: it is the earth’s crust, digital culture is the magma, cracking through. Our main defence is critical thought, and it still needs work.

I would be remiss were I to exclude from my final summary ideas around the sociomaterial, which have fundamentally changed the way I think. Critical posthumanism is a constant theme in the later weeks of the lifestream, and I’m taking it into the final assignment. I’m starting to see the lifestream as a representation of the coming together of the discursive and the material. In this conflict between active and passive gathering of content, I’m starting to see myself as decentred – after all, the lifestream has done much of the gathering itself. I’m starting to notice and comprehend its biases and subjectivities: assessment criteria, its public nature, institutional structures, the traditional educational rules to which it must be seen to adhere.

So, lastly, the lifestream is an entanglement: of networks, technologies, algorithms, bots, software, bits of code, institutional structures, texts, communities. These are active, generative and performative (cf. Scott & Orlikowski, 2013); they have qualitatively changed what I have come to understand of digital cultures. I hope that the lifestream reflects this.

References

Bayne, S. (2004). Smoothness and Striation in Digital Learning Spaces. E-Learning and Digital Media, 1(2), 302–316. https://doi.org/10.2304/elea.2004.1.2.6
Deleuze, G., Guattari, F., & Massumi, B. (1988). A thousand plateaus: capitalism and schizophrenia. London: Athlone.
Edwards, R. (2010). The end of lifelong learning: A post-human condition? Studies in the Education of Adults, 42(1), 5–17.
Scott, S. V., & Orlikowski, W. J. (2013). Sociomateriality — taking the wrong turning? A response to Mutch. Information and Organization, 23(2), 77–80. https://doi.org/10.1016/j.infoandorg.2013.02.003

Power in the Digital Age

talking politics logo

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

synopsis of post

This particular episode looks at the ways in which politics and technology intersect, socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, and machine learning as the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, Runciman, the host, argues, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as being different from the data-as-oil one, largely because of the perspective taken. It’s not really taken from a corporate perspective, although I think in the data-as-land metaphor there’s an assumption that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective of technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adopting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approach’ because I think that while the academics in this podcast recognise the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe there’s a need to wrest back control from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than posthumanism.

Elon Musk Isn’t the Only One Trying to Computerize Your Brain

Elon Musk wants to merge the computer with the human brain, build a “neural lace,” create a “direct cortical interface,” whatever that might look like.

from Pocket http://ift.tt/2nqnLSf
via IFTTT

This reminds me of the part about Moravec’s Mind Children in N. Katherine Hayles’ book, How We Became Posthuman (I’ve just read ‘Theorizing Posthumanism’ by Badmington, which refers to it as well). There’s a scenario in Mind Children, writes Hayles, where Moravec argues that it will soon be possible to download human consciousness into a computer.

How, I asked myself, was it possible for someone of Moravec’s obvious intelligence to believe that mind could be separated from body? Even assuming that such a separation was possible, how could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment? Shocked into awareness, I began to notice he was far from alone. (1999, p. 1)

It appears that Moravec may not have been wrong about the technological possibility of ‘downloading’ human consciousness, but let’s hope the scientists all get round to reading Hayles’ work on this techno-utopia before the work really starts…

References

Badmington, N. (2003). Theorizing Posthumanism. Cultural Critique, (53), 10–27.

Hayles, N. K. (1999). How we became posthuman: virtual bodies in cybernetics, literature, and informatics. Chicago, Ill: University of Chicago Press.

Big Data, learning analytics, and posthumanism

I’ve now read a few articles assessing the pros and cons of learning analytics and, regardless of the methodologies employed, there are patterns and themes in what is being found. The benefits include institutional efficiency and institutional performance around financial planning and recruitment; for students, the benefits correspond to insights into learning and informed decision-making. These are balanced against the cons: self-fulfilling prophecies concerning at-risk students, the dangers of student profiling, risks to student privacy and questions around data ownership (Roberts et al., 2016; Lawson et al., 2016). This is often contextualised by socio-critical understandings which converge on notions of power and surveillance; some of the methodologies explicitly attempt to counter presumptions made as a result of this, for example, by bringing in the student voice (Roberts et al., 2016).

In reading these articles and studies, I was particularly interested in ideas around student profiling and student labelling, and how this is perceived (or sometimes spun) as a benefit for students. Arguments against student profiling focus on the oversimplification of student learning, students being labelled on past decisions, student identity being in a necessary state of flux (Mayer-Schoenberger, 2011). One of the things, though, that’s missing in all of this, the absence of which I am feeling keenly, is causation. It strikes me that big data and learning analytics can tell us what is, but not always why.

A similar observation leads Chandler to assert that Big Data is a kind of Bildungsroman of posthumanism (2015). He argues that Big Data is an epistemological revolution:

“displacing the modernist methodological hegemony of causal analysis and theory displacement” (2015, p. 833).

Chandler is not interested in the pros and cons of Big Data so much as the way in which it changes how knowledge is produced, and how we think about knowledge production. This is an extension of ideas espoused by Anderson, who argues that theoretical models are becoming redundant in a world of Big Data (2008). Similarly, Cukier and Mayer-Schoenberger argue that Big Data:

“represents a move away from trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena, and using that to get things done” (2013, p. 32).

Big Data aims not at instrumental knowledge, nor causal reasoning, but the revealing of feedback loops. It’s reflexive. And for Chandler, this represents an entirely new epistemological approach for making sense of the world, gaining insights which are ‘born from the data’, rather than planned in advance.

Chandler is interested in the ways in which Big Data can intersect with ideas in international relations and political governance, and many of his ideas are extremely translatable and relevant to higher education institutions. For example, Chandler argues that Big Data reflects political reality (i.e. what is) but it also transforms it through enabling community self-awareness. It allows reflexive problem-solving on the basis of this self-awareness. Similarly, it may be seen that learning analytics allows students to gain understanding of their learning and their progress, possibly in comparison with their peers.

This sounds great, but Chandler contends that it is necessarily accompanied by a warning: it isn’t particularly empowering for those who need social change:

Big Data can assist with the management of what exists […] but it cannot provide more than technical assistance based upon knowing more about what exists in the here and now. The problem is that without causal assumptions it is not possible to formulate effective strategies and responses to problems of social, economic and environmental threats. Big Data does not empower people to change their circumstances but merely to be more aware of them in order to adapt to them (p. 841-2).

The problem of a lack of understanding of causation is raised in consideration of ‘at risk’ students – a student being judged on a series of data without any (potentially necessary) contextualisation. The focus is on reflexivity and relationality rather than how or why a situation has come about, and what its impact might be. Roberts et al. found that students were concerned about this: that learning analytics might drive inequality by advantaging only some students (2016). The demotivating nature of the EASI system for ‘at risk’ students is also raised by Lawson et al. (2016, p. 961). Too little consideration is given to the causality of ‘at risk’, and perhaps too much to essentialism.

His considerations of Big Data and international relations lead Chandler to assert cogently that:

Big Data articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social (2015, p. 845).

No one here is necessarily excluded, and all those on the periphery are brought in. Rather paradoxically, this appears to be both the culmination of the socio-material project, as well as an indicator of its necessity. Adopting a posthumanist approach to learning analytics may be a helpful critical standpoint, and is definitely something worth exploring further.

References

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Retrieved 19 March 2017, from https://www.wired.com/2008/06/pb-theory/
Chandler, D. (2015). A World without Causation: Big Data and the Coming of Age of Posthumanism. Millennium, 43(3), 833–851. https://doi.org/10.1177/0305829815576817
Cukier, K., & Mayer-Schoenberger, V. (2013). The Rise of Big Data: How It’s Changing the Way We Think About the World. Foreign Affairs, 92(3), 28–40.
Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0
Mayer-Schoenberger, V. (2011). Delete: the virtue of forgetting in the digital age. Princeton, NJ: Princeton University Press.
Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student Attitudes toward Learning Analytics in Higher Education: ‘The Fitbit Version of the Learning World’. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01959

Lifestreams and (academic) themes – Week 2

This week the lifestream reflects my conscious attempt to grapple with some of the academic and philosophical themes in the block reading. I’ve been trying on a posthumanist hat. It fits a lot better than it did on Monday.

I’ve used the lifestream this week to draw together definitions and, since then, to test my nascent understanding of these definitions. I found some of the secondary readings particularly impenetrable in places, and I think that is reflected in the speculative tone I’ve been adopting all week. A main theme is binaries: my interpretation of Bayne (2015) concluded with an assessment of her opposition to the abbreviation of complex assemblages. I picked up binaries again in a longer post about some of the secondary readings, a sort of meandering through some of the key ideas I’ve been encountering, and a brief sojourn in what this may mean for educational philosophy and pedagogy.

Another recurring theme in what I’ve written and produced this week has been the postness of posthumanism and its necessary relativity to the dominant ideas that preceded and caused it. There’s an innate sense of the disruptiveness, the fracturing and splintering of ideas and identities, even the combativeness with which posthumanism takes on its humanistic, anthropocentric predecessors. This sits in contrast with the view expressed in a Desert Island Discs interview with the choreographer Wayne McGregor. He argues in favour of a continuum between technology and the body, approbative rather than antagonistic.

So it’s been quite a theoretical week, in many ways, and as we enter Week 3, I’m hoping to switch my attention to concrete examples of the implications of cybercultures for educational practice.

 

Boundaries, binaries, and posthumanism

In The Manifesto for Cyborgs, Haraway (2007) argues that “we are all chimeras, theorized and fabricated hybrids of machines and organisms” (p. 35). Haraway uses the cyborg as the metaphor for the post-war blurring of boundaries, for the disruption of the categories by which we organise: human and machine, physical and non-physical, etc.

In the excerpt in our reading by Hayles (1999), she takes on some of these ideas, encapsulating them in how she defines the ‘posthuman’. It “privileges informational pattern over material instantiation” (p. 2); it treats consciousness as “an evolutionary upstart” (p. 3), and it considers the body “the original prosthesis” (p. 3). It’s an even more radical blurring of boundaries, a fracturing of the identities and categories we use. The subject is now inescapably hybrid, embodied virtuality:

there are no essential differences or absolute demarcations between bodily existence and computer simulation, cybernetic mechanism and biological organism, robot teleology and human goals (p. 3)

So far, so good. Boundaries blurred, binaries overcome. We are all hybrids. With the philosophy in mind, I tried (and struggled) to connect this to education and pedagogy, and I found a really useful article by Gourlay (2012). Drawing on the work of Haraway and particularly of Hayles, she points to the relationship between the lecture and the VLE as an example of the blurring of virtual and embodied boundaries in education:

the binary is blurred in the context between face-to-face and online engagement, as the context increasingly allows simultaneous engagement with networks of communities and sources of information beyond the physical walls of the university (p. 208)

Gourlay argues that the VLE displaces the lecturer’s biological body, shifting it to the side, while the lecturer’s voice is relativised by the effects of the displacement. The voice becomes one among many as the new setting of the lecture destabilises authority and singularity. What the lecturer says may be questioned, instantly, by the information to which the student has access (although this didn’t feel particularly ‘new’ to me). For the student, the relationship between the lecture and the VLE allows for greater hybridity, which Gourlay describes as “cyborg ontologies” (p. 208).

Gourlay’s focus on voice provides a way to explore sound and the extent to which sound(s) are embodied or not; this reframes, to an extent, Sterne’s chapter in Critical Cyberculture Studies, where he bemoans the sidelining of sound studies in cybercultures research.

Yet Sterne’s main point is that we can use sound as a way to trouble any certainty we may have developed in our understanding of what cyberculture ‘is’. One of the ways in which he uses sound is as a way to upset the status quo, to keep us from complacency, and as a barrier to essentialism. He uses it in a way which is relative to the ‘dominant’ approach as it attempts to disrupt it. And that got me thinking about the way we conceive of, and write about, posthumanism. We’re still speaking of posthumanism in relation to humanism; we’re still referring to the boundaries and the binaries even as we theorise overcoming them. We’re still thinking in terms of human and machine, face-to-face or online, virtual and embodied, lecture and VLE. To an extent, this is inescapable: hybridity is relative and subjective. But are there ways in which we can account for this in our educational practice?

 

References:

Gourlay, L. (2012). Cyborg ontologies and the lecturer’s voice: a posthuman reading of the ‘face-to-face’. Learning, Media and Technology, 37(2), 198–211. https://doi.org/10.1080/17439884.2012.671773
Haraway, D. (2007). A cyborg manifesto. In D. Bell & B. M. Kennedy (Eds.), The cybercultures reader (2nd ed., pp. 34–65). London; New York: Routledge.
Hayles, K. (1999). How we became posthuman: virtual bodies in cybernetics, literature, and informatics. Chicago, Ill.: University of Chicago Press. Retrieved from http://hdl.handle.net/2027/heb.05711
Sterne, J. (2006). The historiography of cyberculture. In D. Silver & A. Massanari (Eds.), Critical cyberculture studies (pp. 17–28). New York: New York University Press.

Something from Scannable

Scannable Document

I usually try to make a list of the definitions of any isms that I come across in reading, and I generally aim to do this by hand, rather than on the computer. Because I write by hand much more slowly than I type, I think more carefully about the words that I use, and somehow that cements the definitions in my head a bit more.

So a few words that I’ve learned this week and last:

 

Reference:

Miller, V. (2011). Understanding digital culture. London; Thousand Oaks, Calif.: SAGE Publications.

 

Tags: Scannable
January 28, 2017 at 12:45PM

(Confusing) Tweets and posthumanism

 

Source: @lemurph January 25, 2017 at 04:24PM

This is the quote I’m referring to in my tweet:

Technology is only a tool if it can be used properly to inspire a student – Anthony Salcito, vice-president of Worldwide Education at Microsoft

embedded tweet

It’s a weird set of words to put together – it’s ambiguous, and it’s taken me a few goes of reading through it to understand what it means. (I’m still not sure I do.) But if I were a proper critical posthumanist, what would I make of it?

On one hand, the technology is seen as exclusively material: it’s even further removed from being a ‘tool’, because it’s only a ‘tool’ if it meets certain conditions. So, not only does it require a separation of the material/technological and the social, but its status is also dependent upon its being ‘used’ by humans in a certain way. Ergo: instrumentalist technology.

On the other hand, the technology has a ‘proper’ use – there is a way to use it properly, and if we humans are cognisant of this and able to use it properly, it will ‘inspire’ our students. Ergo: determinist technology.

I’m also troubled by the use of this word ‘inspire’ – it’s so subjective, it privileges the human, it’s anthropocentric, and it’s difficult to see how it might escape a value judgment about what ‘learning’ is.

So technology-enhanced inspiration? Technology-inspired learning? No, thank you!

More human than human

Image of Blade Runner DVD
The book and the film

Last night I watched Blade Runner for the first time in about 15 years, and I’ve recently read the book it’s based on – Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968). There has been a lot of research into the posthuman, postmodern side of Blade Runner and into the epistemological questions that emerge as a result of it. But, informed by the first of our core readings – Miller’s ‘The Body and Information Technology’ – I wanted to focus on what Blade Runner tells us (or doesn’t tell us) about what it means to be human. (This post contains spoilers.)

Blade Runner (1982), directed by Ridley Scott, is set in a post-war, post-industrial city, decaying and toxic, inhabited by humans and replicants (bio-robotic androids). The replicants are incredibly sophisticated; it’s impossible to tell them apart from humans on sight. And so we have this question: what makes humans human? What is it to be human? Rachael, the replicant with whom Deckard, the eponymous blade runner, falls in love, can’t tell herself whether she is human or android: this is seen as the victory of the project.

The Nexus-6 android types, Rick reflected, surpassed several classes of human specials in terms of intelligence.

The way that the hunters tell humans and androids apart is by using the Voight-Kampff test, which assesses empathy:

Empathy, evidently, existed only within the human community, whereas intelligence to some degree could be found throughout every phylum and order including the arachnida.

But within Blade Runner there’s even a question mark over whether this works. The test looks for physical signs of empathy – pupil dilation, etc. – rather than feelings. It’s just performance, ultimately, one which technology is perfectly capable of recreating. It’s not necessarily anything to do with feeling. And the replicants are seen – on sight – to show truer emotion than the alleged human Deckard: he is conspicuously emotionally distant, while some of the replicants, Roy and Pris particularly, show emotion.

Bertek (2014) links this ultimate inability to tell humans and replicants apart to Haraway’s Cyborg Manifesto. There’s been a reversal, Haraway says, and the technology is lively while the humans are inert (p. 194). This appears to be the case in Blade Runner – Kellner et al. provide several examples: Roy, the replicant, longs to be human, while Deckard increasingly sympathises with replicants; the replicant revolt is identified positively as a slave revolt.

So is Blade Runner posthumanist? Humans and machines are intricately connected in this post-industrial city, and there are few essential differences between them. For Lacey, it can’t ever be posthumanist, because it’s mainstream cinema, too connected to the bourgeoisie, to consumerism and to capitalism:

Science fiction remains the genre most able to deal with the posthuman, but whether it does so depends upon the institutional context in which films are produced (p. 198).

But it’s empathy which is foregrounded as the thing that makes us human, with solidarity with others at the core of humanity. After a day of marching in London with the Women’s March, this is ringing so true with me right now. I would be so interested to hear what the rest of you think.

Women's March, London
20th January 2017

References

Bertek, T. (2014). The Authenticity of the Replica: A Post-Human Reading of Blade Runner. [Sic] – a Journal of Literature, Culture and Literary Translation, (1.5). https://doi.org/10.15291/sic/1.5.lc.2
Brooker, W. (Ed.). (2012). The Blade Runner experience: the legacy of a science fiction classic. New York: Columbia University Press.
Bruno, G. (1987). Ramble City: Postmodernism and ‘Blade Runner’. October, 41, 61. https://doi.org/10.2307/778330
Dick, P. K. (1999). Do Androids Dream of Electric Sheep? London: Millennium.
Haraway, D. (1991). Simians, cyborgs, and women: the reinvention of nature. New York: Routledge.
Kellner, D., Leibowitz, F., & Ryan, M. (1984). Blade Runner: A Diagnostic Critique. Jump Cut, 29, 6–8.
Kuhn, A. (Ed.). (1999). Alien zone II: the spaces of science-fiction cinema. London; New York: Verso.
Lacey, N. (2012). ‘Postmodern Romance: the impossibility of decentring the self’. In W. Brooker (Ed.), The Blade Runner experience (pp. 190–200). New York: Columbia University Press.
Miller, V. (2011). Understanding digital culture. London; Thousand Oaks, Calif.: SAGE Publications.
Scott, R. (Director). (2007). Blade runner: the final cut [Film]. Warner Home Video.
Telotte, J. P. (2001). Science fiction film. Cambridge; New York: Cambridge University Press.