Tweets: IFTTT, Twitter, and that binary again

It’s an emotional moment. (OK, not really.) The IFTTT strings are cut forever between Twitter and this blog. Between everything and this blog. It’s a good time, I think, to reflect on my use of Twitter and IFTTT throughout this course. I get the impression that the way I’ve used Twitter differs from a lot of my cohort. Earlier today, for example, the other Helen asked why we might’ve used Twitter more than previous course cohorts, and I was interested in this answer, given by Clare:

By comparison, my use of Twitter has been largely terse, laconic and unsustained – though the platform might be partially responsible for at least the first two. Looking back over my lifestream, I can see how rarely I’ve started or entered into conversations on Twitter, how aphoristic my tweets have been. They’ve been purely functional, one-offs to prove that I’m engaging with this or that, that I’m following the social and educational rules of this course.

At the beginning of the course, I wrote a post where I said I thought I’d find it weird to contaminate my social media presence with academic ideas. This turned out to be either a pretty accurate foretelling, or a self-fulfilling prophecy. Perhaps I should’ve set up a new Twitter handle, specifically for this. Or perhaps this would have simply masked my behaviour.

I write this not to excuse the pithiness and infrequency of my tweets, fortuitous though it may be to do that as well. Instead, I write this because reflecting on my use of Twitter is revealing to me the potentially polymorphic, inconstant forms of agency that technology enacts and performs. It’s a kind of unexpected technological determinism, one misaligned with the ‘goals’ of the platform. Twitter might be designed to facilitate communication, but the massive and complex sociotechnical network within which it exists actually worked to silence me.

IFTTT presents a different sort of challenge. It was part of the assessment criteria to use it – to bring in diverse and differently moded content from wherever we are, whatever we’re looking at, and however we’re doing so. We were instructed to be instrumental about it, to use IFTTT in this very ‘black boxed’ sort of way. But of course we didn’t. IFTTT failed to meet the aesthetic standards of many of us, including me. So we’ve let IFTTT do its thing, and then gone into the blog to make it better. It’s instrumentalism, still, but again, kind of misdirected. Maybe we could call it transtechnologism.

What these two (flawed, I’m sure) observations do is underline something fundamental about the themes of this course that I think, until now, I’d missed. Consider the two approaches to technology often implicit in online education, according to Hamilton and Friesen (2013):

the first, which we call “essentialist”, takes technologies to be embodiments of abstract pedagogical principles. Here technologies are depicted as independent forces for the realisation of pedagogical aims, that are intrinsic to them prior to any actual use.

the second, which we call “instrumentalist”, depicts technologies as tools to be interpreted in light of this or that pedagogical framework or principle, and measured against how well they correspond in practice to that framework or principle. Here, technologies are seen as neutral means employed for ends determined independently by their users.

These ideas have permeated this whole course. Don’t fall into these traps, into this lazy binary. And yet there’s nothing here that rules out determinism, essentialism, instrumentalism. Calling out the binary tells us to think critically about the use of technology in education: it doesn’t make the two edges of that binary fundamentally false or impossible. We ought not to make the assumption, but even once we’ve refrained from making it, what we might have assumed could still turn out to be true.

References

Hamilton, E. C., & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2). https://www.cjlt.ca/index.php/cjlt/article/view/26315

Power in the Digital Age


Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:


This particular episode looks at the ways in which politics and technology intersect: socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, while machine learning is the plough, the thing that digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, argues Runciman, the host, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as being different from the data-as-oil one, largely because of the perspective taken. It’s not really taken from a corporate perspective, although I think in the data-as-land metaphor there’s an assumption that we once ‘owned’ our data, or that it was ever conceived of by us as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective on technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adapting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approach’ because I think that while the academics in this podcast recognise the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe there’s a need to wrest back control from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than posthumanism.

Confessions of a distance learning refusenik – linear courses

An occasional blog, pulled together from my research diary for the Teaching and Learning Online Module for the MA: Digital Technologies, Communication and Education at the University of Manchester.

from Pocket http://ift.tt/2oLX7HE
via IFTTT

The post above is written by a colleague and friend of mine, Ange Fitzpatrick. Ange is a student on the Digital Technologies course at the University of Manchester. It is a brutally honest post in which she talks about her engagement with the course she is taking – with its structure, and with the technology through which it is enacted.

The post resonated with me for several reasons. I’m interested in the way that Ange is taught, in comparison with the way that I am, and in the similarities and differences between the two offerings. Empathy is a big thing too – like Ange, I’ve juggled this course with a family (occasionally in crisis, like most families) and a demanding job. I can snatch time here and there during the week, and am usually able to carve out more time at weekends, but it means I’m not always available (or awake enough) for much of the pre-fixed ‘teaching’.

Like Ange, I’ve been an independent learner for a long time; I fear it’s turned me into a really bad student. I like finding my own stuff to read rather than going with what is suggested. I feel as though I don’t need much support (though others may disagree!). I’m neither proud nor ashamed of this, but it does put me at odds – and it makes me feel at odds – with what has been an extremely supportive cohort of students and teachers. I have a laissez-faire attitude to assessment: I’ll do my best, and I do care a little about the marks. But more than anything I’m here to be ‘contaminated’ (to borrow the term of Lewis and Kahn) by ideas that are new to me. I’d rather things got more complicated than simpler.

The reason I really wanted to share this, though, was that I feel that Ange’s post highlights and exemplifies the entanglements of digital and distance education. It reveals the complex assemblages and networks at play in how we engage with course materials, in how we define ‘engagement’. It uncovers the dispersal of activity, the instability, the times when instrumentalist approaches feel like the only option. It epitomises our attempts to stay in control, to centre and recentre ourselves at the nexus of our studying. It underlines the networks: the multi-institutional, political, cultural, familial, social, soteriological networks that combine and collide and co-constitute. It exposes the totalising sociomateriality of experience, “the delicate material and cultural ecologies within which life is situated” (Bayne, 2015, p. 15). And it does so from the perspective of the student.

But it also, I think, emphasises the – I say this tentatively – relative redundancy of these ideas and critical assessments. Recognition of the networks and rhizomes does not provide Ange with a more navigable path through her course. This doesn’t mean that these considerations are not important but it does – for me at least – point to a disjunction between theory and practice.

References

Bayne, S. (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. https://doi.org/10.1080/17439884.2014.915851

With many many thanks to Ange for letting me share her post.


The Top Ed-Tech Trends (Aren’t ‘Tech’)

Every year since 2010, I’ve undertaken a fairly massive project in which I’ve reviewed the previous twelve months’ education and technology news in order to write ten articles covering “the top ed-tech trends.”

from Pocket http://ift.tt/2nwX9OP
via IFTTT

This is a really interesting post from one of my favourite blogs, Hack Education. It’s the rough transcript of a talk given by Audrey Watters, about her work developing the ‘top ed-tech trends’. She talks about the ways in which this cannot be predictive, but is a ‘history’ of technology, and one which is immersed in claims made about technology by the people who are trying to sell it to us. Technology, she says wryly, is always amazing.

I want us to think more critically about all these claims, about the politics, not just the products (perhaps so the next time we’re faced with consultants or salespeople, we can do a better job challenging their claims or advice).

Her argument is a profound one, and one which coheres nicely with the principal themes in EDC. Her conceptualisation of technologies is that they are ideological practices rather than tools – rather than things you can go out and buy and, in doing so, render yourself ‘ed-tech’, a form of technological solutionism. They have a narrative, and that narrative includes the $2.2 billion spent on technology development in 2016.

Personalization. Platforms. These aren’t simply technological innovations. They are political, social – shaping culture and politics and institutions and individuals in turn.

Watters ends with a plea to us all. When we first encounter a new technology, we should consider not just what it can do, or what our ownership or mastery of the product might say about us, but also its ideologies and its implications.

Really, definitely, absolutely worth reading.

The future is algorithms, not code


The current ‘big data’ era is not new. There have been other periods in human civilisation where we have been overwhelmed by data. By looking at these periods we can understand how a shift from discrete to abstract methods demonstrates why the emphasis should be on algorithms, not code.

from Pocket http://ift.tt/2nGVibD
via IFTTT

Tweets

I’ve spent some time this weekend reading a couple of articles to help me to formulate the specific questions I’d like to focus on in the assignment. I was mostly enjoying myself, until I started on an article that elicited the reaction you can see in the tweet above. The phrase in the tweet – “a certain performative, post-human, ethico-epistem-ontology” – is pretty much inaccessible, and this is a real bugbear of mine. Thankfully I’ve encountered it only a few times in this course. It took me a while to figure out what the author was getting at with his ethico-epistem-ontology, and when I did I found that it wasn’t half as fancy or clever as the language used might suggest.

Ideas should challenge, and language should challenge too, but one of the things about good academic writing (obviously something on my mind with the assignment coming up) is the ability to represent and communicate complex, nuanced, difficult ideas in a way that doesn’t throw up a huge great wall. There are times when that huge barrier is instrumental to the argument, I suppose: I remember reading Derrida…*

Yet if the aforementioned ‘challenge’ is located as much in the discrete individual words used as in the premises of the argument (assuming, of course, that the two can be separated), then what does that mean for the locus of academic literacy? And what does it mean for openness? The trend toward open access and open data, despite being fraught with issues around policy, the way technology is implicated, and other things, is generally a positive one. But is a representation of ideas like this even vaguely ‘open’ in anything but a literal sense?

Anyway, this is a total aside, and I’ll bring an end to the rant. Authentic content for the lifestream, I think 🙂

*OK, I mainly looked at the words and panicked internally

Being Human is your problem

How do we create institutional cultures where the digital isn’t amplifying that approach but is instead a place suffused with the messiness, vulnerability and humanity inherent in meaningful learning?

Donna is one of my very favourite people, and I’m sure Dave is excellent too. This lecture/presentation is worth watching. Twice.
from Tumblr http://ift.tt/2nqhyoP
via IFTTT

Trump is erasing gay people like me from American society by keeping us off the census

As a gay man, I literally don’t count in America. Despite previous reports that we would be counted for the first time in history, this week the Trump administration announced that LGBT Americans will not be included in the 2020 census.

from Pocket http://ift.tt/2njLWRP
via IFTTT

I read about this earlier in the week, and when I watched the TED talk on statistics I was reminded of it. There was talk, recently, of LGBT Americans being counted in the 2020 census. Being able to quantify the number of LGBT people would mean that policy would have to take this information into account – if the census demonstrated conclusively that x% of Americans are LGBT, then that would be a weapon for agitating for better rights, better provisions, better services, better everything, really. But the plan to count LGBT Americans was shelved this week, and this represents a major challenge for the LGBT community in the US.

I think it’s a really clear example of the socio-critical elements of data and algorithmic cultures. If you have an unequal structure to begin with, then the algorithms used to make sense of it may well replicate that inequality. And if you decide the data isn’t necessary to begin with, then there’s no accountability at all.