Tweets: IFTTT, Twitter, and that binary again

It’s an emotional moment. (OK, not really.) The IFTTT strings are cut forever between Twitter and this blog. Between everything and this blog. It’s a good time, I think, to reflect on my use of Twitter and IFTTT throughout this course. I get the impression that the way I’ve used Twitter differs from a lot of my cohort. Earlier today, for example, the other Helen asked why we might’ve used Twitter more than previous course cohorts, and I was interested in this answer, given by Clare:

By comparison, my use of Twitter has been largely terse, laconic and unsustained – though the platform might be partially responsible for at least the first two. Looking back over my lifestream, I can see how rarely I’ve started or entered into conversations on Twitter, how aphoristic my tweets have been. They’ve been purely functional, one-offs to prove that I’m engaging with this or that, that I’m following the social and educational rules of this course.

At the beginning of the course, I wrote a post where I said I thought I’d find it weird to contaminate my social media presence with academic ideas. This turned out to be either a pretty accurate foretelling, or a self-fulfilling prophecy. Perhaps I should’ve set up a new Twitter handle, specifically for this. Or perhaps this would have simply masked my behaviour.

I write this not to excuse the pithiness and infrequency of my tweets, convenient though it would be to do that as well. Instead, I write it because reflecting on my use of Twitter is revealing to me the polymorphic, inconstant forms of agency that technology can enact and perform. It’s a kind of unexpected technological determinism, one which is misaligned with the ‘goals’ of the platform. Twitter might be designed to enhance communication, but the massive and complex sociotechnical network within which it exists actually worked to silence me.

IFTTT presents a different sort of challenge. It was part of the assessment criteria to use it – to bring in diverse and differently moded content from wherever we are, whatever we’re looking at, and however we’re doing so. We were instructed to be instrumental about it, to use IFTTT in this very ‘black boxed’ sort of way. But of course we didn’t. IFTTT failed to meet the aesthetic standards of many of us, including me. So we’ve let IFTTT do its thing, and then gone into the blog to make it better. It’s instrumentalism, still, but again, kind of misdirected. Maybe we could call it transtechnologism.

What these two (flawed, I’m sure) observations do is to underline something fundamental about the themes of this course that I think, until now, I’d missed. Consider the two approaches to technology often implicit in online education, according to Hamilton and Friesen (2013):

the first, which we call “essentialist”, takes technologies to be embodiments of abstract pedagogical principles. Here technologies are depicted as independent forces for the realisation of pedagogical aims, that are intrinsic to them prior to any actual use.

the second, which we call “instrumentalist”, depicts technologies as tools to be interpreted in light of this or that pedagogical framework or principle, and measured against how well they correspond in practice to that framework or principle. Here, technologies are seen as neutral means employed for ends determined independently by their users.

These ideas have permeated this whole course. Don’t fall into these traps, into this lazy binary. And yet there’s nothing here that rules out determinism, essentialism, instrumentalism. Calling out the binary tells us to think critically about the use of technology in education: it doesn’t make the two edges of that binary fundamentally false or impossible. We ought not to start from the assumption, but once we have avoided it, what we might have assumed could still turn out to be true.

References

Hamilton, E. C., & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2), https://www.cjlt.ca/index.php/cjlt/article/view/26315

Power in the Digital Age

[Image: Talking Politics podcast logo]

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

[Image: synopsis of the podcast episode]

This particular episode looks at the ways in which politics and technology intersect, socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, and machine learning is the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, argues Runciman, the host, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as different from the data-as-oil one, largely because of the perspective taken. It isn’t really framed from a corporate perspective, although I think the data-as-land metaphor assumes that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective on technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adapting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approaching’ because I think that while the academics in this podcast are recognising the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe there’s a need to wrest back this control from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than posthumanism.

Confessions of a distance learning refusenik – linear courses

An occasional blog, pulled together from my research diary for the Teaching and Learning Online Module for the MA: Digital Technologies, Communication and Education at the University of Manchester.

from Pocket http://ift.tt/2oLX7HE
via IFTTT

The post above is written by a colleague and friend of mine, Ange Fitzpatrick. Ange is a student on the Digital Technologies course at the University of Manchester. It is a brutally honest post about the ways in which she engages with the course she is taking, and in it she talks about her engagement with the course structure, and the technology through which it is enacted.

The post resonated with me for several reasons. I’m interested in the way that Ange is taught, in comparison with the way that I am, in the similarities and differences between the two offerings. Empathy is a big thing too – like Ange, I’ve juggled this course with a family (occasionally in crisis, like most families) and a demanding job. I can snatch time here and there during the week, and am usually able to carve out more time at weekends, but it means I’m not always available (or awake enough) for much of the pre-fixed ‘teaching’.

Like Ange, I’ve been an independent learner for a long time; I fear it’s turned me into a really bad student. I like finding my own stuff to read rather than going with what is suggested. I feel as though I don’t need much support (though others may disagree!). I’m neither proud nor ashamed of this, but it does put me at odds – and it makes me feel at odds – with what has been an extremely supportive cohort of students and teachers. I have a laissez-faire attitude to assessment: I’ll do my best, and I do care a little about the marks. But more than anything I’m here to be ‘contaminated’ (to borrow the term from Lewis and Kahn) by ideas that are new to me. I’d rather things got more complicated than simpler.

The reason I really wanted to share this, though, was that I feel that Ange’s post highlights and exemplifies the entanglements of digital and distance education. It reveals the complex assemblages and networks at play in how we engage with course materials, in how we define ‘engagement’. It uncovers the dispersal of activity, the instability, the times when instrumentalist approaches feel like the only option. It epitomises our attempts to stay in control, to centre and recentre ourselves at the nexus of our studying. It underlines the networks: the multi-institutional, political, cultural, familial, social, soteriological networks that combine and collide and co-constitute. It exposes the totalising sociomateriality of experience, “the delicate material and cultural ecologies within which life is situated” (Bayne, 2015, p. 15). And it does so from the perspective of the student.

But it also, I think, emphasises the – I say this tentatively – relative redundancy of these ideas and critical assessments. Recognition of the networks and rhizomes does not provide Ange with a more navigable path through her course. This doesn’t mean that these considerations are not important, but it does – for me at least – point to a disjunction between theory and practice.

References

Bayne, S. (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. https://doi.org/10.1080/17439884.2014.915851

With many many thanks to Ange for letting me share her post.


The Top Ed-Tech Trends (Aren’t ‘Tech’)

Every year since 2010, I’ve undertaken a fairly massive project in which I’ve reviewed the previous twelve months’ education and technology news in order to write ten articles covering “the top ed-tech trends”.

from Pocket http://ift.tt/2nwX9OP
via IFTTT

This is a really interesting post from one of my favourite blogs, Hack Education. It’s the rough transcript of a talk given by Audrey Watters, about her work developing the ‘top ed-tech trends’. She talks about the ways in which this cannot be predictive, but is a ‘history’ of technology, and one which is immersed in claims made about technology by the people who are trying to sell it to us. Technology, she says wryly, is always amazing.

I want us to think more critically about all these claims, about the politics, not just the products (perhaps so the next time we’re faced with consultants or salespeople, we can do a better job challenging their claims or advice).

Her argument is a profound one, and one which coheres nicely with the principal themes in EDC. She conceptualises technologies as ideological practices rather than tools – rather than things you can go out and buy and, in doing so, render yourself ‘ed-tech’: a form of technological solutionism. They have a narrative, and that narrative includes the $2.2 billion spent on technology development in 2016.

Personalization. Platforms. These aren’t simply technological innovations. They are political, social – shaping culture and politics and institutions and individuals in turn.

Watters ends with a plea to us all: when we first encounter new technologies, consider not just what they can do, or what our ownership or mastery of the product might say about us, but also their ideologies and their implications.

Really, definitely, absolutely worth reading.

Pinned to Education and Digital Cultures on Pinterest

Just Pinned to Education and Digital Cultures: http://ift.tt/2o5xkZW
I’ve been trying to find something that reflects roughly how I’m feeling about the assignment/artefact/essay – it’s hard to know what to call it. I found the image above on Pinterest, and a reverse image search led me to this article about the artist, Maurizio Anzeri. He has embroidered existing old photos, using coloured skeins to criss-cross and mask the faces.
That slightly muddled yet totally vibrant feeling before writing or creating an argument is familiar to me, but it never stops being destabilising. The goal, I guess, is to untangle the skeins, and organise the colours…

The future is algorithms, not code

[Image: some code-like algorithms]

The current ‘big data’ era is not new. There have been other periods in human civilisation where we have been overwhelmed by data. By looking at these periods we can understand how a shift from discrete to abstract methods demonstrates why the emphasis should be on algorithms not code.

from Pocket http://ift.tt/2nGVibD
via IFTTT

Tweets

I’ve spent some time this weekend reading a couple of articles to help me to formulate the specific questions I’d like to focus on in the assignment. I was mostly enjoying myself, when I started on an article that elicited the reaction you can see in the tweet above. The phrase in the tweet – “a certain performative, post-human, ethico-epistem-ontology” – is pretty much inaccessible, and this is a real bugbear of mine. Thankfully I’ve encountered it only a few times in this course. It took me a while to figure out what the author was getting at with his ethico-epistem-ontology, and when I did I found that it wasn’t half as fancy or clever as the language used might suggest.

Ideas should challenge, and language should challenge too, but one of the things about good academic writing (obviously something on my mind with the assignment coming up) is the ability to represent and communicate complex, nuanced, difficult ideas in a way that doesn’t throw up a huge great wall. There are times when that huge barrier is instrumental to the argument, I suppose: I remember reading Derrida…*

Yet, by and large, if the aforementioned ‘challenge’ is located as much in the discrete, individual words used as in the premises of the argument (assuming, of course, that the two can be separated), then what does that mean for the locus of academic literacy? And what does it mean for openness? The trend toward open access and open data, despite being fraught with issues around policy, the way technology is implicated, and other things, is generally a positive one. But is a representation of ideas like this even vaguely ‘open’ in anything but a literal sense?

Anyway, this is a total aside, and I’ll bring the rant to an end. Authentic content for the lifestream, I think 🙂

*OK, I mainly looked at the words and panicked internally

Lifestream analytics

When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’s normal built-in analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff there: numbers of visitors, where they’re from, and so on. I think at the time I assumed it must be a licence issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.

But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.

I started thinking about the analytics I could easily (i.e. in under five minutes) extract from the lifestream blog, and I was able to figure this out manually, throw the numbers into Excel and create a chart:

[Chart: my posts per week (so far)]

I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:

[Chart: tags used (so far)]

Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
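Out of curiosity, the manual counting could probably have been scripted rather than done by hand. Below is a minimal sketch of the sort of thing I mean, assuming a standard WordPress export (WXR) file; the filename and the exact wp: namespace URI are assumptions that would need checking against whatever the university’s WordPress install actually produces.

```python
# A rough sketch of the counting I did by hand, assuming a standard WordPress
# export (WXR) file. The filename and the wp: namespace URI are assumptions.
import xml.etree.ElementTree as ET
from collections import Counter
from datetime import datetime

WP_NS = "http://wordpress.org/export/1.2/"  # check the version in your own export

tree = ET.parse("lifestream-export.xml")    # hypothetical export filename
posts_per_week = Counter()
tag_counts = Counter()

for item in tree.getroot().iter("item"):
    # Posts per ISO week (fine for a single-semester blog)
    date_el = item.find(f"{{{WP_NS}}}post_date")
    if date_el is not None and date_el.text:
        posted = datetime.strptime(date_el.text, "%Y-%m-%d %H:%M:%S")
        posts_per_week[posted.isocalendar()[1]] += 1
    # Tag frequencies: WXR marks tags as <category domain="post_tag">
    for cat in item.findall("category"):
        if cat.get("domain") == "post_tag" and cat.text:
            tag_counts[cat.text] += 1

print("Posts per ISO week:", dict(sorted(posts_per_week.items())))
print("Most-used tags:", tag_counts.most_common(10))
```

Nothing clever, in other words: the same posts-per-week and tag counts as above, just without the Excel step.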

We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week:

This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things that I and a lot of other people mentioned was how it was the quantifiable, and not the qualifiable, that was measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?

The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:

[Image: extract from the course guide]

Only one of these is quantifiable – Activity – and even that isn’t totally about the numbers. The frequency of posts, and the range of sources, are, but the appropriateness of posts isn’t. The number of lifestream summary posts, in Reflection, can be quantified, and the activities mentioned in Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).

So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.

My first instinct is to say that the learning analytics we seem to have access to at the moment really don’t seem to be up to the job, and to predict that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics are performed can never fully capture the current state of educational feedback, does that mean that the state of educational feedback will instead be shaped or co-constituted by the technology available? And what does that look like? What are the points of resistance?