Power in the Digital Age

talking politics logo

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

synopsis of post

This particular episode looks at the ways in which politics and technology intersect, socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, and machine learning is the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, argues Runciman, the host, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as different from the data-as-oil metaphor, largely because of the perspective taken. It isn’t really a corporate perspective, although I think the data-as-land metaphor does assume that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective on technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows the algorithms to take on a dynamic of their own. Adapting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approach’ because I think that while the academics in this podcast recognise the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe this control needs to be wrested back from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than with posthumanism.

The Top Ed-Tech Trends (Aren’t ‘Tech’)

Every year since 2010, I’ve undertaken a fairly massive project in which I’ve reviewed the previous twelve months’ education and technology news in order to write ten articles covering “the top ed-tech trends”.

from Pocket http://ift.tt/2nwX9OP
via IFTTT

This is a really interesting post from one of my favourite blogs, Hack Education. It’s the rough transcript of a talk given by Audrey Watters, about her work developing the ‘top ed-tech trends’. She talks about the ways in which this cannot be predictive, but is a ‘history’ of technology, and one which is immersed in claims made about technology by the people who are trying to sell it to us. Technology, she says wryly, is always amazing.

I want us to think more critically about all these claims, about the politics, not just the products (perhaps so the next time we’re faced with consultants or salespeople, we can do a better job challenging their claims or advice).

Her argument is a profound one, and one which coheres nicely with the principal themes of EDC. She conceptualises technologies as ideological practices rather than tools: not things you can go out and buy and, in doing so, render yourself ‘ed-tech’, a form of technological solutionism. They have a narrative, and that narrative includes the $2.2 billion spent on technology development in 2016.

Personalization. Platforms. These aren’t simply technological innovations. They are political, social – shaping culture and politics and institutions and individuals in turn.

Watters ends with a plea to us all. When we first encounter new technologies, we should consider not just what they can do, or what our ownership or mastery of the product might say about us, but also their ideologies and their implications.

Really, definitely, absolutely worth reading.

Referencing and digital culture

It’s dissertation season in the Faculty I work in, which means it’s a time of referencing questions a-go-go. Like most things, referencing is a mix of common sense, cobbling something together that looks roughly OK, and being consistent about it. In the past three days I’ve been asked about referencing sourceless, orphan works found in random corners of the internet, live dance performances from the early 20th century, and – in another worlds-collide moment – how to reference an algorithm.

A student was basing a portion of their argument on the results of Google’s autocomplete function – this kind of thing:

google autocomplete in action

My colleague and I were stumped. Who owns this algorithm? Well, Google. But it’s also collectively formed, discursively constituted, mutually produced. How do you reference something that is a temporary, unstable representation?

Pickering (1993, 2002) argues that ‘things’ move between being socially constructed via discourse and existing as real, material entities – a performativity which is “temporally emergent in practice” (1993, p. 565), a kind of mangled practice of human and material agency which emerges in real time. This kind of autocomplete text (if ‘text’ is the right word) reflects this completely.

The act of referencing is one of stabilising, as well as of avoiding plagiarism and practising academic integrity. When referencing online sources which don’t have a DOI or a stable URL, you are artificially fixing the location of something and representing it via text. You include ‘accessed’ dates to secure yourself against future accusations of plagiarism, but also in view of the instability of the digital text. It’s not an ideal process, but it works.

And yet referencing an autocomplete algorithm – indicating its ownership – seems to take this a step further. It leans towards reification, and it imbues the algorithm with a human and material intentionality which isn’t justified. It ‘essentialises’ what is fleeting and performative. So how, then, do you capture something which is, as Pickering puts it, ‘temporally emergent in practice’?

I suppose I should say what we told the student too, though it may not be right. We suggested that it didn’t need to be referenced, because it constituted their ‘own’ research; you wouldn’t reference the ‘act’ of reading, or the technology used to find, access or cite resources. You’d cite someone else’s published ‘version’ of the algorithm’s output, but not your own. This uncovers another area where digital technology shapes, and is shaped by, ‘traditional’ practices and performances.
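Out of curiosity, here is what ‘stabilising’ an autocomplete result might look like in practice: capturing the suggestions at a single moment, together with an accessed timestamp, so the fleeting output becomes a fixed, citable text. This is a minimal sketch, and it relies on the unofficial suggestqueries endpoint, which is undocumented and could change or disappear at any time.

```python
# A sketch of 'stabilising' a temporally emergent text: snapshot the
# autocomplete suggestions for a query, with an accessed timestamp.
# NB: the suggestqueries endpoint is unofficial and undocumented,
# so this is illustrative rather than dependable.
import json
import urllib.request
from datetime import datetime, timezone
from urllib.parse import quote

def snapshot_autocomplete(query: str) -> dict:
    url = ("https://suggestqueries.google.com/complete/search"
           f"?client=firefox&q={quote(query)}")
    with urllib.request.urlopen(url) as response:
        payload = json.loads(response.read().decode("utf-8", "replace"))
    return {
        "query": query,
        "suggestions": payload[1],  # payload[0] just echoes the query
        "accessed": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(snapshot_autocomplete("why is edinburgh"), indent=2))
```

Even this only essentialises a single moment, of course: the same query an hour later, or from a different machine, may return something else entirely. Which is rather the point.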

References

Jackson, A. Y. (2013). Posthumanist data analysis of mangling practices. International Journal of Qualitative Studies in Education, 26(6), 741–748. https://doi.org/10.1080/09518398.2013.788762
Pickering, A. (1993). The Mangle of Practice: Agency and Emergence in the Sociology of Science. American Journal of Sociology, 99(3), 559–589. https://doi.org/10.1086/230316
Pickering, A. (2002). Cybernetics and the Mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413–437. https://doi.org/10.1177/0306312702032003003


The future is algorithms, not code

some code-like algorithms

The current ‘big data’ era is not new. There have been other periods in human civilisation when we have been overwhelmed by data. By looking at these periods we can understand how a shift from discrete to abstract methods demonstrates why the emphasis should be on algorithms, not code.

from Pocket http://ift.tt/2nGVibD
via IFTTT

Lifestream analytics

When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’ normal built-in analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff there: number of visitors, where they’re from, and so on. At the time I assumed it must be a licensing issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.

But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.

I started thinking about the analytics I could easily (i.e. in under five minutes) extract from the lifestream blog, and I was able to work this much out manually, throw the numbers into Excel and create a chart:

My posts per week (so far)

I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:

Tags used (so far)

Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
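For what it’s worth, here’s a rough sketch of how those two manual charts could be automated. It assumes the blog exposes the standard WordPress REST API (which, given how locked down our install seems to be, it may well not), and the blog URL below is a hypothetical stand-in.

```python
# Posts-per-week and tag counts via the standard WordPress REST API.
# A sketch under assumptions: the blog may not expose /wp-json/ at all,
# and the URL below is a hypothetical stand-in.
import json
import urllib.error
import urllib.request
from collections import Counter
from datetime import datetime

BLOG = "https://example-lifestream.blog"  # hypothetical URL

def fetch_all_posts():
    posts, page = [], 1
    while True:
        url = f"{BLOG}/wp-json/wp/v2/posts?per_page=100&page={page}"
        try:
            with urllib.request.urlopen(url) as resp:
                batch = json.loads(resp.read().decode("utf-8"))
        except urllib.error.HTTPError:  # WordPress 400s past the last page
            break
        if not batch:
            break
        posts.extend(batch)
        page += 1
    return posts

posts = fetch_all_posts()

# Posts per ISO week: the Excel chart, automated.
per_week = Counter(
    datetime.fromisoformat(p["date"]).strftime("%G-W%V") for p in posts
)

# Tag frequency. The posts endpoint returns tag IDs, so mapping IDs to
# names would need a second call to /wp/v2/tags?include=<ids>.
tag_counts = Counter(tag_id for p in posts for tag_id in p["tags"])

print(sorted(per_week.items()))
print(tag_counts.most_common(10))
```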

We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, the ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit and what it isn’t: we might improve the tags we’ve used, or categorise our posts, or correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’ because James practically spoke it earlier this week.

This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things that I and a lot of other people mentioned was that it is the quantifiable, not the qualifiable, that gets measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?

The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:

From the course guide

Only one of these, Activity, is quantifiable, and even that isn’t totally about the numbers. The frequency of posts and the range of sources are quantifiable, but the appropriateness of posts isn’t. The number of lifestream summary posts, under Reflection, can be quantified, and the activities mentioned under Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality, not quantity (apart from the few bits about quantity).

So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but a ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.

My first instinct is to say that the learning analytics we seem to have access to at the moment really aren’t up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics are performed can never capture the current state of educational feedback, does that mean that educational feedback will be shaped, or co-constituted, by the technology available? And what would that look like? What are the points of resistance?

Being Human is your problem

How do we create institutional cultures where the digital isn’t amplifying that approach but is instead a place suffused with the messiness, vulnerability and humanity inherent in meaningful learning?

Donna is one of my very favourite people, and I’m sure Dave is excellent too. This lecture/presentation is worth watching. Twice.
from Tumblr http://ift.tt/2nqhyoP
via IFTTT

Trump is erasing gay people like me from American society by keeping us off the census

As a gay man, I literally don’t count in America. Despite previous reports that we would be counted for the first time in history, this week the Trump administration announced that LGBT Americans will not be included in the 2020 census.

from Pocket http://ift.tt/2njLWRP
via IFTTT

I read about this earlier in the week, and when I watched the TED talk on statistics I was reminded of it. There was talk, recently, of LGBT Americans being counted in the 2020 census. Being able to quantify the number of LGBT people would mean that policy would have to take that information into account: if the census demonstrated conclusively that x% of Americans are LGBT, then that would be a weapon for agitating for better rights, better provisions, better services, better everything, really. That plan was shelved this week, and this represents a major setback for the LGBT community in the US.

I think it’s a really clear example of the socio-critical elements of data and algorithmic cultures. If you have an unequal structure to begin with, then the algorithms used to make sense of it may well replicate that inequality. And if you decide the data isn’t necessary to begin with, then there’s no accountability at all.

How to spot a bad statistic

This is a great talk on the use of statistics. Mona Chalabi is a data journalist, and here she outlines three ways of questioning statistics, based on her assessment that “the one way to make numbers more accurate is to have as many people as possible be able to question them.”

The three questions she provides were, I thought, generally quite obvious; as a teacher of information literacy, I found they echo quite substantially the sorts of questions I encourage my students to ask. But she made two points that I thought were really important and relevant to EDC, especially algorithmic cultures.

The first was about overstating certainty: how statistics can be used in a way that makes them describe situations as either black or white, with little middle ground. Sometimes this is a result of how they’re collected in the first place, sometimes it’s how the statistics are communicated, and sometimes it’s how they’re interpreted. I think this is one of the reasons I’m hesitant about learning analytics; its innate tendency towards what can be quantified might lead to an overstatement of certainty in the way data about students is collected, communicated or interpreted. And, as we’ve seen, that data can become predictive, a self-fulfilling prophecy.

The second point that I thought was really interesting was how Mona responds to this situation of certainty. She takes real data sets and turns them into hand-drawn visualisations, so that the imprecision, the uncertainty, can be revealed. She says this is “so that people can see that a human did this, a human found the data and visualised it”. A human did this, and so we anticipate uncertainty. Inherent here is a mistrust of the ability of technology to replicate nuance and complexity, which I think is misguided. But there’s also an underlying assumption about statistics: that a computer is able to hide imprecision in a way that humans cannot; that computer data visualisations are sleek, while human data visualisations are shaky. This is a fascinating conceptualisation of the relationship between humans and technology, and of the ways in which each can be used instrumentally to make up for the weaknesses of the other.
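That sleek/shaky distinction is surprisingly easy to play with in code. matplotlib, for instance, ships with an xkcd mode that renders a perfectly computed chart in a deliberately wobbly, hand-drawn style: machine precision dressed up as human imprecision. A toy sketch, with invented numbers:

```python
# Render the same machine-computed data in a hand-drawn style, so the
# chart itself signals approximation rather than sleek certainty.
# The counts below are invented, purely for illustration.
import matplotlib.pyplot as plt

weeks = list(range(1, 11))
posts = [3, 7, 5, 12, 9, 14, 8, 11, 6, 10]  # made-up weekly counts

with plt.xkcd():  # wobbly lines and a handwriting-style font
    fig, ax = plt.subplots()
    ax.plot(weeks, posts, marker="o")
    ax.set_xlabel("week")
    ax.set_ylabel("posts")
    ax.set_title("posts per week (roughly!)")
    plt.show()
```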

Revealed: what people watch on Netflix where you live in the UK

Screenshot from Gilmore Girls
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits.

from Pocket http://ift.tt/2nTlClj
via IFTTT

Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits. By analysing statistics between October 2016 and this month, the streaming service was able to reveal which parts of the country are more inclined to watch a specific genre compared to others.

So quotes the article above. I know it’s only a bit of silliness – it’s one step away from a Buzzfeed-esque ‘Can we guess where you live based on your favourite Netflix show?’. The worst bit is that there’s a tiny amount of truth to it: I have watched Gilmore Girls AND I live in the South East. I reject the article’s proposal, however, that this implies that I am “pining for love”.

So yes, it’s overly simplistic and makes assumptions (such as that everyone watches Netflix, or that heterogeneity is a result of a postcode lottery); ultimately, it’s a bit of vapid fluff. But it’s also a bit of vapid fluff that exemplifies how far algorithmic cultures are embedded in the media we consume: the data collected about us is now itself entertainment output.


Pinned to Education and Digital Cultures on Pinterest

Just Pinned to Education and Digital Cultures: http://ift.tt/2ogZpOw

This is a photo of my very tiny, very messy desk at home, taken last weekend, just hours after my computer keyboard and trackpad decided to pack in permanently.

It wasn’t a major problem – I already had a Bluetooth mouse and keyboard, and I was able to get an appointment to have the computer fixed this week. But I included this image because this slight interruption in the way that I work felt unsettling. The computer not working as I expected affected the way that I would normally study, and it affected (well, delayed) what I had planned to do over the weekend.

One of the themes of EDC is battling the supposed binary of technological instrumentalism and technological determinism, of proving that it’s all a little more complex and nuanced than that. This was, for me, a reminder (and a pretty annoying one) that my conceptualisations of how technology might be used and practised are not always followed through in my enactment of it.