Read read read
April 02, 2017 at 03:22PM
Education and Digital Cultures
Reading an article that includes this phrase: "a certain performative, post-human, ethico-epistem-ontology" #mscedc pic.twitter.com/JyuWwOoxDC
— Helen Murphy (@lemurph) April 1, 2017
I’ve spent some time this weekend reading a couple of articles to help me to formulate the specific questions I’d like to focus on in the assignment. I was mostly enjoying myself, when I started on an article that elicited the reaction you can see in the tweet above. The phrase in the tweet – “a certain performative, post-human, ethico-epistem-ontology” is pretty much inaccessible, and this is a real bugbear of mine. Thankfully I’ve encountered it only a few times in this course. It took me a while to figure out what the author was getting at with his ethico-epistem-ontology, and when I did I found that it wasn’t half as fancy or clever as the language used might suggest.
Ideas should challenge, and language should challenge too, but one of the things about good academic writing (obviously something on my mind with the assignment coming up) is the ability to represent and communicate complex, nuanced, difficult ideas in a way that doesn’t throw up a huge great wall. There are times when that huge barrier is instrumental to the argument, I suppose: I remember reading Derrida…*
Yet if the aforementioned ‘challenge’ is located as much in the individual words used as in the premises of the argument (assuming, of course, that the two can be separated), then what does that mean for the locus of academic literacy? And what does it mean for openness? The trend toward open access and open data, despite being fraught with issues around policy and the way technology is implicated, is generally positive. But is a representation of ideas like this even vaguely ‘open’ in anything but the most literal sense?
Anyway, this is a total aside, and I’ll bring an end to the rant. Authentic content for the lifestream, I think 🙂
*OK, I mainly looked at the words and panicked internally
When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’ normal in-built analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff from there: number of visitors, where they’re from, etc. I think at the time I thought it must be a license issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.
But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.
I started thinking about the analytics I could easily (i.e. in under 5 minutes) extract from the lifestream blog, and I was able to (manually) figure this out, throw the numbers into Excel and create a chart:
I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:
Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
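For what it’s worth, the five-minute tag count is easy enough to reproduce in a few lines of Python against a WordPress export. This is just a sketch of how I might do it, assuming the standard WXR XML export format (the sample data here is made up for illustration):

```python
from collections import Counter
import xml.etree.ElementTree as ET

def tag_counts(wxr_xml: str) -> Counter:
    """Count post_tag frequencies in a WordPress WXR export string."""
    root = ET.fromstring(wxr_xml)
    return Counter(
        cat.text
        for item in root.iter("item")          # each <item> is a post
        for cat in item.findall("category")
        if cat.get("domain") == "post_tag"     # tags, not categories
    )

# Tiny inline stand-in for a real export file
sample = """<rss><channel>
  <item><category domain="post_tag">mscedc</category>
        <category domain="post_tag">algorithms</category></item>
  <item><category domain="post_tag">mscedc</category></item>
</channel></rss>"""

print(tag_counts(sample).most_common(2))
# → [('mscedc', 2), ('algorithms', 1)]
```

Pointed at a real export, `most_common()` gives the “most popular tags” list above without any manual counting.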
We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week:
@HerrSchwindenh_ @philip_downey @helenwalker7 Would prefer it to reflect thinking/engagement at time rather than a revised version #mscedc
— James Lamb (@james858499) March 28, 2017
This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things I and a lot of people mentioned was how it was the quantifiable and not the qualifiable that is measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?
The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:
Only one of these is quantifiable – Activity – and even that isn’t totally about the numbers. The frequency of posts, and the range of sources, are, but the appropriateness of posts isn’t. The number of lifestream summary posts, in Reflection, can be quantified, and the activities mentioned in Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).
So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.
My first instinct is to say that the learning analytics we have access to at the moment really don’t seem up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics can be performed is never able to capture the current state of educational feedback, does that mean that the state of educational feedback will instead be shaped, or co-constituted, by the technology available? And what would that look like? What are the points of resistance?
How do we create institutional cultures where the digital isn’t amplifying that approach but is instead a place suffused with the messiness, vulnerability and humanity inherent in meaningful learning?
Donna is one of my very favourite people, and I’m sure Dave is excellent too. This lecture/presentation is worth watching. Twice.
from Tumblr http://ift.tt/2nqhyoP
via IFTTT
As a gay man, I literally don’t count in America. Despite previous reports that we would be counted for the first time in history, this week the Trump administration announced that LGBT Americans will not be included in the 2020 census.
from Pocket http://ift.tt/2njLWRP
via IFTTT
I read about this earlier in the week, and I was reminded of it when I watched the TED talk on statistics. There was talk, recently, of LGBT Americans being counted in the 2020 census. Being able to quantify the number of LGBT people would mean that policy would have to take this information into account – if the census demonstrated conclusively that x% of Americans are LGBT, that would be a weapon for agitating for better rights, better provisions, better services, better everything, really. The plan to count LGBT Americans was shelved this week, and this represents a major challenge for the LGBT community in the US.
I think it’s a really clear example of the socio-critical elements of data and algorithmic cultures. If you have an unequal structure to begin with, then the algorithms used to make sense of that may well replicate that inequality. And if you assume that the data is not necessary to begin with, then there’s no accountability at all.
This is a great talk on the use of statistics. Mona Chalabi is a data journalist, and here she outlines three ways of questioning statistics, based on her assessment that “the one way to make numbers more accurate is to have as many people as possible be able to question them.”
The three questions she provides were, I thought, generally quite obvious; as a teacher of information literacy they echo quite substantially the sorts of questions I encourage my students to ask. But there were two points that she made that I thought were really important and relevant to EDC, especially algorithmic cultures.
The first was about overstating certainty and how statistics can be used in a way that makes them describe situations as either black or white, with little middle ground. Sometimes this is a result of how they’re collected in the first place, and sometimes it’s how the statistics are communicated, and sometimes it’s in how they’re interpreted. I think this is one of the reasons that I’m hesitant about learning analytics; its innate tendency towards what can be quantified might lead to an overestimation of certainty, either in the way data about students is collected, communicated or interpreted. And, as we’ve seen, that data can become predictive, or a self-fulfilling prophecy.
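A toy example of that overstated certainty (my own, not Chalabi’s): a headline reporting “52% support X” from a poll of 400 people rarely mentions that the honest claim is 52% plus or minus nearly five points. The sketch below uses the standard normal-approximation confidence interval for a sample proportion; the numbers are hypothetical:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """95% confidence interval for a sample proportion
    (normal approximation)."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# A headline might say "52% support X" from a poll of 400 people...
low, high = proportion_ci(208, 400)
print(f"52% +/- {round((high - low) / 2 * 100, 1)} points")
# prints: 52% +/- 4.9 points
```

The point estimate alone looks black and white; the interval makes the grey visible, which is exactly the middle ground that gets lost in collection, communication or interpretation.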
The second point that I thought was really interesting was how Mona was responding to this situation of certainty. She takes real data sets, and turns them into hand-drawn visualisations so that the imprecision, the uncertainty, can be revealed. She says, “so that people can see that a human did this, a human found the data and visualised it”. A human did this, and so we anticipate uncertainty. Inherent here is a mistrust in the ability of technology to replicate nuance and complexity, which I think is misguided. But there’s also an underlying assumption about statistics – that a computer is able to hide the imprecision in a way that humans cannot. That computer data visualisations are sleek, while human data visualisations are shaky. This is a fascinating conceptualisation of the relationship between humans and technology, of the ways in which both humans and technology can be used instrumentally to make up for the weaknesses of the other.
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in country’s viewing habits.
from Pocket http://ift.tt/2nTlClj
via IFTTT
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in country’s viewing habits. By analysing statistics between October 2016 and this month, the streaming service was able to reveal what parts of the country are more inclined to watch a specific genre compared to others.
So quotes the article above. I know it’s only a bit of silliness – it’s one step away from a Buzzfeed-esque ‘Can we guess where you live based on your favourite Netflix show?’. The worst bit is that there’s a tiny amount of truth to it: I have watched Gilmore Girls AND I live in the South East. I reject the article’s proposal, however, that this implies that I am “pining for love”.
So yes, it’s overly simplistic and makes assumptions (such as the one that everyone watches Netflix, or that heterogeneity is a result of a postcode lottery); ultimately, it’s a bit of vapid fluff. But it’s also a bit of vapid fluff that exemplifies how far algorithmic cultures are embedded in the media we consume: the data collected about us is now just entertainment output.
Elon Musk wants to merge the computer with the human brain, build a “neural lace,” create a “direct cortical interface,” whatever that might look like.
from Pocket http://ift.tt/2nqnLSf
via IFTTT
This reminds me of the part about Moravec’s Mind Children in N. Katherine Hayles’ book, How We Became Posthuman (I just read ‘Theorizing Posthumanism’ by Badmington, which refers to it as well). There’s a scenario in Mind Children, writes Hayles, in which Moravec argues that it will soon be possible to download human consciousness into a computer.
How, I asked myself, was it possible for someone of Moravec’s obvious intelligence to believe that mind could be separated from body? Even assuming that such a separation was possible, how could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment? Shocked into awareness, I began to notice he was far from alone. (1999, p. 1)
It appears that Moravec wasn’t wrong about the possibility of technology that could ‘download’ human consciousness, but let’s hope the scientists all get round to reading Hayles’ work on this techno-utopia before the work really starts…
References
Badmington, N. (2003). Theorizing Posthumanism. Cultural Critique, (53), 10–27.
Hayles, N. K. (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.
This is a photo of my very tiny, very messy desk at home, taken last weekend, just hours after my computer keyboard and trackpad decided to pack in permanently.
It wasn’t a major problem – I already had a bluetooth mouse and keyboard, and I was able to get an appointment to get the computer fixed this week. But I included this image because this slight interruption in the way that I work felt unsettling. The computer not working as I expected it to affected the way that I would normally study, and it affected (well, delayed) what I had planned to do over the weekend.
One of the themes of EDC is battling the supposed binary of technological instrumentalism and technological determinism, of proving that it’s all a little more complex and nuanced than that. This was, for me, a reminder (and a pretty annoying one) that my conceptualisations of how technology might be used and practised are not always followed through in my enactment of it.
Looking forward to attending this in a week or two!