Tweets: IFTTT, Twitter, and that binary again

It’s an emotional moment. (OK, not really.) The IFTTT strings are cut forever between Twitter and this blog. Between everything and this blog. It’s a good time, I think, to reflect on my use of Twitter and IFTTT throughout this course. I get the impression that the way I’ve used Twitter differs from a lot of my cohort. Earlier today, for example, the other Helen asked why we might’ve used Twitter more than previous course cohorts, and I was interested in this answer, given by Clare:

By comparison, my use of Twitter has been largely terse, laconic and unsustained – though the platform might be partially responsible for at least the first two. Looking back over my lifestream, I can see how rarely I’ve started or entered into conversations on Twitter, how aphoristic my tweets have been. They’ve been purely functional, one-offs to prove that I’m engaging with this or that, that I’m following the social and educational rules of this course.

At the beginning of the course, I wrote a post where I said I thought I’d find it weird to contaminate my social media presence with academic ideas. This turned out to be either a pretty accurate foretelling, or a self-fulfilling prophecy. Perhaps I should’ve set up a new Twitter handle, specifically for this. Or perhaps this would have simply masked my behaviour.

I write this not to excuse the pithiness and infrequency of my tweets, fortuitous though it may be to do that as well. Instead, I write this because reflecting on my use of Twitter is revealing the potentially polymorphic, inconstant forms of agency that technology enacts and performs. It’s a kind of unexpected technological determinism, one which is misaligned with the ‘goals’ of the platform. Twitter might be designed to ameliorate communication, but the massive and complex sociotechnical network within which it exists actually worked to silence me.

IFTTT presents a different sort of challenge. It was part of the assessment criteria to use it – to bring in diverse and differently moded content from wherever we are, whatever we’re looking at, and however we’re doing so. We were instructed to be instrumental about it, to use IFTTT in this very ‘black boxed’ sort of way. But of course we didn’t. IFTTT failed to meet the aesthetic standards of many of us, including me. So we’ve let IFTTT do its thing, and then gone into the blog to make it better. It’s instrumentalism, still, but again, kind of misdirected. Maybe we could call it transtechnologism.

What these two (flawed, I’m sure) observations do is to underline something fundamental about the themes of this course that I think, until now, I’d missed. Consider the two approaches to technology often implicit in online education, according to Hamilton and Friesen (2013):

the first, which we call “essentialist”, takes technologies to be embodiments of abstract pedagogical principles. Here technologies are depicted as independent forces for the realisation of pedagogical aims, that are intrinsic to them prior to any actual use.

the second, which we call “instrumentalist”, depicts technologies as tools to be interpreted in light of this or that pedagogical framework or principle, and measured against how well they correspond in practice to that framework or principle. Here, technologies are seen as neutral means employed for ends determined independently by their users.

These ideas have permeated this whole course. Don’t fall into these traps, into this lazy binary. And yet there’s nothing here that rules out determinism, essentialism or instrumentalism. Calling out the binary tells us to think critically about the use of technology in education: it doesn’t make the two edges of that binary fundamentally false or impossible. We ought not to make the assumption, but having set it aside, what we might have assumed could still turn out to be true.

References

Hamilton, E. C., & Friesen, N. (2013). Online Education: A Science and Technology Studies Perspective. Canadian Journal of Learning and Technology, 39(2), https://www.cjlt.ca/index.php/cjlt/article/view/26315

Power in the Digital Age

[Image: Talking Politics logo]

Corbyn! Trump! Brexit! Politics has never been more unpredictable, more alarming or more interesting. TALKING POLITICS is the podcast that will try to make sense of it all.

from Pocket http://ift.tt/2o76yyJ
via IFTTT

Another one of my favourite podcasts, but this time it’s totally relevant to this course. Look at the synopsis for it:

[Image: synopsis of the podcast episode]

This particular episode looks at the ways in which politics and technology intersect, socio-critical and socio-technical issues around power and surveillance, the dominance of companies, and the impact of the general political outlook of the technologically powerful.

There are two things that I think are really relevant to the themes of the algorithmic cultures block. The first is about data. Data is described as being like ‘the land […], what we live on’, and machine learning is the plough: it’s what digs up the land. What we’ve done, they argue, is to give the land to the people who own the ploughs. This, Runciman, the host, argues, is not capitalism but feudalism.

I’m paraphrasing the metaphor, so I may have missed a nuance or two. It strikes me as being different from the data-as-oil one, largely because of the perspective taken. It’s not really taken from a corporate perspective, although I think in the data-as-land metaphor there’s an assumption that we once ‘owned’ our data, or that we ever conceived of it as our intellectual property. I have the impression that Joni Mitchell might have been right – don’t it always seem to go that you don’t know what you’ve got ’til it’s gone – and that many of us really didn’t think about it much before.

The second point is about algorithms, where the host and one of his guests (whose name I missed, sorry) gently approach a critical posthumanist perspective of technology and algorithms without ever acknowledging it. Machine learning algorithms have agency – polymorphous, mobile agency – which may be based on simulation but is nonetheless real. The people who currently control these algorithms, it is argued, are losing control, as the networked society allows them to take on a dynamic of their own. Adopting and paraphrasing the Thomas theorem, it is argued that:

If a machine defines a situation as real, it is real in its consequences.

I say ‘gently approaching’ because I think that while the academics in this podcast are recognising the agency and intentionality of non-human actants – or algorithms – there’s still a sense that they believe there’s a need to wrest back this control from them. There’s still an anthropocentrism in their analysis which aligns more closely with humanism than posthumanism.

Confessions of a distance learning refusenik – linear courses

An occasional blog, pulled together from my research diary for the Teaching and Learning Online Module for the MA: Digital Technologies, Communication and Education at the University of Manchester.

from Pocket http://ift.tt/2oLX7HE
via IFTTT

The post above is written by a colleague and friend of mine, Ange Fitzpatrick. Ange is a student on the Digital Technologies course at the University of Manchester. It is a brutally honest post about the ways in which she engages with the course she is taking: with its structure, and with the technology through which it is enacted.

The post resonated with me for several reasons. I’m interested in the way that Ange is taught, in comparison with the way that I am, in the similarities and differences between the two offerings. Empathy is a big thing too – like Ange, I’ve juggled this course with a family (occasionally in crisis, like most families) and a demanding job. I can snatch time here and there during the week, and am usually able to carve out more time at weekends, but it means I’m not always available (or awake enough) for much of the pre-fixed ‘teaching’.

Like Ange, I’ve been an independent learner for a long time; I fear it’s turned me into a really bad student. I like finding my own stuff to read rather than going with what is suggested. I feel as though I don’t need much support (though others may disagree!). I’m neither proud nor ashamed of this, but it does put me at odds – and it makes me feel at odds – with what has been an extremely supportive cohort of students and teachers. I have a laissez-faire attitude to assessment: I’ll do my best, and I do care a little about the marks. But more than anything I’m here to be ‘contaminated’ (to borrow the term of Lewis and Kahn) by ideas that are new to me. I’d rather things got more complicated than simpler.

The reason I really wanted to share this, though, was that I feel that Ange’s post highlights and exemplifies the entanglements of digital and distance education. It reveals the complex assemblages and networks at play in how we engage with course materials, in how we define ‘engagement’. It uncovers the dispersal of activity, the instability, the times when instrumentalist approaches feel like the only option. It epitomises our attempts to stay in control, to centre and recentre ourselves at the nexus of our studying. It underlines the networks: the multi-institutional, political, cultural, familial, social, soteriological networks that combine and collide and co-constitute. It exposes the totalising sociomateriality of experience, “the delicate material and cultural ecologies within which life is situated” (Bayne, 2015, p. 15). And it does so from the perspective of the student.

But it also, I think, emphasises the – I say this tentatively – relative redundancy of these ideas and critical assessments. Recognition of the networks and rhizomes does not provide Ange with a more navigable path through her course. This doesn’t mean that these considerations are not important but it does – for me at least – point to a disjunction between theory and practice.

References

Bayne, S. (2015). What’s the matter with ‘technology-enhanced learning’? Learning, Media and Technology, 40(1), 5–20. https://doi.org/10.1080/17439884.2014.915851

With many many thanks to Ange for letting me share her post.


Referencing and digital culture

It’s dissertation season in the Faculty I work in, which means it’s a time of referencing questions a-go-go. Like most things, referencing is a mix of common sense, cobbling something together that looks roughly OK, and being consistent about it. In the past three days I’ve been asked about referencing sourceless, orphan works found in random bits of the internet, live dance performances from the early 20th century, and – in another worlds collide moment – how to reference an algorithm.

A student was basing a portion of their argument on the results of Google’s autocomplete function – this kind of thing:

[Image: Google autocomplete in action]

My colleague and I were stumped. Who owns this algorithm? Well, Google. But it’s also collectively formed, discursively constituted, mutually produced. How do you reference something that is a temporary, unstable representation?

Pickering (1993, 2002) argues that ‘things’ move between being socially constructed via discourse and existing as real, material entities – a performativity which is “temporally emergent in practice” (p. 565), a kind of mangled practice of human and material agency which emerges in real time. This kind of autocomplete text (if ‘text’ is the right word) reflects this completely.

The act of referencing is one of stabilising, as well as of avoiding plagiarism and practising academic integrity. When referencing online sources which don’t have a DOI or a stable URL, you are artificially fixing the location of something and representing it via text. You put ‘accessed’ dates in place to secure yourself against future accusations of plagiarism, but also in view of the instability of the digital text. It’s not an ideal process, but it works.
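Mechanically, the convention is trivial – which is rather the point: the stabilising is just string-building. A sketch, with an entirely invented source and an approximate (not prescribed) citation format:

```python
from datetime import date

def cite_unstable(author: str, year: int, title: str, url: str,
                  accessed: date) -> str:
    """'Fix' an unstable online source by recording when it was seen."""
    return (f"{author} ({year}). {title}. Retrieved "
            f"{accessed.strftime('%d %B %Y')}, from {url}")

# An invented orphan work, cited with an accessed date.
ref = cite_unstable("Anon", 2017, "Some orphan work",
                    "http://example.org/x", date(2017, 3, 19))
```

The ‘accessed’ date does all the work here: it converts a moving target into a claim about a moment.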

And yet referencing – or indicating ownership of an autocomplete algorithm – seems to take this a step further. It leans towards reification, and it imbues the algorithm with a human and material intentionality which isn’t justified. It ‘essentialises’ what is fleeting and performative. So how, then, do you capture something which is, as Pickering puts it, ‘temporally emergent in practice’?

I suppose I should say what we told the student too, though it may not be right. We suggested that it didn’t need to be referenced, because it constituted their ‘own’ research; you wouldn’t reference the ‘act’ of reading, or the technology used to find, access or cite resources. You’d cite someone else’s published ‘version’ of the algorithm, but not your own. This uncovers another area where digital technology shapes and is shaped by ‘traditional’ practices and performances.

References

Jackson, A. Y. (2013). Posthumanist data analysis of mangling practices. International Journal of Qualitative Studies in Education, 26(6), 741–748. https://doi.org/10.1080/09518398.2013.788762
Pickering, A. (1993). The Mangle of Practice: Agency and Emergence in the Sociology of Science. American Journal of Sociology, 99(3), 559–589. https://doi.org/10.1086/230316
Pickering, A. (2002). Cybernetics and the Mangle: Ashby, Beer and Pask. Social Studies of Science, 32(3), 413–437. https://doi.org/10.1177/0306312702032003003


Lifestream analytics

When we first started setting up our lifestream blogs, I remember wondering briefly why we didn’t have access to WordPress’ normal in-built analytics and statistics. I have another WordPress blog, and I’ve got access to loads of stuff from there: number of visitors, where they’re from, etc. I think at the time I thought it must be a license issue, something to do with the way the university is using WordPress. I didn’t dwell on it particularly.

But one of the things about EDC that has been really stark for me so far is that it’s a bit of a metacourse. It’s experimental, and thoughtful, and deliberate. And so the quiet conspiracy theorist in me is wondering if this too is deliberate.

I started thinking about the analytics I could easily (i.e. in under 5 minutes) extract from the lifestream blog, and I was able to (manually) figure this out, throw the numbers into Excel and create a chart:

[Chart: my posts per week (so far)]

I also learned that I’ve used 177 tags in 129 posts, and the most popular tags are:

[Chart: tags used (so far)]

Neither of these is massively revelatory. But there isn’t much other quantifiable information I could access simply and efficiently.
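For what it’s worth, the counting itself took longer in Excel than it would have in code. A sketch of the same two tallies, using invented post records rather than my real export:

```python
from collections import Counter
from datetime import date

# Each post reduced to (publication date, list of tags).
# These records are invented examples, not the real lifestream data.
posts = [
    (date(2017, 1, 16), ["twitter", "ifttt"]),
    (date(2017, 1, 19), ["community"]),
    (date(2017, 2, 6), ["algorithms", "twitter"]),
    (date(2017, 2, 8), ["algorithms"]),
]

# Posts per ISO week, as in the chart above.
posts_per_week = Counter(d.isocalendar()[1] for d, _ in posts)

# Tag frequencies, as in the tag chart.
tag_counts = Counter(tag for _, tags in posts for tag in tags)
```

Which rather underlines the point: what is easy to count gets counted.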

We’re reviewing our lifestreams at the moment, which means looking back at the things we’ve written, ideas we’ve encountered, and so on. There’s a practically unspoken set of rules about what it’s OK to edit, and what it isn’t; we might improve on the tags we’ve used, or categorise our posts, or we might correct a spelling mistake or a broken link. But we probably shouldn’t rewrite posts, tighten up ideas, or make things reflect what we’re thinking now rather than what we were thinking then. I say ‘practically unspoken’, because James practically spoke it earlier this week:

This is making me think about the role analytics plays in the assessment of the course. When we considered analytics for the tweetorial, one of the things I and a lot of people mentioned was how it was the quantifiable and not the qualifiable that is measured. How far do the analytics of our lifestream (which we can’t access easily, but maybe our glorious leaders can) impact upon the assessment criteria?

The course guide suggests that this is how we might get 70% or more on the lifestream part of the assessment:

From the course guide

Only one of these is quantifiable – Activity – and even that isn’t totally about the numbers. The frequency of posts, and the range of sources, are, but the appropriateness of posts isn’t. The number of lifestream summary posts, in Reflection, can be quantified, and the activities mentioned in Knowledge and Understanding are quantifiable too. But nothing else is. Everything else is about the quality of the posts. The assessment, largely, is about quality not quantity (apart from the few bits about quantity).

So evidently there are educational positives around growth, development, authenticity – not quite a ‘becoming’ (because I’ve been reading about how this educational premise is problematically humanist, natch) but ‘deepening’ or ‘ecologising’, if I can get away with making up two words in one blog post.

My first instinct is to say that the learning analytics we seem to have access to at the moment really don’t seem to be up to the job, along with the prediction that this will not always be the case. But if there’s one thing I’ve learned about education and technology in this course, it’s that technology shapes us as much as we shape it. So if the technology through which learning analytics is performed can never fully capture the current state of educational feedback, does that mean that educational feedback will itself be shaped, or co-constituted, by the technology available? And what does that look like? What are the points of resistance?

Analysing the tweetorial, or why we shouldn’t focus on subjectivity

Two disclaimers before getting started:

  1. I mentioned in a blog post last week that regrettably I’d had to miss the tweetorial, and was only able to cursorily glance through some of the later tweets once it was all over. This absence, and my subsequent uncertainty about how it unfolded, strongly influenced this blog post as well as the padlet I created.
  2. I’ve noticed a tendency to lean a little too heavily on the literature, critiquing others rather than trying to use my own critical voice. I know that’s normally OK, and it’s just a question of balance, but here I’m trying to counter that. No references!

OK. Here goes.

The literal answer to Jeremy and James’ first question: “how has the Twitter archive represented our tweetorial?” is reasonably simple. The archive has stored tweets which used a predetermined hashtag, and specific tweet metadata, in a way which is linear and yet unfinished. It has used the tweets – or, at least, specific elements of them – to quantify behaviour and activity. This might allow us (or a computer) to extrapolate and draw conclusions. In this sense, it all seems rather objective.
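That selection-and-counting mechanism is simple enough to model. A sketch, with invented tweets standing in for the real archive (the hashtag is assumed here too):

```python
from collections import Counter

# Invented records standing in for the archive's rows: the archive keeps
# the tweet text plus selected metadata (author, timestamp) and no more.
tweets = [
    {"author": "student_a", "text": "Algorithms have agency? #mscedc",
     "time": "2017-03-16T09:02"},
    {"author": "student_b", "text": "A counter-example #mscedc",
     "time": "2017-03-16T09:05"},
    {"author": "student_a", "text": "Unrelated lunch tweet",
     "time": "2017-03-16T12:30"},
]

HASHTAG = "#mscedc"  # the predetermined hashtag; an assumption for this sketch

# The archive's selection rule: keep only hashtagged tweets.
archived = [t for t in tweets if HASHTAG in t["text"]]

# And its quantification: activity reduced to counts per author.
activity = Counter(t["author"] for t in archived)
```

Everything outside the selection rule – the lunch tweet, the unhashtagged aside – simply never existed, as far as the archive is concerned.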

And yet it isn’t objective. The choices made about which data are collected and attached, and which are not, were subjective. They were subjective regardless of who made them – human, computer or both. The visual representation of the data is also mediated and subjective – the clue is in the word ‘representation’. It’s necessarily, inescapably reductive. The key point is that this isn’t fundamentally bad. The data being subjective doesn’t make it meaningless or inaccurate or untrustworthy. Why privilege impartiality anyway?

And, moreover, the charge of subjectivity is easily dealt with. The quantified facts the archive presents are of course not the ‘whole picture’ (whatever that is). The conclusions we draw ought to be questioned. We should ensure that the non-quantifiable (tiredness, workload, scepticism) is considered too. There is scope for multiple interpretations, all at the same time (as I tried to show in the padlet). The ways in which the analytics are presented may or may not have educational value; we cannot be conclusive as this depends on the individual. It will motivate some while demotivating others. It will give some confidence while causing others to question themselves. There is space for all of these attitudes concurrently. The archive can’t tell us whether learning happened, or didn’t happen, or the quality of it: it was never intended to do so.

So, for me, the problem – the danger, even – with analytics like this isn’t that they’re subjective. It lies instead in their inescapable finality, even as the data collection is ongoing. The finality easily shades into ‘authority’, and the platform doesn’t particularly lend itself to that authority being questioned. Given the sheer number of tweets, searching and retrieving them is not simple. You can’t retrospectively change the choices made about which data are collected, if you can change them at all. The platform does not allow it. That’s another choice, by the way. And again, it doesn’t really matter who made it.

Our ability to answer the questions set by Jeremy and James (or in my case, inability) is so fundamentally predicated on the fact that it happened last week. Our ability to identify where the data collected is subjective, and where or why this is problematic, is based on the same thing. We were there, we can remember it, so we can interpret it. And yet the fixedness, the finality, and the stability of the archive has to be compared with the fleetingness of the qualitative information and individual interpretation that we’re using to gloss it. Right at this moment, we can question the archive. Right at this moment, we know better. We have authority. But it’s temporary. After all, the data will last longer.


Big Data, learning analytics, and posthumanism

I’ve now read a few articles assessing the pros and cons of learning analytics and, regardless of the methodologies employed, there are patterns and themes in what is being found. The benefits include institutional efficiency and institutional performance around financial planning and recruitment; for students, the benefits correspond to insights into learning and informed decision-making. These are balanced against the cons: self-fulfilling prophecies concerning at-risk students, the dangers of student profiling, risks to student privacy and questions around data ownership (Roberts et al., 2016; Lawson et al., 2016). This is often contextualised by socio-critical understandings which converge on notions of power and surveillance; some of the methodologies explicitly attempt to counter presumptions made as a result of this, for example, by bringing in the student voice (Roberts et al., 2016).

In reading these articles and studies, I was particularly interested in ideas around student profiling and student labelling, and how this is perceived (or sometimes spun) as a benefit for students. Arguments against student profiling focus on the oversimplification of student learning, on students being labelled by their past decisions, and on student identity being in a necessary state of flux (Mayer-Schoenberger, 2011). One of the things, though, that’s missing in all of this, the absence of which I am feeling keenly, is causation. It strikes me that big data and learning analytics can tell us what is, but not always why.

A similar observation leads Chandler to assert that Big Data is a kind of Bildungsroman of posthumanism (2015). He argues that Big Data is an epistemological revolution:

“displacing the modernist methodological hegemony of causal analysis and theory displacement” (2015, p. 833).

Chandler is not interested in the pros and cons of Big Data so much as the way in which it changes how knowledge is produced, and how we think about knowledge production. This is an extension of ideas espoused by Anderson, who argues that theoretical models are becoming redundant in a world of Big Data (2008). Similarly, Cukier and Mayer-Schoenberger argue that Big Data:

“represents a move away from trying to understand the deeper reasons behind how the world works to simply learning about an association among phenomena, and using that to get things done” (2013, p. 32).

Big Data aims not at instrumental knowledge or causal reasoning, but at the revealing of feedback loops. It’s reflexive. And for Chandler, this represents an entirely new epistemological approach to making sense of the world, gaining insights which are ‘born from the data’ rather than planned in advance.

Chandler is interested in the ways in which Big Data can intersect with ideas in international relations and political governance, and many of his ideas are extremely translatable and relevant to higher education institutions. For example, Chandler argues that Big Data reflects political reality (i.e. what is) but it also transforms it through enabling community self-awareness. It allows reflexive problem-solving on the basis of this self-awareness. Similarly, it may be seen that learning analytics allows students to gain understanding of their learning and their progress, possibly in comparison with their peers.

This sounds great, but Chandler contends that it is necessarily accompanied by a warning: it isn’t particularly empowering for those who need social change:

Big Data can assist with the management of what exists […] but it cannot provide more than technical assistance based upon knowing more about what exists in the here and now. The problem is that without causal assumptions it is not possible to formulate effective strategies and responses to problems of social, economic and environmental threats. Big Data does not empower people to change their circumstances but merely to be more aware of them in order to adapt to them (pp. 841–842).

The problem of a lack of understanding of causation is raised in the consideration of ‘at risk’ students – a student being judged on a series of data points without any (potentially necessary) contextualisation. The focus is on reflexivity and relationality rather than on how or why a situation has come about, and what its impact might be. Roberts et al. found that students were concerned about this – that learning analytics might drive inequality by advantaging only some students (2016). The demotivating nature of the EASI system for ‘at risk’ students is also raised by Lawson et al. (2016, p. 961). Too little consideration is given to the causality of ‘at risk’, and perhaps too much to essentialism.

His considerations of Big Data and international relations leads Chandler to assert cogently that:

Big Data articulates a properly posthuman ontology of self-governing, autopoietic assemblages of the technological and the social (2015, p. 845).

No one here is necessarily excluded, and all those on the periphery are brought in. Rather paradoxically, this appears to be both the culmination of the socio-material project, as well as an indicator of its necessity. Adopting a posthumanist approach to learning analytics may be a helpful critical standpoint, and is definitely something worth exploring further.

References

Anderson, C. (2008). The End of Theory: The Data Deluge Makes the Scientific Method Obsolete. Retrieved 19 March 2017, from https://www.wired.com/2008/06/pb-theory/
Chandler, D. (2015). A World without Causation: Big Data and the Coming of Age of Posthumanism. Millennium, 43(3), 833–851. https://doi.org/10.1177/0305829815576817
Cukier, K., & Mayer-Schoenberger, V. (2013). The Rise of Big Data: How It’s Changing the Way We Think About the World. Foreign Affairs, 92(3), 28–40.
Lawson, C., Beer, C., Rossi, D., Moore, T., & Fleming, J. (2016). Identification of ‘at risk’ students using learning analytics: the ethical dilemmas of intervention strategies in a higher education institution. Educational Technology Research and Development, 64(5), 957–968. https://doi.org/10.1007/s11423-016-9459-0
Mayer-Schoenberger, V. (2011). Delete: The Virtue of Forgetting in the Digital Age. Princeton, NJ: Princeton University Press.
Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student Attitudes toward Learning Analytics in Higher Education: ‘The Fitbit Version of the Learning World’. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01959

I missed the tweetstorm!

But…I always knew I would. It coincided with the last two days of our academic term, so my calendar was full, and long days meant I had no time even to read the tweets, let alone contribute to any of the discussions.

I’ve read it this morning, though, and it looks to have been hugely successful. Lots of tweets, lots of content, and I now find myself more than ever impressed and a little daunted by the talents and skills of my coursemates. But when Jeremy and James release the analytics of the tweetstorm that they’ve been collecting, I won’t be on it.

So here’s my question: how do you analyse absence?

a student who logs into an LMS leaves thousands of data points, including navigation patterns, pauses, reading habits and writing habits (Siemens, 2013, p. 1381)

Well, not only has Siemens never seen the dreadful analytics potential of the VLE we use; the crucial point is this: “who logs into”. Similar ideas are critically raised in the lecture we heard this week – the idea of capturing learning data from cradle to university, and using it to provide customised experiences. Learning analytics requires ‘logging’, both in the sense of ‘logging in’ and in terms of the data trails left behind. You have to put your name to your activity.

This has significant implications. Thinking about assessment, it invites considerations around assessors’ bias (unconscious or otherwise). There are implications for openness and scale too – it’s probably pretty easy for the EDC website to track our locations, even our IP addresses, but you can never know for sure who is reading the website and who isn’t. You can probably come up with an average ‘time’ it might take to read a page. You can probably track clicks on the links for the readings, but you can’t be sure anything has been read. So there are potentially knock-on effects for the sorts of platforms and media by which teaching can be performed. This relates back to something Jeremy and I discussed a while ago – a sort of tyranny around providing constant evidence for the things that we do, for our engagement with course ideas and course materials. It also smacks of behaviourism, which – as Siemens and Long (2011) argue – is not appropriate in HE.

But it also has implications for the ‘lurkers’ among us: students who may not engage in the ‘prescribed’ way, whether through volition, a poor internet connection, lack of time, or changes in circumstances. How might these people have a personalised learning experience? What data might be collected about them, and how can it incorporate the richness and subjectivity of experience, of happenstance, of humanness?

My question then, is this: can learning analytics track engagement without framing it entirely within the context of participation, or logging in? Because while these are indicators of engagement, they are not the same thing.
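The asymmetry can be made concrete in a few lines: any metric computed from logs assigns zero to the unlogged, whatever they were actually doing. A sketch, with an invented roster and event log:

```python
# An invented class roster and activity log; no real names or data.
roster = {"helen", "clare", "ange", "me"}

# Only actions that required logging in leave a trace.
events = [
    ("helen", "post"),
    ("clare", "tweet"),
    ("helen", "comment"),
]

logged = {user for user, _ in events}

# 'Engagement' as the analytics sees it: event counts per person.
engagement = {user: sum(1 for u, _ in events if u == user) for user in roster}

# The lurkers: present on the roster, invisible to the metric.
absent = roster - logged
```

The metric can’t distinguish someone who read everything offline from someone who did nothing at all: both score zero.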

References

Siemens, G. (2013). Learning Analytics: The Emergence of a Discipline. American Behavioral Scientist, 57(10), 1380–1400. https://doi.org/10.1177/0002764213498851

Siemens, G., & Long, P. (2011). Penetrating the Fog: Analytics in Learning and Education. EDUCAUSE Review. Retrieved 19 March 2017, from http://er.educause.edu/articles/2011/9/penetrating-the-fog-analytics-in-learning-and-education

Someone made Windows 98 for your wrist, because why not?

There’s the recent re-launch of the Nokia 3310, which comes with the much-beloved game Snake. Then there’s a wireless keyboard that looks and feels like an old-school typewriter. And someone recently made a browser extension that brings Clippy back to life.

from Pocket http://ift.tt/2lmIz0j
via IFTTT

One of the themes of the course – and indeed, something I’ve picked up on this blog – is the historiographical approach we’re taking. I’m interested in the role that nostalgia takes in this, and the ways in which it might influence our understanding of technological development. I included this article in the lifestream because it appears that this nostalgia, although not exactly new, is now considered to be totally commercially viable.

Albert Borgmann, the philosopher, argues that the structure and practices of our lives are being changed by technology, and he doesn’t necessarily see this as a good thing. He talks in terms of focal practices and focal things, the two being connected (the focal practice of ‘cooking’, for example, connected to the focal thing of ‘the oven’). For Borgmann, technology disburdens us from having to manually manage certain focal practices – he calls this technology the ‘device paradigm’. But Borgmann also thinks that we ought to make time for these pretechnological practices because technology, while disburdening us, does not make us happy – this is part of his critique.

[NB There is considerably more to it than that, it must be said].

Brittain sees this perspective of Borgmann’s as ultimately nostalgic, almost a yearning for the pre-technological (something which Borgmann in fact denied – cf. Higgs et al., 2000, p. 72). I wonder, though, if there’s any connection here to our nostalgia for the technology of our past:

indeed, it is difficult to know how anyone these days can be nostalgic for a pre-technological culture […] when none of us has ever lived more than momentarily in one (p. 72)

On the other hand, Borgmann denied being nostalgic about the past – indeed, he criticises Heidegger for exactly that. For him, instead, it’s about having an awareness of the past, and using that awareness to assess our present use of technology. He’s arguing strongly in favour of that historiographical approach.

I don’t particularly agree with his line of thought, and I certainly have some critical issues with the instrumental way he conceptualises technology, as well as with his use of ‘technology’ as a catch-all term for a variety of fundamentally different things. But my question right now is whether our natural (?) nostalgia for technologies of the past – for the phone we had as a 17-year-old, for the computer games we played as a 9-year-old – can be meaningfully reconciled with the way we conceive of technology now.


References

Borgmann, A. (1984). Technology and the character of contemporary life: a philosophical inquiry. Chicago, IL; London: University of Chicago Press.
Heikkerö, T. (2005). The good life in a technological world: Focal things and practices in the West and in Japan. Technology in Society, 27(2), 251–259. https://doi.org/10.1016/j.techsoc.2005.01.009
Higgs, E., Light, A., & Strong, D. (2000). Technology and the good life? Chicago: University of Chicago Press. Retrieved from http://public.eblib.com/choice/publicfullrecord.aspx?p=648118

No one reads terms of service, studies confirm

Apparently losing rights to data and legal recourse is not enough of a reason to inspect online contracts. So how can websites get users to read the fine print? The words on the screen, in small type, were as innocent and familiar as a house key.

from Pocket http://ift.tt/2lCnKt1
via IFTTT

An interesting article about how we don’t read the T&Cs, featuring a research study by two Canadian professors who managed to get a load of students to agree to promise an (imaginary) company their first-born children.

This has, I think, many important implications for the way we use technology. From a UX perspective, knowing that the T&Cs aren’t being read suggests that websites and companies ought to rethink how they present information to potential customers, so that users are fully informed when they sign up. Somehow I can’t imagine this happening. The author of the article, however, suggests a sort of unspoken digital-ethics contract (along the lines of the Hippocratic Oath), but how that might work is another matter.

There’s also the question of how far we’re able to do anything at all about terms and conditions we disagree with. If our use of a particular site is entirely optional, then we can choose not to use it; if it isn’t – if our employer insists on it, or if it’s otherwise expected of us – then we can hardly demand that Google or Facebook come up with an alternative set of T&Cs just for us.

This is on my mind, particularly, as a result of an action I took in responding to the mid-term feedback from Jeremy. One of the points made – and a completely valid one – was that I might look to broaden my horizons in terms of the feeds coming into the lifestream. I added a couple of feeds and then looked to link up YouTube to the WordPress blog. And I was then faced with this:

Manage? I clicked on the ‘i’ to see what it meant, and was faced with this:

At this point, I was turned completely off the idea of linking the two – any videos will just have to be, as Cathy brilliantly put it, glued on to the lifestream. I’m sure the intention is not particularly insidious, and I’ve probably already inadvertently given up lots of my data, but this seemed just a step too far.

But, on the other hand, at least it was clear.