Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos? YouTube is having a bad week.

from Pocket http://ift.tt/2n6Zi5b
via IFTTT

YouTube has been in the news lately because of two connected battles: the positioning of certain ads around what might be considered ‘extremist’ content, and the inherent problems in the algorithms used to categorise content as extremist or restricted.

The article in the New Statesman attempts to bring moral arguments to bear on these algorithmic moves, raising the point about the false equivalence of extremist videos and LGBT content, and asking whether delegating responsibility for censoring certain voices ultimately represents a problematic handing over of power.

No one reads terms of service, studies confirm

Apparently losing rights to data and legal recourse is not enough of a reason to inspect online contracts. So how can websites get users to read the fine print? The words on the screen, in small type, were as innocent and familiar as a house key.

from Pocket http://ift.tt/2lCnKt1
via IFTTT

An interesting article about how we don’t read the T&Cs, featuring a research study by two Canadian professors who managed to get a load of students to agree to promise a(n imaginary) company their first born children.

This has, I think, many important implications for the way we use technology. From a UX perspective, knowing that the T&Cs aren’t being read would mean that websites and companies ought to rethink the way they give information to potential customers, so they’re fully informed when they sign up. Somehow I can’t imagine this happening. The author of the article, however, suggests a sort of unspoken digital ethics contract (similar to the Hippocratic Oath), but how that might work is another matter.

There’s also the question of how far we’re able to do anything at all about terms and conditions we disagree with. If our use of a particular site is entirely optional then we can choose not to use it; if it isn’t – if our employer insists on it, or if it’s something expected of us – then we can hardly demand that Google or Facebook comes up with an alternative set of T&Cs just for us.

This is on my mind, particularly, as a result of an action I took in responding to the mid-term feedback from Jeremy. One of the points made – and a completely valid one – was that I might look to broaden my horizons in terms of the feeds coming into the lifestream. I added a couple of feeds and then looked to link up YouTube to the WordPress blog. And I was then faced with this:

Manage? I clicked on the ‘i’ to see what that implied, and was faced with this:

At this point, I was turned completely off the idea of linking the two – any videos will just have to be – as Cathy brilliantly put it – glued on to the lifestream. I’m sure Google’s intention is not particularly insidious, and I’ve probably already inadvertently given up lots of my data, but this seemed just a step too far.

But, on the other hand, at least it was clear.

Blade Runner 2049

Replicants are like any other machine. They’re either a benefit or a hazard. If they’re a benefit, it’s not my problem.

A benefit or a hazard, says Deckard. One or the other. Not both, not neither. The binary nature of this didn’t hit me as fully at the beginning of the course, but now I see it – now I’m onto you, Blade Runner 2049. But it’s here, and it’s unmistakeable, the perfect microcosmic example of why the critical ideas surrounding digital cultures are so necessary… over-simplistic sci-fi and bad action films may never be the same again.