Revealed: what people watch on Netflix where you live in the UK

Screenshot from Gilmore Girls
Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits.

from Pocket http://ift.tt/2nTlClj
via IFTTT

Netflix has revealed the most popular TV shows and films in regions across the UK. And it’s thrown up some surprising differences in the country’s viewing habits. By analysing statistics between October 2016 and this month, the streaming service was able to reveal what parts of the country are more inclined to watch a specific genre compared to others.

So quotes the article above. I know it’s only a bit of silliness – it’s one step away from a Buzzfeed-esque ‘Can we guess where you live based on your favourite Netflix show?’. The worst bit is that there’s a tiny amount of truth to it: I have watched Gilmore Girls AND I live in the South East. I reject the article’s suggestion, however, that this implies I am ‘pining for love’.

So yes, it’s overly simplistic and makes assumptions (such as that everyone watches Netflix, or that heterogeneity is the result of a postcode lottery); ultimately, it’s a bit of vapid fluff. But it’s also a bit of vapid fluff that exemplifies how deeply algorithmic cultures are embedded in the media we consume: the data collected about us is now itself entertainment output.


Elon Musk Isn’t the Only One Trying to Computerize Your Brain

Elon Musk wants to merge the computer with the human brain, build a “neural lace,” create a “direct cortical interface,” whatever that might look like.

from Pocket http://ift.tt/2nqnLSf
via IFTTT

This reminds me of the part about Moravec’s Mind Children in N. Katherine Hayles’ book, How We Became Posthuman (I just read ‘Theorizing Posthumanism’ by Badmington, which refers to it as well). There’s a scenario in Mind Children, writes Hayles, in which Moravec argues that it will soon be possible to download human consciousness into a computer.

How, I asked myself, was it possible for someone of Moravec’s obvious intelligence to believe that mind could be separated from body? Even assuming that such a separation was possible, how could anyone think that consciousness in an entirely different medium would remain unchanged, as if it had no connection with embodiment? Shocked into awareness, I began to notice he was far from alone. (1999, p. 1)

It appears that Moravec may not have been wrong about the possibility of technology to ‘download’ human consciousness, but let’s hope the scientists all get round to reading Hayles’ work on this techno-utopia before the work really starts…

References

Badmington, N. (2003). Theorizing Posthumanism. Cultural Critique, (53), 10–27.

Hayles, N. K. (1999). How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago, IL: University of Chicago Press.

WhatsApp’s privacy protections questioned after terror attack

A silhouette of a padlock on the green WhatsApp telephone logo

Chat apps that promise to prevent your messages being accessed by strangers are under scrutiny again following last week’s terror attack in London. On Sunday, the home secretary said the intelligence services must be able to access relevant information.

from Pocket http://ift.tt/2nXBsM5
via IFTTT

This is only tangentially related to our readings and the themes we’ve been exploring throughout the course, but I do think it’s worth including. Many ‘chat’ apps use end-to-end encryption, so that messages are readable only by sender and recipient – private even from the company itself. The government clearly believes that this shouldn’t be allowed, and is attempting to take steps to prevent it. Hopefully unsuccessfully, I should add.
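For anyone curious about the mechanics, here’s a minimal sketch of the principle in Python, using the PyNaCl library. This is an illustration of the idea only, not WhatsApp’s actual implementation (WhatsApp uses the more elaborate Signal protocol, which adds features like forward secrecy):

```python
# A minimal sketch of end-to-end encryption using PyNaCl's public-key Box.
# Illustrative only: real chat apps use more elaborate protocols.
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the private key
# never leaves the device, so the service provider never holds it.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice, bob.public_key)
ciphertext = sending_box.encrypt(b"this message is private")

# The server relays only the ciphertext, which it cannot read. Bob
# decrypts with his private key and Alice's public key.
receiving_box = Box(bob, alice.public_key)
assert receiving_box.decrypt(ciphertext) == b"this message is private"
```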

There’s an assumption here that data about us ought to be at least potentially public – chat apps, says the Home Secretary, must not provide a ‘secret place’. It’s not far from this position to one which holds that we own neither the data we generate nor the data generated about us: where we are, who we send messages to, and so on. This raises questions about the intersection of civil liberties and technology, and about whether there’s a digital divide in the ability to protect yourself from surveillance online.


Comment from Cathy’s blog

Cathy, this is great! Nice work, and an innovative way to question what data is captured – it’s important to balance our interpretation of the meaning of data with what is captured (and what is missing). Thank you!
-Helen

from Comments for Cathy’s Lifestream http://ift.tt/2ohg295
via IFTTT

What I’m reading

At a conference today! #cctl2017

March 23, 2017 at 11:53AM

I attended a Teaching Forum hosted by the Cambridge Centre for Teaching and Learning on Thursday, and this is a photo of some of the notes I took during a presentation by Dr Sonia Ilie on the LEGACY project. Dr Ilie discussed the results of qualitative research into students’ understanding of learning gain. One of her arguments put me in mind of learning analytics.

In case my handwriting isn’t clear, Dr Ilie reported that the research had demonstrated that students are variably equipped to reflect upon their own learning. I wondered – in the bottom comment of the photo – about the impact that learning analytics might have upon this. I’m interested in whether learning analytics might help students to develop critically reflective skills, or whether it might let them off the hook by effectively providing them with a shorthand version of that reflection.

Extremist ads and LGBT videos: do we want YouTube to be a censor, or not?

Rainbow flag
Is the video-sharing platform a morally irresponsible slacker for putting ads next to extremist content – or an evil, tyrannical censor for restricting access to LGBT videos? YouTube is having a bad week.

from Pocket http://ift.tt/2n6Zi5b
via IFTTT

YouTube has been in the news lately because of two connected battles: the positioning of certain ads around what might be considered ‘extremist’ content, and the inherent problems in the algorithms used to categorise content as extremist or restricted.

The article in the New Statesman attempts to bring moral arguments to bear on these algorithmic moves, raising the false equivalence drawn between extremist videos and LGBT content, and asking whether handing responsibility for censoring certain voices to a platform ultimately represents a problematic transfer of power.
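To make that false equivalence concrete, here’s a deliberately naive, entirely hypothetical keyword classifier – not YouTube’s actual algorithm, which isn’t public – showing how blunt pattern-matching can treat an LGBT community video the same way it treats genuinely extremist content:

```python
# A deliberately naive content classifier. Entirely hypothetical:
# YouTube's real restriction algorithm is not public. The point is that
# blunt keyword matching produces false equivalences.
RESTRICTED_KEYWORDS = {"extremist", "violence", "gay", "transgender"}

def is_restricted(description: str) -> bool:
    """Flag a video if its description mentions any 'sensitive' keyword."""
    return bool(set(description.lower().split()) & RESTRICTED_KEYWORDS)

print(is_restricted("Join our extremist movement"))      # True, as intended
print(is_restricted("Advice for gay teens coming out"))  # True: a false equivalence
print(is_restricted("Baking sourdough at home"))         # False
```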

Transcript of “Big data is better data”

Self-driving cars were just the start. What’s the future of big data-driven technology and design? In a thrilling science talk, Kenneth Cukier looks at what’s next for machine learning — and human knowledge.

from Pocket http://ift.tt/2nA1wwV
via IFTTT

This is a good (and pithy) talk, but there are two points Cukier makes that I find particularly interesting:

  1. “We have to be the master of this technology, not its servant […] This is a tool, but this is a tool that, unless we’re careful, will burn us”. A patent warning against technological determinism here, but one which (in my opinion) is not necessarily couched with enough care to help us understand how to avoid a fully instrumentalist approach.
  2. “Humanity can finally learn from the information that it can collect, as part of our timeless quest to understand the world and our place in it”. This accompanies a strong sense of why Big Data is important, but it’s also very essentialist: it’s about reflecting the here and now, rather than attempting to understand the past. I wonder if there are some historiographical problems here, given that Big Data collection is so recent, and still so patchy in places. The ‘timeless quest’, given this, seems to be one which will be answered from a position of privilege: by those who are fortunate enough, paradoxically, to have data collected about them (see the sketch below).
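As a toy illustration of that last point (my own example, not Cukier’s): if data is only generated by part of the population, then any aggregate ‘insight’ describes that privileged slice rather than the world as a whole.

```python
# Hypothetical illustration (my example, not Cukier's): when only some
# people generate data, aggregate 'insights' describe that slice of the
# population, not the whole of it.
population = [
    {"income": 90_000, "has_smartphone": True},
    {"income": 75_000, "has_smartphone": True},
    {"income": 18_000, "has_smartphone": False},  # generates no data
    {"income": 12_000, "has_smartphone": False},  # generates no data
]

# 'Big Data' only ever sees the people who produce data.
observed = [p for p in population if p["has_smartphone"]]

observed_mean = sum(p["income"] for p in observed) / len(observed)
true_mean = sum(p["income"] for p in population) / len(population)

print(f"mean income in the data: {observed_mean:,.0f}")  # 82,500
print(f"mean income in reality:  {true_mean:,.0f}")      # 48,750
```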