Today I listened to last night’s Radio 4 broadcast of The Moral Maze, examining the morality of ‘fake news’. Perhaps I’d hoped for too much from one programme, especially after various previous posts on my Lifestream about the topic, but I found it a bamboozling discussion in terms of trying to come to a clear(er) mind about the issues. Off the back of the programme, spurred by its passing reference to the Washington Post’s new-but-not-so-new slogan (‘Democracy Dies in Darkness’), and as ongoing reflections, here are some provisional theses regarding fake news:
1. Democracy, if it can, needs to see in the dark. If it can’t, it’s in trouble. We need a light that can shine in the darkness, which the darkness cannot put out.
2. We get the news we want. Or at least, we don’t get more than that. Sometimes, and in some places, we get a lot less than that.
3. Given the bewildering proliferation of sources for news, as well as the communicative complexity within ‘news’, we can’t verify everything. News was, is, and will be – to some degree – a matter of trusting both source and interpretation. News is socially constructed. We can all try harder in constructing it. Relating with one another is inherent to both the problems and the solutions cohering around terms like ‘part-truth’, ‘post-truth’ and ‘fake news’.
4. For mortal humans, truth-in-news is more than a relativistic mirage and less than an absolute certainty. We need wisdom, humility and tenacity to live with that. Digital cultures haven’t created this epistemic situation, but they might well clarify its contours for us, even while opening up some treacherous cliffs.
5. A spectrum between ‘entertainment’ and ‘news’ is not a neat one, but is probably analytically important. Its polarities might be easy-ish to identify, but don’t boil down to ‘news = true’, or ‘entertainment = false’. The real action is in the spectrum’s blurred middle, especially in light of inevitable and continual mediation via editorial selection and control. News media are social media; social media are news media. Both are interested in market share and, often, in profitability.
6. Digital cultures introduce new technologies, novel business models, experimental assemblages. They accentuate lots of uncertainty. But that was all there before, too. Propaganda is a pre-digital term.
7. It’s probably helpful to distinguish deliberate lying from mistakes made through negligence or weakness which are then corrected. It might not be easy to tell the two apart, but motivation and consequence matter.
8. But don’t ever expect ‘correction’ to remove error; networked relations preclude such easily controlled binary options. No one person is – fully – in control. But nor is ‘the system’, in any easily identifiable way.
9. Don’t forget ‘the people’. And don’t think individuals can’t be lazy, mistaken, uncritically happy with what they’re told, or unbelieving of – and resistant to – the truth. It looks like it can happen.
10. Algorithms are an unknown. They can be a folk-devil. But what if they are genuinely dangerous for truth, and how can we tell? Therein lies one of my rolling questions, on the cusp of entering the ‘Algorithmic Cultures’ block of the course.
While The Moral Maze didn’t address digital platforms specifically, they were implicit in the discussion, not least in the choice of Tom Chatfield, author of ‘How to Thrive in the Digital Age’, as one of the witnesses interrogated by the panellists.
Issues of ‘fake news’ are ontologically driven, but they are also a genre issue, in an era and space where genre categories collapse, blur and reform in unexpected and unstable formations. Even YouTube videos are faked, it seems, and they can be news…