Studium Generale Lecture
Monday 30 October 2017
In reaction to scandals about ‘fake news’, digital services such as “Full Fact” have been developed to flag up dubious content online. Such services propose new ways to solve an old problem: how can we distinguish between legitimate and illegitimate information sources? However, today’s digital solutions to this longstanding “problem of demarcation” risk reproducing an old blindspot: they suggest that the main problem lies with social media content itself. But the ways in which content circulates online play an equally important role.
In this lecture, I will argue that the wider design principles that inform social media are partly responsible for the lack of respect for knowledge in the online public sphere. Platforms like Twitter rely on “social algorithms” to select sources for disclosure: they distinguish between valid and invalid sources on the basis of how widely they are shared. Such principles are not well attuned to the requirements of democratic communication. While many digital media algorithms are social, they are not sociological: they treat online activity as mere behaviour, and do not sufficiently appreciate the political effects that arise from interactions between media, technology and people.