Earlier this year, the government published its white paper outlining plans for a new regulatory approach to tackling “illegal and harmful content and activity online”. In particular, it discusses the threat to “our way of life” posed by “disinformation”, which has huge implications for the conduct of journalism. We reproduce below our submission to the ongoing consultation.
The Media Reform Coalition welcomes the opportunity to respond to this important white paper on the need for independent regulation to protect the public from various online harms. Since 2011, we have been at the forefront of public debates on a range of media policy issues, with a particular concern for the protection and enhancement of media plurality. It is our submission that the proposed framework for regulating disinformation online poses a substantial risk to plurality, and hence the public interest, by unduly privileging large-scale publishers and incentivising intermediaries to censor smaller, alternative and independent sources of news and information. Further, the proposed framework may have a significant chilling effect on legitimate free speech whilst doing little to tackle the specified harms related to disinformation and the spread of misinformation.
The risks stem in part from the white paper’s emphasis on disinformation as distinct from misinformation:
Online disinformation – spreading false information to deceive deliberately – is becoming more and more prevalent. Misinformation refers to the inadvertent sharing of false information.
This approach is very similar to that adopted by Facebook in a report published in 2017:
Disinformation – Inaccurate or manipulated information content that is spread intentionally […] Disinformation is distinct from misinformation, which is the inadvertent or unintentional spread of inaccurate information without malicious intent.[1]
However, it is often extremely difficult to make reliable judgements as to the intention behind the publication or sharing of false information. The proposed framework will effectively empower intermediaries to play quasi-judicial roles in making such determinations, presenting a serious risk of over-reach. By the same logic, there is a risk that actual harm caused by either disinformation or the spread of misinformation will be left unaddressed by the proposed framework. From a harms perspective, the difference between the effects of disinformation and the spread of misinformation may in any case be negligible, and any practicable regime must deal adequately with both.
It is also imperative that the framework ensures proportionate responses and remedies. Without sufficient safeguards in this respect, intermediaries may be incentivised to proactively censor certain types of news and information sources in an effort to pre-empt regulatory scrutiny. Indeed, there is evidence that this has already occurred as intermediaries have sought to position themselves as capable of adequate self-regulation in this space. In 2017, a change to Google’s algorithm aimed at combating fake news elicited an outcry from progressive independent news sites claiming they had been unfairly penalised in Google’s search rankings as a result.[2] Similarly, in 2018 Facebook announced the removal of more than 800 political pages and accounts, including legitimate sites such as Rightwingnews.com (hosted by conservative blogger and columnist John Hawkins).
Part of the problem is that algorithms are themselves blunt instruments of regulation that struggle, for instance, to tell the difference between voicing hate speech and reporting on it.[3] But it is also clear that intermediaries have increasingly sought to prioritise large-scale and ‘legacy’ news brands over smaller, independent sources. Underlying such ‘filters’ is a highly questionable association between size and trustworthiness. Google made this point explicitly in a 2013 patent update for its Google News algorithm:
CNN and BBC are widely regarded as high quality sources of accuracy of reporting, professionalism in writing, etc., while local news sources, such as hometown news sources, may be of lower quality.[4]
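The bluntness described above is straightforward to demonstrate. The following minimal sketch, in which the blocklist term and the example posts are entirely invented for illustration, shows a keyword-based filter of the kind that underpins automated moderation: because intent and context are invisible to simple matching, it flags a news report quoting abusive language just as readily as the abuse itself.

```python
# Illustrative sketch of a naive keyword-based moderation filter.
# The blocklist term and example posts are invented placeholders.

BLOCKLIST = {"slur_x"}  # placeholder term, not a real blocklist


def is_flagged(text: str) -> bool:
    """Flag any text containing a blocklisted term, regardless of context."""
    words = {w.strip('.,"\'').lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)


abusive_post = "They are all slur_x and should leave."
news_report = 'The MP was condemned for calling protesters "slur_x" on air.'

print(is_flagged(abusive_post))  # True: genuine abuse
print(is_flagged(news_report))   # True: journalism reporting on the abuse
```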
But it is not just algorithms that have been put to this task. There have been a number of examples in recent years in which ‘manual’ overrides have consistently favoured mainstream news sources. For instance, in a study last year of news sources recommended by Apple News editors, the Tow Center found that these were dominated by “a few major newsrooms”.[5]
There is also a growing commercial logic for intermediaries to actively censor or constrain independent and local news sources in favour of major brands. As algorithms move progressively towards a predictive rather than responsive framework, it is inevitable that news sources with relative scale and volume will be increasingly favoured in news feeds and search rankings over news sources that are relatively specialist and/or less resourced. The proposed framework threatens to exacerbate this already uneven playing field and perversely subject smaller independent news sources to a higher level of regulatory scrutiny than major publishers face.
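To illustrate the dynamic, the sketch below assumes a hypothetical ranking score that blends topical relevance with publisher output volume as a proxy for trustworthiness; the formula, weighting and figures are invented, but the structural effect is the point: a highly relevant story from a small outlet is outranked by a less relevant piece from a high-volume publisher.

```python
# Hypothetical sketch of a scale-weighted ranking heuristic; the formula,
# weight and publisher figures are invented for illustration only.
from math import log


def rank_score(relevance: float, monthly_articles: int, w_scale: float = 0.5) -> float:
    """Blend topical relevance (0-1) with log-scaled publisher output volume."""
    volume_signal = log(1 + monthly_articles) / log(1 + 100_000)  # normalise to ~0-1
    return (1 - w_scale) * relevance + w_scale * volume_signal


# A highly relevant local story loses to a less relevant wire piece
# once publisher scale enters the score.
print(rank_score(relevance=0.9, monthly_articles=200))     # small outlet: ~0.68
print(rank_score(relevance=0.6, monthly_articles=80_000))  # major publisher: ~0.79
```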
Furthermore, it would be wrong for the scope of the new regulatory framework to include platforms for user-generated content (UGC) but not news publishers. Users of social networks are largely private citizens, whereas some of the largest news publishers are powerful and wealthy corporations. It would be perverse to subject content posted by private citizens to new regulatory standards, but not the output of these companies.
We note that most major news publishers are members of IPSO, which does not meet the required standards for recognised independent regulation. By providing these companies with an exemption from the new framework, the government would be proposing a more restrictive form of regulation for private citizens than for large corporations.
Recommendations
The Duty of Care should be broadened in scope to encompass both disinformation and the spread of misinformation as associated harms. This will absolve intermediaries and regulators of the need to make often contentious and unreliable judgements as to the motivations behind published false information. The notion of due accuracy contained within the Broadcasting Code should be adapted to take account of the specificities of disinformation and misinformation online. In particular, it should ensure that responses and remedies are proportionate, balancing the need to preserve a plurality of news and information sources with the need to ensure that free speech rights are not unduly impacted. The regulator should provide a backstop for both the escalation of complaints and appeals.
We therefore propose the following safeguarding principles:
We further recommend that the regulator conduct proactive and regular reviews to ensure that a) cases of disinformation and/or the spread of misinformation that are not detected through the complaints process are identified; and b) intermediaries are abiding by the Duty of Care and effectively balancing the rights of free speech with access to reliable and plural sources of news and information.
Finally, we recommend that news publishers be included within the scope of the new framework, to ensure that the specified harms can be addressed regardless of whether they originate with UGC platforms or news publishers.
In summary, the framework outlined in the white paper risks creating perverse incentives that could lead to a significant contraction of plurality and have a chilling effect on free speech. The recommendations above will ensure that responses and remedies are proportionate (capturing both disinformation and misinformation with significant reach), effective (providing an evidence-based response mechanism and eschewing the need for blunt instruments or interpretive judgements about source intentions), and practicable (with given thresholds for reach and impact minimising the oversight burden on both intermediaries and the regulator).
[1] J. Weedon (2017). Information Operations and Facebook. Available at https://fbnewsroomus.files.wordpress.com/2017/04/facebook-and-information-operations-v1.pdf
[2] See https://www.nytimes.com/2017/09/26/technology/google-search-bias-claims.html
[3] See https://www.rollingstone.com/politics/politics-features/youtube-facebook-purges-journalists-845790/
[4] Curtiss, M., Bharat, K. A. & Schmitt, M. (2014). Patent Identifier No. US 2014/0188859 A1. United States: Google.com
[5] See https://www.cjr.org/tow_center/study-apple-newss-human-editors-prefer-a-few-major-newsrooms.php