Fact-checking software is needed on all social media and video sites

Harmful misinformation is undermining people’s confidence in vaccines – lives are being ruined by the destructive power of bad information

Will Moy
Friday 18 December 2020 09:03 GMT
People attend a demonstration against the current Covid-19 restrictions in Hyde Park (Getty Images)

If anyone was yet to be convinced, 2020 has been a case study in how bad information can ruin people’s lives.

We’ve seen fake medical advice threaten people’s health, opportunists scamming the vulnerable and conspiracy theorists spreading hate—and that’s just on social media.

In the decade since Full Fact was founded, we’ve been making the case that more needs to be done by governments, internet companies and others to fight the spread of dangerous false information online.

That’s why we joined Facebook’s Third Party Fact-Checking (TPFC) programme in January 2019 – to tackle misinformation as it happens and give people the tools to recognise it themselves.

Today, Full Fact published a new report documenting our experience of working with Facebook in the TPFC programme, as well as setting out several ways we think it could be improved.

Under the programme, users can flag content they worry may be false. We also receive content that Facebook’s own systems have identified as potentially false.

Our fact checkers then identify the most harmful misleading claims, before rating them as true, false or a mixture of accurate and inaccurate claims. Content rated false appears lower in news feeds, so it reaches fewer people.

The project is a worthwhile endeavour. It has enhanced our ability to tackle misinformation over a tumultuous year.

During last winter’s UK general election, we checked inaccurate claims circulating on Facebook that a viral image of a boy sleeping on a hospital floor had been faked. We rated this claim “False”, attaching warnings to 71 pieces of content in the queue on 10 December. By 12 December, the rating had been applied to 971 instances of the claim on the platform.

In this sense, we were able to make a positive contribution to a debate that dominated the election news cycle for several days.

Recently, it has also allowed us to find a significant amount of harmful misinformation related to Covid-19, much of which could undermine confidence in vaccines.

Some of our previous recommendations to Facebook for improving the programme have been implemented, and it now covers Instagram as well as Facebook. Other internet companies should learn from TPFC and implement similar programmes on their platforms.

But that’s not to say it’s perfect. Our two main concerns still relate to transparency and scale.

On transparency, we would be able to identify and “catch” claims with the potential to go viral much more easily if our fact-checkers were provided with data points, including the number of shares over time.

Too often, we are alerted to harmful misinformation only after it has already been seen by far too many people.

It goes the other way, too. Right now, if Full Fact has rated one of your posts as false or misleading, you will receive a notification. A warning message is also attached to what you’ve shared, in order to warn others.

We want much more information available to people when their content is fact-checked – including automatic contact details for the organisation that worked on their post.

Most internet companies are already trying to use AI to scale fact-checking, by applying the same fact check every time a false claim reappears on their platforms. We have said we want Facebook to invest in better claim matching to identify false information across its platforms.

All this work needs to be done with open transparent democratic oversight and clear protections for freedom of expression. It’s a growing concern for us at Full Fact that none of the social media firms that use AI to scale up fact-checking are doing so in a transparent way with independent assessment.

The forthcoming Online Safety Bill is a key opportunity for the UK government to demonstrate it can meet this need for oversight.

Indeed, the plans it set out this week to tackle dangerous online misinformation, which will put a duty on internet companies to address the problem, give cause for cautious optimism.

Over the past 18 months, the TPFC programme has helped Full Fact to protect more people than ever from the destructive power of bad information.

But, as the coronavirus pandemic has painfully demonstrated, too many lives are still being ruined by those who spread false information and misleading narratives.

We don’t know what challenges 2021 will bring, but the time to act is now. Voluntary action by internet companies has had real benefits, but we must not let another year go by without proper democratic accountability.

Will Moy is director of Full Fact, the UK’s only independent factchecking organisation, which checks claims made by politicians, interest groups and the media. He previously worked for an independent crossbench peer.
