
Facebook to Offer Users Tips on Spotting Fake News


By Jasper Jackson

Hundreds of millions of Facebook users will be offered tips for spotting fake news as part of the social network’s latest attempt to address concerns about its role in the spread of false information.

The new “educational tool” is part of a multi-pronged strategy which will also see a growing range of “signals” from user behaviour and third-party fact checkers used to make misinformation less prominent on the social network.

“We’re against it and want to take it seriously,” said Adam Mosseri, Facebook’s vice president in charge of the news feed. “It’s a large problem, predates Facebook, predates the internet. We need to take a multi-pronged approach.”

From Friday (7 April) users in 14 countries will be presented with a large post at the top of their feeds with messages such as “it is possible to spot false news”, linking to 10 tips for identifying misinformation, including checking web addresses and being sceptical about headlines that make shocking claims.

The post will be rolled out over three days and users will only see the message up to three times. The initial rollout will target the UK, the US, Germany, France, Italy, the Philippines, Indonesia, Taiwan, Myanmar, Brazil, Mexico, Colombia, Argentina, and Canada, but Facebook said it would also look at pushing it out globally.

Facebook has limited its definition of fake news to articles that set out to deceive, contain objectively provable falsehoods and pretend to be from a “legitimate” news site. The input of third-party fact checkers such as Snopes and PolitiFact will only be used to limit the spread of stories thought to fit that definition, which is deliberately narrow to avoid accusations of politically motivated censorship.

However, other signals that will affect the ranking of articles in the news feed, such as whether someone is less likely to share a story after they have read it, could affect legitimate publishers including established newspapers, as well as controversial digital outlets such as rightwing US site Breitbart.

Mosseri said the majority of fake news on Facebook was created for financial rather than political gain and its impact could be reduced by limiting how often people see it and cutting off the supply of ad revenue.

However, he said educational measures were also necessary to help people evaluate fake news that made it into news feeds, while also encouraging a more critical approach to less clear-cut attempts to misinform that Facebook would not target.

“Some false news is going to make it on to our platform, it’s not going to go to zero … Yes we have a responsibility to reduce the amount of time people come across false news … but also help them to make more informed decisions.”

“We’re going to need multiple solutions, there is no silver bullet.”

Will Moy, director of UK-based fact checking organisation Full Fact, which worked on the tips for spotting fake news, welcomed the move but said he hoped Facebook would “recognise how much more they can do to make it easier for users to spot false news online”.

He added: “The launch of this educational campaign is useful and timely but it should just be the start.”

Facebook has been criticised for being slow to take responsibility for its role in spreading misinformation, which became hotly debated during the US election when fabricated stories about Donald Trump and Hillary Clinton were read and shared by millions. At the same time, it has faced questions over its approach to taking down inappropriate and illegal content, such as hate speech or sexualised images of children, while also being accused of censoring legitimate posts.

Facebook has repeatedly said it takes the criticism seriously, but many remain concerned about how its algorithms, which are primarily designed to maximise the time spent on the network, affect the beliefs and opinions of its users. It has also been accused of a lack of transparency around its approach to tackling misinformation and its processes for taking down content.

Governments have increasingly indicated a willingness to intervene to force technology companies to take more responsibility for what appears on their platforms. The cabinet of the German chancellor, Angela Merkel, this week backed legislation that could lead to fines of up to €50m if social networks refuse to remove illegal content or don’t give users an effective way to report hate speech and fake news.

Justice minister Heiko Maas told Bloomberg on Wednesday that “social network providers are responsible when their platforms are misused to propagate hate crimes and fake news”.

Source: theguardian.com
