Late last year, Facebook-parent Meta quietly phased out certain content labels on its platforms that for much of the pandemic had directed viewers to its central Covid-19 information page, after internal research concluded the labels may be ineffective at changing attitudes or stopping the spread of misinformation, according to a report Thursday by the company’s external oversight board.
Facebook rolled out the labels in early 2021, after coming under criticism for the spread of Covid-19 misinformation on its platforms during the first year of the pandemic. The company applied the labels to a wide range of claims both true and untrue about vaccines, treatments and other topics related to the virus.
But Meta began winding down its use of the labels on Dec. 19 and stopped applying them entirely soon after, following the internal research, the report said. Study results provided to the Meta Oversight Board, a quasi-judicial body, showed that the company’s labels appeared to have “no detectable effect on users’ likelihood to read, create or re-share” claims that had previously been rated as false by third-party fact-checkers or that discouraged the use of vaccines, the report said.
The research focused on Meta’s direct labeling interventions as opposed to labels the company applies to content as part of its third-party fact-checking program. The research found that the more frequently a user was exposed to the labels, the less likely they were to visit the Covid-19 information center, which offers authoritative resources and information linked to the pandemic.
“The company reported that initial research showed that these labels may have no effect on user knowledge and vaccine attitudes,” the report said.
Meta’s internal research on the labels has not been previously released, and the Oversight Board on Thursday called for Meta to publish its findings as part of a broader review of the company’s handling of Covid-19 misinformation.
The new details highlight the struggles platforms have faced in fighting misinformation and could raise broader questions about the efficacy of labeling and directing users to more accurate information. The report also comes at a time when some of the biggest social media companies, including Twitter and Meta, are either rolling back their Covid-19 misinformation policies or considering doing so.
Meta should not relax its approach to Covid-19 misinformation as the company has proposed, the Oversight Board added. Until the World Health Organization determines that the pandemic has eased, Meta should instead continue to remove misinformation that violates the company’s policies, rather than shifting toward more lenient treatments such as labeling or downranking misleading information, the board said.
Meta said Thursday it will publicly respond to the Oversight Board’s recommendations within 60 days.
“We thank the Oversight Board for its review and recommendations in this case,” a company spokesperson said. “As Covid-19 evolves, we will continue consulting extensively with experts on the most effective ways to help people stay safe on our platforms.”
In the past, Meta has touted its ability to direct users to the Covid-19 information center. Last July, the company said it had connected more than 2 billion people across 189 countries to trustworthy information through the portal.
Some of those visits occurred through labels that Meta referred to internally as “neutral inform treatments,” or NITs, and “facts about ‘X’ informed treatments,” also known as FAXITs.
The labels were automatically applied to content that Meta’s automated tools determined was about Covid-19, the Oversight Board said. The labels never directly addressed the claims within any given post, but they provided a link to the Covid-19 information center as well as more contextual information, including messages saying that vaccines have been proven safe and effective or that unapproved Covid-19 treatments could cause bodily harm. (Meta provided examples of a NIT and a FAXIT in its July 2022 request for Oversight Board guidance on whether it should relax its Covid-19 misinformation policy.)
The decision to begin phasing out the labels came after Meta’s product and integrity teams ran an experiment studying Meta’s global userbase, the report said. The study found that users who were shown the labels approximately once a month were more likely on average to click through to the Covid-19 information center than users who were shown the labels either more or less frequently.
In light of the results, Meta later told the Oversight Board it would stop using the labels altogether, to ensure they could remain effective in other public health emergencies, according to the report.
While the Oversight Board’s report Thursday did not pass judgment on Meta’s decision to stop using the labels, it urged the company to reevaluate the 80 distinct types of claims that the company considers to be Covid-19 misinformation and therefore subject to removal from its platforms.
Meta should perform the reassessments regularly, the Oversight Board said, consulting with public health officials to determine which claims on Meta’s banned list continue to be false or misleading and worthy of removal. Meta should also publish a record of when and how it updates that list, the board added.