Misinformation Watch

By Donie O'Sullivan, Kaya Yurieff, Kelly Bourdet, the CNN Business team and contributors from across CNN

Updated 5:00 p.m. ET, October 19, 2020
12 Posts
1:04 p.m. ET, October 14, 2020

CNN Election 101 podcast dives into identifying disinformation

These days, it’s not so easy to tell what’s true and what’s false on the internet.

From trolls to Russian bots, there are a lot of tools being used to destabilize US elections, and they are counting on regular Americans to click and share their false information. 

In Wednesday's episode of the CNN Election 101 podcast, Kristen Holmes and former CIA analyst Cindy Otis help you figure out how to spot disinformation, and stop it from spreading.

You can now listen here.

12:06 p.m. ET, October 14, 2020

YouTube bans Covid-19 misinformation videos

CNN Business' Kaya Yurieff

YouTube on Wednesday said it would take down videos that include misinformation about Covid-19 vaccines.

The policy will apply to any claims that go against expert consensus from local health officials or the World Health Organization. For example, YouTube said it would remove claims that a vaccine would kill people or cause infertility, or that microchips would be implanted in people who get the vaccine.

The company noted it's already taken action on other types of coronavirus-related misinformation, such as content that disputes the existence of the virus. The company said it's removed over 200,000 videos containing dangerous or misleading information about Covid-19 since February.

YouTube's announcement comes a day after Facebook said it would no longer allow ads that discourage people from getting vaccinated. 

1:18 p.m. ET, October 13, 2020

Facebook only now says it will stop allowing ads that discourage vaccines 

CNN Business' Donie O'Sullivan

Facebook announced Tuesday that it will no longer allow ads that discourage people from getting vaccinated. 

Prominent proponents of anti-vaccine misinformation have for years used Facebook and Instagram to spread their message, which can have dangerous and even deadly consequences.

"Today, we’re launching a new global policy that prohibits ads discouraging people from getting vaccinated. We don’t want these ads on our platforms," Kang-Xing Jin, Facebook’s head of health, and Rob Leathern, a Facebook director of product management, wrote in a post on Tuesday.

"Ads that advocate for or against legislation or government policies around vaccines – including a Covid-19 vaccine – are still allowed," they wrote. 

The company said it will be rolling out the ad ban in the coming days. 

After Facebook’s announcement, Jesselyn Cook, a reporter at HuffPost, highlighted examples of the paid anti-vaccine ads still running on Facebook as of Tuesday.

1:21 p.m. ET, October 13, 2020

In reversal, Facebook will ban Holocaust denial posts under hate speech policy

CNN Business' Oliver Effron

Facebook is expanding its hate speech policy to include content that "denies or distorts the Holocaust," a major shift for the platform, which has repeatedly come under fire for its inaction on hateful and false information.

In announcing the policy change, Monika Bickert, Facebook's vice president of content policy, wrote in a blog post that the decision was "supported by the well-documented rise in anti-Semitism and the alarming level of ignorance about the Holocaust." She cited a recent survey that found almost a quarter of adults in the US between the ages of 18 and 39 believed the Holocaust was a myth.

Facebook (FB) will now direct users to credible information if they search for content related to Holocaust denial on its platform.

CEO Mark Zuckerberg had previously said that while he finds Holocaust denial "deeply offensive," Facebook should not police such content.

"At the end of the day, I don't believe that our platform should take that down because I think there are things that different people get wrong," Zuckerberg said in a 2018 interview with Recode's Kara Swisher. "I don't think that they're intentionally getting it wrong."

In a Facebook post following Monday's announcement, Zuckerberg noted that his thinking has evolved after seeing data showing an increase in anti-Semitic violence.

"I've struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust," he wrote, "...but with the current state of the world, I believe this is the right balance."

Facebook has had a patchy record when it comes to monitoring dangerous or erroneous information. While it has removed some posts from President Trump that violated its policies, the platform has so far taken no action on a post by Trump that claimed, without evidence, that he is immune to coronavirus.

Read more here

6:17 p.m. ET, October 12, 2020

Facebook bans company it says ran fake accounts for Turning Point

Kaya Yurieff and Donie O'Sullivan

Facebook said Thursday it had banned a company it believes ran fake accounts for the conservative group Turning Point USA.

Facebook said the marketing firm Rally Forge, working on behalf of Turning Point USA, ran a campaign that relied upon fake accounts that posted criticism of former Vice President Joe Biden and praise for President Donald Trump. According to Facebook, that campaign included tactics like commenting on the Facebook pages of major national American media outlets.

The alleged activity was first identified through an investigation by The Washington Post, which prompted Facebook to look into the group.

"Many of these accounts used stock profile photos and posed as right-leaning individuals from across the US. In 2018, some of these accounts posed as left-leaning individuals to comment on content as well. This activity was centered primarily around commenting on news articles posted by news organizations and public figures, rather than posting their own content," Facebook said in a report published Thursday.

Facebook added in the report, "The most recent activity included creating what we call 'thinly veiled personas' whose names were slight variations of the names of the people behind them and whose sole activity on our platform was associated with this deceptive campaign."

Read more here

1:03 p.m. ET, October 13, 2020

Twitter won't let you retweet, like or reply to election tweets with warnings on them

CNN Business' Kaya Yurieff

Twitter is rolling out a series of changes ahead of the US election next month in an attempt to clamp down on the spread of misinformation.

On Friday, Twitter said that users, including political candidates, cannot claim an election win before it is authoritatively called. Twitter's new criteria require either an announcement from state election officials or a public projection from at least two authoritative, national news outlets. Twitter did not identify the outlets, though news organizations like CNN, the Associated Press, ABC News, and Fox News would fit the bill.

Previously, Twitter had said candidates would be prohibited from claiming victory "before election results have been certified." That caveat immediately drew the attention of election experts, because Twitter was drawing a red line that was noticeably out of step with how results are processed. The results publicly reported by election officials and news outlets on election night are always preliminary; the results are formally "certified" by state officials only weeks later. With Friday's adjustment, Twitter is smoothing out its policies for Election Night and eliminating a potentially major hiccup.

Such tweets claiming a premature win will receive a misleading information label and users will be directed to Twitter's official US election page for more details.

Warnings that block interactions

Twitter is also now adding more warnings and restrictions to tweets with labels. For example, people will have to tap through a warning to see such tweets, and they will only be able to "quote tweet" them; likes, regular retweets and replies will not be available, and those tweets won't be recommended by Twitter. (Quote tweets append a tweet to a user's commentary about it.)

Twitter had previously added these warnings to tweets in a few situations, but it is now expanding their use.

This will apply to tweets from US political figures, including candidates and campaign accounts; from US-based accounts with more than 100,000 followers; and to any tweets that rack up significant engagement.

"We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets," Twitter wrote in a blog post on Friday.

Starting next week, when users try to retweet anything with a misleading information label, they'll see a prompt directing them to authoritative information about the topic before they are able to go through with a retweet.

1:03 p.m. ET, October 13, 2020

We asked Trump supporters to show us their Facebook feeds

CNN Business' Donie O'Sullivan

Misleading content shared by Trump and his team is often defended as humor. But his supporters aren't always in on the joke.

1:03 p.m. ET, October 13, 2020

How a crease in Biden's shirt spawned a debate conspiracy theory

CNN Business' Donie O'Sullivan

Before the first presidential debate, a baseless conspiracy theory reached many Americans. The Trump campaign, Fox News, and a slew of Trump-supporting Facebook pages all fueled speculation that Democratic presidential candidate Joe Biden might wear a secret earpiece to assist him in his debate against President Trump.

"I thought Biden had somebody in his ear," said one Trump supporter the morning after the first presidential debate. Her belief was shored up, she said, by video she had viewed of Biden supposedly adjusting a wire during the debate.

She was on her way to a Trump rally in Duluth, Minnesota and was referring to a YouTube video she was sent by a friend who is serving with the military overseas.

In fact, the video does not show Biden wearing a wire; it shows a crease briefly forming on Biden's shirt after he reached into his coat to scratch his shoulder. But once this false evidence emerged to support the baseless earpiece claim, the theory spread like wildfire.

One version of the video that was flagged by fact-checkers as false on Facebook had been shared more than 22,000 times and viewed 800,000 times by Thursday night.

Read more here

1:03 p.m. ET, October 13, 2020

Facebook can't catch misinformation it's already identified as false, activist group says

CNN Business' Brian Fung

With less than four weeks to go before a pivotal US election, Facebook has sought to reassure the public it has learned from its 2016 mistakes. On Wednesday, the company rolled out a new policy against voter intimidation and announced it will temporarily suspend political ads after polls close on Election Day.

But a new report from activist researchers shows that in the past year alone, Facebook has failed to act on hundreds of posts that racked up millions of impressions and contain claims that the social media giant has previously identified as false or misleading — raising fresh questions about the company's readiness for a potential wave of misinformation following Nov. 3.

The report outlines how purveyors of misinformation have successfully evaded Facebook's content review systems, both human and automated, by taking simple steps such as reposting claims against different-colored backgrounds, changing fonts and re-cropping images. The resulting posts appear to be just different enough to escape enforcement.

The posts include false claims about President Donald Trump and Vice President Joe Biden as well as false information about mail-in voting and the coronavirus.

The tactics mean that even as Facebook (FB) demotes and applies warning labels to certain posts that have been rated as false by third-party fact-checkers, variations on those same posts continue to replicate virally across the platform unhindered, said Avaaz, the activist group that produced the research.

Read more here