Misinformation Watch

By Donie O'Sullivan, Kaya Yurieff, Kelly Bourdet, the CNN Business team and contributors from across CNN

Updated 11:21 a.m. ET, January 26, 2021
12:46 p.m. ET, October 15, 2020

YouTube still won't say it's banning QAnon, but it is taking new steps to combat it

From CNN Business' Kaya Yurieff

YouTube said on Wednesday that it will prohibit content that targets an individual or group with claims tied to conspiracy theories like QAnon and Pizzagate, which have been used to justify real-world violence.

Rather than flat-out prohibit content that discusses a conspiracy theory like QAnon, YouTube will prohibit content that threatens or harasses someone by suggesting they are part of harmful conspiracies, YouTube said in a blog post.

QAnon believers have embraced a number of different and often contradictory theories, but the basic false beliefs underlying the far-right conspiracy theory are claims about a cabal of politicians and A-list celebrities engaging in child sex abuse, and a "deep state" effort to undermine President Trump.

The new policy does not cover vague discussion of QAnon ideas, such as falsely saying there is a cabal of Washington insiders and celebrities involved in sex trafficking. Still, YouTube said it would look for a variety of signals in such videos. For example, if someone is named or an image of a person is shown anywhere in a video, YouTube would take it down under its new policy.

YouTube said it believes Wednesday's update will have a significant impact on the remaining QAnon content on the platform. In the past, YouTube has struggled to enforce its policies effectively, and even when it has banned content, such content has continued to appear on the platform.

The update is YouTube's latest effort to curb the most egregious content coming from QAnon followers, while stopping short of a ban on QAnon.

In fact, YouTube remains resistant to any such blanket ban, even as Facebook has banned pages, groups and Instagram accounts representing QAnon, and TikTok has banned QAnon accounts and removed QAnon content.

In a recent interview with CNN's Poppy Harlow, YouTube CEO Susan Wojcicki wouldn't say whether the platform would ban QAnon. "We're looking very closely at QAnon," Wojcicki said. "We already implemented a large number of different policies that have helped to maintain that in a responsible way."

Wojcicki pointed to changes made to YouTube's recommendation system, which she said have reduced viewership of QAnon content by more than 80%.

YouTube also said it's taken down tens of thousands of QAnon videos and removed hundreds of QAnon-related channels.


12:33 p.m. ET, October 15, 2020

Twitter suspends fake accounts pretending to be Black Trump supporters

From CNN's Scottie Andrew

Twitter has recently suspended a slew of fake accounts pretending to be Black supporters of President Donald Trump. Many have tweeted this identical phrase: "YES IM BLACK AND IM VOTING FOR TRUMP."

A Twitter spokesperson told CNN the accounts violated rules against platform manipulation and spam, which ban users from tweeting to "artificially amplify or suppress information," among other activities.

It's unclear how many fake accounts were taken down. Twitter did not immediately respond to questions about the number of accounts.


12:45 p.m. ET, October 15, 2020

Florida Latinos flooded with misinformation on social media and Spanish radio

From CNN's Leyla Santiago

As both presidential campaigns step up efforts to court Latino voters, misinformation campaigns are also taking aim at Latinos, especially in Florida, where they make up 20% of the electorate, according to Pew Research. The rampant misinformation has led to tension and strife for some in Florida's Latino community, as many struggle to make sense of it all just weeks before the US presidential election.

According to an expert from Equis Labs, networks on social media are coordinating attacks targeting candidates on the left and social movements, like Black Lives Matter. False claims have been shared thousands of times, the expert told CNN. Conspiracy theories have not only ended up on Spanish-language radio and popular messaging apps among Latinos, but some are also being echoed by the Trump campaign.


1:04 p.m. ET, October 14, 2020

CNN Election 101 podcast dives into identifying disinformation

These days, it’s not so easy to tell what’s true and what’s false on the internet.

From trolls to Russian bots, there are a lot of tools being used to destabilize US elections, and they are counting on regular Americans to click and share their false information. 

In Wednesday's episode of the CNN Election 101 podcast, Kristen Holmes and former CIA analyst Cindy Otis help you figure out how to spot disinformation, and stop it from spreading.


12:06 p.m. ET, October 14, 2020

YouTube bans Covid-19 misinformation videos

From CNN Business' Kaya Yurieff

YouTube on Wednesday said it would take down videos that include misinformation about Covid-19 vaccines.

The policy will apply to any claims that go against expert consensus from local health officials or the World Health Organization. For example, YouTube said it would remove claims that a vaccine would kill people or cause infertility, or that microchips would be implanted in people who get the vaccine.

The company noted it's already taken action on other types of coronavirus-related misinformation, such as content that disputes the existence of the virus. The company said it's removed over 200,000 videos containing dangerous or misleading information about Covid-19 since February.

YouTube's announcement comes a day after Facebook said it would no longer allow ads that discourage people from getting vaccinated. 

1:18 p.m. ET, October 13, 2020

Facebook only now says it will stop allowing ads that discourage vaccines 

From CNN Business' Donie O'Sullivan

Facebook announced Tuesday that it will no longer allow ads that discourage people from getting vaccinated. 

Prominent proponents of anti-vaccine misinformation have for years been using Facebook and Instagram to spread their message, which can have dangerous and even deadly consequences.

"Today, we're launching a new global policy that prohibits ads discouraging people from getting vaccinated. We don't want these ads on our platforms," Kang-Xing Jin, Facebook's head of health, and Rob Leathern, a Facebook director of product management, wrote in a post on Tuesday.

"Ads that advocate for or against legislation or government policies around vaccines – including a Covid-19 vaccine – are still allowed," they wrote. 

The company said it will be rolling out the ad ban in the coming days. 

After Facebook's announcement, Jesselyn Cook, a reporter at HuffPost, highlighted examples of the paid anti-vaccine ads still running on Facebook as of Tuesday.

1:21 p.m. ET, October 13, 2020

In reversal, Facebook will ban Holocaust denial posts under hate speech policy

From CNN Business' Oliver Effron

Facebook is expanding its hate speech policy to include content that "denies or distorts the Holocaust," a major shift for the platform, which has repeatedly come under fire for its inaction on hateful and false information.

In announcing the policy change, Monika Bickert, Facebook's vice president of content policy, wrote in a blog post that the decision was "supported by the well-documented rise in anti-Semitism and the alarming level of ignorance about the Holocaust." She cited a recent survey that found almost a quarter of adults in the US between the ages of 18 and 39 believed the Holocaust was a myth.

Facebook (FB) will now direct users to credible information if they search for content related to Holocaust denial on its platform.

CEO Mark Zuckerberg previously said that while he finds Holocaust denial "deeply offensive," he maintained that Facebook should not police such content.

"At the end of the day, I don't believe that our platform should take that down because I think there are things that different people get wrong," Zuckerberg said in a 2018 interview with Recode's Kara Swisher. "I don't think that they're intentionally getting it wrong."

In a Facebook post following Monday's announcement, Zuckerberg noted that his thinking has evolved after seeing data showing an increase in anti-Semitic violence.

"I've struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust," he wrote, "...but with the current state of the world, I believe this is the right balance."

Facebook has had a patchy record when it comes to monitoring dangerous or erroneous information. While it has removed some posts from President Trump that violated its policies, the platform has so far taken no action on a post by Trump that claimed, without evidence, that he is immune to coronavirus.


6:17 p.m. ET, October 12, 2020

Facebook bans company it says ran fake accounts for Turning Point

From CNN Business' Kaya Yurieff and Donie O'Sullivan

Facebook said Thursday it had banned a company it believes ran fake accounts for the conservative group Turning Point USA.

Facebook said the marketing firm Rally Forge, working on behalf of Turning Point USA, ran a campaign that relied upon fake accounts that posted criticism of former Vice President Joe Biden and praise for President Donald Trump. According to Facebook, that campaign included tactics like commenting on the Facebook pages of major national American media outlets.

The alleged activity was first identified through an investigation by The Washington Post, which prompted Facebook to look into the group.

"Many of these accounts used stock profile photos and posed as right-leaning individuals from across the US. In 2018, some of these accounts posed as left-leaning individuals to comment on content as well. This activity was centered primarily around commenting on news articles posted by news organizations and public figures, rather than posting their own content," Facebook said in a report published Thursday.

Facebook added in the report, "The most recent activity included creating what we call 'thinly veiled personas' whose names were slight variations of the names of the people behind them and whose sole activity on our platform was associated with this deceptive campaign."


1:03 p.m. ET, October 13, 2020

Twitter won't let you retweet, like or reply to election tweets with warnings on them

From CNN Business' Kaya Yurieff

Twitter is rolling out a series of changes ahead of the US election next month in an attempt to clamp down on the spread of misinformation.

On Friday, Twitter said that users, including political candidates, cannot claim an election win before it is authoritatively called. Twitter's new criteria require either an announcement from state election officials or a public projection from at least two authoritative, national news outlets. Twitter did not identify the outlets, though news organizations like CNN, the Associated Press, ABC News, and Fox News would fit the bill.

Previously, Twitter said candidates would be prohibited from claiming victory "before election results have been certified." This caveat immediately drew the attention of election experts, because Twitter was drawing a red line that was noticeably out of step with how results are processed. The results publicly reported by election officials and news outlets on election night are always preliminary. Weeks later, the results are formally "certified" by state officials. With Friday's adjustment, Twitter is smoothing out its policies for Election Night, and eliminating a potentially major hiccup.

Tweets prematurely claiming victory will receive a misleading information label, and users will be directed to Twitter's official US election page for more details.

Warnings that block interactions

Twitter is also now adding more warnings and restrictions to tweets with labels. For example, people will have to tap through a warning to see such tweets, and they will only be able to "quote tweet" them; likes, regular retweets and replies will not be available, and those tweets won't be recommended by Twitter. Quote tweets append a tweet to a user's commentary about it.

Twitter had previously added these warnings to tweets in a few situations, but it is now expanding their use.

This will apply to tweets from US political figures, including candidates and campaign accounts; tweets from US-based accounts with more than 100,000 followers; and any tweets that rack up significant engagement.

"We expect this will further reduce the visibility of misleading information, and will encourage people to reconsider if they want to amplify these Tweets," Twitter wrote in a blog post on Friday.

Starting next week, when users try to retweet anything with a misleading information label, they'll see a prompt directing them to authoritative information about the topic before they are able to go through with a retweet.