As the United States gears up for another presidential election, with the role online disinformation played in 2016 still fresh in memory, the business of publishing false or extremist content online remains a lucrative one.
At least $235 million in revenue is generated annually from ads running on extremist and disinformation websites, according to a new study from the Global Disinformation Index provided exclusively to CNN ahead of its September release. That means the people behind websites propagating hate or false information don’t just have an ideological influence — they can also make big money from advertisers who are often unaware, or unhappy to find, that their brand name is being displayed alongside content they do not endorse.
The Global Disinformation Index is a nonprofit that assesses websites’ risk of spreading disinformation and rates them based on transparency. It defines “disinformation” as inaccurate information spread “purposefully and/or maliciously.”
For this latest study, the organization surveyed 20,000 domains it suspected of spreading disinformation, looking at the websites’ traffic and audience information, what kinds of ads they were running and how much they made per visitor on advertisements.
The organization’s findings reflect just “the tip of the iceberg,” Danny Rogers, chief technology officer at the Global Disinformation Index, told CNN’s John Avlon on “Reliable Sources” Sunday.
Because of the complex nature of the online advertising ecosystem, companies often don’t know exactly where their ads will end up.
“I think given the choice they would actively choose not to subsidize this kind of content, but right now they don’t have the choice,” Rogers said.
In one instance in 2016, an ad for Allstate insurance ran next to an article on Nowtheendbegins.com espousing a conspiracy theory about the Sandy Hook school shooting, according to a New York Times report. Allstate has since broken ties with several sites classified as disinformation by the Global Disinformation Index.
Mostly, it’s up to companies themselves to try to monitor where their ads end up. Sleeping Giants, an activist group founded after the 2016 election, is aiming to help them. Sleeping Giants runs Twitter and Facebook accounts that alert companies when their ads run on sites that peddle disinformation.
Separately, Google (GOOG) and Facebook (FB), the two largest internet advertising companies, have announced efforts to try to fight misinformation on their sites. Facebook has removed hundreds of Facebook pages and Instagram accounts it identified as part of coordinated disinformation campaigns. Google and Facebook implemented “trust indicators” to identify articles coming from trusted news sources. And Google’s YouTube tweaked its recommendation algorithm to ensure news-related searches would show results from reliable outlets.
Still, the quantity of information posted to the internet each day, technological developments that make false information look real and disagreements over what constitutes “disinformation” present challenges for that work. And that means advertisers must play a role in monitoring where their ads show up, Sleeping Giants founder Matt Rivitz said Sunday on “Reliable Sources.”
“Advertisers for a long time viewed media as reach and frequency, and now they have to view it with responsibility, too,” Rivitz said. “It’s bad for society if they’re funding this hate and disinformation.”