The fringe social media site Gab is a well-known home for far-right content, but a new study looks at how Islamophobic posts in particular pull from across the web and more traditional social media.
Gab, which promotes itself as an alternative platform that does not restrict its users’ speech — in contrast to mainstream platforms like Facebook, Twitter, and YouTube, which can ban users for hateful content — gained attention in the wake of the Pittsburgh synagogue shooting, after it was revealed the suspect had posted anti-Semitic content to the platform. (Interviewed by CNN after the Pittsburgh shooting, Gab CEO Andrew Torba said, “It disgusted me,” adding that he was “horrified to find out that this alleged terrorist was on our site.”)
The Anti-Defamation League has noted that Gab is often a home for users who have been banned from Twitter for their conduct; the site is among a set of fringe forums for political discussion known for inflammatory and far-right user posts.
In a new study, researchers looked at a dataset of posts on Gab and found YouTube and Twitter were the top domains to which Gab users linked in posts discussing Islam. Not all of the content from YouTube and Twitter was Islamophobic — some of it had little or nothing to do with Islam — but it was incorporated into Islamophobic posts on Gab.
Researchers from the German Marshall Fund, the Institute for the Future, Stanford University, and social media-mapping firm Graphika found that of the more than 10 million posts on Gab leading up to the 2018 midterms (from July to October 2018), more than 188,000 contained keywords related to Islam or Muslim-American political candidates (indicating fewer than 2% of Gab posts contained Islam-related keywords).
They found that 27% of those posts involved derogatory terms and that even posts that didn’t use derogatory terms were “often defamatory and demonizing.” Researchers also found sites such as jihadwatch.org and others known to spread disinformation or content critical of Islam in the dataset.
The German Marshall Fund offered CNN an opportunity to review the study before its publication.
“There is a difference between condoning content and allowing content. We do not condone Islamophobia,” a Gab representative said, when asked for comment about Islamophobic content on the platform by CNN. “However, as long as any content is protected by the First Amendment, it will be allowed on Gab.”
Gab, which claims 900,000 registered user accounts, has promoted itself as a refuge from mainstream social-media platforms, writing in its most recent annual report, filed to the SEC, that “marginalized people from every background – and the interesting conversations they have – need a safe place to engage in public, and where necessary anonymous, expression.” On Gab, that can translate into inflammatory content and aggressive political memes.
Despite its fringe reputation, researchers found that mainstream sites fed some of Gab’s Islam-related discussion. In 188,763 Islam-related posts, YouTube videos were cited 13,751 times and Twitter posts were cited 10,206 times — low overall percentages, but more than any other domains.
“Users on Gab consistently link to YouTube in order to share conspiratorial and disinformative videos on the platform,” the researchers wrote, finding “that many contain graphic and patently false rumors about Islam and topics like pedophilia and violence.”
Asked if it tracks YouTube content that’s shared on sites like Gab, and if that raises any red flags in its efforts to confront hate content, YouTube said it enforces its policies “based on the content that appears on YouTube.com, not on other platforms or websites.” Twitter declined to comment on the study but pointed CNN toward its policies banning hateful conduct.
The study “suggests that there’s not any kind of clear-cut separation between, say, Gab and a site like YouTube,” said Sam Woolley, a fellow at the German Marshall Fund and the Institute for the Future and one of the paper’s authors. Gab and other non-mainstream sites serve “a strategic organizational and communication goal for fringe and extremist groups,” Woolley said, with user discussions spilling onto other parts of the web.
Tracking hateful content through keywords can be difficult, and Woolley noted in a conversation with CNN that Gab users sometimes posted Islam-related videos that weren’t themselves hateful but were used in a derogatory context. A co-author noticed there is often “appropriation of content that is actually even made by pro-Muslim groups or by civil-society groups in some circumstances,” Woolley said, content that was sometimes “being taken and used for hateful purposes.” Woolley said he also found inflammatory discussion about Islam that didn’t include the derogatory keywords for which his group searched.
The YouTube videos circulated on Gab ranged in substance and popularity. In addition to the anti-Muslim videos mentioned above, many of the most-cited videos noted by the study had nothing to do with Islam or Muslims. The most cited was merely nationalistic: It splices together footage of the 9/11 attacks, reporting on the death of Osama bin Laden, and US soldiers engaged in various wars over a speech by Ronald Reagan during the Cold War.
The second-most cited, with 255 mentions in the Gab dataset, is outright disinformative: It purports to show House Speaker Nancy Pelosi outlining a playbook for Democratic smear tactics. The original video, findable in C-SPAN’s video library, makes it obvious that Pelosi is describing the purported tactics of Democrats’ opponents.
The research paper did not mention specific tweets.
Mainstream social media platforms have made efforts to remove bot-promoted disinformation and have come under criticism for not removing hateful content more aggressively. Both Twitter and YouTube have user guidelines that ban hate speech. Facing public scrutiny over bigoted and conspiratorial content, YouTube recently updated its user policies and said it would ban supremacist content and videos that deny the 2012 school shooting at Sandy Hook.
Memes, disinformation campaigns, and political messages can bubble up through alternative sites onto mainstream ones, the researchers noted. “The content on fringe sites often serves as a harbinger, a signal of what is to come, for problematic and divisive communication across mainstream social media platforms,” they note in the study.
The researchers write that “fringe platforms including Gab are likely to be integral to the spread of Islamophobia — but also to the general online communication of disinformation, hate and political manipulation — during the 2020 U.S. presidential elections.”
By its own claims, Gab has grown rapidly: in a previous report filed to the SEC, the company counted 394,000 users in January 2018, meaning its user base has more than doubled since, provided most of its 900,000 accounts are real users rather than duplicates or fake accounts. Gab sought in March to withdraw its offer of public shares, saying it would seek funding through other means. It also said this past spring that it had faced adverse effects after domain-hosting services and PayPal refused to work with it in the wake of the Pittsburgh shooting.
Gab users “find power through their ability to link to mainstream social media platforms—especially YouTube” as they surface that content in their discussions and use it to spread ideas, said the researchers, suggesting mainstream content plays a role in fringe discussions. The researchers recommend policymakers keep an eye on the connection between platforms as they seek to address hate speech and disinformation online.