With most Americans hoping this week’s expected inauguration protests look nothing like the Capitol siege, questions emerge about unrestrained free expression, long championed by First Amendment theorists as a benefit to society, no matter how ugly and hateful.
The optics may be disturbing, especially so soon after the riot, with the potential of protesters – many of like mind with those who stormed the Capitol – screaming, or worse, at troops and police standing guard outside the razor wire-topped fences surrounding the Capitol.
Is allowing this type of expression “good” for America? An old First Amendment theory – known as the safety valve – says it is, that permitting groups to express themselves releases pressure, ensuring objectionable ideas aren’t driven underground where they might boil over into violence.
Permitting free speech, including hate and extremist speech, is often cast as a universal boon, reinforced in idioms such as, “Sunlight is the best disinfectant” and “I don’t agree with what you say, but I’ll defend your right to say it.”
Not all First Amendment scholars are buying the safety valve theory, especially after the deadly episode at the Capitol. They question if extremist speech demands more limitations when it’s inextricably linked to the violence at the nation’s legislative headquarters, after hateful online rhetoric dovetailed with politicians and activists delivering speeches to revved-up crowds that marched to the Capitol, some bent on insurrection.
Even the American Civil Liberties Union, the consummate guardian of speech, has sought to address the “competing values” its long-held defense of expression presents, and some experts say free speech theories need to take into account the way social media has been used to manipulate the marketplace of ideas.
“We have to pay attention to the way that tech platforms are shaping discourse and the way technology moves fringe ideas into the mainstream,” said Joan Donovan, research director at Harvard’s Shorenstein Center on Media, Politics and Public Policy. “The idea we would somehow get out of it by not paying attention to what’s going on and opening the floodgates to more speech misunderstands the phenomenon of online platforms and misunderstands the technology.”
‘Protection against … noxious doctrine’
Ahead of President-elect Joe Biden’s inauguration, government and corporations are taking measures to avoid the violence that marred his Electoral College affirmation. Thousands of troops and police will be on hand, and companies including Apple, Google, Amazon, Twitter and Facebook are deplatforming and banning users, including President Donald Trump, while corporations such as PayPal and Airbnb are temporarily blocking resources, such as fundraising venues and places to stay.
Some of those targeted are crying censorship, but the First Amendment protects against the government, not private organizations, stymieing expression. Big Tech and others, several scholars say, are correct to shut down extremist speech after seeing the role words on their platforms played in planning and stoking the Capitol mayhem.
The concept that would become the safety valve theory was born with US Supreme Court Justice Louis Brandeis’ 1927 concurring opinion in Whitney v. California. He posited “that without free speech and assembly discussion would be futile; that with them, discussion affords ordinarily adequate protection against the dissemination of noxious doctrine.”
Four decades later, renowned First Amendment scholar Thomas Emerson named the theory, writing that “suppression of belief, opinion and expression is an affront to the dignity of man, a negation of man’s essential nature.”
Robert Richards, founding director of the Pennsylvania Center for the First Amendment at Penn State, said the safety valve theory has relevance in today's censorship discussions, but he's not sure it's at play here, yet, given the timeframes, who's shutting down the speech and the other available avenues for expression.
Rather, he sees the corporations responding in a temporary manner to an emergency situation, “exigent circumstances that threaten to play out again,” he said. Yes, shutting down speakers over time carries risks their behavior will “bubble up in some worse fashion,” but there’s no indication the corporate measures are permanent. They’re narrowly tailored to specific speakers, apps or windows of time, he said.
“The main difference (between corporations and government shutting down speech) is the private sector can make its own rules,” he said. “Going forward, those restrictions will ease up as the temperature of the country’s politics goes down. … I don’t really think there’s a lot of permanent ending of speech.”
A far better example of the safety valve theory is the Arab Spring, Richards said. Citizens rose up across the Middle East and Africa – to spur reform and regime change, not question a legitimate election – but their anger reached critical mass after years of systemic oppression, he said, not a few weeks of Twitter or Airbnb bans. Americans also have alternative venues to speak out, where most Tunisians and Libyans did not.
Is banning hate speech a dangerous precedent?
Brandeis died in 1941, Emerson in 1991 – long before the advent of social media. They couldn’t envision what the town square would become once an individual could reach millions by pulling out a phone and pecking at the screen.
“As a technologist and a lawyer, I’ve been troubled by the role that gatherings facilitated by technology have played,” said Wendy Seltzer, an affiliate at Harvard’s Berkman Klein Center for Internet & Society. “The First Amendment protects speech but not incitement to violence, so the challenge is to figure out how to offer the widest possible venue/forum for speech that enables people to develop ideas … and find other perspectives and learn, while having checks in place to prevent it from turning into violence or being a place where violent mobs gather to plot their activities.”
Distinguishing between protected speech and incitement of violence isn’t easy. Many theorists once felt online “rabbit hole conspiracy development” was harmless, Seltzer said, but new research suggests “people (are) being drawn into them because they find themselves surrounded by people with extreme viewpoints and get confirmation for those viewpoints – and they have a hard time getting back to reality.”
While Seltzer says experts need to reevaluate where First Amendment theory “meets practice when people behave in ways that get radicalized by discussion,” others call for outright prohibitions on extremist speech.
Numerous nations, including Canada and some American allies across Europe, have banned hate speech, in particular, but the US Supreme Court historically has held such rhetoric is protected – despite myriad examples of First Amendment caveats, including inciting violence, libel/slander, child pornography, fighting words, copyright infringements and threatening the president.
Roy Gutterman, director of Syracuse University's Tully Center for Free Speech, said he worries: Once you start criminalizing speech, where does it end?
“Europe and Germany and France, they have different histories and obviously their laws have criminalized certain speech that we would legally accept here,” he said. “The categories of speech that we criminalize are really specific and narrow, and narrow for a reason. I’m not sure a new category of illegal hate speech would really comport with the First Amendment, and maybe I’m being naive.”
Is it better to cut off hatemongers’ oxygen?
Ninety-four years after Brandeis’ pontifications, it’s still an evolving discussion. Supreme Court decisions offer sparse bright lines for what expression can be banned.
Texas v. Johnson said flag burning is OK. Watts v. US stated that "crude political hyperbole" is not a true threat, another First Amendment caveat. Brandenburg v. Ohio found speech advocating crime is allowed as long as it doesn't incite imminent lawlessness. Virginia v. Black ruled cross burnings could be carried out in private but not in public to intimidate others. Elonis v. US ruled rapping about killing can be protected speech, but Justice Samuel Alito, in a partial dissent, worried the high court decision would "cause confusion and serious problems" because it left the standard for true threats subjective.
Justice Sonia Sotomayor herself has said the high court's standards for true threats are murky. In denying a 2017 hearing in Perez v. Florida, involving a man drunkenly threatening to blow up a liquor store, she wrote that the Watts and Black cases clarify a speaker must intend to convey a threat to be guilty of a crime, but the high court never addressed the level of intent – "a question we avoided two terms ago in Elonis."
Gutterman knows definitive answers are elusive; still, he said, he errs on the side of free expression.
“So often, free speech advocates end up having to support viewpoints that we don’t always agree with,” he said. “After what we saw (at the Capitol) and even going back a couple years to Charlottesville, I can understand how people might have reservations about the safety valve theory. I still think it involves a pretty important function, to allow people to say things that are offensive.”
Other observers see it differently, including Richard Moon, a law professor at the University of Windsor in Ontario, Canada, where hate speech laws broadly prohibit inciting hatred against “identifiable groups.”
“Hate speech and false claims on the internet can have consequences – and are not just steam being blown off,” he said. “Hateful speech plays to fear and resentment – appealing to the audience at a visceral level. It is often directed at a sympathetic audience in a context that limits the opportunity for debate and reflection. In such circumstances, some in the audience may take seriously the message of hate speech and follow its logic to a violent conclusion.”
Bad actors harnessed free speech to fool the Capitol rioters into believing an election was stolen, justifying their anger and pushing them to violence, Moon said. While he worries efforts to ban hateful, harassing or misleading speech might become overinclusive, he said, he’s not convinced that’s happening.
“Limiting the spread of false information – particularly to a mob – is not going to lead to more violence,” he said. “I’m not persuaded that the frustration at not being able to reach a significant audience outweighs the risks that come with spreading hatred and disinformation. There is more reason to think that denying hatemongers a platform cuts off their oxygen.”
‘I think about the people who are impacted’
Harvard’s Seltzer and Donovan say it’s time to have new conversations on the absolutist interpretations of the First Amendment, taking into consideration the ways ideas live, grow and motivate others in the technological landscape.
Race and gender scholars point out notions about free speech emanate primarily from scholars and politicians who are overwhelmingly White, Seltzer said: “It’s easy for a White male to say, ‘Anyone’s speech is OK,’ and much harder for someone in a minority position who is constantly bombarded with speech telling them they are not equal and who don’t have as much access to laws and courts to gain protections.”
Those notions also don’t account for confirmation bias, a plague of social media wherein people believe disinformation, seek out data supporting it and ignore contradicting information.
“Our First Amendment theory and practice need to be informed by the context and the way humans behave in technological affordances,” Seltzer said. “I’m comfortable saying I will defend even hateful speech as long as I can stand up in alliance with those harmed by that speech and do useful things to protect them from the consequences of it.”
Donovan doesn’t think about extremist speech in the abstract, she said. As a queer woman, she moved to Canada in 2004 to escape the hate she felt homing in on the LGBTQ community.
As an expert in online extremism and disinformation campaigns, she watched as: Trump “seeded hundreds of little lies” across fertile social media fields; extremists tore Black Lives Matter banners off church facades last month, most with little consequence; and key figures in Gamergate and Charlottesville stoked online fury ahead of the attempted coup.
“I think about what it makes people who are hated on have to do in order to stay safe. … I think about the people who are impacted, all the things Black people are going to have to go through in the next couple weeks to make sure they’re not attacked,” Donovan said.
‘They’re threatening to kill people’
It can’t be ignored, too, that some of those involved in the January 6 attempted coup chanted about the Vice President, “Hang Mike Pence!” and one of the insurrectionists wrote, “Murder the media” on a door inside the Capitol.
“We’re not in the abstract here in that there’s no direct threat, and there’s no morally ambiguous words being used here. They’re threatening to kill people,” Donovan said.
It will take years to “dampen and temper” the effects of extremists having the opportunity to network and congregate unfettered, said Donovan, who in 2018 made a case for journalists and community groups “quarantining” toxic ideas.
“I’m talking about this group that really rode the lightning that was Donald Trump into relevancy, and now we have to understand that those people who have met each other and smiled and high-fived and talked about overthrowing the Capitol and killing (US Rep. Alexandria Ocasio-Cortez) and hanging Mike Pence, they now all know each other. They’ve exchanged phone numbers,” she said.
Those ideas grow more dangerous in numbers, Seltzer said.
“It would be great if the safety valve worked,” she said. “The fear and what we’re seeing in lots of YouTube rabbit holes and QAnon conversations is people expressing things that are out there and finding themselves amid a cluster of people whose views are even further out there – maybe even more dangerous views.
“They’re not in the public square taking turns on a soap box. They’re in an echo chamber.”