The Supreme Court handed Silicon Valley a massive victory on Thursday as it protected online platforms from two lawsuits that legal experts had warned could have upended the internet.
The twin decisions preserve social media companies’ ability to avoid lawsuits stemming from terrorism-related content – and are a defeat for tech industry critics who say platforms are unaccountable.
In so doing, the court sided with tech industry and digital rights groups who had claimed exposing tech platforms to more liability could break the basic functions of many websites, and potentially even create legal risk for individual internet users.
In one of the two cases, Twitter v. Taamneh, the Supreme Court ruled Twitter will not have to face accusations it aided and abetted terrorism when it hosted tweets created by the terror group ISIS.
The court also dismissed Gonzalez v. Google, another closely watched case about social media content moderation – sidestepping an invitation to narrow a key federal liability shield for websites, known as Section 230 of the Communications Decency Act. Thursday’s decision leaves a lower court ruling in place that protected social media platforms from a broad range of content moderation lawsuits.
The Twitter decision was unanimous and written by Justice Clarence Thomas, who said that social media platforms are little different from other digital technologies.
“It might be that bad actors like ISIS are able to use platforms like defendants’ for illegal – and sometimes terrible – ends,” Thomas wrote. “But the same could be said of cell phones, email, or the internet generally.”
Thomas’ opinion reflected the court’s struggle during oral arguments to identify what kinds of speech ought to trigger liability for social media, and what kinds deserved protection.
“I think the court recognized the importance of these platforms for billions of people for communicating, and stepped back from interfering with that,” said Samir Jain, vice president of policy at the Center for Democracy and Technology, a group that filed briefs in support of the tech industry.
For months, many legal experts had viewed the Twitter and Google cases as a sign the court might seek sweeping changes to Section 230, a law that has faced bipartisan criticism in connection with tech companies’ content moderation decisions. Thomas in particular has expressed vocal interest in hearing a Section 230 case.
Expectations of a hugely disruptive outcome in both cases prompted what Kate Klonick, a law professor at St. John’s University, described as an “insane flood” of friend-of-the-court briefs.
As oral arguments unfolded, however, and as justices visibly grappled with the complexities of internet speech, the likelihood of massive changes to the law seemed to recede.
“I think it slowly started to creep into the realm of possibility that … maybe the Court has no idea what the hell these cases are about and had MAYBE picked them to be activist, but weren’t ready to be THIS activist,” Klonick tweeted.
Daphne Keller, director of the Program on Platform Regulation at Stanford University, agreed.
“I do think this vindicates all of us who were saying, ‘the Supreme Court took the wrong case, these ones did not present the issues they actually wanted,’” Keller told CNN.
The justices may soon have another opportunity to weigh in on social media. The court is still deciding whether to hear a number of cases dealing with the constitutionality of state laws passed by Texas and Florida that restrict online platforms’ ability to moderate content. But its handling of the Twitter and Google cases suggests it may approach any new cases carefully.
“The very fact that the justices are proceeding cautiously is a good sign and suggests a more nuanced understanding of these issues than many feared,” said Evelyn Douek, an assistant professor at Stanford Law School.
In Thursday’s Twitter decision, the court held that Twitter’s hosting of general terrorist speech does not create indirect legal responsibility for specific terrorist attacks, effectively raising the bar for future such claims.
“We conclude,” Thomas wrote, “that plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack.”
He stressed that the plaintiffs “failed to allege that defendants intentionally provided any substantial aid” to the attack at issue, and that the defendants did not “pervasively and systemically” assist ISIS in a way that would render them liable for “every ISIS attack.”
Twitter v. Taamneh focused on whether social media companies can be sued under US antiterrorism law for hosting terror-related content that has only a distant relationship with a specific terrorist attack.
The plaintiffs in the case, the family of Nawras Alassaf, who was killed in an ISIS attack in Istanbul in 2017, alleged that social media companies including Twitter had knowingly aided ISIS in violation of federal antiterrorism law by allowing some of the group’s content to persist on their platforms despite policies intended to limit that type of content.
“Countless companies, scholars, content creators and civil society organizations who joined with us in this case will be reassured by this result,” said Halimah DeLaine Prado, Google’s general counsel, in a statement. “We’ll continue our work to safeguard free expression online, combat harmful content, and support businesses and creators who benefit from the internet.”
Twitter did not immediately respond to a request for comment.
Court dismisses Google challenge, leaving Section 230 untouched
In a brief opinion, the court dismissed the case against Google, leaving intact a lower court ruling that held Google is immune from a lawsuit that accuses its subsidiary YouTube of aiding and abetting terrorism.
The outcome will likely come as a relief not only for Google but for the many websites and social media companies that urged the Supreme Court not to curtail legal protections for the internet.
The opinion was unsigned, and the court said: “We decline to address the application of Section 230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.”
No dissents were noted.
The case involving Google zeroed in on whether it can be sued because of its subsidiary YouTube’s algorithmic promotion of terrorist videos on its platform.
The family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris, alleged that YouTube’s targeted recommendations violated a US antiterrorism law by helping to radicalize viewers and promote ISIS’s worldview.
The allegation sought to carve out content recommendations so that they do not receive protections under Section 230, potentially exposing tech platforms to more liability for how they run their services.
Google and other tech companies have said that that interpretation of Section 230 would increase the legal risks associated with ranking, sorting and curating online content, a basic feature of the modern internet. Google claimed that in such a scenario, websites would seek to play it safe by either removing far more content than is necessary, or by giving up on content moderation altogether and allowing even more harmful material on their platforms.
Friend-of-the-court filings by Craigslist, Microsoft, Yelp and others suggested that the stakes were not limited to algorithms and could also end up affecting virtually anything on the web that might be construed as making a recommendation. That might mean even average internet users who volunteer as moderators on various sites could face legal risks, according to a filing by Reddit and several volunteer Reddit moderators.
Oregon Democratic Sen. Ron Wyden and former California Republican Rep. Chris Cox, the original co-authors of Section 230, argued to the court that Congress’ intent in passing the law was to give websites broad discretion to moderate content as they saw fit.
The Biden administration also weighed in on the case. In a brief filed in December, it argued that Section 230 does protect Google and YouTube from lawsuits “for failing to remove third-party content, including the content it has recommended.” But, the government’s brief argued, those protections do not extend to Google’s algorithms because they represent the company’s own speech, not that of others.