CNN —

A ruling against Google in a closely watched Supreme Court case this term about YouTube’s recommendation engine could have sweeping unintended consequences for much of the wider internet, the search giant argued in a legal filing Thursday.

Google, which owns YouTube, is fighting a high-stakes court battle over whether algorithmically generated YouTube recommendations are exempt from Big Tech’s signature liability shield, Section 230 of the Communications Decency Act.

Section 230 broadly shields tech platforms from liability for content posted by their users and for the companies’ content moderation decisions. But a Supreme Court decision holding that AI-based recommendations do not qualify for those protections could “threaten the internet’s core functions,” Google wrote in its brief.

“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” the company wrote. “If plaintiffs could evade [Section 230] by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

Faced with such a ruling, Google argued, websites would have to choose between over-moderating, scrubbing virtually anything that could be perceived as objectionable, and doing no moderation at all to avoid the risk of liability.

Driving the case are claims that Google violated a US antiterrorism law when its recommendation algorithms surfaced pro-ISIS YouTube videos to users. The plaintiffs are the family of Nohemi Gonzalez, who was killed in a 2015 ISIS attack in Paris.

In the filing, Google said “YouTube abhors terrorism” and cited its “increasingly effective actions” to limit the spread of terrorist content on its platform, before insisting that the company cannot be sued for recommending the videos due to its Section 230 liability shield.

The case, Gonzalez v. Google, is viewed as a bellwether for content moderation and marks the first time the Supreme Court will consider Section 230 since its passage in 1996. Multiple justices have expressed interest in weighing in on the law, which has been broadly interpreted by the courts, defended by the tech industry, and sharply criticized by politicians in both parties.

The Biden administration, in a legal brief last month, argued that Section 230 protections should not extend to recommendation algorithms. President Joe Biden has long called for changes to Section 230, saying tech platforms should take more responsibility for the content that appears on their websites. As recently as Tuesday, Biden published a Wall Street Journal op-ed that urged Congress to amend Section 230.

But in a blog post Thursday, Google General Counsel Halimah DeLaine Prado argued that narrowing Section 230 would increase the threat of litigation against online businesses, including small ones, chilling speech and economic activity on the internet.

“Services could become less useful and less trustworthy — as efforts to root out scams, fraud, conspiracies, malware, violence, harassment, and more are stifled,” DeLaine Prado wrote.