On Wednesday, the Google (GOOGL)-owned video platform said it would take a “stronger stance” against threats and personal attacks, among other changes intended to address the safety of its community.
YouTube said it would now prohibit “veiled” or “implied” threats, not just explicit ones. The new policy covers content that simulates violence against a person or uses language suggesting physical violence could occur. YouTube also said it will no longer permit “maliciously” insulting someone based on characteristics like race, gender expression or sexual orientation, whether the target is a private individual, YouTube personality or public figure. The policy applies to videos as well as comments.
The move comes six months after YouTube faced one of its most high-profile controversies over harassment in recent memory. In June, YouTube came under fire after declining to ban the videos of Steven Crowder, a prominent right-wing personality. Vox journalist Carlos Maza said Crowder used the platform to attack him with homophobic and racist slurs. (YouTube did demonetize Crowder’s videos at that time, meaning he couldn’t earn money from the ads running on his channel.)
A YouTube spokesperson told CNN Business it has wanted to update its policy since April, in response to various behaviors it saw on the platform, ranging from harassment of politicians to YouTube creators targeting each other. The spokesperson said the Crowder situation further highlighted the need to make a comprehensive update to its harassment policy.
The updated policy is part of a broader effort by YouTube to clean up its platform following scrutiny from advocacy groups, lawmakers and media. The company has been trying to improve how it manages content, ranging from removing videos that violate its policies to reducing the spread of “borderline” content and prioritizing authoritative voices in its search results when users are looking for breaking news or information.
The real challenge will be enforcing the new harassment policy. YouTube has struggled in the past to police its massive platform. In June, YouTube said it would ban supremacist content and remove videos that deny well-documented atrocities like the Holocaust and the massacre at Sandy Hook Elementary School. However, in September, the Anti-Defamation League released a report that found at least 29 YouTube channels espousing anti-Semitic and white supremacist content.
Now, six months after that policy update, some of the biggest purveyors of hate remain on YouTube, including white supremacist Richard Spencer.
As with other updates it has introduced, YouTube said it plans to enforce the harassment policy through a combination of human reviewers and automated systems. Creators can appeal if they believe YouTube made the wrong call.
On Wednesday, the company said that while developing the policy update, it had sought input from dozens of creators in multiple countries, as well as experts in areas such as online bullying and free speech.
As part of the changes, channels found to be repeatedly engaging in harassing behavior across multiple videos or comments will be removed from the YouTube Partner Program, which means they won’t be able to earn money on the platform. The platform could also take more drastic steps if the harassment continues, including removing specific content or deleting the entire channel.
The company said it expects to remove more harassing comments as a result of the new policy, too. It already offers YouTubers the ability to review comments that seem potentially inappropriate before they’re posted to their channels. Last week, the company rolled the feature out as a default setting to more accounts; creators can opt out.