"Sophia" an artificially intelligent (AI) human-like robot developed by Hong Kong-based humanoid robotics company Hanson Robotics is pictured during the "AI for Good" Global Summit hosted at the International Telecommunication Union (ITU) on June 7, 2017, in Geneva.
The meeting aim to provide a neutral platform for government officials, UN agencies, NGO
"Sophia" an artificially intelligent (AI) human-like robot developed by Hong Kong-based humanoid robotics company Hanson Robotics is pictured during the "AI for Good" Global Summit hosted at the International Telecommunication Union (ITU) on June 7, 2017, in Geneva. The meeting aim to provide a neutral platform for government officials, UN agencies, NGO's, industry leaders, and AI experts to discuss the ethical, technical, societal and policy issues related to AI. / AFP PHOTO / Fabrice COFFRINI (Photo credit should read FABRICE COFFRINI/AFP/Getty Images)
PHOTO: FABRICE COFFRINI/AFP/AFP/Getty Images
Now playing
02:26
Meet Sophia: The robot who smiles and frowns just like us
WASHINGTON, DC - JUNE 22: Facebook
WASHINGTON, DC - JUNE 22: Facebook's Chief Operating Officer Sheryl Sandberg speaks with AEI president Arthur C. Brooks during a public conversation on Facebook's work on 'breakthrough innovations that seek to open up the world' at The American Enterprise Institute for Public Policy Research on June 22, 2016 in Washington, DC. (Photo by Allison Shelley/Getty Images)
PHOTO: Allison Shelley/Getty Images North America/Getty Images
Now playing
01:23
Hear Sandberg downplay Facebook's role in the Capitol riots
screengrab US social media
screengrab US social media
PHOTO: Getty Images
Now playing
04:35
Tech companies ban Trump, but not other problematic leaders
PHOTO: Samsung
Now playing
01:53
See Samsung's new Galaxy S21 lineup
PHOTO: CNN
Now playing
02:47
Extremists and conspiracy theorists search for new platforms online
This illustration picture shows the social media website from Parler displayed on a computer screen in Arlington, Virginia on July 2, 2020. - Amid rising turmoil in social media, recently formed social network Parler is gaining with prominent political conservatives who claim their voices are being silenced by Silicon Valley giants. Parler, founded in Nevada in 2018, bills itself as an alternative to "ideological suppression" at other social networks. (Photo by Olivier Douliery/AFP/Getty Images)
This illustration picture shows the social media website from Parler displayed on a computer screen in Arlington, Virginia on July 2, 2020. - Amid rising turmoil in social media, recently formed social network Parler is gaining with prominent political conservatives who claim their voices are being silenced by Silicon Valley giants. Parler, founded in Nevada in 2018, bills itself as an alternative to "ideological suppression" at other social networks. (Photo by Olivier Douliery/AFP/Getty Images)
PHOTO: Olivier Douliery/AFP/Getty Images
Now playing
03:49
Parler sues Amazon in response to being deplatformed
PHOTO: Twitter
Now playing
02:39
Twitter permanently suspends Donald Trump from platform
Panasonic
Panasonic's Augmented Reality Heads-up Display
PHOTO: Panasonic USA
Now playing
01:06
This tech gives drivers directions on the road in front of them
PHOTO: LG Display
Now playing
01:10
See LG's transparent TV
PHOTO: Twitter/@gregdoesthings
Now playing
02:06
Internet gets creative with empty iPhone boxes
NEW YORK, NY - JUNE 3: The Google logo adorns the outside of their NYC office Google Building 8510 at 85 10th Ave on June 3, 2019 in New York City. Shares of Google parent company Alphabet were down over six percent on Monday, following news reports that the U.S. Department of Justice is preparing to launch an anti-trust investigation aimed at Google. (Photo by Drew Angerer/Getty Images)
NEW YORK, NY - JUNE 3: The Google logo adorns the outside of their NYC office Google Building 8510 at 85 10th Ave on June 3, 2019 in New York City. Shares of Google parent company Alphabet were down over six percent on Monday, following news reports that the U.S. Department of Justice is preparing to launch an anti-trust investigation aimed at Google. (Photo by Drew Angerer/Getty Images)
PHOTO: Drew Angerer/Getty Images North America/Getty Images
Now playing
03:25
Google employee on unionizing: Google can't fire us all
Now playing
02:01
Watch 'deepfake' Queen deliver alternative Christmas speech
Now playing
01:42
Watch father leave daughter dozens of surprise Ring messages
PHOTO: Photo Illustration: Kena Betancur/Getty Images
Now playing
04:50
Zoom's founder says he 'let down' customers. Here's why
Now playing
00:48
See Walmart's self-driving delivery trucks in action
Now playing
01:25
This robotaxi from Amazon's Zoox has no reverse function
(CNN Business) —  

A picture of a woman breastfeeding a baby. A fully clothed woman taking selfies in the mirror. A photo of a vase. These images were all wrongly flagged by Tumblr as improper.

Tumblr began its crackdown on adult content several weeks ago, but behind the scenes its technology still struggles to tell the difference between banned and permitted nudity.

In December, Tumblr banned adult content, specifically “images, videos, or GIFs that show real-life human genitals or female-presenting nipples,” in an effort to clean up its blogging platform. A handful of exceptions include nudity as art or as part of a political event, as well as written content such as erotica.

Tumblr said it would enforce its rules with automated detection, human moderators, and community users flagging objectionable posts. But some Tumblr users have complained that harmless images are being misflagged.

It’s tough for AI to distinguish between, say, nudity for the sake of art or politics and nudity that is pornographic. An algorithm is typically trained on one narrow task, such as spotting faces in pictures, so it can get tripped up by differences humans would consider trivial, such as lighting.

To get a sense of how well this works in practice, CNN Business created a test Tumblr page and posted photos that don’t violate the service’s policy but might be challenging for AI to sort out. Images included nude sculptures, bare-breasted political demonstrators, and unclothed mannequins.

Most of the posts had no issues. However, we received emails immediately after posting several images that said they were hidden from public view for possibly violating Tumblr’s community guidelines.

As far as we could tell, none of the posts actually did. And the images that were flagged weren’t always the ones we expected. Some images with political nudity — such as bare-breasted female French protesters, painted silver and clothed in red cloaks — appeared fine. Others, including a picture of topless women who had painted their bodies with the Spanish phrase “Mi cuerpo no es obsceno” (“my body is not obscene”), were not. A crowd of mannequins was flagged, too.

Tumblr declined to comment but said in a December blog post that making distinctions between adult content and political and artistic nudity “is not simple at scale,” and it knows “there will be mistakes.” The company has an estimated 21.3 million monthly users, according to eMarketer.

Ethan Zuckerman, director of the Center for Civic Media at MIT, said part of the difficulty in using AI to discern different types of nudity is that while there is plenty of pornography out there, there aren’t all that many images of people getting naked for political reasons.

“You just don’t have as much training data to draw from,” he said.

Tech companies such as Tumblr, Twitter and Facebook are increasingly turning to artificial intelligence as a solution to all kinds of problems, particularly for scrubbing unsavory posts from social networks. But AI’s ability to moderate online content — whether text, photos, or videos — is still quite limited, Zuckerman and other experts say. It can help humans pick out bad posts online, but it’s likely to remain complementary rather than become a panacea in the years ahead.

Facebook in particular has emerged as an AI advocate. In April, CEO Mark Zuckerberg told Congress more than 30 times during 10 hours of questioning that AI would help rid the social network of problems such as hate speech and fake news.

It makes sense to deploy such technology, considering it’d be nearly impossible for human moderators to monitor the content created by a social site’s millions (or, in Facebook’s case, billions) of users.

Dan Goldwasser, an assistant professor at Purdue University who studies natural language processing and machine learning, believes AI will get better at this over time, and that we should have realistic expectations for its use in the meantime.

“In some sense, it’s a matter of, well, if you set the bar quite low, AI can be very successful,” Goldwasser said.

A machine-learning algorithm — a type of AI that learns from mounds of data and gets better over time — can identify offensive language or pictures used in specific contexts. That’s because these kinds of posts follow patterns on which AI can be trained. For example, if you give a machine-learning algorithm plenty of racial slurs or nude photos, it can learn to spot those things in text and images.
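As a loose illustration of that kind of pattern learning, here is a minimal sketch in Python using scikit-learn. The handful of posts and labels is invented for this example, and nothing below reflects how Tumblr or Facebook actually build their systems; it simply shows a model picking up word patterns from labeled data.

```python
# Illustrative sketch only -- invented data, not any platform's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts: 0 = acceptable, 1 = should be flagged.
posts = [
    "what a lovely photo of the beach",
    "congrats on the new painting, it's beautiful",
    "people from that group are vermin and should vanish",
    "I hate everyone from that group",
]
labels = [0, 0, 1, 1]

# TF-IDF converts text into word-frequency features; logistic regression
# then learns which word patterns correlate with the "flag" label.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The model only recognizes patterns resembling its training data. Novel
# phrasing, sarcasm, or context -- the article's point -- can confuse it.
print(model.predict(["that group is full of wonderful people"]))
```

In practice, production systems train on millions of examples rather than four, but the limitation is the same: the model flags what resembles its training data, not what a human would judge in context.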

Often, it’s trickier for machines to flag the nasty things humans post online, Goldwasser said. Inflammatory social media posts, for instance, may not include clearly offensive language; they could instead include false statements about a person or group of people that lead to violence.

AI also has a hard time understanding uniquely human interactions such as humor, sarcasm, and irony. It might sound like you’re saying something mean when it’s meant to be a joke.

Understanding context — who is writing a post or uploading an image, who the intended audience is, and what the surrounding social environment is — can be key to figuring out the meaning behind a social-network post. And that’s a lot easier for us than it is for AI.

As AI tools improve, humans will remain an important part of the moderation process. Sarah T. Roberts, an assistant professor at UCLA who researches content moderation and social media, points out that humans are especially good at dissenting when necessary.

For example, a person may recognize that an image of a violent scene actually depicts a war crime against a group of people. That would be very hard for a computer to determine.

“I think people will always be better at [understanding] nuance and context,” she said.

Zuckerman also believes humans will always play a role in finding and stopping the spread of negative online content.

“Human creativity prevents full automation from coming into play,” he said. “I think we’re always going to find ways to surprise and shock and create new imagery.”