Editor’s Note: Tim Weninger is associate professor of computer science and engineering at the University of Notre Dame. The opinions expressed in this commentary are his own.

Last month, Twitter CEO Jack Dorsey announced that the social network will no longer permit political ads to run on the site. Dorsey noted that a political message should be earned — not bought — and acknowledged the challenges that political ads online pose when it comes to misleading information, deepfakes and targeted messaging. Twitter announced additional details on the policy on Friday, saying it would indeed allow certain groups to advertise on political issues. But removing politicians’ ads is just a small step toward fighting the broader problem of disinformation.

Do political campaigns spend money to spin news stories about their opponents? Of course they do. And the ability for a campaign to hyper-optimize and micro-target false narratives to specific groups of social media users erodes our ability to make informed decisions and weakens our democracy.

Other social media platforms, like Instagram and Facebook (which has been criticized for not fact-checking political ads posted on its site), should follow Twitter’s lead. Social media companies should either fact-check the ads they run or decline to run them altogether. Doing nothing is reckless and further endangers social discourse.

But paid posts are a minuscule portion of the political content shared on social media. The vast majority of political content that circulates on these platforms is unpaid. Most troubling, many of these unpaid posts originate from malicious groups posing as legitimate activists or news agencies, and these fake organizations wield considerable influence.

For instance, Russia’s Internet Research Agency (IRA), the organization widely blamed for spreading disinformation during the 2016 presidential election, only spent $100,000 to purchase nearly 3,400 Facebook and Instagram ads in the months leading up to the election. Now compare those 3,400 ads with the “61,500 Facebook posts, 116,000 Instagram posts, and 10.4 million tweets,” that the IRA made and posted while pretending to be regular users, according to the Senate’s Select Committee on Intelligence. The committee argues that “an emphasis on the relatively small number of advertisements, and the cost of those advertisements, has detracted focus from the more prevalent use of original, free content via multiple social media platforms.”

Even more alarming: Many prominent Americans, including Trump campaign officials and critics, promoted this IRA-created content, which made it look that much more legitimate. And there is thus far no evidence that these Americans knew the accounts were tied to Russia. The overwhelming presence of non-paid, politically charged memes, videos and clickbait stories dwarfs the impact of sponsored political ads. Simply put, without even realizing it, regular users can easily spread disinformation.

Scientific research is becoming clear on this point. My own lab’s study of social media users found that 73% of all likes and shares, contributed by real, actual humans, occurred without the user ever reading the article — they never even clicked on the link. Social media users are mostly headline browsers: We scroll through our newsfeed, find something that amuses or angers us, and without considering the content or consequences, we spread it. So it’s no wonder that political disinformation spreads so quickly. We’re often doing the sharing.

So what should be done?

There are dozens of ideas being promoted as solutions to the problem of online disinformation. California’s new bot disclosure law, for example, makes it illegal to use a bot to mislead someone about its artificial identity, and several countries have recently passed laws to prohibit the creation of disinformation. But research shows that the role of bots in spreading disinformation is minor compared with the everyday sharing done by regular users. And giving governments the power to decide fact from fiction, and what can and cannot be published, carries significant risks to free speech and the free press.

Fact-checking is another common refrain. But fact-checking takes time: a fake story can be created, go viral and be forgotten before a fact-checker even has time to consider it. Even if there were an instantaneous, super-AI fact-checking system, what would we do with it? There is some evidence that tagging fake posts has a small effect in limiting how often they are shared, but in most practical cases fact-checking does too little, too late.

Social media companies rely almost entirely on users for their content, and it’s clear that these platforms are doing a lousy job of eliciting quality content from those users. If social media companies are serious about stopping the spread of disinformation, then they should encourage users to be more thoughtful about the content they post and share.

There are countless ways to do this. Most of us aren’t willing participants in malicious disinformation campaigns. So, in addition to the ongoing efforts to demonetize websites that promote disinformation and take down hateful content, social media platforms should give users better tools for making smarter sharing decisions.

For example, many social platforms allow users to report posts that they believe are intentionally spreading disinformation. It’s unclear how effective or widely used these systems are, but according to The Washington Post, Facebook keeps a trustworthiness score for every user in order to gauge the credibility of that user’s reports. Keeping this score secret rightfully protects it from abuse, but surely Facebook could let us know a little about how we’re doing. Why not notify me if an article I shared yesterday was fact-checked and found to be false or misleading? Such a feature would challenge me to think twice before sharing salacious or misleading content next time.

Despite their best efforts, so far it appears that social media companies have invented and discovered dozens of ways to not fix the problem. There is no clear solution here. But any kind of progress will have to include a broad recognition that it is the users, individually and collectively, who need to fix the newsfeed.

Updated: This article has been updated to reflect new details that Twitter released about its political ad policy.