Editor’s Note: Stanley Fish is the author of several books of literary theory, including two on John Milton. His new book is “The First: How to Think About Hate Speech, Campus Speech, Religious Speech, Fake News, Post-truth, and Donald Trump.” The opinions expressed in this commentary are his own; view more opinion at CNN.
In the course of Mark Zuckerberg’s back and forth with Alexandria Ocasio-Cortez last week, the New York congresswoman asked him: “So you won’t take down lies or you will take down lies? I think that’s just a pretty simple yes or no?” Zuckerberg refused the bait contained in the word “lies” and responded, “people should be able to see for themselves what politicians that they may or may not vote for are saying.”
What Zuckerberg did was shift the focus from the telling of lies, and his platform’s complicity in their dissemination, to the marketplace-of-ideas concept in which information is put into the world without any prejudgment of its worth and veracity, and the winnowing of the true from the false is left to time and the working of the market.
This is a very familiar strategy that allows the vehicle of falsehood and defamation to escape criticism by placing the responsibility of discernment on those who are the recipients of the information the platform innocently delivers.
Behind this strategy lie two oft-quoted statements by Justice Louis Brandeis: “sunlight is said to be the best of disinfectants” and “the remedy for harmful speech is more speech, not enforced silence.” And behind these two free speech platitudes is the assumption that when bad or false ideas are allowed to see the light of day, that light will expose them for what they are and they will wither and die.
The only counterargument to this happy picture is all of recorded history; for, as survey research has repeatedly shown, the result of putting dangerous and scurrilous views into the conversation is that those views receive a broader distribution than they otherwise could have hoped for, and are taken up by people who would never have heard of them had they not been indiscriminately published. Zuckerberg is putting his faith (a word carefully chosen) in a mechanism and a process that doesn’t exist.
One can understand why. He is pledged to two contradictory ambitions. On the one hand, he doesn’t want to censor anyone’s speech, but on the other he doesn’t want his platform to be the vehicle of evil effects. As he put it earlier (in a 2018 interview with Recode), “there are two core principles at play here. There’s giving people a voice, then there’s keeping the community safe.” “Look,” he says almost plaintively, “I want to make sure our products are used for good.”
But how can he make sure of that without setting himself up as the arbiter of the good, something he very much does not want to do? In his public pronouncements, Zuckerberg tacks back and forth between the two contradictory positions he wants simultaneously to occupy.
At times he promises that artificial intelligence technology will come up with algorithms that will allow us to flag harmful speech without any input from fallible and prejudiced human judgment. At other times, he is less optimistic and says things like “people use tools for good and bad.”
So in one moment Zuckerberg is putting his faith in technology and in another he is throwing up his hands. At one point the pendulum swung from the giving-everyone-a-voice side to the keeping-the-community-safe side. Facebook removed pages belonging to conspiracy theorist Alex Jones and his website InfoWars because, the company announced, they violated “community standards.” But of course community standards are various and volatile and it is easy to imagine community standards that clash and in clashing, take away any principled basis for the action Facebook has taken. If it’s Alex Jones and InfoWars this time, who is going to be next?
It is easy to poke fun at Zuckerberg’s performance in the past couple of years, but to be fair he is simply a particularly visible emblem of the tensions and contradictions that emerge whenever there is an attempt to draw a line between speech that is a genuine contribution to democratic deliberation and speech that threatens democracy’s foundations.
Facebook and other social media platforms are ever trying to draw that line, but no such line can ever be drawn because the distinction between speech that helps us to make decisions and speech that corrupts the decision-making process will always be a partisan one. The dilemma that produces Zuckerberg’s comical performances is baked into the situation. Neither Facebook nor any other platform will ever be able to strike the balance Zuckerberg seeks.
So what should Zuckerberg and his platform do? Well, they could continue to argue, as they have in the past, that theirs is a tech, not a media, company and is therefore not responsible for the content that uses it as a conveyor. (Kind of like Western Union.)
Or they could embrace a media identity and accept the responsibility of monitoring (by some mechanism not yet devised) the truth and falsehood of, at the least, political advertisements that appear on their site.
To do the first – that is, to do nothing – is to risk the wrath of those politicians who are already accusing them of being complicit in the doing of bad things. To do the second is to risk being accused by all sides of being partisan, something that is also already happening. Right now, in the context of the #MeToo movement and other markers of a heightened cultural sensitivity, the latter risk would seem to be a better bet, but, needless to say, not a safe bet.
Facebook’s precarious non-position got no easier to hold Wednesday, when fellow digital titan Twitter banned political ads entirely. Patience thins and Zuckerberg’s critics will not wait much longer for a response. Between a rock and a hard place doesn’t begin to cover it.