
Gresham’s Law of the Internet

Content moderation isn’t “censorship.” It’s what built the Internet.
January 19, 2021

The “chief policy officer” of Parler, the social media platform that recently got kicked off of Amazon’s servers after it was exposed as a hotbed of planning for the January 6 insurrection, has been going around peddling a lot of nonsense about “censorship” and “surveillance.”

What she describes as “censorship” is pressure on social media platforms “to moderate, as they call it, content on their platforms, but that would require 24-hour surveillance, and we don’t think that is consistent with the principles of America.”

What she describes in skeptical tones as what “they” call moderation is, in fact, moderation. “Moderation” is the proper, technical legal term for what happens when the host of a forum decides what is and is not allowed to appear on that forum. Not only is this moderation the exact opposite of censorship—private hosts making the decisions instead of government—it is also an essential function platforms serve that has helped build the modern Internet.

What she means by “surveillance” is even wackier, because she is referring simply to the hosts reading material posted publicly on their own forum, which I guess makes this “the world of Orwell’s Nineteen Eighty-Four.”

Please.

Knowing the difference between this sort of thing and “censorship” or “surveillance”—knowing the difference between the actions of a private platform and those of government—is Classical Liberalism 101, and all the people on the right who are suddenly playing dumb about it should be ashamed of themselves.

The confused mess people are making of these concepts helps explain not only why Parler doesn’t have a leg to stand on in its disputes with Amazon and Google, but also why its forum was doomed to be a cesspool of racism, conspiracies, and incitement from its very conception.

First, let’s be clear about the actual basis for Amazon’s decision to withdraw its web hosting services from Parler. Amazon didn’t target Parler because of ordinary conservative political discussion or even ordinary discussion by pro-Trump fanatics. It cut the company off because of frequent and increasing threats of political violence and insurrection, which Parler failed to remove or to make any plans to remove, in direct violation of Amazon’s terms of service.

If a baker shouldn’t be required to bake a cake for a gay wedding—and he shouldn’t—then it is even more true that Amazon should not be forced to provide technological support for incitement to violence and an attempt to undermine our political system.

This is exactly why Section 230 of the Communications Decency Act of 1996 gave platforms both the power and the mandate to moderate their forums. Prior to Section 230, under a dubious court ruling (Stratton Oakmont v. Prodigy), any attempt to moderate a forum, to pick what could or could not be posted, made the host directly responsible as the publisher of all the content posted there. In effect, if you removed one post as inappropriate, it was assumed that you were endorsing every post you didn’t remove. This made it impossible for a site with user-submitted content to grow to any significant size, since it would have to individually monitor and approve every single word posted on it. Section 230 was specifically intended to enable partial moderation, to allow a site to monitor and remove the most offensive posts without having to take responsibility for everything else.

Remember that Section 230 was part of the Communications Decency Act. It was specifically a mandate for tech companies to make the Internet more family-friendly by policing smut on their platforms. But it also helped make the Internet more attractive and functional for users of all ages by allowing forums to filter out the worst of the trolls and the crazies.

Far from being “censorship” or limiting discussion, this was essential to building the modern Internet.


To understand why, we have to understand some basic principles about how an Internet forum works. These are not rules specific to “social media,” because I formalized them decades ago, way back in the olden days before the Internet had pictures. (We stared at green text on the screen of a monitor connected to the university mainframe. After walking uphill to school both ways.) What people born after the year 2000 don’t realize is that there’s nothing really new on 21st-century social media that wasn’t already happening on Usenet in 1988.

Back around that time, I came up with what I call Gresham’s Law of the Internet. Named after Gresham’s Law in economics, which holds that bad money drives out good, it states that bad postings drive out good. The more a forum is dominated by postings of poor quality, and particularly by postings that are rude, aggressive, insulting, and threatening, the less likely it is that decent and reasonable people will be motivated to post there. A totally unmoderated Internet forum will tend to be taken over by trolls and racists and crazy people transcribing the voices in their heads—and become useless for everybody else.

Beneath this is what I call the Inverse Square Law of the Internet, which holds that the number of a person’s postings tends to be inversely proportionate to their quality. In other words, the craziest people always have more time and energy to post than you do. (If they don’t, you should be worried that you are becoming one of them.) The more irrational an idea, the more obsessive its adherents become, and the less willing they are to let anyone else have the last word. Meanwhile, genuine experts and thoughtful people tend to have a lot of other, more productive things to do with their time.
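
Neither of these “laws” is a precise formula, but a quick toy model shows how the arithmetic works against an unmoderated forum. The sketch below is just that, a sketch under assumptions I am making up for illustration: a forum of 100 posters, 95 of them reasonable and 5 of them trolls, each with an arbitrary quality score, with posting volume set in inverse proportion to quality.

```python
# A toy illustration, not a real model: posting volume is set to 1/quality
# ("the craziest people always have more time to post"), and feed quality is
# the volume-weighted average of whatever survives an optional moderation floor.
import random

random.seed(1)

# 95 reasonable posters and 5 trolls, with quality scores in (0, 1].
posters = ([random.uniform(0.5, 1.0) for _ in range(95)] +
           [random.uniform(0.01, 0.1) for _ in range(5)])

def feed_quality(posters, moderation_floor=0.0):
    """Volume-weighted average quality of the feed, after dropping every
    poster whose quality falls below the moderation floor."""
    total_posts = 0.0
    total_quality = 0.0
    for q in posters:
        if q < moderation_floor:
            continue  # moderated out of the forum
        volume = 1.0 / q  # prolific in inverse proportion to quality
        total_posts += volume
        total_quality += volume * q
    return total_quality / total_posts if total_posts else 0.0

print("unmoderated feed:      ", round(feed_quality(posters), 2))
print("lightly moderated feed:", round(feed_quality(posters, 0.2), 2))
```

The exact numbers are arbitrary; the point is structural. Because volume scales with the inverse of quality, a handful of trolls can end up supplying roughly half of all the posts, and the average quality of an unmoderated feed sags accordingly, while even a modest moderation floor restores it. The worst posters dominate the feed long before they dominate the user base.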

If you’ve ever posted a comment on the Internet that is critical of the Universal Basic Income, Ron Paul, or Elon Musk, you know what I’m talking about. Or, for that matter, if you’ve published something critical of President Trump.

That’s why social media platforms have to moderate their content, as well as supplying tools for people to moderate their own feeds and block users who offend them. And that’s why it’s so stupid to brand this kind of moderation, the kind of minimal maintenance a host does to allow a forum to function, as “censorship.”


That’s also why it was so predictable that all of the platforms that tried to compete with Twitter by proclaiming themselves as “free speech” forums were doomed to fail. Anyone remember Gab, which quickly became a hangout for white nationalists and for no one else?

A platform that really wanted to compete with Twitter would have to moderate more aggressively, in order to provide a better experience for its readers—more thoughtful expression and less yelling—and to keep from becoming a magnet for all the most obnoxious people who got kicked off every other platform.

Instead, Parler did the opposite. By loudly proclaiming its intention not to ban anyone, no matter how offensive their speech, it hung up a sign saying, “Trolls Welcome.” Anyone could have predicted that this was going to become a clubhouse for white nationalists and unhinged conspiracy theorists. And it’s only a short step from there to becoming a center for planning and inciting political violence.


I have criticized Google, Facebook, and especially Twitter for being ham-handed, biased, bad at moderating, and generally scared to death of their interns and of the woke mobs who coalesce on their own platforms. But do these companies actually seem to be in charge? Facebook has become a cesspool of conspiracy theories, and the best defense Parler has to offer is to point to all the insurrection incitement and planning that was happening on everybody else’s platforms, too, despite their efforts to stop it.

My sense of the current Big Tech crackdown is that it is partly politically biased overreach and partly a frantic attempt to control the worst excesses of the vast collective Id that manifests itself on their platforms. So complaining about Big Tech “censorship” seems better suited to, say, 2015, when the platforms were overzealously purging slightly trollish critics of feminism. But it isn’t our biggest problem now, and acting as if it is seems like a politically convenient distraction.

I would like to see an effort to help Big Tech do its moderation better and more fairly, especially at a moment like this one, when the conservative movement is suffering from a catastrophic self-inflicted loss of credibility that will tempt the most intolerant wing of the left to press its advantage.

But none of us should disparage the attempt to moderate content, because it is what makes the modern Internet usable in the first place. Nor should we imagine that the cause of rational political debate will be served by regulation that unleashes the crazies and the trolls.

Gresham’s Law of the Internet tells us that this is a good way to debase our intellectual currency—and rather than saving our digital public squares, it will sink them into the muck.

Robert Tracinski

Robert Tracinski is editor of Symposium, a journal of liberalism, and writes additional commentary at The Tracinski Letter.