Judicial Activism Can’t Fix Section 230
Al Gore didn’t invent the internet, but he did play a role in ensuring that the internet could grow organically as an open platform through his work on the Telecommunications Act of 1996. The act, signed by President Clinton in an elaborate ceremony at the Library of Congress at the dawn of the public internet era, established that broadband internet service would be less heavily regulated than other telecommunications infrastructure like radio, telephone landlines, and broadcast, cable, and satellite TV. Title V of the act encompassed the Communications Decency Act, which included provisions designed to protect children from indecent online materials. The Supreme Court pared back many of those provisions in a series of cases, but one part of the CDA that survived intact was Section 230—at least for now.
Section 230 addressed the “moderator’s dilemma” of whether to moderate user-generated content. Under defamation law, a communications platform that takes no steps to moderate content usually is not a “publisher” and therefore is not liable for that content. It is like a bulletin board on which anyone can paste any kind of flyer or handbill: if the board’s owner makes no effort to vet or remove what is posted, the owner assumes no responsibility for the flyers’ content and cannot be held liable for it. But as soon as the platform owner takes some steps to moderate content (say, by tearing down handbills that say mean things about the neighbors), the platform owner takes on responsibility. Whatever they don’t take down, they implicitly endorse, and defamation liability may attach for failing to moderate adequately. Section 230 resolves the moderator’s dilemma by providing a safe harbor to online platform providers, which allows them to host and moderate user-generated content without fear of publisher or other secondary liability.
This special carve-out for online publishers has been part of the laissez-faire legal environment that helped incubate the rapid growth and unfettered creativity of the early internet age. It removed significant legal (and therefore financial) risks for internet startups, and allowed the growth of everything from Facebook and Reddit to Nextdoor and Spotify.
There are serious concerns that the anything-goes environment of big social media platforms under Section 230 can be abused by terrorists, mass shooters, child pornographers, human traffickers, conspiracy theorists, cyber thieves, and other criminals. But the MAGA movement has been in a tizzy about Section 230 for years because of the perceived (and perhaps sometimes real) political bias of the people and algorithms that moderate huge platforms such as Twitter. Now that Donald Trump has been deplatformed and Parler has been de-hosted, these objections have compounded.
The terrible irony in this “conservative” reaction to social media bias is that the proposals being floated read like the vodka-soaked ravings of a drunken apparatchik from the State Directorate of Digital Oversight. Take Sen. Josh Hawley’s proposed “Ending Support for Internet Censorship Act,” first floated in 2019. The bill would require social media companies to obtain an “immunity certification” from the Federal Trade Commission based on a political neutrality requirement, which the FTC would police. Nothing says “free, open, competitive internet” like getting pulled over on the information superhighway so an FTC political licensing officer can inspect your papers.
Also in 2019, Hawley introduced the “Limiting Section 230 Immunity to Good Samaritans Act,” which would require social media companies to adopt government-approved terms of service on political neutrality to qualify for Section 230 immunity. Under the plan, platforms that violate the neutrality obligation in “bad faith” would be subject to statutory damages and attorneys’ fees. Free speech would suffer, but trial lawyers would make a killing.
In May 2020, President Trump issued a ranting, whining executive order about Section 230: “As recently as last week, Representative Adam Schiff was continuing to mislead his followers by peddling the long-disproved Russian Collusion Hoax, and Twitter did not flag those tweets.” The order sought a new FCC rulemaking and directed an FTC-led federal review of social media platforms, a move more likely to intimidate than to refine the regulatory regime.
Following Trump’s order, Sen. Hawley introduced the Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act, which would strip platforms of Section 230 immunity for employing targeted ads. The substance was no more serious than his previous proposals, although the acronym at least sounded badass.
Now that the Trump administration is no more, supposed conservatives have moved on to suggest the Supreme Court should decide the question, judicial restraint be damned. Biotech CEO Vivek Ramaswamy and Yale Law professor Jed Rubenfeld argued that the Court should declare private social media companies to be “state actors” subject to the Bill of Rights and the Fourteenth Amendment. This idea is doctrinally dead in the water and jurisprudentially loony.
The “state action” doctrine, as its name suggests, demarcates governmental action from private action for purposes of civil rights protections. By definition, the civil rights protections of the Bill of Rights and the Reconstruction Amendments apply only against governmental action. They shield us from arbitrary and excessive governmental power, not from each other. Our private relationships are governed by private law (contracts, torts, and property), by statutes and regulations that reflect our political values about how we should treat each other, and at the outer edges, by criminal law.
Ramaswamy and Rubenfeld would effectively nationalize the American media ecosystem by judicial fiat. Among the many collateral consequences of contorting the state action doctrine to cover social media companies is that the companies would become subject to other civil rights provisions, such as the Fourth Amendment. Bolting social media directly to the Constitution would eliminate any give-and-take, compromise, or democratic input in privacy laws and other regulations governing how companies, individuals, and social media interact. The balance between user privacy and digital commerce would instead be governed by constitutional litigation.
Philip Hamburger is the latest “conservative” to marry the jurisprudential assertiveness of the Warren Court with the politics of MAGA. He acknowledges that Twitter et al. are not state actors, which should be the end of the discussion for First Amendment claims against them. But apparently the state action doctrine has penumbras and emanations that transmogrify private companies into governments. The First Amendment “prohibits only government censorship,” he warns. “Yet one must worry that the government has privatized censorship.” A right of private “censorship,” however, is not only constitutionally acceptable but constitutionally required. A private person or entity has a First Amendment right not to speak and not to endorse someone else’s speech.
Hamburger tries to avoid this First Amendment axiom by suggesting that big social media platforms are “akin to common carriers.” Common carrier status, of course, would saddle social media platforms with the same sorts of regulatory obligations that have traditionally applied to telecommunications infrastructure such as telephone landlines, as well as to railroads, airlines, and cruise ships.
Hamburger’s suggestion distorts a long-running debate about common carrier status for broadband internet infrastructure companies. Internet infrastructure service providers (the companies that install and run the wires and switches that make up the internet’s physical layer) were not classified as common carriers under the Telecommunications Act of 1996. Rather, in 2002, the FCC classified them as “information services” in pursuit of the vision of an unencumbered, open internet.
Whether it makes sense (and is authorized by current communications law) to reclassify internet backbone providers as common carriers remains fiercely contested by lawyers, economists, and policymakers. This question is central to the “network neutrality” debate. No one, however, seriously argues that “edge providers” such as social media apps should also be classified as common carriers. If common carrier obligations attached to the content layer rather than to the physical infrastructure layer, the scope of governmental control over the internet would be breathtaking. It would be like telling people that if they want to talk with their mother on the phone, they also have to make an equal-time call to their creepy uncle.
It’s true that there’s a kind of “Section 230 fundamentalism” that can’t admit social media platforms should bear some legal responsibility to police terroristic, child-endangering, and other violent criminal content. Laws like the Allow States and Victims to Fight Online Sex Trafficking Act (sponsored by Sen. Hawley’s fellow Missourian, Rep. Ann Wagner, and signed by President Trump), which pares back Section 230 immunity for some kinds of sex trafficking content, reflect important values, even if those values are not well implemented in the statutory language. In June 2020, the Justice Department published proposals about criminal content and Section 230 that should be taken seriously, although other aspects of that report are excessive and reactionary.
And there are some legitimate problems with market concentration among social media platforms, which the dynamic, market-based internet policy of the Telecommunications Act of 1996 did not entirely foresee. If anything, market concentration may call for more antitrust scrutiny, which is already happening. It’s good to recall, though, that only 15 years ago Twitter didn’t exist (it was founded in March 2006), and that Google is just over 20 years old. What will the internet and social media landscape look like five, 10, or 15 years from now? Neither the FTC nor the Supreme Court should have the final word.