“Free Speech” Ought to Mean More than Mocking Trans People
Explaining why he wants to (maybe) buy Twitter, Elon Musk said that he values free speech. Here’s how he defined it: “By ‘free speech’, I simply mean that which matches the law. I am against censorship that goes far beyond the law.”
That definition is both shallow and contradictory. And it offers no guidance on the challenging questions that the idea of “free speech” poses to a genuinely free society.
In the United States, the First Amendment says the government cannot punish you for speech (with some established exceptions, such as incitement and sexual harassment). The same amendment also protects freedom of association, meaning that it’s perfectly legal for private organizations to exclude some speech. A support group for widows doesn’t have to let a never-married man talk. A Bible study group doesn’t have to give equal time to an atheist. Private groups can choose to do so if they want, but they can’t be compelled to.
Similarly, websites can have a terms of service agreement. They can also choose how to moderate content, as they try to create a community and appeal to users. To pick an easy example, websites are not legally required to host pornography, even though porn is protected speech under the First Amendment.
Musk knows that “matches the law” is a bad standard—though it would give him an excuse to kowtow to censorship demands from China, which accounts for a quarter of the sales and half the production for his electric car company Tesla—which is why he shifts in the very next sentence to the claim that content moderation shouldn’t go “far beyond the law.” But that’s already the consensus for moderation, at least on mainstream platforms: Go beyond the law, but not too far. Sure, some ideologues call for heavy-handed censorship, but no social network implements their pure vision, not least because these platforms are businesses and that’s not what the market wants.
The debate is over what, exactly, constitutes “too far.” What content moderation rules and enforcement mechanisms produce the sort of engagement, discourse, and culture—and therefore attract and keep the sort of users—that make a website a going concern? For the most influential platforms, which practices serve the overlapping, but sometimes conflicting, goals of what’s best for the company, the users, the country, and the world?
For Elon Musk and many others who complain about Twitter’s policies, the answer seems to be a lot narrower than abstract appeals to free speech suggest.
With Great Power…
At the low end of the content moderation scale are 4chan and 8chan/8kun. Those image boards are basically free-for-alls that allow anonymous posting. Per the internet cliché, this means Nazis and porn. Banner ads show hardcore pornography, and some areas of the sites feature violent images, violent fantasies, racial and gendered slurs, pro-Hitler memes, you name it.
There’s also a lot of nerdy fan culture (anime, video games, etc.) and other harmless discussion. 4chan played a role in the growth of the alt-right, the loose hacker collective “Anonymous,” and QAnon. It’s where the Buffalo supermarket attacker says he was radicalized. In 2018-19, three different white nationalist terrorists announced their attacks on 8chan (to encouragement from users of the site).
If you want this type of platform, it exists.
But a lot of people don’t want that. The big social networks—Facebook, LinkedIn, YouTube, Twitter—aim to be the online mainstream, appealing to a wide variety of users and the businesses that sell to them. That requires stopping behavior that isn’t illegal but makes the platform inhospitable, such as hate speech.
What, exactly, counts as hate speech? Which groups warrant the most protection? What’s unacceptable, what’s a judgment call, and what’s close to the line but not over it? The debate’s in the details.
But given these networks’ size, there are also questions of public responsibility. A few platforms exert a lot of power over the flow of information, and they are, for better or worse, the primary venues for news and expression for millions of people. That gives them a responsibility to be broadly inclusive and to promote healthy discourse. They should default to allowing people to post things (and they mostly do), but they also have a responsibility not to become tools for bad actors.
Again, the devil’s in the details: Who is a bad actor? What is bad activity?
Here you will not find many absolutists. Facebook and Twitter cracked down on ISIS supporters, including speech that didn’t mention violence, and few Americans objected. Somewhat more controversial has been countering Russian intelligence activity and the conspiracy theorists of QAnon. The next step on the spectrum is the debate over efforts to moderate false information about public health and politics, specifically COVID, vaccines, and the 2020 election.
The reason this is an important and difficult problem—one that can’t be solved by saying “free speech”—is that social media doesn’t just allow expression. It distributes and amplifies it, algorithmically pushing speech out to people, even those who didn’t ask to see it.
Free speech means you shouldn’t punish, censor, or chill expression. It doesn’t mean you have to disseminate someone’s expressions for them.
Algorithmic dissemination makes social media unlike anything else. A person can say false and misleading things about consequential subjects, tell everyone they know, post them on a blog, or put them on a picket sign. But social media creates the possibility of rapid virality, in which an algorithm boosts the dissemination of such a post, putting it in front of thousands, or even millions, of people. And they won’t just observe it; they’ll see it embedded among posts from trusted sources, both personal and professional.
With free speech, some people will be wrong and that’s okay. The problem is when social media turns inaccurate information into real world harm. False fearmongering about vaccines and promotion of false miracle cures harm public health. False conspiracy theories (QAnon, the “great replacement,” Trump’s “Big Lie” about the 2020 election, etc.) have motivated real world violence. In Myanmar and Ethiopia, inflammatory lies about a minority group going viral on social media contributed to mass killing, arguably genocide.
“Maybe we shouldn’t facilitate ethnic cleansing” does not seem like an unreasonable standard. The question is not whether big platforms will go beyond the law in regulating speech. They will. The question is which speech, and how.
Bias (And the Lack Thereof)
I don’t know what Elon Musk will change about Twitter—or whether the deal will go through—but I know his public statements on free speech do not reflect deep, well-informed thoughts on the subject. As Mike Masnick notes at Techdirt, citing a TED interview with Musk from April 14, the problem is not that Musk’s shallow “free speech” vision has never been tried. The problem is that it’s been tried many times . . . and failed. If anything, the Musk view was the prevailing Silicon Valley ethos for the entire early period of the internet up until, roughly, 2010. But the same problems kept cropping up, and engineers and developers have spent years troubleshooting, refining, stumbling toward something that balances various responsibilities.
There seems to be a general consensus, among both those most excited by and those most worried about the possibility of Musk’s purchase, that when Musk says “free speech” he means, in practice, looser rules about misinformation and hate speech, especially regarding race and gender. Both camps sense that Musk aims to tilt Twitter in favor of the culture-war right.
Among the reasons they think this: Musk shared a meme claiming that the right has stayed in the same place since 2008 while the left has veered far to the left; he has incited online pile-ons against Twitter employees who have become objects of right-wing ire; he has engaged in positive, supportive interactions with right-wing trolls, such as Pizzagate conspiracy theorist Mike Cernovich (Musk to Cernovich on May 9: “Twitter obv has a strong left wing bias”); and his purchase bid includes money and vocal support from influential people who claim to be victims of the powerful left-wing elite.
This supposed bias is an article of faith for large swaths of the right, but when serious researchers have gone looking for it, they don’t find empirical support. A 2021 study found that, across seven advanced democratic countries, Twitter’s algorithm boosts posts by right-wing politicians and parties a little more than posts by left-wing politicians and parties. Another 2021 study set loose some politically neutral “drifter bots” on Twitter and found strong evidence of conservative bias, but “from user interactions (and abuse) rather than platform algorithms.”
Content moderation decisions can be haphazard, not least because the Big Tech business model means a small number of employees rely on algorithms and user reporting to oversee far more content than they can possibly handle. Public perception of these decisions often derives from a few anecdotes repeated by interested parties, and doesn’t match the data. For example, a 2022 paper found strong support in the U.S.—from both Democrats and Republicans—for social media companies taking action against misinformation. Of accounts banned or suspended for misinformation, more were conservative than liberal, but there was no evidence of political bias in enforcement decisions. Every banned or suspended account had clearly violated terms of service, it’s just that people on the right happened to break misinformation rules more often.
So what conservatives really object to isn’t the enforcement, but the rules themselves. Some people want to tweet things about race and gender that others think should be socially unacceptable the way the n-word is unacceptable (maybe not as bad, but nevertheless over the line). Big social media platforms, including Twitter, have decided that some of those expressions are bigoted, that they drive users from the platform, and that they are therefore unwelcome.
Multiple Twitter alternatives have been tried, all vowing to be “free speech” platforms that don’t moderate content. Every one of them—Gab, Parler, Gettr, etc.—has ended up moderating speech and enforcing rules, because what their “unfettered free speech” resulted in was doxxing, promotion of violence, and various other depravities that underscored why content moderation became the norm on the internet in the first place. And all these alternative platforms have flopped as businesses because “Twitter for people who want to post things you can’t post on Twitter” isn’t appealing to most users.
For business reasons, if nothing else, Twitter under Elon Musk would still moderate content. It might, however, change which users it prioritizes.
Which Users? Whose Speech?
Consider the case of the Babylon Bee, a website that bills itself as “Christian news satire.” On March 20, Babylon Bee CEO Seth Dillon tweeted that the Bee’s Twitter account was partially restricted for violating Twitter’s “rules against hateful conduct.” The tweet in question featured the name of, and a link to, a March 15 article: “The Babylon Bee’s Man Of The Year Is Rachel Levine.”
Levine is a trans woman currently serving as the U.S. assistant secretary for health. Twitter’s terms of service consider calling her a man to be harassment based on gender identity, so Twitter hid that tweet (everything else the Bee posted is still visible) and said the @TheBabylonBee account could read tweets and send direct messages but could not post or retweet until it deleted the tweet about Levine.
According to Dillon, Musk reached out to him after the suspension and they talked about free speech. When Dillon confirmed why the Bee got suspended, Musk mused that he “might need to buy Twitter.”
Dillon says the Bee will not delete the tweet about Levine, and he explained his rationale: (1) it was “telling the truth,” and (2) you should “never censor yourself.” As of this writing, the website’s Twitter account, which has 1.5 million followers and used to post multiple times a day, hasn’t tweeted anything new (though Dillon and others have continued sharing links to Babylon Bee articles, which the Bee still freely publishes on its own website).
As a First Amendment case, this is straightforward stuff. Twitter and the Babylon Bee are both private companies, with rights of speech and association, and can therefore decide what’s on their respective websites. The Bee can publish what it wants on its site. Twitter can block tweets that the Bee sends on Twitter’s platform. And the government can’t force either to put up, take down, or spread anything. If you see this case as a free speech issue—even a little bit—then you’re not using legal standards.
So to make this a case of problematic censorship, we need to go beyond the law to a “culture of free speech” standard: a set of norms that places Twitter’s action outside the realm of acceptable behavior.
Prominent writers and organizations have been raising alarms about that very issue. A recent New York Times editorial argued that self-censorship due to fear of “retaliation or harsh criticism,” including on social media, is evidence that “America Has a Free Speech Problem.” The Atlantic’s Anne Applebaum writes that “fear of the internet mob” is causing “self-censorship and silence” similar to authoritarian governments. A widely-read open letter published in Harper’s argued that “a vogue for public shaming and ostracism” is one of the ways “censoriousness” is “spreading more widely in our culture.”
Twitter apparently agrees, explaining its Hateful Conduct Policy like this: “Free expression is a human right . . . if people experience abuse on Twitter, it can jeopardize their ability to express themselves.”
Twitter’s policy notes that “research has shown that some groups of people are disproportionately targeted with abuse online,” and says it will remove content that attacks people “on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
Just about everyone supports moderating some expressions about some group on that list. So this isn’t a debate between free speech and censoriousness. It’s about tradeoffs between types of censorship. Moderators can partially reduce the flood of abuse thrown at trans people, black people, women, or any group that gets a disproportionate amount—or they can stand aside as internet mobs hound those people into self-censorship.
Trans rights activists and Twitter’s current policies say that calling Levine a man belongs on the socially unacceptable side of the line. Seth Dillon, the Babylon Bee, and apparently Elon Musk say it doesn’t. That’s the actual debate.
Twitter is a private company, and its rules are up to its owners, whether that’s Elon Musk or anyone else. As a supporter of the First Amendment, I accept that, even if I don’t agree with their choices. But as someone who greatly values free speech—not just legal protections from government, but a culture that fosters expression and dialogue—I refuse to cede the concept of free speech to those who think a defining feature is trolls trying to drive trans people and other minorities off social media.