Ethics and Twitter: Elon Musk’s Quest for Free Expression

Written by Caitlin Ring Carlson, Associate Professor and Faculty Fellow
April 26, 2022

On April 25th, Elon Musk announced that he had reached a deal with Twitter to buy the company for $44 billion cash. The purchase puts one of the largest social media companies in the country in the hands of the world’s richest man.

In a press release about the takeover, Musk doubled down on his absolutist position regarding free expression on the platform.

“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” said Musk.

In the past, Musk has balked when Twitter deplatformed users and removed posts for violating the company's community standards. With this purchase, Musk moves into the driver's seat and will now have the power to determine how Twitter handles content moderation.

Twitter, a Town Square?

While some people are excited about the prospect of unfettered expression on the platform, several high-profile users announced that they were considering leaving Twitter because of Musk’s takeover. After news of the pending purchase broke, the hashtag #GoodByeTwitter was trending.

Fears of increased harassment, hate speech, and mis- and disinformation were cited as reasons for the potential exodus. Following Musk's announcement, NAACP President Derrick Johnson released a statement addressing these concerns.

“Mr. Musk…on your new acquisition…Disinformation, misinformation and hate speech have NO PLACE on Twitter. Do not allow 45 to return to the platform. Do not allow Twitter to become a petri dish for hate speech, or falsehoods that subvert our democracy. Protecting our democracy is of utmost importance, especially as the midterm elections approach.
Mr. Musk: lives are at risk, and so is American democracy,” said Johnson.

As someone who studies hate speech, I agree with Johnson’s sentiments. Musk is wading into a complicated arena where making the wrong decision regarding content moderation can cause very real harm to individuals, society and democracy.

While Musk may see Twitter as the digital town square, the reality is that as a private platform, Twitter is not bound by the First Amendment. People often misunderstand this point: the First Amendment only prohibits the government from restricting free speech and a free press. Social media platforms are free to regulate content in any way they choose, provided they don’t run afoul of existing laws regarding unprotected categories of expression such as obscenity, child pornography, threats, or harassment. Even then, Section 230 of the Communications Decency Act protects social media companies and other computer services from legal liability for what third-party users post on their sites.

As they engage in content moderation, Twitter, Facebook, TikTok, and other social networking services make tradeoffs between protecting users’ expression, limiting the spread of disinformation, and ensuring that other users aren’t silenced or harmed by attacks on immutable identity characteristics such as race, gender, gender identity, or sexual orientation.

Content Moderation and Social Responsibility

Generally, the process of content moderation involves three distinct elements: community standards, automatic removal, and community flagging. Musk may look to alter all three to allow more expression on the platform.

Unfortunately, we know what this is likely to look like. Platforms like Gab or 4chan are notorious for their lack of regulation and, consequently, the volume of racist, homophobic, and misogynistic speech on their sites.

While we know what an absence of effective content moderation can bring, we also know that effective content moderation can make a positive impact on our information environment. Twitter’s past decisions to deplatform problematic users and label mis- and disinformation have been effective. Removing the account of far-right political commentator Milo Yiannopoulos limited the reach of his racist and anti-Muslim ideas. Labeling mis- and disinformation posted by former President Trump helped at least some Americans separate fact from fiction regarding the accuracy and authenticity of the 2020 presidential election results.

Moving away from these kinds of practices is irresponsible and unethical. In no other industry would we stand by and allow a product to do this much damage to our democracy or its citizens. When companies illegally discharge toxic chemicals into our waterways, they’re fined under the Clean Water Act. However, we allow social media platforms to pollute our information ecosystem without fear of penalty.

Instead of regulating media, we generally adhere to a theory of social responsibility, asking media companies to self-regulate in a way that serves the public. Except for broadcast television and radio, most media, including social media, print, film, and video games, are left to self-regulate. Therefore, it is essential that these companies take their approach to corporate social responsibility seriously.

Elon Musk’s other companies, Tesla and SpaceX, each have corporate social responsibility strategies that address the needs of their stakeholders. I implore Musk to consider a similar approach to Twitter. Corporate social responsibility for social media platforms demands responsible content moderation. Failure to do so will undo Twitter’s previous efforts to keep hate speech, harassment, and mis- and disinformation from flourishing on the site. Backsliding on these issues will be harmful to users, particularly those from traditionally marginalized communities.

If Musk truly wants a digital town square where “matters vital to the future of humanity are debated,” then he must engage in content moderation practices that ensure all of us can participate and have a voice in that space. Otherwise, the public debates Musk seeks to protect will include only those able to withstand the abuse that will undoubtedly accompany any effort to participate.

Caitlin Ring Carlson is Associate Professor of Communication and Faculty Fellow of the Center for Business Ethics at Seattle University. Her latest book is Hate Speech (MIT Press).
