
Is Free Speech Really Free?

On January 6, after the initial shock and terror of the riot at the U.S. Capitol in Washington, before the dust had settled or the tear gas dissipated, then-President Donald Trump’s Twitter account was locked. Then he was locked out of Facebook. Then almost every other social media site blocked the president from tweeting, posting, sending pictures, or communicating in any other way with the global online community. More than 70,000 accounts linked to QAnon, the online conspiracy movement whose followers orchestrated many of the events on January 6, were suspended from Twitter.

Even as Americans were still expressing their rage at the riot, debate began over the Twitter and Facebook suspensions. Many lauded these moves, believing that sub-groups on social media sites were largely responsible for the violent, massive crowd that gathered at the Capitol. Others, including First Amendment scholars and the American Civil Liberties Union (ACLU), criticized Twitter and Facebook for the seemingly unlimited power they wield over those who use their sites.

But this was not a battle that began on January 6. The issue stretches back to 1996 and was in the news as recently as Joe Biden’s inauguration. And the battle begins and ends with Section 230.

In 1996, then-Representatives Ron Wyden and Chris Cox added a section to the Communications Decency Act. That section, Section 230, which advocates have since called “the most important law protecting internet speech,” says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, social media platforms such as Facebook and Twitter cannot be considered publishers of the information on their sites. That protects them from legal liability for illegal content published on their websites.

Section 230 also allows social media companies to restrict speech on their platforms if they choose to, a power the First Amendment, which binds the government rather than private companies, does not limit. It is this aspect of the law that has embroiled Democrats and Republicans in a fight with tech companies and their powerful owners.

Surprisingly, Section 230 is one issue on which Republicans and Democrats are fairly united in their condemnation, even if for different reasons. During his presidency, Donald Trump repeatedly pushed to repeal Section 230, even saying that he would “‘unequivocally VETO’ the National Defense Authorization Act (NDAA) if it [didn’t] include a repeal of Section 230.” This anger mainly stemmed from his consistent tirades against social media executives such as Jack Dorsey, the CEO of Twitter, and Mark Zuckerberg, the CEO of Facebook. The fury only intensified when, during the recent election season, Twitter began adding disclaimers to Trump’s tweets alleging mail-in voter fraud, offering information about mail-in ballots and the legitimacy of voting by mail.

Democrats and Republicans in Congress push back against Section 230 for different reasons. Whereas conservatives see reforming Section 230 as “a way to combat perceived anti-conservative bias in Big Tech companies,” liberal-leaning policymakers want to find “a way to make companies liable for harmful content.” Finding a middle ground is proving to be difficult.

The argument over Section 230 isn’t just a constitutional question about freedom of speech: for many people, it’s deeply political, social, moral, and personal. Some believe that the true intention of the law, to create “a forum for a true diversity of political discourse,” has been betrayed by the apparent censorship of those who do not share the beliefs of the social media companies’ CEOs. Others believe that the sites are not doing enough to protect their users from harmful content, or, in the words of Danielle Keats Citron, a law professor at Boston University, “It gives immunity to people who do not earn it and are not worthy of it.”

Tech companies push back against these claims, saying that repealing Section 230 would mean the end of free speech on the internet. To compensate for their legal vulnerability, sites like Twitter and Facebook would have to moderate their posts much more closely, which would mean that many more people would not be able to share their beliefs and opinions. 

Rabbi Jonathan Sacks, the former Chief Rabbi of the United Kingdom, said, “Technology gives us power, but it does not and cannot tell us how to use that power. Thanks to technology, we can instantly communicate across the world, but it still doesn’t help us know what to say.” For context, roughly 500 million tweets are sent out into the void of the internet every day. Do all of those tweeters “know what they’re saying”? Maybe some do, and maybe others don’t.

Social media has proven to be a force for good in the world, but it has also proven time and time again to be a force for evil: an easy, often anonymous way to spread virulent hate and falsehoods in a matter of seconds. Anti-Semitism, racism, homophobia, and all other forms of hatred are rampant on social media, proudly displayed on both mainstream and underground sites. Even if the companies are able to monitor what they deem to be “hate speech,” more and more content is produced every day. The Rabbinic notion of chezek reya, literally translated as “damage done by seeing,” holds that merely looking at something that belongs to someone else, such as viewing their social media without their consent or knowledge, can cause damage by invading their privacy. How do we moderate the real-life damage so often caused by posts on social media? And how do we predict the damage done by restricting online freedom of speech?

During this most recent election cycle, both Fox News, a conservative-leaning news outlet, and Mike Lindell, a conservative businessman, were sued for defamation by Dominion Voting Systems, a corporation that sells electronic voting machines and software. The lawsuits came after weeks in which Fox and Lindell claimed that Dominion’s voting systems were rigged for Joe Biden and had skewed the results of the election. They only stopped spreading this disinformation when they were held legally responsible for the damage they had done, and they were even forced to acknowledge that they had been enthusiastically promoting false information. If Facebook, Twitter, Snapchat, and so many other sites were held legally responsible for the information on their sites, perhaps we would see less disinformation, less hate, and fewer real-world consequences of allegedly “harmless” posts. The future of online free speech rests on Section 230, and we may see what that future holds within the next year.

Ayla Kattler is a sophomore at Milken Community High School in California. She is a Staff Writer for Fresh Ink for Teens.
