Constitutional Freedoms in the Age of Technology

The Constitution was written for the people, to ensure that government corruption and tyranny do not restrict individual freedoms. Several amendments were written to keep power in check, the most crucial being the First Amendment, which guarantees freedom of speech and freedom of the press. Recent advancements in technology, however, have led both the federal government and the average person to reevaluate whether the right to speak one's mind, as protected by the Constitution, needs an update.

Today, social media platforms such as Facebook and Twitter enable large populations across the country to stay in contact with one another. Ideally, people use these apps to inspire others and to grow as individuals by sharing ideas and engaging in respectful debates and conversations about political and apolitical matters alike. However, some individuals spread negative messages with the intent of hurting others directly. Beyond the content itself, these platforms use algorithms to recommend posts to users. Generally, these algorithms rely on data about how often a person interacts with a post: for instance, whether they rewatch a video multiple times, leave a comment, or save it. The controversial aspect of this process is that the algorithm bases recommendations not on the actual qualities and ideas in a post, but on how much a user has engaged with similar posts. As a result, some users receive videos of factual historical stories and other positive recommendations, while others see hate speech. This design has given rise to numerous court cases, many of them centered on Section 230 of the Communications Decency Act: what it protects, and whether it shields companies from responsibility for content that is spread and recommended to users.
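To make the description above concrete, the following minimal sketch shows how an engagement-driven recommender can rank posts by how heavily a user interacted with similar posts rather than by what the posts actually say. It is a hypothetical illustration only: the class names, signals, and weights are assumptions made for this example and do not represent any real platform's proprietary system.

```python
# Hypothetical, highly simplified engagement-based ranking sketch.
# All names, signals, and weights are illustrative assumptions, not
# any platform's actual system, which is proprietary and far more complex.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str  # e.g., "history", "politics"

@dataclass
class Interaction:
    post: Post
    rewatches: int   # times the user replayed the video
    commented: bool
    saved: bool

def engagement_score(i: Interaction) -> float:
    """Score one interaction purely from engagement signals.
    Nothing here inspects the ideas or accuracy of the post itself."""
    return 1.0 + 0.5 * i.rewatches + 2.0 * i.commented + 3.0 * i.saved

def recommend(history: list[Interaction], candidates: list[Post], k: int = 3) -> list[Post]:
    """Rank candidate posts by how heavily the user engaged with similar (same-topic) posts."""
    topic_weight: dict[str, float] = {}
    for i in history:
        topic_weight[i.post.topic] = topic_weight.get(i.post.topic, 0.0) + engagement_score(i)
    return sorted(candidates, key=lambda p: topic_weight.get(p.topic, 0.0), reverse=True)[:k]
```

Because the score depends only on interaction counts, the same mechanism that surfaces factual history videos for one user can just as readily surface hate speech for another, which is precisely the concern raised in the cases discussed below.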

One possible solution, given the circumstances, is the censorship of content. The First Amendment protects a user's freedom of speech, while censorship works to limit a person's ability to express themselves, leading to contradictory goals and difficult questions about what should be censored and who should be censored. Censorship is not only tricky but may also produce discriminatory practices by companies, for example barring users simply because they come from the same area or region as known terrorists. Given social media's complicated relationship with the First Amendment, plaintiffs in recent cases argue that these platforms should be responsible both for what their algorithms recommend and for what is posted on their sites.

In Force v. Facebook, families of victims of Hamas attacks argued that Facebook should be held accountable for the quality of content and ideology present on its platform. Plaintiffs argued that Facebook's recommendation algorithm aided Hamas and its attacks on Israel. They pointed to features such as recommending to other users content that someone had viewed, specifically content from Hamas supporters, and suggesting "friends" based on interests that profiles appear to share, arguing that these features could encourage people to join Hamas and help them find one another. The court initially ruled in favor of Facebook, holding that Section 230 protected the company from being held accountable for what its algorithm recommends. The decision sparked controversy, with critics arguing that a wide range of criminal and harmful activity could be facilitated under the guise of the algorithm if companies like Facebook continue to evade accountability. Yet had the court ruled in favor of the plaintiffs, the solution would not have been straightforward: there is no single definitive way to regulate user-generated content. If Facebook were to implement censorship, for instance, it might exclude certain groups of people to prevent harmful messages, but this could lead to discrimination against individuals based on factors they cannot control, such as the presence of terrorist organizations in their region. Such actions would, in turn, violate individuals' freedom of speech under the First Amendment.

Building on the idea of censorship, in Murthy v. Missouri, plaintiffs consisting of various individuals argue that government officials are violating the First Amendment by using various methods to pressure social media platforms to delete or demote posts on topics such as election integrity. Specifically, plaintiffs allege that these officials have used public statements about reforming Section 230 to threaten platforms into suppressing content. By pressuring platforms to silence speech on certain topics, they argue, the government infringes on users' First Amendment rights. Although authorities are attempting to take down certain posts, some argue that this censorship serves the greater good: without such filtering, misinformation about events like elections and their processes can spread unchecked. Misinformation can harm individuals by keeping them ignorant about certain ideas or topics, and it can undermine democracy by mixing false information with factual information. In cases like these, censorship is considered reasonable because misinformation on widely used media platforms can be mistaken for fact, leading people to make decisions based on false information that could otherwise have been avoided. For instance, individuals might avoid getting a COVID-19 vaccine after seeing a graph that lacks valid data or cannot be traced back to a credible source. In the plaintiffs' view, however, the First Amendment is still being violated. The case leaves a person asking: should people be allowed to say anything and everything merely because they have freedom of speech?

The First Amendment protects a person’s freedom of speech and of the press. Many believe this amendment to be the most important because it helps maintain democracy within our society and keeps the balance of power between the people and the government. The First Amendment relates to social media because people use these platforms to express themselves and their opinions to others. But some posts and content express controversial and harmful ideas. Although censorship is one possible solution, some people oppose it because biases may be used to discriminate against certain groups based on traits they cannot control. Others support censorship because, without a way to separate beneficial ideas from harmful misinformation, misinformation risks becoming a widely accepted belief, with consequences down the road for those who based their actions on it. Cases like Force v. Facebook and Murthy v. Missouri show that censorship comes with trade-offs. In Force v. Facebook, censorship appears in a negative light: there is no effective way to ensure certain content is censored without banning the use of the platform in an entire region, which reveals the discriminatory behavior that can arise from where and how censorship is applied. Murthy v. Missouri, on the other hand, highlights the positives of censorship by demonstrating how misinformation can influence and harm individuals.

Censorship on social media walks a fine line between benefiting a person and trampling on that same person’s freedom protected by the First Amendment. In a way, censorship is a double-edged sword: it can filter out harmful information, but possibly at the cost of people being barred from platforms based on stereotypes about the areas they live in.

Fatema Bushra