Extending Corporate Tort Liability to Social Media Algorithms

In September 2021, whistleblower reports about social media platforms’ use of artificial intelligence (AI) algorithms that promote certain platform content over other content raised critical questions about the relationship between AI algorithms and corporate liability standards. Facebook consistently claims that AI is an “efficient” and “proactive” means to stop hate speech and other problematic content on its platform. However, internal documents reveal that AI removes less than ten percent of harmful content, such as hate speech or misinformation, from the platform. [1] Moreover, what the AI does remove is often removed with bias: content in flagrant violation of platform standards stays on the site, while posts that do not violate those standards, such as business advertisements or personal posts, are taken down. [2] Facebook’s tattered patchwork of AI algorithms spotlights a larger gap in AI law. Recently, two circuit court decisions concerning Facebook reached contradictory findings about the platform’s liability for the use of its algorithms. Legal regimes must mitigate the harms caused by these gaps in AI oversight by extending corporate tort liability to the use of AI algorithms.

Facebook’s algorithms are semi-autonomous: humans set the criteria the systems search for, but the systems themselves rely on machine-learning models Facebook calls “classifiers” to operate at scale. [3] As a result, the algorithms are guided by humans yet operate independently, learning from the content they police. The self-policing algorithm discussed above is the central arm Facebook uses to monitor platform content. However, Facebook also uses an engagement-based algorithm, which promotes content that attracts more views and likes. [4] These two systems are consistently in contention: the self-policing algorithm misses most problematic content while the engagement algorithm promotes it. In practice, Facebook prioritizes engagement over self-policing, and research shows that a majority of high-engagement accounts violate platform standards. [5] The discrepancy between the two algorithms has fueled a rise in hate speech on the platform and inflamed real-world tensions. This reality reveals the jarring lack of legal frameworks to prevent problematic behavior by AI. Large technology corporations often claim separation from their AI systems because of their semi-autonomous nature; when issues of corporate liability arise, they cite the algorithms’ virtual independence. [6] The law does little to correct this, as legal accountability for AI remains little more than conjecture. [7]
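To make the tension between the two systems concrete, the sketch below is a minimal, hypothetical illustration in Python; it is not Facebook’s code, and the posts, keyword-based classifier, and ranking rule are all invented for the example. It shows how a low-recall policy classifier paired with purely engagement-based ranking pushes violating content to the top of a feed.

```python
# Hypothetical illustration only: a toy policy classifier with low recall
# paired with engagement-based ranking. Not Facebook's actual code.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: int        # views, likes, shares
    violates_policy: bool  # ground truth, unknown to the ranking system

def classifier_flags(post: Post) -> bool:
    """Toy classifier: it catches only posts containing blocklisted phrases,
    so violating content that is phrased differently slips through."""
    blocklist = {"example slur", "explicit threat"}
    return any(phrase in post.text.lower() for phrase in blocklist)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Remove whatever the classifier flags, then rank the rest purely by engagement."""
    surviving = [p for p in posts if not classifier_flags(p)]
    return sorted(surviving, key=lambda p: p.engagement, reverse=True)

feed = rank_feed([
    Post("vacation photos from the lake", engagement=120, violates_policy=False),
    Post("inflammatory rumor targeting a minority group", engagement=950, violates_policy=True),
    Post("local news update", engagement=300, violates_policy=False),
])
print([p.text for p in feed])
# The violating post evades the blocklist and, because it draws the most
# engagement, ranks first.
```

The point of the sketch is structural: so long as the removal system misses most violations and the ranking system optimizes for engagement alone, the two will work at cross purposes, whatever the details of either.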

While there are several means of regulating AI algorithms, such as data policy, corporate tort liability constitutes the most effective method available within existing legal regimes. One potential avenue for accountability is the expansion of data protections as a means of regulating AI indirectly: by applying data privacy laws to AI, the user-specific data collection that drives social media algorithms can be treated as a harm in itself. However, this approach has multiple flaws, namely its narrow applicability and the absence of an overarching framework. Data privacy law offers only partial accountability, regulating certain aspects of AI data collection, a narrow aim that ultimately fails to challenge the central architecture of social media algorithms. [8] Additionally, existing data privacy statutes are themselves not broadly applicable; they reach only specific areas of data collection, such as biometric privacy or educational privacy. [9] A more comprehensive approach is necessary to protect individuals from the harms caused by AI algorithms, and the expansion of corporate tort liability potentially offers one.

As it currently exists, corporate tort liability holds corporations accountable for harmful actions even when those actions are not intentionally harmful. Its implications for AI regulation are broad, encompassing both intentional and unintentional harms, because the central question becomes whether harm occurred, not whether the corporation intended it. Patel v. Facebook (2019), a case heard in the Ninth Circuit, addresses this question in relation to AI algorithms. The plaintiffs, led by Nimesh Patel, filed a class action suit claiming that one of Facebook’s facial-recognition algorithms violated an Illinois statute that protects biometric privacy against the mass collection of data used to enhance facial recognition of individuals. [10] Facebook sought to dismiss the lawsuit, arguing that Illinois’s extraterritoriality doctrine shielded it from liability. The court found that the plaintiffs had legal standing, holding that the algorithm posed a “material harm” and that Facebook’s AI system quantifiably harmed individuals by collecting their data for facial recognition. [11] The case offers a potential framework through which corporations can be held accountable for their AI algorithms. Critically, the holding that an AI algorithm can itself cause harm allows the tort liability process to come into effect. While this legal approach has not yet been taken, Patel indicates that corporate liability is a possibility: if an AI algorithm causes harm, the corporation deploying it should answer for that harm in tort.

However, in the same timeframe as the Patel ruling in 2019, another circuit court decision directly contradicted many of its holdings, muddying the applicability of tort liability to social media platforms. Liability for platform content came into question in Force v. Facebook (2019), heard in the Second Circuit and decided within a month of Patel. The plaintiffs, a group of U.S. citizens who were victims of a Hamas terrorist attack, sued Facebook, arguing that the platform implicitly assisted Hamas because its AI algorithm’s oversights kept violence-promoting content online. [12] However, under 47 U.S.C. § 230(c)(1), the provision that protects social media platforms from liability for content posted on their platforms, the court held that Facebook could not be held liable for that content. [13] This case complicates the implications of Patel, as it found that Facebook was shielded from liability because the content itself was not created by the platform. Force, however, did not squarely consider the role of Facebook’s algorithms in perpetuating violence; it considered only the platform’s liability, or lack thereof, under 47 U.S.C. § 230(c)(1). In that sense, Force is a deferral of the question of AI liability rather than a conclusive ruling. It does, however, highlight the need to clarify the line between a harmful post and an algorithm that causes harm by promoting platform content.

Patel and Force raise critical questions for the effort to regulate exploitative corporate practices. If an algorithm popularizes content that incites violence or hate, does that promotion constitute a distinct act that can itself give rise to liability? Even if the post itself is the directly harmful act, does the use of an algorithm to promote it cause an additional harm? It is critical that the Court consider these questions to determine where the line of liability falls; to date, it has not ruled definitively on the subject. [14] Ultimately, the strongest course of action is the simplest: the Court should follow the model suggested by Patel and expand corporate tort liability to apply to social media algorithms.

First, social media algorithms must be approached legally as what they are: extensions of the corporation, expressly built to serve its aims. If algorithms such as Facebook’s are brought within tort liability, the corporate calculus around deploying AI fundamentally changes, forcing corporations to consider deliberately how their algorithms affect the world at large. Without a tangible form of human oversight, it is functionally impossible to prevent AI abuses; with legal consequences attached, collective interests are protected.

To this end, it is critical to define clearly the line between platform content, for which social media firms are not liable, and algorithmic promotion, for which they should be held accountable. [15] Drawing that line requires a measurable standard that sets a clear boundary between the two. For example, if a post on Facebook contains hate speech, Facebook is not directly liable for that content. However, if Facebook’s algorithm amplifies the reach of that content by promoting it to other users, that amplification constitutes a harm perpetuated by AI. Such harm is caused directly by Facebook’s promotion, not only by the original poster, and should be judged by standards comparable to other cases of corporate liability.
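One way such a measurable standard might be operationalized is sketched below, purely as a hypothetical: the data fields, the ratio, and the threshold are assumptions for illustration, not a metric any court or platform has adopted. The idea is to compare the reach a post earns from the poster’s own audience with the additional reach generated by algorithmic promotion; the excess is the amplification attributable to the platform.

```python
# Hypothetical metric separating organic reach from algorithmic amplification.
# The fields, ratio, and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ReachRecord:
    post_id: str
    follower_impressions: int     # impressions from the poster's own followers
    recommended_impressions: int  # impressions driven by algorithmic promotion

def amplification_ratio(record: ReachRecord) -> float:
    """Ratio of algorithm-driven impressions to organic impressions."""
    organic = max(record.follower_impressions, 1)  # avoid division by zero
    return record.recommended_impressions / organic

def platform_amplified(record: ReachRecord, threshold: float = 1.0) -> bool:
    """Flags posts whose reach came predominantly from the platform's
    promotion rather than from the poster's own audience."""
    return amplification_ratio(record) > threshold

example = ReachRecord("post-123", follower_impressions=400, recommended_impressions=5600)
print(platform_amplified(example))  # True: most of this post's reach came from promotion
```

A standard of this kind would give courts the measurable boundary the argument above calls for: liability would turn not on the content of the post but on how much of its reach the platform itself manufactured.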

Corporate tort liability offers the strongest legal framework through which to hold social media firms accountable for their use of AI algorithms. Expanding existing liability standards, as outlined in Patel v. Facebook, provides a broad avenue for accountability that can prevent AI algorithms from causing significant harm to individuals. As it stands, existing precedent is unclear on the legal status of AI algorithms. Further decisions are needed to reconcile the discrepancies between Patel and Force and to enshrine tort accountability for large tech firms.

Edited by Iris Chen

Sources:

[1] “How Facebook Uses Super-Efficient AI Models to Detect Hate Speech,” Meta AI  (November 19, 2020), online at https://ai.facebook.com/blog/how-facebook-uses-super-efficient-ai-models-to-detect-hate-speech/

[2] Seetharaman, Deepa, Jeff Horwitz, and Justin Scheck, “Facebook Says AI Will Clean Up the Platform. Its Own Engineers Have Doubts,” The Wall Street Journal (October 17, 2021), online at https://www.wsj.com/articles/facebook-ai-enforce-rules-engineers-doubtful-artificial-intelligece-11634338184?mod=article_inline; Frier, Sarah, “Facebook’s AI Is Mistakenly Banning Small Business Ads Without Explanation,” Fortune (November 28, 2020), online at https://fortune.com/2020/11/28/facebooks-ai-is-mistakenly-banning-some-small-business-ads/.

[3] Perrigo, Billy, “Facebook Says It’s Removing More Hate Speech Than Ever Before. But There’s a Catch,” Time (November 27, 2019), online at https://time.com/5739688/facebook-hate-speech-languages/

[4] Hao, Karen, “The Facebook whistleblower says its algorithms are dangerous. Here’s why.,” MIT Technology Review (October 5, 2021), online at https://www.technologyreview.com/2021/10/05/1036519/facebook-whistleblower-frances-haugen-algorithms/

[5] Hindman, Matthew, Nathaniel Lubin, and Trevor Davis, “Facebook Has a Superuser-Supremacy Problem,” The Atlantic (February 10, 2022), online at https://www.theatlantic.com/technology/archive/2022/02/facebook-hate-speech-misinformation-superusers/621617/.

[6] Id.

[7] Rodrigues, Rowena, “Legal and Human Rights Issues of AI: Gaps, Challenges and Vulnerabilities,” Journal of Responsible Technology (2020), online at https://doi.org/10.1016/j.jrt.2020.100005

[8] Klosowski, Thorin, “The State of Consumer Data Privacy Laws in the US (And Why It Matters),” The New York Times (September 6, 2021), online at https://www.nytimes.com/wirecutter/blog/state-of-privacy-laws-in-us/

[9] Id.

[10] Patel v. Facebook, No. 18-15982 (9th Cir. 2019).

[11] Id.

[12] Force v. Facebook, No. 18-397 (2d Cir. 2019).

[13] Id.; Communications Decency Act, 47 U.S.C. § 230.

[14] Business and Corporate Litigation Committee, Business Law Section, American Bar Association, “Recent Developments in Artificial Intelligence Cases,” Business Law Today (June 16, 2021), online at https://businesslawtoday.org/2021/06/recent-developments-in-artificial-intelligence-cases/.

[15] Id.