Steven Aftergood: CRS on Free Speech and the Regulation of Social Media Content (#GoogleGestapo)

Steven Aftergood

Free Speech and the Regulation of Social Media Content

Currently, federal law does not offer much recourse for social media users who seek to challenge a social media provider’s decision about whether and how to present a user’s content. Lawsuits predicated on these sites’ decisions to host or remove content have been largely unsuccessful, facing at least two significant barriers under existing federal law.

First, while individuals have sometimes alleged that these companies violated their free speech rights by discriminating against users’ content, courts have held that the First Amendment, which provides protection against state action, is not implicated by the actions of these private companies.

Second, courts have concluded that many non-constitutional claims are barred by Section 230 of the Communications Decency Act, 47 U.S.C. § 230, which provides immunity to providers of interactive computer services, including social media providers, both for certain decisions to host content created by others and for actions taken “voluntarily” and “in good faith” to restrict access to “objectionable” material.

Some have argued that Congress should step in to regulate social media sites. Government action regulating internet content would constitute state action that may implicate the First Amendment. In particular, social media providers may argue that government regulations impermissibly infringe on the providers’ own constitutional free speech rights. Legal commentators have argued that when social media platforms decide whether and how to post users’ content, these publication decisions are themselves protected under the First Amendment. There are few court decisions evaluating whether a social media site, by virtue of publishing, organizing, or even editing protected speech, is itself exercising free speech rights. Consequently, commentators have largely analyzed the question of whether the First Amendment protects a social media site’s publication decisions by analogy to other types of First Amendment cases. There are at least three possible frameworks for analyzing governmental restrictions on social media sites’ ability to moderate user content.

First, using the analogue of the company town, social media sites could be treated as state actors who are themselves bound to follow the First Amendment when they regulate protected speech. If social media sites were treated as state actors under the First Amendment, then the Constitution itself would constrain their conduct, even absent legislative regulation.

The second possible framework would view social media sites as analogous to special industries like common carriers or broadcast media. The Court has historically allowed greater regulation of these industries’ speech, given the need to protect public access for users of their services. Under the second framework, if special aspects of social media sites threaten the use of the medium for communicative or expressive purposes, courts might approve of content-neutral regulations intended to solve those problems.

The third analogy would treat social media sites like news editors, who generally receive the full protections of the First Amendment when making editorial decisions. If social media sites were considered to be equivalent to newspaper editors when they make decisions about whether and how to present users’ content, then those editorial decisions would receive the broadest protections under the First Amendment. Any government regulations that alter the editorial choices of social media sites by forcing them to host content that they would not otherwise transmit, or requiring them to take down content they would like to host, could be subject to strict scrutiny. A number of federal trial courts have held that search engines exercise editorial judgment protected by the First Amendment when they make decisions about whether and how to present specific websites or advertisements in search results, seemingly adopting this last framework.

Which of these three frameworks applies will depend largely on the particular action being regulated. Under existing law, social media platforms may be more likely to receive First Amendment protection when they exercise more editorial discretion in presenting user-generated content than when they neutrally transmit all such content. In addition, certain types of speech receive less protection under the First Amendment, so courts may be more likely to uphold regulations targeting disfavored categories of speech such as obscenity or speech inciting violence. Finally, if a law targets a social media site’s conduct rather than its speech, it may not trigger the protections of the First Amendment at all.
