Published in Landslide, Vol. 15, No. 4, June/July 2023, by the American Bar Association.

Social Media Platforms Under the Supreme Court Microscope

By Peter Kelman, Esq.

Introduction

It must be an unsettling time for social media companies.  Never mind what is going on with Twitter.  Three cases regarding social media platforms are before the United States Supreme Court this term, and one of those cases is an amalgam of three individual cases.  This means that our Justices will be examining five separate cases involving social media platforms and the application of various state and federal (and in some cases international) laws to the operations of those platforms.

These cases contain all the elements of high-stakes legal drama.  There is a split in the circuits.  There are questions about constitutional interpretation.  There are questions about federal preemption.  There are even questions about extraterritorial application of domestic law.  A virtual smorgasbord of legal issues suitable for a law school final exam.  When asked to opine on an issue, the legal department of a social media company must feel as if it is caught in a carnival maze of distorting mirrors.  In one mirror objects appear tall and skinny; in the next mirror, those same objects appear short and squat.  So too for a social media company.  If a social media company were to remove a hostile user’s posts, in some jurisdictions the company would appear as a socially aware entity, applauded for its vigilant eye.  However, in other jurisdictions the company would appear as a self-serving propagandist, censoring a message incompatible with its social leanings and subjecting itself to a $100,000 fine.  When presented with the same image, our mirrors should reflect a uniform appearance.

Social media companies must be asking of our government and courts the same thing all entrepreneurs ask of any regulatory environment: give us consistency and predictability so we can build and grow our business according to the rules you have created.

Enter the Supreme Court.  

 

Cast of Characters

While the facts of the three cases differ, two principal actors appear in each:

  1. The First Amendment[1] – Almost every legal argument about the rights and responsibilities of social media platforms comes down to an analysis of whether the First Amendment applies to social media platforms, and if so, how it applies; and
  2. Section 230 of the Communications Decency Act[2] – Section 230 of the Communications Decency Act (the “CDA”) is the remaining vestige of a more sweeping law passed by Congress in 1996 to curb the availability of child pornography on the Internet.  After most of the law was declared unconstitutional by the Supreme Court in 1997 in Reno v. ACLU,[3] Section 230 remained.  Section 230 grants immunity from lawsuits to social media companies (“interactive computer services,” in the words of the statute) for posting the content of third parties.

The Cases

Case 1: Gonzalez v. Google.[4]

Gonzalez v. Google is an appeal from a Ninth Circuit ruling with respect to three separate cases, all involving claims brought against social media platforms.  The plaintiffs are the estates of three individuals, Gonzalez, Clayborn, and Taamneh, each of whom was killed by terrorists in different locations: Paris, Istanbul, and San Bernardino.  Gonzalez sued YouTube, a wholly owned subsidiary of Google.  Clayborn and Taamneh sued Google, Twitter, and Facebook.  While the specifics of each case varied, the general theory of liability was the same.  The plaintiffs alleged that the defendants were liable for allowing terrorist organizations to broadcast content, such as videos, on the defendants’ websites.  The plaintiffs brought suit against the defendants alleging, inter alia, direct and secondary liability for the deaths of the plaintiffs, in violation of the Anti-Terrorism Act (“ATA”)[5] and the Justice Against Sponsors of Terrorism Act (“JASTA”)[6].  The defendants moved to dismiss the complaints, maintaining that the CDA immunized them from the liability alleged by the plaintiffs.  The United States District Court for the Northern District of California agreed with the defendants and dismissed all three complaints.[7]  All plaintiffs appealed the District Court’s decisions to the Ninth Circuit.  For the most part, the Ninth Circuit affirmed the lower court’s rulings.

Gonzalez successfully petitioned the Supreme Court for review of the Ninth Circuit decision.  Gonzalez challenged the Ninth Circuit’s decision insofar as it based dismissal of the complaint on the grounds that the CDA shielded Google (as owner of YouTube) against the claims brought by Gonzalez.  Gonzalez focused its petition for certiorari on one particular aspect of YouTube’s behavior: whether YouTube’s aggregation of certain videos based on its perception of a viewer’s likes and dislikes was an activity that fell outside the scope of CDA protection.

The Gonzalez petition for certiorari presented the Supreme Court with this question:

Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?[8]

Gonzalez claimed that YouTube’s targeted recommendations fell outside the scope of a traditional publishing activity.   Therefore, the CDA did not shield YouTube from the consequences of that behavior.  Therein lies the crux of the question Gonzalez has put before the Court: for the purpose of construing the CDA, at what point does a social media platform create its own content as opposed to merely publishing the content of a third party?

 

The CDA.

Part 1 of the CDA[9] reads in full:

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Part 2 of the CDA[10] reads in full:

No provider or user of an interactive computer service shall be held liable on account of– 

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

(B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The courts idealistically, and in this writer’s opinion unrealistically, link together parts one and two of the CDA to paint a rosy picture of what a social media company can do while remaining protected by the CDA.  Courts have stated that while part one eliminates a host’s liability for the posts of others, part two encourages a host to nevertheless police those posts without liability.[11]  Part two has been construed by courts to imply that as long as a host acts in a manner consistent with the activities of a traditional publisher, it will not lose the immunity granted by part one.  While that may be true in theory, in practice the immunity of part one discourages a host from undertaking any activity contemplated by part two.  Practically speaking, why should a social media company police its posts if there are no repercussions to turning a blind eye to them?  Why risk being a good Samaritan if you don’t have to be a Samaritan at all?  If a social media company engages in part two CDA activity too aggressively, it risks crossing the fuzzy line of traditional publishing, thereby forfeiting the protective cocoon of part one of the CDA.

CDA Analysis Paradigm.

In the typical case construing the CDA, a plaintiff brings an action against a social media company (an “interactive computer service” in the words of the CDA) alleging that the social media company engaged in conduct that created content on its platform.  The plaintiff claims that the offending content is not solely authored by a third party, and therefore the CDA does not apply.  The seminal case for such an analysis is Fair Housing Council of San Fernando Valley v. Roommates.com[12] (“Roommates”).  In Roommates, the Ninth Circuit held that Roommates.com could not claim immunity under the CDA because it created a questionnaire that users responded to when posting messages on its site.  The court observed that Roommates.com played an essential role in the creation of the messages posted by its users, and ruled that the offending messages were not purely those of another information content provider.

Roommates provided the template to determine whether the CDA shields a defendant from liability.  If the web host creates, even in part, information displayed on its website, then the immunity of the CDA is nullified.  Information can be created in many ways, some obvious, some not so obvious.  For example, if the web host authors a message by composing the words of the message, obviously the host has created information.  However, take the case of a web host that organizes and aggregates messages written by third parties and presents those messages as a group to a viewer.  Has the aggregation and presentation of those messages created information?  Can the whole be greater than the sum of its parts?  That is what Gonzalez is arguing to the Supreme Court.

 

A CDA Analysis of Gonzalez.

The Ninth Circuit framed Gonzalez’s theory of liability against YouTube (“Google” for the purposes of the decision) as follows:

The Gonzalez Plaintiffs’ theory of liability generally arises from Google’s recommendations of content to users. These recommendations are based upon the content and “what is known about the viewer.” Specifically, the complaint alleges Google uses computer algorithms to match and suggest content to users based upon their viewing history.[13] 

The Ninth Circuit analyzed Google’s use of algorithms to select and group discrete YouTube videos for presentation to a viewer.  The Court adopted an approach it first articulated in Carafano v. Metrosplash.com, Inc.[14]  In Carafano, the Ninth Circuit held that where a host provides a neutral tool to its users, and a user hijacks that tool for an illegal purpose to create defamatory content, the host is not liable as a contributor to the offending content.  In that context, the social media host is fulfilling a function analogous to that of a newspaper providing a template for a seller to list goods in its classified ads.

This approach is useful to a court for several reasons.  First, it justifies reliance on part one of the CDA to insulate a media platform from liability.  Under this view, the media platform has contributed no substance to the offensive message; it has merely provided a vessel for the message.  But even if that conclusion were to be questioned, part two of the CDA offers comfort to the court, insofar as a court can characterize the activity of the media host as a traditional publishing activity.

However, the Gonzalez petition[15] challenges the outcome of this analysis, and apparently the Supreme Court was receptive to reviewing these challenges.  Gonzalez questions the assumption that aggregating messages with similar content does not add information to the individual pieces of content.  There is persuasive power in numbers, especially over persons susceptible to influence.  For example, assume you were the head of the Venice Bureau of Tourism.  Would you put on your web page one gorgeous picture of Venice, or hundreds of beautiful pictures?  Or, assume you wanted to convince people that a certain social movement was legitimate and popular.  Would you display one picture of a throng, or hundreds of pictures of different groups?  It is well documented[16] that one effect of the Internet, with its ubiquity of information, is to make extremists feel more mainstream.  The aggregation of individual content items transmits information suggesting popularity and acceptability that a single item does not transmit.

 

The Economics of Content Aggregation and Content Recommendations.

We should not lose sight of the motives of a media platform company regarding content aggregation and recommendations.  Simply put, the business of social media is all about eyeballs and money.  Media platform companies make money[17] when viewers watch the content they display.  Interspersed throughout the content are ads.  Ads drive the revenue[18] for media platforms.  The more content that is viewed, the greater the opportunity to run ads, and the more money the platform makes from advertising.  It would seem appropriate to hold a media platform accountable for the consequences of its aggregation of content, where that behavior is a source of revenue for the social media company.
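A back-of-the-envelope sketch makes the incentive concrete.  The numbers below are entirely hypothetical (platforms do not publish per-video ad rates), but the proportionality is the point: if recommendations keep a viewer watching more videos, ad revenue scales with them.

# Hypothetical illustration only; actual ad loads and rates vary widely by platform.
def estimated_ad_revenue(views, ads_per_view=2, revenue_per_impression=0.01):
    # Rough model: each view yields a few ad impressions, each worth a fraction of a cent.
    return views * ads_per_view * revenue_per_impression

# One million visits that end after a single video, versus visits in which
# recommendations lead each viewer to watch roughly four videos.
one_video_per_visit = estimated_ad_revenue(1_000_000)
with_recommendations = estimated_ad_revenue(1_000_000 * 4)

print(f"${one_video_per_visit:,.0f} vs. ${with_recommendations:,.0f}")  # $20,000 vs. $80,000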

Next, under a CDA part two analysis, one must ask whether such content aggregation is a traditional function of publishing, or a new publishing activity enabled by computerized technology.  A strong argument can be made that it is a novel activity, one enabled by the processing power of computers.  True, antecedents have been present for many years in newspapers that segregated articles into discrete “sections.”  The purpose of aggregating news by section was to concentrate similar stories onto contiguous pages, filled with appropriately targeted ads.  While the roots of this categorization of information can be found in newspapers, it has blossomed into a veritable forest on the Internet.

 

The Role of Algorithms in Gonzalez.

Which brings to the Supreme Court’s attention the next layer of obfuscation challenged by the Gonzalez petition.  Gonzalez alleges that the recommendations made by Google for viewing similar videos are recommendations created by algorithms,[19] and that algorithms have taken the aggregation of content to unprecedented levels not anticipated by traditional publishing activities.  Google counters that argument with the assertion that an algorithm is a neutral piece of technology (relying on Carafano).[20]  Google claims there is nothing inherently bad about an algorithm and that it should not be penalized if a bad actor uses a neutral tool to accomplish an unlawful end.

The Ninth Circuit agreed with Google on this point about its use of algorithms.  The Court held:

Though we accept as true the [Plaintiff’s] allegation that Google’s algorithms recommend ISIS content to users, the algorithms do not treat ISIS-created content differently than any other third-party created content, and thus are entitled to § 230 immunity.[21]
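To see what “neutral” means in this context, consider a minimal sketch of a content-agnostic recommender.  The code below is purely illustrative and assumes nothing about YouTube’s actual system: it ranks videos by the overlap between a video’s tags and the viewer’s watch history, applying the same scoring to every video regardless of who uploaded it or what it depicts.  That indifference to the source and substance of the content is the sense in which the Ninth Circuit treated such algorithms as neutral tools; Gonzalez’s argument is that the output of the tool, a targeted aggregation of recommendations, is itself new information.

# Hypothetical illustration only; not drawn from any court filing or actual platform code.
from collections import Counter

def recommend(watch_history, catalog, top_n=3):
    # Build a profile of the viewer's interests from previously watched videos.
    profile = Counter(tag for video in watch_history for tag in video["tags"])

    def score(video):
        # Identical scoring for every video, regardless of its author or subject.
        return sum(profile[tag] for tag in video["tags"])

    return sorted(catalog, key=score, reverse=True)[:top_n]

history = [{"id": "v1", "tags": ["travel", "venice"]},
           {"id": "v2", "tags": ["travel", "food"]}]
catalog = [{"id": "v3", "tags": ["travel", "venice", "gondola"]},
           {"id": "v4", "tags": ["sports"]},
           {"id": "v5", "tags": ["food", "venice"]}]
print([video["id"] for video in recommend(history, catalog)])  # ['v3', 'v5', 'v4']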

It will be interesting to observe how the Supreme Court addresses the question whether a company can be held accountable for the consequence of its algorithms.

 

 

Case 2: Netchoice, LLC v. Ken Paxton, Attorney General of Texas (Fifth Circuit).

Case 3:  Netchoice, LLC v. Attorney General, State of Florida (Eleventh Circuit).

Netchoice v. Paxton, Attorney General of Texas[22] and Netchoice v. Attorney General, State of Florida[23] present the Court with an opportunity to resolve a split between two circuit court decisions.  These cases concern virtually identical state statutes in Texas and Florida, each of which attempts to prevent a social media platform from removing posts from its platform.

The State Statutes.

The statutes in Texas and Florida for all intents and purposes are clones of each other.  Florida’s statute[24] is titled, “An Act Relating to Social Media Companies.” The Texas statute[25] is titled, “AN ACT relating to censorship of or certain other interference with digital expression, including expression on social media platforms or through electronic mail messages.”  When judged on the basis of succinctness, score one for Florida.

Both statutes endeavor to limit the ability of a social media company to edit and / or remove user content from its site.  The Florida statute states that a social media company may not “censor, deplatform, or shadow ban a journalistic enterprise based on the content of its publication or broadcast.”[26]

The Eleventh Circuit elaborated on this prohibition as follows:

The term “censor” is also defined broadly to include not only actions taken to “delete,” “edit,” or “inhibit the publication of” content, but also any effort to “post an addendum to any content or material.” Id. § 501.2041(1)(b). The only exception to this provision’s prohibition is for “obscene” content. Id. § 501.2041(2)(j).[27]

The Texas statute prohibits the following conduct by a social media company:

A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on:

(1) the viewpoint of the user or another person;

(2) the viewpoint represented in the user’s expression or another person’s expression; or

(3) a user’s geographic location in this state or any part of this state.[28]

The Fifth Circuit points out that under Texas state law:

“Censor” means “to block, ban, remove, de-platform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression.” Id. § 143A.001(1). For Section 7 to apply, a censored user must reside in Texas, do business in Texas, or share or receive expression in Texas. Id. § 143A.004(a)–(b).[29]

The Fifth Circuit also notes some of the exceptions to this ban on censorship, which exceptions are broader than those in the Florida statute:

Section 7 does not limit censorship of expression that a Platform “is specifically authorized to censor by federal law”; expression that “is the subject of a referral or request from an organization with the purpose of preventing the sexual exploitation of children and protecting survivors of sexual abuse from ongoing harassment”; expression that “directly incites criminal activity or consists of specific threats of violence targeted against a person or group because of their race, color, disability, religion, national origin or ancestry, age, sex, or status as a peace officer or judge”; or “unlawful expression.” Id. § 143A.006.[30]

Both states ground their right to regulate the content of a social media company on the premise that a social media company is a “common carrier.”  Per each statute, because a social media company is a common carrier, it assumes the responsibilities of a quasi-governmental agency and hence becomes subject to First Amendment restrictions.[31]

 

Trapped in the Funhouse.

Pity the poor social media company trying to do right in today’s regulatory environment.  What is right?  The attorneys for Gonzalez argue that right means removing terrorist videos from a social media platform.  But not according to Florida law.  Per Florida’s statute, social media posts can be removed by a host only if those posts are obscene.[32]  The terrorist videos are gruesome, but not obscene as defined by Florida law.  Section 847.001 of the Florida Statutes defines “obscene” as follows:

“Obscene” means the status of material which:

(a) The average person, applying contemporary community standards, would find, taken as a whole, appeals to the prurient interest;

(b) Depicts or describes, in a patently offensive way, sexual conduct as specifically defined herein; and

(c) Taken as a whole, lacks serious literary, artistic, political, or scientific value.[33]

Under Florida law, the terrorist videos must remain.  Under Texas law, it is not certain whether the videos can be removed.  If the terrorist videos target specific groups of people, perhaps that justifies their removal.  But what if the terrorist is simply mad at the world and seeks the annihilation of all humanity?  The violent content of a non-discriminating terrorist advocating worldwide mayhem would seem to pass statutory muster under the Texas statute.

 

The Eleventh Circuit Decision in Netchoice v. Attorney General, State of Florida.

The Eleventh Circuit ruled that the Florida statute regulating social media companies was unconstitutional.  Florida’s classification of a social media company as a common carrier is the linchpin to invoking First Amendment scrutiny of its actions.  The Court questioned the authority of Florida to declare a social media company a common carrier.  It reviewed prior decisions analyzing whether an entity was a common carrier and disagreed with Florida’s classification.  It stated:

Three important points about social-media platforms: First—and this would be too obvious to mention if it weren’t so often lost or obscured in political rhetoric—platforms are private enterprises, not governmental (or even quasi-governmental) entities.[34] 

The Court further buttressed its conclusion by noting that the Telecommunications Act of 1996 stated that interactive computer services were not to be treated as common carriers.  The Court wrote: “The Telecommunications Act of 1996 explicitly differentiates ‘interactive computer services’—like social-media platforms—from ‘common carriers or telecommunications services.’ See, e.g., 47 U.S.C. § 223(e)(6) (‘Nothing in this section shall be construed to treat interactive computer services as common carriers or telecommunications carriers.’).”[35]

Once the Eleventh Circuit concluded that a social media company is not a common carrier, its next conclusion followed inevitably: there is no constitutional restriction imposed by the First Amendment upon a social media platform’s right to censor the posts of its users.[36]  In fact, and this becomes a major point of disagreement between the Eleventh and Fifth Circuits, the Eleventh Circuit held that by eliminating a social media company’s right to censor posts, the Florida statute was unconstitutionally abridging the freedom of expression of the social media company.  In other words, the Court reasoned that censorship of expression is itself an act of expression by the censor, which the government cannot abridge.[37]

For these and other reasons, the Eleventh Circuit struck down most provisions of the Florida statute without any reference to the CDA.

 

 

The Fifth Circuit Decision in Netchoice v. Ken Paxton, Attorney General of Texas.

If you followed the analysis of the Eleventh Circuit decision regarding the Florida law, then you followed the analysis of the Fifth Circuit regarding the Texas statute.  The only differences are the conclusions.  For the most part, the method of analysis was the same.  Where the Eleventh Circuit saw black, the Fifth Circuit saw white; where the Eleventh Circuit saw red lights, the Fifth Circuit saw green lights.

Is a social media platform a common carrier?  No doubt about it, according to the Fifth Circuit.  Judge Andrew Oldham, writing for the Court, summarized the Court’s common carrier analysis by stating:

Texas permissibly determined that the Platforms are common carriers subject to nondiscrimination regulation. That’s because the Platforms are communications firms, hold themselves out to serve the public without individualized bargaining, and are affected with a public interest.[38]

Having reached that conclusion, the Court next concluded that censorship by social media platforms is barred by the First Amendment.  Judge Oldham charted the course of his 113-page decision on page two when he wrote, “Today we reject the idea that corporations have a freewheeling First Amendment right to censor what people say.”[39]  The Court ruled against social media platforms with respect to two issues implicating the First Amendment.  First, the Court held that by censoring the posts of their users, social media companies were engaging in the restriction of speech prohibited by the First Amendment.[40]  Second, the Court held that a social media company’s censorship activity is not an expressive act protected by the First Amendment.  Netchoice argued that when a social media company edits or deletes a user’s post, it is engaging in its own constitutionally protected form of expression.[41]  The Court cited the CDA to rebut this theory of expression.  The Court observed that the premise of the CDA is that a social media platform is not an author of the messages it hosts.  To which the Fifth Circuit added: since a social media platform is not an author, it cannot be found to be engaging in a form of expression.[42]

As a result of the foregoing conclusions, the Fifth Circuit upheld the Texas statute and reversed the district court’s injunction which had stayed implementation of the statute.

 

Conclusion.

The application of long-standing legal principles to novel technologies has always been a challenge.  Nowhere is this more apparent than in these three cases before the Supreme Court.  Tensions abound.  Are social media companies common carriers, and if so, what does that mean?  Do social media companies have an obligation to police themselves and their algorithms so as not to propagate messages of evil?  Is censorship a constitutionally protected form of expression?  These are just some of the questions lurking in these cases.  Behind them all, a more overarching question remains unanswered: whether a social media platform delivers a message apart from the messages of its users.  If a platform does not create information, then the CDA makes sense, and social media companies should not be held liable for the content they display; a platform that delivers no information of its own is not engaging in a form of expression, and there are no First Amendment issues.  But if a social media company does deliver information to its users, independent of the messages posted by third parties, then the First Amendment should be the controlling source of law, and the CDA should be construed, and perhaps amended,[43] accordingly.

To be decided by our Supreme Court.

 

__________________________________________________________________________

Footnotes:

[1]  U.S. Const. amend. I, reads in part: “Congress shall make no law … abridging the freedom of speech …”

 

[2] 47 U.S.C. § 230

 

[3] Reno v. ACLU, 521 U.S. 844 (1997)

 

[4] Gonzalez v. Google, LLC, 2 F.4th 871 (9th Cir. 2021)

 

[5] 18 U.S.C. § 2333

 

[6] Pub. L. No. 114-222, 130 Stat. 852 (2016)

 

[7] Gonzalez v. Google LLC, No. 4:16-cv-03282-DMR (N.D. Cal. 2018); Taamneh v. Twitter, Inc., Google LLC, and Facebook, Inc., No. 3:17-cv-04107-EMC (N.D. Cal. 2018); Clayborn v. Twitter, Inc., Google LLC, and Facebook, Inc., Nos. 3:17-cv-06894-LB and 3:18-cv-00543-LB (N.D. Cal. 2018)

 

[8]  Gonzalez v. Google LLC, 21-1333 (2021), cert. granted Oct. 3, 2022

 

[9] 47 U.S.C. § 230 (c) (1)

 

[10] 47 U.S.C. § 230 (c) (2)

 

[11] E.g. Force v. Facebook, Inc., 934 F.3d 53, 76–89 (2d Cir. 2019).

 

[12] Fair Housing Council v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008)

 

[13] Gonzalez v. Google LLC, 21-1333 (2021), cert. granted Oct. 3, 2022, Petition for Certiorari

 

[14] Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122 (9th Cir. 2003)

 

[15] Gonzalez v. Google LLC, 21-1333 (2021), cert. granted Oct. 3, 2022, Reply Brief for Petitioners

 

[16] Steven Lee Myers and Stuart A. Thompson, Racist and Violent Ideas Jump From Web’s Fringes to Mainstream Sites, N.Y. Times, June 1, 2022

 

[17] YouTube, How Does YouTube Make Money (last visited Nov. 17, 2022), https://www.youtube.com/howyoutubeworks/our-commitments/sharing-revenue/#:~:text=YouTube%27s%20main%20source%20of%20revenue,%2C%20channel%20memberships%2C%20and%20merchandise

 

[18] Andrew Beattie, How YouTube Makes Money off Videos, Investopedia (October 31, 2021), https://www.investopedia.com/articles/personal-finance/053015/how-youtube-makes-money-videos.asp

 

[19] Gonzalez v. Google LLC, 21-1333 (2021), cert. granted Oct. 3, 2022, Reply Brief for Petitioners

 

[20] Gonzalez v. Google LLC, 21-1333 (2021), cert. granted Oct. 3, 2022, Brief In Opposition to Petition for Writ of Certiorari

 

[21] Gonzalez v. Google, LLC, 2 F.4th 871, 894 (9th Cir. 2021)

 

[22] Netchoice, L.L.C. v. Ken Paxton, Attorney General of Texas, 27 F.4th 1119 (5th Cir. 2022)

 

[23] Netchoice, LLC v. Attorney General of Florida, 34 F.4th 1196 (11th Cir. 2022)

 

[24] Fla. Stat., SB-7072 § 501.2041 (2022)

 

[25] Tex. Stat., House Bill 20 Ch. 120 (2021)

 

[26] Fla. Stat., SB-7072 § 501.2041 ¶ (2)(d) (2022)

 

[27] Netchoice, LLC v. Attorney General of Florida, 34 F.4th 1196, 1200 (11th Cir. 2022)

 

[28] Tex. Stat., House Bill 20 Ch. 20 § 143A.002 (2021)

 

[29] Netchoice, L.L.C. v. Ken Paxton, Attorney General of Texas, 27 F.4th 1119, 1123 (5th Cir. 2022)

 

[30] Id.

 

[31] Fla. Stat., SB-7072 § 501.2041 ¶ (1) (2022); Tex. Stat., House Bill 20 § 1(4) (2021)

 

[32] Fla. Stat., SB-7072 § 501.2041 ¶ (2)(d) (2022)

 

[33] Fla. Stat., § 847.001 (2022)

 

[34] Netchoice, LLC v. Attorney General of Florida, 34 F.4th 1196, 1201 (11th Cir. 2022)

 

[35] Id. at 1239

 

[36] Peter Kelman, Esq., Guess who Puts their Pants on One Leg at a Time and is an Advocate of Free Speech until it Costs Money: You, Me and Elon Musk, https://kelmanlaw.com/guess-who-puts-their-pants-on-one-leg-at-a-time-and-is-an-advocate-of-free-speech-until-it-costs-money-you-me-and-elon-musk/ (May 12, 2022)

 

[37] Netchoice, LLC v. Attorney General of Florida, 34 F.4th 1196, 1225 (11th Cir. 2022).

 

[38] Netchoice, L.L.C. v. Ken Paxton, Attorney General of Texas, 27 F.4th 1119, 1172 (5th Cir. 2022)

 

[39] Id. at 1121

 

[40] Id. at 1133

 

[41] Id. at 1139

 

[42] Id. at 1127

 

[43] Peter Kelman, Esq., How to Fix Section 230 of the Communications InDecency Act, https://kelmanlaw.com/how-to-fix-section-230-of-the-communications-decency-act-2/ (May 5, 2022)