How to Fix Section 230 of the Communications InDecency Act

Introduction

Last week a distraught client called me with an urgent matter. A professor at a local university, she had made the mistake of flunking a student on a test. The student retaliated by posting two videos about her on YouTube. The first video was the source of my client’s anguish. In it, the student incoherently spun a false narrative about my client’s personal life, sparing no one in her family, not even her children. All were lied about and clearly defamed. The vitriol was so extreme that she feared for her own safety and her family’s, and she left her home to reside temporarily in another town. The second video, while not a personal attack on my client, was nevertheless troublesome. The student had posted images of her exams from the course, forcing her to create all new exams. An annoyance for sure, but not an existential threat like the first video.

I listened to my client’s horror story and told her I would do the best job I could as fast as I could. But so as not to create false hope, I also predicted what would happen. I said I could probably get the video with the exam images removed fairly quickly; the video defaming her and her family, however, would take far longer. This was not what she wanted to hear.

I was right. The system is wrong. Blame Section 230 of the Communications InDecency Act (47 U.S.C. Section 230) (the “CDA”) for not getting the first video removed. Credit the Digital Millennium Copyright Act (17 U.S.C. Section 512) (the “DMCA”) for getting the second video removed.

This article discusses how to fix the system, in particular how to fix the CDA.

 

The Communications InDecency Act

Some commentators have called the CDA “the twenty-six words that created the Internet.” Vinton Cerf, Robert Kahn, Tim Berners-Lee, and certain folks behind ARPANET might feel shortchanged by such a statement (and don’t forget Al Gore!). And let’s not split hairs trying to understand the difference between the Internet and the World Wide Web. It is a bit of hyperbole to attribute the invention of Internet technology to a piece of legislation, and a poorly drafted one at that, as we shall learn. While the CDA did not create the Internet, it may have created some of the social media companies that conduct business on the Internet, and it has undoubtedly shaped the content that appears on their platforms. For sure, those twenty-six words helped create extraordinary wealth for Mark Zuckerberg, who was twelve years old when the CDA was passed and a billionaire eleven years later.

The CDA is the only remaining vestige of legislation enacted in 1996 that was intended to regulate pornography on the Internet. In particular, Congress attempted to eliminate a child’s access to pornography on the Internet (hence the name “Decency,” which is ironic insofar as all that remains of the law is a protective shield for indecency). However, that First Amendment watchdog, the American Civil Liberties Union, stepped in and opposed the law on the grounds that it impeded freedom of speech. In 1997, the U.S. Supreme Court in Reno v. ACLU unanimously struck down most of the CDA’s provisions as unconstitutional limitations on free speech. Section 230 survived the bloodbath.

In full, here are those twenty-six words (47 U.S.C. § 230(c)(1)):

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Those twenty-six words granted immunity to any “interactive computer service,” a/k/a a website, Internet host, or social media platform, so long as the material the site hosted was posted by its users and the site did not write or modify that content.

The law was intended to provide clarity by reconciling two conflicting court decisions that addressed when a website could be held liable for content posted on it. In a 1991 decision, Cubby, Inc. v. CompuServe, Inc., a federal district court in New York held that CompuServe was not liable for content posted by its users because CompuServe did not review its users’ posts. However, the 1995 case of Stratton Oakmont, Inc. v. Prodigy Servs. Co. muddied the clear waters of Cubby. In Stratton Oakmont, the New York Supreme Court held Prodigy liable for certain posts of its users because Prodigy “moderated” the forum in which those posts appeared. The decision sent the blood pressure of the Internet platform companies of the day, America Online, Prodigy, and CompuServe among them, to unacceptable levels. The CDA was the medicine Congress prescribed to save these companies from the near-certain death induced by the hypertension of looming monumental liability.

Internet host companies could not have been happier. Congress bestowed upon them a gift in the form of a double-barreled shot of limited liability and cost reduction. The first barrel was obvious. If any of those companies had been reserving funds to pay off potential liability claims, those reserves could immediately be redirected to employee vacation pay. A host that was not liable for its users’ posts no longer needed reserves to insure against third-party claims, or to pay such claims directly if it was uninsured.

The second barrel was a little more subtle. It went like this: per the language of the statute, an Internet host was immune as long as the message was “provided by another information content provider.” An “information content provider” is, in plain terms, a user. So an Internet host was immune as long as the message was provided by another user; i.e., the message was not written, edited, or moderated by the Internet host itself. That was the mistake Prodigy made. It got itself in hot water when it “moderated” the forum in which the offending message appeared.

The takeaway from Stratton Oakmont and the CDA was that if an Internet host wanted immunity from defamatory messages posted by its users, the best course of action was to do nothing. Consequently, if any Internet host was considering investing in a department to edit, sanitize, moderate, or in any way weed out offensive messages, the directive was clear: IMMEDIATELY STOP ANY AND ALL CORRECTIVE ACTION INTENDED TO HELP INNOCENT VICTIMS OF DEFAMATION!!! Which Internet hosts were all too happy to do, since any such activity was pure cost to the business; it generated no revenue.

Let’s fast forward from 1996 to my client’s complaint. Why did I anticipate that YouTube would be reluctant to remove the video that defamed her and her family? Because YouTube does not want to blow the immunity the CDA grants it. YouTube remains immune from claims of defamation arising from content posted by its users only if YouTube does not involve itself in creating, editing, or modifying that content. Even removing content without a court order could be problematic: a victim of defamation could argue that if an Internet host removes some posts from its site, the host is culpable for letting other posts remain. In for a penny, in for a pound. Thus the CDA encourages, in fact mandates, that Internet hosts turn a blind eye to the content posted by their users if they want to continue enjoying the financial benefits of immunity. For that reason, no ready path exists to get an Internet host to remove defamatory content posted to its website by a user.

Contrast the real-world impact of the CDA on Internet content with its intended effect. Instead of purging the Internet of child-accessible pornography, it has allowed social media sites to grow into well-protected repositories of defamatory content. It is hard to imagine a piece of legislation backfiring more completely.

 

The Digital Millennium Copyright Act

The DMCA was passed by Congress in 1998, two years after it enacted the CDA. In some ways, the DMCA was a carve-out from the wide swath of immunity Congress had granted Internet hosts in the CDA. But the carve-out was narrow: it protected only copyright holders. A rather odd exception, if you think about it.

While the CDA did not provide any mechanism for getting an Internet host to remove defamatory content, the DMCA provided such a mechanism for copyright holders.  And that mechanism, called a DMCA take-down notice, is what should be imported into the CDA.  More about that later.

The DMCA starts with the same premise as the CDA. It bestows immunity on an Internet host against copyright infringement if the infringing material was posted on the host’s site by someone other than the host. Sound familiar? For example, if Bill is the author of a copyrighted poem and Jane posts a copy of that poem on Facebook, Bill cannot sue Facebook for copyright infringement, even though a copy of Bill’s poem appears on Facebook. The rationale for the immunity is the same as in the CDA: Facebook did not create the content that infringes Bill’s poem. Facebook is unaware that Jane posted the infringing material; therefore, Facebook should not be held liable for her posting it on the Facebook site.

So far the DMCA treats copyrighted material just as the CDA treats defamatory material. But the DMCA then goes several steps further and offers a remedy to protect the rights of copyright holders, whereas the CDA offers no remedy to protect persons defamed on the Internet. The DMCA establishes a process whereby a copyright holder can notify an Internet host that its copyright is being infringed by a third party on the host’s website. Upon receipt of such a notification, the Internet host has an obligation to investigate the alleged infringement and, under certain circumstances, to remove the infringing content. If the Internet host does not follow the procedure set forth in the statute, it loses its immunity from suit by the copyright holder, which means the copyright holder then has the legal right to sue the host for copyright infringement. Without going into the different calculations of damages for copyright infringement, this threatened loss of immunity serves as a tremendous incentive for an Internet host to follow the take-down procedures set forth in the DMCA.

Under certain circumstances, the DMCA gives copyright holders other ways to seek compensation directly from an Internet host for copyright infringement. For example, if an Internet host allows recidivists, i.e., repeat offenders, to continually post infringing material on its site, the host loses its immunity even if it follows the investigative procedures mandated by the statute. In other words, the DMCA punishes an Internet host for its willful blindness.
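
To make the mechanics concrete, here is a minimal sketch of that notice-and-take-down flow in Python. To be clear, the DMCA prescribes legal duties, not data structures: the class names, the stubbed-out good-faith investigation, and the three-strikes threshold for repeat offenders are all my own illustrative assumptions, not anything found in the statute.

    from dataclasses import dataclass, field
    from typing import Dict, Set

    @dataclass
    class TakedownNotice:
        """A copyright holder's claim that a particular post infringes."""
        claimant: str
        poster: str
        post_id: str

    @dataclass
    class Host:
        """Toy model of a host's notice-and-take-down obligations."""
        posts: Dict[str, str] = field(default_factory=dict)    # post_id -> poster
        strikes: Dict[str, int] = field(default_factory=dict)  # poster -> upheld notices
        banned: Set[str] = field(default_factory=set)
        REPEAT_OFFENDER_LIMIT: int = 3  # illustrative three-strikes threshold

        def investigate(self, notice: TakedownNotice) -> bool:
            """Good-faith determination of infringement. In real life this is
            a legal judgment, not a one-liner; it is stubbed out here."""
            return True  # placeholder: assume the claim checks out

        def receive_notice(self, notice: TakedownNotice) -> None:
            if not self.investigate(notice):
                return  # claim unfounded: the post stays up, immunity intact
            # Claim upheld: remove the post to keep the safe harbor.
            self.posts.pop(notice.post_id, None)
            self.strikes[notice.poster] = self.strikes.get(notice.poster, 0) + 1
            # Tolerating repeat offenders forfeits immunity, so ban them.
            if self.strikes[notice.poster] >= self.REPEAT_OFFENDER_LIMIT:
                self.banned.add(notice.poster)

    # Usage: one upheld notice removes the post.
    host = Host(posts={"video-2": "student"})
    host.receive_notice(TakedownNotice("professor", "student", "video-2"))
    assert "video-2" not in host.posts

The point of the sketch is the incentive structure: every branch the host takes is chosen to preserve its immunity, which is exactly why the process works without anyone going to court.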

Let’s fast forward from 1998 to my client’s complaint. Why did I anticipate that YouTube would swiftly remove the video with pictures of her exams? Because an exam is a copyrighted work. The copyright vests either in the professor or in her university, but either way, the exams are copyrighted. All I did was follow the procedures YouTube maintains for removing infringing material, and voilà, the video with the exams was taken down. Let’s be clear here: YouTube does not maintain such an efficient process to be nice to copyright holders. At least I don’t think so. YouTube maintains this process because compliance with the statute is what the DMCA requires if YouTube wants to keep its immunity from suit.

 

The CDA in the News

For a long time, the CDA stayed out of the headlines. Like a guard dog slumbering at the gates of a junkyard, the CDA lay dormant unless disturbed by an intruder. And that was just what the Internet hosting companies wanted: keep a low profile. Every so often a case tested the boundaries of what it meant to create the content of a message and who was responsible for creating it. Over and over, the notion was reinforced that to maintain immunity, an Internet host should keep its distance from the content of user messages. For example, in the 2008 case Fair Housing Council of San Fernando Valley v. Roommates.com, the Ninth Circuit held that Roommates.com could not claim immunity under the CDA because it had created a questionnaire that its users responded to when posting messages on its site. Even though Roommates.com did not write the text of the messages posted on its site, its questionnaire was incorporated into the process used to create them; Roommates.com was therefore partially responsible for the content of those messages, and hence the messages were not those of “another information content provider.” The court’s marching orders were clear: an Internet host should not get involved in the creation of content posted on its site if it wants the protection of the CDA.

Things changed in 2020. Who was the person wandering near the junkyard who awoke the slumbering guard dog? None other than @realDonaldTrump. In 2020, the man who is now our ex-president began to turn his wrath on the very social media platforms that had catapulted him to popularity. It wasn’t enough that he appeared to have immunity for whatever he posted on Twitter; he wanted Twitter to silence his critics. More precisely, he wanted to sue Facebook, Twitter, and YouTube because they refused to remove content that was critical of him. The Justice Department advised him that Twitter was simply doing what the CDA said it should do to stay out of trouble. In typical Trump-like fashion, he turned the tables on the Justice Department and issued an executive order telling it to take another look at the CDA and to start interpreting it more narrowly. This generated a fair amount of commentary, because he seemed ultimately to be inviting Twitter and the other social media platforms to silence not his critics, but him. Which is exactly what happened when they banned him from their platforms.

However, many politicians believed Trump was on to something and took up his cause. Numerous bills were introduced to modify or even outright repeal the CDA. Republicans were especially interested in curtailing the immunity of social media platforms. Lindsey Graham led the charge with a proposed bill intended to clarify the instances in which social media platforms could censor (oops, edit) content. But to the delight of our social media companies, nothing passed.

Then in 2022, the European Union, which has long been proactive about regulating business on the Internet, focused its attention on this problem. In the spring of 2022, the EU passed the Digital Services Act, which promulgates guidelines for future legislation intended to protect consumers and individuals from exploitation by social media platforms and e-commerce businesses. Binding legislation has yet to be passed.

Back in America, the CDA has once again come to the forefront in connection with Elon Musk’s proposed acquisition of Twitter.  Musk views Twitter as a platform for “free speech.”  Does that mean the freedom to defame?  Stay tuned.

 

How to Fix the CDA

Times have changed since the CDA was passed in 1996. Technology has changed dramatically. What appeared to be a gargantuan manual undertaking in 1996 can now be done by software. The fundamental assumption behind the CDA was that an Internet host was not aware of all the posts on its site, and because it was not aware of what its users posted, it should not be held accountable for the legal consequences of those posts. Back in 1996, a convincing analogy could be drawn between an Internet host and a telephone carrier: the host provided the infrastructure that transmitted the messages of users; it was nothing more than a conduit. Just as we do not hold a telephone carrier accountable for the content of a telephone call, the CDA made it clear that an Internet host should not be held liable for the content of a user’s message.

There are two principal problems with this logic. The first has always been present, from 1996 to the current day. The second arises from the new-found power of modern technology.

The first problem was made evident by the DMCA. The DMCA starts with the same assumption as the CDA: an Internet host cannot be presumed to be aware of all material posted on its site, and therefore is not automatically liable for copyright infringement. But that does not mean an Internet host cannot be made aware that content on its site infringes a copyright. So, to protect the owners of copyrighted material, the DMCA included its take-down provisions, whereby a copyright owner can notify an Internet host that someone has posted infringing material on the host’s site. Per the DMCA, the host then has an obligation to investigate the allegation. It must make a good-faith determination either that infringement has occurred, in which case the host must remove the infringing content, or that the allegation is unfounded, in which case the cited content remains posted.

Why not import a similar notification and take-down process into the CDA? Why not give a victim of defamation the right to advise an Internet host that a user has posted defamatory content on its site? Upon notification, the hosting company would have an obligation to conduct an investigation and then to act in accordance with its result. It does not have to be a complicated process. In fact, one could take the language of the DMCA take-down provisions, make a couple of global changes, and have it work in the CDA. And certainly the DMCA’s concept of the repeat offender should be imported into the CDA: if a person has repeatedly posted defamatory material on an Internet host’s site, one would hope the host would ban the repeat offender from any future posts on its site (good-bye, @realDonaldTrump).
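
If the statutory language really does port over with just a couple of global changes, the same is true of the toy model sketched earlier. Here is a minimal illustration of the change, with every name again my own invention: the host’s notice-handling logic stays exactly as it was, and only the kind of claim under investigation is parameterized.

    from dataclasses import dataclass
    from enum import Enum

    class ClaimType(Enum):
        COPYRIGHT = "copyright"    # today's DMCA take-down notice
        DEFAMATION = "defamation"  # the proposed CDA take-down notice

    @dataclass
    class Notice:
        """Same shape as the DMCA notice above, plus a claim type."""
        claim_type: ClaimType
        claimant: str
        poster: str
        post_id: str

    def investigation_question(claim_type: ClaimType) -> str:
        """The one substantive difference: what the good-faith inquiry asks."""
        if claim_type is ClaimType.COPYRIGHT:
            return "Does the post infringe the claimant's copyright?"
        return "Does the post make false and defamatory statements about the claimant?"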

I can hear the criticisms already. One will be that it is easier to prove copyright infringement than to prove defamation: because copyrighted material is registered with the federal government, the argument goes, whether an owner holds a protected copyright is a fairly binary question. But this is not true. Not all copyrighted material is federally registered; in fact, only a small percentage of copyrighted material is. Under federal copyright law, every original creation containing some element of creativity made by a human being is copyright protected automatically upon creation. Every original song, picture, written work, painting, and so on is copyright protected upon creation, whether or not federal registration is ever applied for. One can easily imagine the number of copyright-protected works created in any given year running to the tens or hundreds of millions. Yet in 2020, the U.S. Copyright Office issued only approximately 450,000 copyright registrations, a mere fraction of the protected works created. Consequently, many take-down notices pertain to unregistered works (like my client’s exams), in which case an investigation must be conducted into the authenticity of the claim of copyright, just like the investigation that would have to be conducted into the veracity of a claim of defamation under a CDA take-down notice.

Another criticism will come from the social media companies themselves. They will, of course, argue that the burden of such a process would be cost-prohibitive and would put them out of business. We will hear about the armies of lawyers they would have to hire to support such investigations. Our hearts will bleed. The heart of Nobel prize-winning economist Paul Krugman, however, will not. Krugman recently wrote a column asserting that business executives are the last people you want to ask for a realistic assessment of the impact of changed regulation on business. As he points out, regulatory change typically increases the cost of doing business, and that is anathema to enhanced profitability. Krugman wrote, “The big lesson here is that you can’t trust industries to provide a reliable, or even honest, assessment of the economic impact of policies that might hurt their bottom line.” Executives usually argue against regulatory change by painting dire but unrealistic scenarios. While executives should be listened to, they need not be believed.

Further, there is nothing to say that an Internet host could not charge for conducting an investigation pursuant to a CDA take-down notice. If a person wants to file such a notice, it seems appropriate that they pay the costs the host incurs to investigate and process the claim. The legislation could then authorize shifting those costs onto the defamer if the take-down notice is upheld. That would seem both appropriate and doable, since Internet hosts have credit card information for their users, and such a provision could be written into the terms of use of a social media platform. It would provide the additional benefit of acting as a punitive measure to deter defamation on the Internet.

Who knows, perhaps the reputational adjudicatory department of Facebook could develop into a profit center. Or perhaps we would see a repeat of what happened in the early 2000s, when ICANN (the Internet Corporation for Assigned Names and Numbers) created its UDRP (Uniform Domain Name Dispute Resolution Policy) to resolve domain name disputes between private parties. Implementation of the UDRP spawned a cottage industry of profitable panels that adjudicate domain name disputes pursuant to guidelines issued by ICANN. The same could occur if a CDA take-down policy became operational: a cottage industry of for-profit panels adjudicating reputational disputes pursuant to CDA guidelines.

In fact, the UDRP could serve as a model for resolving contested defamation disputes. The UDRP established an arbitration-like process; decisions are rendered in a matter of months. While not instantaneous, that is a lot faster than going to court. Each party submits the basis for its case to a panel of one to three persons, and a decision is rendered shortly after all materials have been received. Typically, there is no direct testimony or hearing. Legislative change almost always creates economic opportunity for entrepreneurs who understand its ramifications.

The second problem with the current structure of the CDA is that it is premised on the capabilities of 1990s technology. Back then, it may have been true that an Internet host could not be aware of the content of postings. But that was then, and now is now. In 2018, for example, Facebook disclosed how it uses artificial intelligence to scan posts for objectionable content such as pornographic material. In 2020, Facebook touted its use of “super-efficient AI” to detect 95% of the hate speech it removed from its site. The linguistic capabilities of artificial intelligence are growing by leaps and bounds; what was inconceivable in 1996 is now commonplace. Our laws should be updated to reflect the capabilities that current technology enables. The fundamental assumption that an Internet host cannot be made aware of defamatory posts is no longer immutably true. Not all, but certainly much, defamatory content could be detected proactively by Internet hosts before it is published on the site. Just as live broadcasts are delayed so that objectionable material can be edited out in real time, an Internet host could interpose a scanning function before publishing the content of its users.
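
What might such an interposed scanning function look like? Here is a minimal sketch, once more in illustrative Python, assuming a hypothetical classifier. No platform has published the internals of its screening systems, so the scoring function, the thresholds, and the three dispositions below are entirely my assumptions.

    from typing import Callable

    def make_prepublish_gate(
        classifier: Callable[[str], float],  # assumed to return P(post is defamatory)
        hold_threshold: float = 0.9,         # illustrative cutoffs, not any platform's
        review_threshold: float = 0.5,
    ) -> Callable[[str], str]:
        """Interpose a scan between submission and publication, much as a
        broadcast delay lets objectionable material be cut in real time."""
        def gate(text: str) -> str:
            score = classifier(text)
            if score >= hold_threshold:
                return "held"               # high confidence: never published
            if score >= review_threshold:
                return "queued_for_review"  # borderline: a human decides
            return "published"              # everything else posts as it does today
        return gate

    # Usage with a trivial stand-in classifier:
    gate = make_prepublish_gate(lambda text: 0.95 if "defames" in text else 0.05)
    assert gate("a perfectly ordinary post") == "published"

The design point is the middle disposition: a filter need not be perfect to be useful, because borderline posts can be routed to human review rather than blocked outright.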

 

Conclusion

Thus, the fix to the CDA need not be complicated, nor cost-prohibitive. The current problems with the CDA could be remedied by adding a take-down mechanism like that found in the DMCA and by imposing filtering requirements that use the capabilities of modern technology to proactively identify defamatory content. If these changes had been in place last month, I suspect YouTube would have quickly removed the first video my client complained about. That was the one she really cared about. For her, as for most people, her reputation was far more valuable than her copyright. Our laws should reflect that reality. Our laws should protect the 330 million people who live in the United States, each of whom has a reputation, as strongly as they protect the fraction of that population who own a copyright.

 

Copyright 2022, Peter Kelman, Esq.

All rights reserved.