Resolving the Section 230 Crisis

Public concern mounts as a few of the nation’s biggest tech players—Google, Amazon, Facebook, and Twitter—have achieved market dominance and captured the nation’s flow of information and online commercial transactions. They facilitate all manner of human activity, for good and for ill, and hold the power to track our movements, guide purchasing decisions, regulate the flow of information, and shape political discourse. Yet, all the while, as private entities, they remain free to exercise these powers behind closed doors and, as online rather than physical-world entities, they enjoy immunity from some of the rules that govern their analog counterparts.

Section 230 and Its Critics

With great power has come great controversy. Most recently, Twitter and Facebook have faced criticism for their decision to restrict access to a series of stories published by the New York Post about 2020 Democratic presidential candidate Joe Biden’s son, Hunter. The decision is just the latest in a string of high-profile disputes. Last year, Facebook was criticized for its decision not to remove a video of Speaker of the House Nancy Pelosi that had been edited to make her appear drunk and confused. And for years now, a debate has been simmering about how to respond to various bad-actor websites like those that aid terrorists, facilitate unlawful gun sales, and profit from child abuse and sex trafficking.

At the heart of the controversy lies Section 230 of the Communications Decency Act of 1996, a statute whose tame title belies the weighty protections it provides to the tech industry. Section 230 immunizes online entities against lawsuits related to content created by their users or other third parties. The law promotes “decency” on the internet by allowing online entities to censor “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content without fear of being “treated as the publisher or speaker” of—and held liable for—whatever content they fail to censor. The law promotes freedom of expression by guaranteeing online entities’ ability to relay and host the massive volumes of tweets, snaps, likes, and old-fashioned emails that flow into their systems without incurring liability for their contents. Absent Section 230’s protections, online platforms would face an economically crippling duty to review the inconceivable volume of data that flows through their systems to ensure that none of their users’ posts contains defamatory speech or other unlawful content. Online platforms might be compelled to heavily censor user speech or disallow online posting altogether to avoid the risk of liability.

But Section 230—as interpreted by the courts—has not kept pace with the times and now presides over a very different internet from the one it was designed to govern. A law designed to foster free expression now protects entities even when they choose to silence disfavored viewpoints. And, despite its publication-centric roots, Section 230 now insulates online entities from liability in all manner of lawsuits, including product-defect claims—such as the one brought against Snapchat over the design of the app’s speed filter, which resulted in numerous accidents involving teenage drivers—and claims against online marketplaces, like the sex-trafficking conspiracy claim brought against the website Backpage.com, which hosted “escort” ads featuring underage girls and obstructed law-enforcement efforts against sex traffickers so that it could continue to profit from the ad sales.

Public anger is growing. Not only, it seems, has Big Tech become too powerful; it also plays by a different set of rules than everyone else. Calls for Section 230 reform have come from every corner. Democrats criticize online platforms’ failure to protect the public, reasoning that, given their dominance, online platforms have a responsibility to identify and limit the spread of falsified political ads, hate speech, materials promoting terrorism, and other harmful material. President Trump and Republicans, for their part, criticize the platforms for perceived bias, alleging that their content-censorship practices systematically silence conservative voices. And all have come together to criticize Section 230 for shielding from civil liability the bad-actor websites that purposefully or knowingly facilitate sex trafficking, child pornography, terrorism, and other unlawful activity.

Judicial Interpretation

Somehow spared from criticism, however, has been the judiciary. Big Tech is vilified. Legislative proposals abound. But almost no one has pointed a finger at the courts and judges who are the true authors of modern Section 230 doctrine. No one, that is, except Justice Clarence Thomas, who recently reminded us that things could have been—and may still become—otherwise.

Last month, the Supreme Court again declined a chance to interpret Section 230 when it denied a request to review the Ninth Circuit’s decision in Malwarebytes v. Enigma. Despite numerous opportunities to do so, the Court has never interpreted the statute. But that may soon change. Although he agreed with his colleagues’ decision not to hear the case, Justice Thomas took the unusual step of issuing a statement to explain why, “in an appropriate case,” the Supreme Court should consider the scope of Section 230 immunity. He lamented that lower courts “have long emphasized nontextual arguments when interpreting §230, leaving questionable precedent in their wake.” In particular, he questioned courts’ application of Section 230 immunity even to platforms that leave content on their sites that they know to be unlawful; to those that seek out and curate unlawful content for their sites; and to claims outside the publishing context, such as those related to defective products. Sensing a gap between Congress’s words and current internet immunity doctrine, Justice Thomas urged the Court in a future case to consider whether “the text of [Section 230] aligns with the current state of immunity enjoyed by Internet platforms.”

To students of the law, the story is familiar: A statute is stretched by well-meaning judges trying to craft good policy in hard cases, statutory glosses are added to glosses, and the glosses eventually swallow the text to form a doctrine untethered from the statute that gave it life. Such has been the course of internet immunity doctrine under Section 230, whose evolution over the last 20 years has turned a small, unheralded provision attached to the much more comprehensive Communications Decency Act into what can now fairly be called the linchpin of modern internet law. Its transformation from foundling to foundation proceeded in two discrete intellectual moves.

First, courts interpreted Section 230’s purpose of promoting free expression to operate independently of its promotion of online decency. Under this reading, an entity can claim immunity for hosting unlawful content even when that content, rather than slipping through the cracks of a screening effort, remains online because the entity engages in no censorship of objectionable material at all. What is more, an entity can claim immunity even if it possesses actual knowledge of unlawful material and still fails to remove it. Given that it does nothing to encourage the removal of objectionable content, this view is in tension with Section 230’s title, “Protection for ‘Good Samaritan’ blocking and screening of offensive material,” and with its enactment as part of the Communications Decency Act. But the approach is not impossible to reconcile with the text and, seemingly more important to courts, it supports a policy of maximal free expression on the internet.

Courts around the country, led by the Fourth Circuit’s now-famous decision in Zeran v. America Online, were concerned that free expression would suffer unless they granted broad Section 230 immunity, even to entities with actual knowledge of unlawful content. They feared what is known as the heckler’s veto problem: If platforms become liable for any content they are made aware of but fail to take down, they might decide to automatically take down, without investigation, any content merely reported to them as objectionable, simply to avoid the cost of investigating. An internet user’s post might be taken down, and her freedom to speak her mind undermined, by the unverified complaint of an internet “heckler.” To avoid this problem and thereby further a policy of “freedom of speech in the new and burgeoning Internet medium,” early courts granted broad immunity under Section 230 against any claim implicating an entity’s “exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content”—even when the entity is made aware that the content is unlawful.

The second, more worrisome, step in Section 230’s transformation is its application to nonpublication claims. Although Section 230 is publication-centric—it encourages censorship, and it speaks in terms of “publishers or speakers” and “content providers”—publication has never been the internet’s exclusive function, and it is even less so now than it was in 1996. The internet operates as a virtual world, complete with all manner of goods and services and every kind of wrongdoing. That includes not only publication-related wrongs, like defamation, but also physical-world wrongs, like designing defective smartphone apps or facilitating sex trafficking or illegal gun sales. Claims involving such wrongdoing—against Snapchat, Backpage, or Armslist, for example—raise categorically different issues than Section 230 or the internet immunity doctrine it inspired is designed to handle. Rather than arguing that an online entity should have reviewed and moderated third-party content, such claims are analogous to physical-world product-defect or conspiracy cases. They argue that an online entity should have designed its app or website differently, typically to include more safety features, or that it intentionally facilitated and profited from unlawful activity.

Courts considering such claims, however, have sometimes ignored the distinction and doubled down on earlier policy-based reasoning. Even for claims that do not allege a failure to review third-party content—and thus do not implicate the moderation burden or the heckler’s-veto concern—courts often grant immunity to defendants on the ground that to do otherwise would interfere with the entity’s control over “traditional editorial functions.” It does not matter that applying the doctrine to bar product-defect claims is in tension with Section 230’s publication focus and with the logic of Zeran, which justified protecting editorial functions, even for entities with a culpable mental state, by the need to avoid the heckler’s veto. Internet immunity doctrine has become an independent judicial creation, untethered from and largely unconcerned with the words of the statute that gave it life.

A Judicial Solution?

Thus, here we are today: a judicially created internet immunity doctrine, a too-powerful tech industry that plays by a different set of rules, and a Supreme Court openly contemplating upsetting the whole house of cards. Of course, it must be acknowledged that much of the dissonance between immunity doctrine and the internet landscape is attributable to the internet’s dramatic evolution over the past two decades. Courts could never have foreseen those changes. But that is exactly the problem. By taking a statute targeted to promote internet publication and the censorship of indecent material and pressing it into service as an internet-freedom cure-all, courts have created an expansive doctrine of immunity that is ill-suited for the modern internet, yet now cemented in precedent across the country.

That internet immunity doctrine is a judicial creation, however, has its benefits. Courts can always change course. Because the Supreme Court has not (yet) interpreted Section 230, the question of its scope has been left for independent resolution in 63 jurisdictions—the 13 federal courts of appeals and the high courts of the 50 states. Thus far, the story has been one of judicial lemmings citing other courts’ decisions as if maximal immunity inevitably flows from the words of the statute. It does not, and Justice Thomas’s statement in Malwarebytes should embolden other courts to say so.

A text-focused renaissance of Section 230 would shift internet immunity doctrine in two directions. First, it would expose tech companies to liability where they act as more than conduits and can be thought of as somehow “responsible for” the content they host—for example, because they know that unlawful content is on their platforms but fail to remove it or because they intentionally curate it. Second, it would limit online entities’ ability to assert immunity in lawsuits not directly related to publication, such as claims for negligence, product defects, conspiracy, or antitrust violations.

Big Tech will inevitably yowl that the destruction of the internet is upon us. It is not. But there is reason for caution. As they develop the contours of what it takes for an online entity to become “responsible for” third-party content or what constitutes a publication-related claim, courts must avoid reimposing on tech companies the same content-moderation burden that drove Congress to enact Section 230 in the first place. Online platforms simply cannot review all the content they host—or even all the complaints they receive—and any claim that would hold them responsible for doing so should remain a nonstarter. That said, Big Tech’s parade of horribles about what would happen if courts get it exactly wrong should not deter them from trying to get it right. With public anger growing, Congress inactive, and all branches of government now openly questioning the scope of internet immunity, now is the time for judges to put their judicial laboratories of democracy to work to tailor a textual solution suited for the modern internet.