Resolving the Section 230 Crisis

Public concern mounts as a few of the nation’s biggest tech players—Google, Amazon, Facebook, and Twitter—have achieved market dominance and captured the nation’s flow of information and online commercial transactions. They facilitate all manner of human activity, for good and for ill, and hold the power to track our movements, guide purchasing decisions, regulate the flow of information, and shape political discourse. Yet, all the while, as private entities, they remain free to exercise these powers behind closed doors and, as online rather than physical-world entities, they enjoy immunity from some of the rules that govern their analog counterparts.

Section 230 and Its Critics

With great power has come great controversy. Most recently, Twitter and Facebook have faced criticism for their decision to restrict access to a series of stories published by the New York Post about 2020 Democratic presidential candidate Joe Biden’s son, Hunter. The decision is just the latest in a string of high-profile disputes. Last year, Facebook was criticized for its decision not to remove a video of Speaker of the House Nancy Pelosi that had been edited to make her appear drunk and confused. And for years now, a debate has been simmering about how to respond to various bad-actor websites like those that aid terrorists, facilitate unlawful gun sales, and profit from child abuse and sex trafficking.

At the heart of the controversy lies Section 230 of the Communications Decency Act of 1996, a statute whose tame title belies the weighty protections it provides to the tech industry. Section 230 immunizes online entities against lawsuits related to content created by their users or other third parties. The law promotes “decency” on the internet by allowing online entities to censor “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” content without fear of being “treated as the publisher or speaker” of—and held liable for—whatever content they fail to censor. The law promotes freedom of expression by guaranteeing online entities’ ability to relay and host the massive volumes of tweets, snaps, likes, and old-fashioned emails that flow into their systems without incurring liability for their contents. Absent Section 230’s protections, online platforms would face an economically crippling duty to review the inconceivable volume of data that flows through their systems to ensure that none of their users’ posts contains defamatory speech or other unlawful content. Online platforms might be compelled to heavily censor user speech or disallow online posting altogether to avoid the risk of liability.

But Section 230—as interpreted by the courts—has not kept pace with the times and now presides over a very different internet from the one it was designed to govern. A law designed to foster free expression now protects entities even should they choose to silence disfavored viewpoints. And, despite its publication-centric roots, Section 230 now insulates online entities from liability for all manner of lawsuits, including product-defect claims—such as the one brought against Snapchat for the design of the app’s speed filter, which resulted in many accidents by teenage drivers—and claims against online marketplaces, like the sex-trafficking conspiracy claim brought against the website Backpage.com, which hosted “escort” ads of underage girls and obstructed law-enforcement efforts against sex traffickers so that it could continue to profit from the ad sales.

Public anger is growing. Not only, it seems, has Big Tech become too powerful, but it even plays by a different set of rules than everyone else. Calls for Section 230 reform have come from every corner. Democrats criticize online platforms’ failure to protect the public, reasoning that, given their dominance, online platforms have a responsibility to identify and limit the spread of falsified political ads, hate speech, materials promoting terrorism, and other harmful material. President Trump and Republicans, for their part, criticize media platforms for their perceived bias, alleging that platforms’ content-censorship practices systematically silence conservative voices. And all have come together to criticize Section 230’s protection from civil liability of bad-actor websites that purposefully or knowingly facilitate sex trafficking, child pornography, terrorism, and other unlawful activity.

Judicial Interpretation

Somehow spared from criticism, however, has been the judiciary. Big Tech is vilified. Legislative proposals abound. But almost no one has pointed a finger at the courts and judges who are Section 230’s true creators. No one, that is, except for Justice Clarence Thomas, who recently reminded us that things could have been—and may still become—otherwise.

Last month, the Supreme Court again declined a chance to interpret Section 230, when it denied a request to review the Ninth Circuit’s decision in Malwarebytes v. Enigma. Despite numerous opportunities to do so, the Court has never interpreted the statute. But that may soon change. Although he agreed with his colleagues’ decision not to hear the case, Justice Thomas took the unusual step of issuing a statement to explain why, “in an appropriate case,” the Supreme Court should consider the scope of Section 230 immunity. He lamented that lower courts “have long emphasized nontextual arguments when interpreting §230, leaving questionable precedent in their wake.” In particular, he questioned courts’ application of Section 230 immunity even to platforms that leave content on their sites that they know to be unlawful; to those that seek out and curate unlawful content for their sites; and to claims outside the publishing context, such as those related to defective products. Sensing a gap between Congress’s words and current internet immunity doctrine, Justice Thomas urged the Court in a future case to consider whether “the text of [Section 230] aligns with the current state of immunity enjoyed by Internet platforms.”


To students of the law, the story is familiar: A statute is stretched by well-meaning judges trying to craft good policy in hard cases, statutory glosses are added to glosses, and the glosses eventually swallow the text to form a doctrine untethered from the statute that gave it life. Such has been the course of internet immunity doctrine under Section 230, whose evolution over the last 20 years has turned the small, unheralded provision attached to the much more comprehensive Communications Decency Act into what can now fairly be called the linchpin of modern internet law. Its transformation from foundling to foundation proceeded in two discrete intellectual moves.

First, courts interpreted Section 230’s purpose of promoting free expression to operate independently of its promotion of online decency. An entity can claim immunity under the statute for hosting unlawful content even if, rather than slipping through the cracks, the unlawful content is the result of an entity not engaging in any censorship of objectionable material at all. What is more, an entity can claim immunity even if it possesses actual knowledge of unlawful material and still fails to remove it. Given that it does nothing to encourage the removal of objectionable content, this view is in tension with Section 230’s title, “Protection for ‘Good Samaritan’ blocking and screening of offensive material” and its enactment as part of the “Communications Decency Act.” But the approach is not impossible to reconcile with the text and, seemingly more important to courts, it supports a policy of maximal free expression on the internet.

Courts around the country, led by the Fourth Circuit’s now-famous decision in Zeran v. America Online, were concerned that free expression would suffer unless they granted broad Section 230 immunity, even to entities with actual knowledge of unlawful content. They feared what is known as the heckler’s veto problem: If platforms become liable for any content they are made aware of but fail to take down, platforms might decide to automatically take down, without investigation, any content merely reported to them as objectionable to avoid the cost of investigating. An internet user’s post might be taken down and her freedom to speak her mind undermined by the unverified complaint of an internet “heckler.” To avoid this problem and thereby further a policy of “freedom of speech in the new and burgeoning Internet medium,” early courts granted broad immunity under Section 230 to any claim implicating an entity’s “exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone or alter content”—even when the entity is made aware that the content is unlawful.

The second, more worrisome, step in Section 230’s transformation is its application to nonpublication claims. Although Section 230 is publication centric—it encourages censorship and it speaks in terms of “publishers or speakers” and “content providers”—publication has never been the internet’s exclusive function, and it is even less so now than it was in 1996. The internet operates as a virtual world, complete with all manner of goods and services and every kind of wrongdoing. That includes not only publication-related wrongs, like defamation, but also physical-world wrongs, like designing defective smartphone apps or facilitating sex trafficking or illegal gun sales. Claims involving such wrongdoing—against Snapchat, Backpage, or Armslist, for example—raise categorically different issues than Section 230 or the internet immunity doctrine it inspired was designed to handle. Rather than argue that an online entity should have reviewed and moderated third-party content, such claims are analogous to physical-world product defect or conspiracy cases. They argue that an online entity should have designed its app or website differently, typically to include more safety features, or that it intentionally facilitated and profited from unlawful activity.

Courts considering such claims, however, have sometimes ignored the distinction and doubled down on earlier policy-based reasoning. Even for claims that do not allege a failure to review third-party content—and thus do not implicate the moderation burden and heckler’s veto concern—courts often grant immunity to defendants on the ground that to do otherwise would interfere with the entity’s control over “traditional editorial functions.” It does not matter that applying the doctrine to bar product-defect claims is in tension with Section 230’s publication focus and the logic of Zeran, which premised protection of editorial functions despite a culpable mental state on the need to avoid the heckler’s veto. Internet immunity doctrine has become an independent judicial creation, untethered from and largely unconcerned with the words of the statute that gave it life.

A Judicial Solution?

Thus, here we are today: A judicially created internet immunity doctrine, a too-powerful tech industry that plays by a different set of rules, and a Supreme Court openly contemplating upsetting the whole house of cards. Of course, it must be acknowledged, much of the dissonance between immunity doctrine and the internet landscape is attributable to the internet’s dramatic evolution over the past two decades. Courts could never have foreseen those changes. But that is exactly the problem. By taking a statute targeted to promote internet publication and the censorship of indecent material and pressing it into service as an internet-freedom cure-all, courts have created an expansive doctrine of immunity that is ill-suited for the modern internet, yet now cemented in precedent across the country.

That internet immunity doctrine is a judicial creation, however, has its benefits. Courts can always change course. Because the Supreme Court has not (yet) interpreted Section 230, the question of its scope has been left for independent resolution in 63 jurisdictions—the 13 federal circuit courts of appeal and the high courts of the 50 states. Thus far, the story has been one of judicial lemmings citing other courts’ decisions as if maximal immunity inevitably flows from the words of the statute. It does not, and Justice Thomas’s statement in Malwarebytes should embolden other courts to say so.

A text-focused renaissance of Section 230 would shift internet immunity doctrine in two directions. First, it would expose tech companies to liability where they act as more than conduits and can be thought of as somehow “responsible for” the content they host—for example, because they know that unlawful content is on their platforms but fail to remove it or because they intentionally curate it. Second, it would limit online entities’ ability to assert immunity in lawsuits not directly related to publication, such as claims for negligence, product defects, conspiracy, or antitrust violations.

Big Tech will inevitably yowl that the destruction of the internet is upon us. It is not. But there is reason for caution. As they develop the contours of what it takes for an online entity to become “responsible for” third-party content or what constitutes a publication-related claim, courts must avoid reimposing on tech companies the same content-moderation burden that drove Congress to enact Section 230 in the first place. Online platforms simply cannot review all the content they host—or even all the complaints they receive—and any claim that would hold them responsible for doing so should remain a nonstarter. That said, Big Tech’s parade of horribles about what would happen if courts get it exactly wrong should not deter them from trying to get it right. With public anger growing, Congress inactive, and all branches of government now openly questioning the scope of internet immunity, now is the time for judges to put their judicial laboratories of democracy to work to tailor a textual solution suited for the modern internet.

Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

on November 20, 2020 at 09:03:30 am

Isn't, or rather, wasn't, "obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable" content protected by the 1st Amendment?

Bert Schwitters
on November 20, 2020 at 09:46:18 am

Nice try at unscrambling the mess, but I'm still confused.

I'm also still angry about the politics of it all. And the politics of all of it is what's REALLY important, the capacity of Big Tech to be Big Brother. And Big Brother is what they've become, and we are in a state of political crisis because of it. We must put a stop to THAT, now. The rest can wait. I don't give a rat's ass about all the rest because all the rest pales to insignificance in comparison to the Big Brother stuff.

Congress and the courts made the mess and have failed to clean it up. Nothing new there. Indeed, if history is our guide, it is probably best that Congress and the courts stay out of it. When they try to "fix" a problem the problem usually gets worse.

So rather than rewording the statute, just do a straightforward, clean repeal of Section 230, and let market competition provide the platforms and current liability laws address wrongdoing. Would that not "correct the problem" of political censorship?

Absent a clean repeal of Section 230, I am intrigued by the possibility of making a Bivens-type claim against BOTH the United States AND internet platforms like Google, Facebook and Twitter when they censor political speech using Section 230 as both their sword and their shield. In enacting Section 230, did the United States not delegate an unlawful power of censorship over political speech? And in using Section 230 as a sword and shield to block political speech, are the internet platforms not acting "under color of law," under the authority which Congress gave them?

These are not matters within my ken. But some creative lawyering seems called for.

paladin
on November 20, 2020 at 10:20:30 am

Important issue. However, that statute was enacted back in 1996. At the time, nothing similar to what is known today actually existed. I quote Justice Thomas indeed (in the Enigma statement mentioned):

" When Congress enacted the statute, most of today's major Internet platforms did not exist"

Accordingly, this is first up to Congress to amend. If courts deal with it, they must consider it in light of the underlying purpose and public policy dictated by Congress at the time. And that is what has been done by courts. Justice Thomas also recognized it in his statement or opinion there.

But, one should pay attention to the wording or the text itself. For it does state, I quote the relevant part:

(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;

End of quotation.

So, the key requirement as prescribed is: "in good faith". So, what is good faith? One may suggest that acting knowingly or unknowingly bears on good faith (and it is much more complicated than that). Yet, what do we know? Can someone, user or provider, know whether something is lawful or not? A judge may sometimes sit for months on one publication, facing a great dilemma over whether it is lawful or not. How should one user or provider know it? If content is censored unjustifiably, then the censorship is discriminatory and violates constitutional rights. And indeed, we read at the foot of the provision that it applies "whether or not such material is constitutionally protected".

El roam
on November 20, 2020 at 12:48:22 pm

Agreed with paladin re: let the market sort it out.
And for similar reasons as Paladin with one additional reason:

"(2) Civil liability

No provider or user of an interactive computer service shall be held liable on account of—

(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; "

Let us begin with the last qualifier: "constitutionally protected."
I cannot believe I am actually saying this but here I credit the Congress with both the good sense and the recognition that the powers of the central government are indeed limited and that it may not impose upon a private entity such restrictions as those placed upon either the central government or state governments.
SCOTUS's doctrine of incorporation of the Bill of Rights against the States and their political subdivisions was initially a matter of some dispute. That is no longer true. It is accepted and practiced.
Can one imagine the uproar had SCOTUS "incorporated" the Bill of Rights against private entities? Yes, there are some exceptions for certain types of discrimination, but I know of no instance where 1st, 2nd, 4th, 5th, etc., amendment protections are demanded of private corporations. The wording of Section 230 as quoted above clearly demonstrates Congress's recognition that it may not require private entities to comply with the same limits as government entities.

Regrettably, for those who wish to punish the tech titans for their (obviously) bad-faith censorship, and unless SCOTUS decides to *incorporate* the entire Bill of Rights against private entities, there is nothing that the Judiciary or the Legislature may do to correct this OTHER than, as Paladin argues, repeal Section 230 in its entirety. And as the essayist has shown, because the lower courts have morphed 230 into a Monopoly-game get-out-of-jail-free card, such repeal would also eliminate the tech titans' abuse of product liability laws, etc.

gabe
on November 20, 2020 at 15:47:53 pm

I failed to indicate what a "Bivens suit" is. It is an implied right of action for an individual to sue federal officials (and the United States) for the constitutional tort of infringing one's constitutional rights while said officials (or the United States) are acting under "color of law."
Creative lawyering or BS, I do not know, but worth analysis and maybe a try against Big Tech.

BTW: while we're stripping those bastards of Section 230 protection, let's 1) add a new section to 18 USC (criminal code) making it a felony for the internet platforms to censor anyone because of their political or religious views; 2) prohibit those platforms from using their technology to promote a political viewpoint, to encourage voters of a political party to vote, or to support or suppress a political candidate or viewpoint without public disclosure by the internet platforms of their political action, and 3) designate such political actions by internet providers "in-kind political contributions" subject to the laws of campaign finance.

"When they go low, we go high," as Donald Trump would say.

paladin
on November 20, 2020 at 16:57:06 pm

A couple of informal opinions:

1. It appears that the Communications Decency Act was motivated by the belief that the internet was a technological Academy of Athens, capable of disseminating wisdom, fostering principled political discourse, and honing the novel but ingenious ideas that would usher in a new Golden Age. Congress of course realized that this information would also contain no small measure of undesirable content, such as pornography, scandalous untruths and plots against the common good. Congress realized that a trade-off was needed and the result was Section 230, by which it was hoped that internet platforms would nurture the intellectual potential of the internet and at the same time not be deterred from discouraging its less desirable tendencies. This all seemed to make sense at first, but appears to have been flawed by several unaccounted-for phenomena.

Firstly, the influence of "Big Tech" arises not from the quality of the discourse that it enables on the internet, but from its ability to profit from directed advertising. It is this commercial aspect of Big Tech that alters the trade-offs that Congress thought it was crafting between the beneficial and detrimental potentials of the internet. The suppression of some political views and the promotion of others are simply variations on advertisements that benefit discrete interests, but the commercial interests of Big Tech are always present in the mix. These advertising endeavors are ubiquitous, and tech oligopolies attempt to conceal them with such neutral-sounding pretenses as "fact-checking" or "guarding against misinformation." The ludicrousness of this is easily observed in the attempt to label as misinformation claims regarding wearing masks to "stop" the spread of COVID, a proposition that is so full of conflicting data, experience and analysis that no one can definitively say whether any claims on the topic are misinformation or not. Congress missed this very relevant possibility: that the public interest in the beneficial potential of the internet for public discourse would be corrupted by the insular interests of a few tech oligarchs, and that Section 230 might be used to further this corruption, regardless of any other benefits it may supply.

Secondly, Congress seemed to appreciate that the internet of 1996 was evolving, but seems not to have considered the possibility that because of this obvious fact, the Communications Decency Act might well be obsolete at the time it was enacted. Smarter people than anyone in Congress or the relevant administrative agencies were figuring out how to exploit the internet environment and use it to private advantage, and this environment included Congress's naivete regarding the possibility of such exploitation. Congress saw the down side of the internet as people looking at dirty pictures and plotting anti-social behavior, not as an object of manipulation by very limited and not particularly public-minded interests. In this sense, Congress was like an infectious disease specialist trying to keep up with the virulence of a rapidly-mutating pathogen that had the power to evolve out of any attempts to suppress it.

Courts cannot and should not attempt to remedy the Congressional deficiency by inventing a mechanism for adaptation to changing technological environments that Congress failed to foresee or adequately provide for. To do so merely exacerbates the condition of kritarchy, in which the least representative branch of government, and often the least competent, is left to fashion policies by mere dint of opportunity. This is no way to run a representative republic.

2. One of the perils of Big Tech is aided by popular acceptance of the dominant fallacy of twenty-first century America: "You don't need to think because we have done your thinking for you." We supposedly do not need to consider conflicting information regarding COVID because "fact-checkers" will do that for us, honest! We don't need to ponder the business deals of Joe Biden's family because some technocrat is ready to save us from the wrong conclusion, by eliminating the question altogether. The unalterable fact however is that this confidence game (because that is what it is) depends on the naivete and intellectual laziness of the American public to delegate their thinking to oligopolies who then have the luxury of seeing to their own interests. Here is the crucial point: The civic health of the United States in matters of public importance is not dependent on "transparency" or schemes of liability for whatever content ends up on a website. The best defense against the excess of Big Tech and the corruption of public discourse is skepticism. The power-mongers in America know this. When one reads of "gaslighting," the target of this phenomenon is skepticism and a healthy distrust of unexamined claims. The idea that claims of Russian collusion in the 2016 election should be accepted on the ipse dixit of the New York Times or Washington Post was likewise an attempt to downplay the importance of skepticism in political discourse. Likewise for the assurances of Twitter, Facebook, and Google that they are concerned about our gullibility, and thus will undertake to protect us from ideas that might lead to wrong-think. All such claims should be met with skepticism.

z9z99
on November 21, 2020 at 19:46:20 pm


Not to put too fine a point on it but:

"One of perils of Big Tech is aided by popular acceptance of the dominant fallacy of twenty-first century America: "You don't need to think because we have done your thinking for you." We supposedly do not need to consider conflicting information regarding COVID because "fact-checkers" will do that for us, honest!"

My own experience would indicate that this intellectual "ostrich-ism" was / is not unique to the 21st century, having had the pleasure of encountering it since the 1960s.
The difference now, in the 21st century, is that the Tech titans simply provide a patina of plausibility to the unjustified assertion that one ought not to think, by offering all manner of fact checking and alleged transparency.
The current situation is not dissimilar to the old notions and mechanics of party affiliation, "Let the Party Speak for me."
The major difference was that in the late 50's - 60's, the Party at least provided free hotdogs to us kids at the Labor Day / 4th July picnics they sponsored.
Some freely consumed the hotdogs but not the baloney!

gabe
on November 22, 2020 at 17:03:07 pm

Unless my English is defective, this article does not address what has become sec 230's effective authorization of censoring social-political speech.

Angelo M. Codevilla
on November 23, 2020 at 17:13:03 pm

Agree and that's my (prior) point re the need for some creative lawyering to bring private suits against the United States and Big Tech: Congress delegated to Big Tech the power to suppress speech, a power which per the constitution is prohibited to Congress and, thus, which cannot lawfully be delegated. Section 230 is unconstitutional. Otherwise, Congress through delegation can make indirectly lawful what Congress cannot do directly. Further, if the delegation is considered to be lawful, then its exercise by Big Tech so as to restrict free speech is, per Bivens, an action "under color of law" which violates federal civil rights laws and the constitution.
As I said, this area of law is "not my ken" but worth a look.

paladin
on November 23, 2020 at 11:39:17 am

1) Paladin: Careful on Bivens claim: You may get what you want.


It appears that the Ninth Circuit has applied Bivens in just such an expansive fashion.

2) Prof. Codevilla is correct, re: "...sec 230's effective authorization of censoring social-political speech."
While the congress may not have intended this, 230 effectively DOES authorize such censorious carping editorial behavior.
The essay alludes to such but does not specifically address either the problem or offer any viable solution.
I am not certain that there is one other than repeal.
As I suggested earlier, short of a further, and frankly unconstitutional expansion of "incorporation doctrine" to private entities, there is no means available to control the censors.
Or are we to once again impose a Fairness Doctrine upon the media? While this may be well deserved, it presents certain problems of its own.
What IS "fair'?
Who determines "fair"?
Dollars to donuts, FAIR will be determined by either some government appointed functionary (guess which party) or some algorithm. Oh wait, we are already there.
No, the only solution is repeal of the protections afforded our Tech Titans. Let the market and civil liability laws sort it out.

gabe
on November 24, 2020 at 06:37:12 am

Does not change my opinion. I hope we get what I want in any lawful way we can get it, which is harsh, punitive, confiscatory but legal recourse against the SOB's who swung the election to Biden by controlling the flow and content of public information.

paladin
on November 27, 2020 at 15:13:37 pm

Well, it would seem that The Trumpster agrees with us - REPEAL SECTION 230!


gabe
on November 27, 2020 at 15:23:44 pm

And here is additional evidence that Twitter (and others) are "publishers":


wherein we find a "queer, women of color, an Asian american" (self described, and no doubt beaming with pride and self absorption) intending to censor according to the precepts of "transformative justice" and "procedural justice". As evidence of her good intentions and the efficacy of these novel forms of justice, Ms Su cites the example of how colleges have employed "Transformative justice ... for its role in addressing sexual assault on college campuses..."
We all know how well that has worked out!

gabe

on November 21, 2020 at 16:36:02 pm

[…] National Interest Will Asian-Americans Trend Conservative? – Helen Raleigh at City Journal Resolving the Section 230 Crisis – Gregory M. Dickinson at Law & Liberty The Rise of e-Sports & Competitive Gaming […]
