Softly Regulating Social Media Platforms

One of the difficult issues of our day is determining the appropriate regulatory response to social media platforms—such as Facebook, YouTube, and Twitter—that have a dominant market position. Is it consistent with classical liberal principles to regulate those platforms? Even if it would be consistent with classical liberalism, would it be a good idea?

Another question is what type of regulation would make sense. Perhaps certain less intrusive forms of regulation would be desirable, whereas more restrictive forms would be problematic.

In this post, I discuss some possible types of regulation that might be categorized as soft regulation—as regulation that promotes certain goals by attempting to minimize (but not eliminate) coercion. I am not necessarily recommending these proposals, just putting them on the table.

Here are some proposals, ranked from least to most coercive.

1. Simple Transparency. Each social media platform is required to announce its terms of service, including whether or not it will discriminate on the basis of political ideology. The platform is not prohibited from discriminating, but if it does discriminate, it must say so and explain how.

If the platform announces that it does not discriminate, it must also announce how that policy will be enforced, including who will be making these judgments, whether these enforcers will identify the specific actions or writings that violated the terms of service (and explain how they did so), and whether any appeal options exist.

2. Requirements to Claim Nondiscrimination. Under this proposal, a social media platform that announces it does not discriminate on the basis of political ideology must actually adopt certain policies. It must employ reliable mechanisms for identifying violations of its terms of service in a politically neutral manner. For example, relying solely on the Southern Poverty Law Center, a left-wing group, to identify inappropriate content would not be a reliable mechanism. It must also provide a reliable appeals mechanism to ensure that political bias is not being employed. To be clear, under this proposal a platform is not prohibited from discriminating based on political ideology. But if it discriminates, it must plainly say it is doing so.

3. Requirements of All Social Media Platforms that Enjoy Communications Decency Act Immunity. Under this proposal, all social media platforms would be required to institute a policy of nondiscrimination—and to follow the requirements under number 2 above—in order to enjoy Communications Decency Act immunity. Under Section 230 of the Act, social media platforms are not liable for defamatory statements made by users of the platform, since the platforms are not considered to be publishers of the users’ statements. But one might restrict this “privilege” if social media platforms are actually intervening so as to discriminate on the basis of political viewpoint.

These various proposals attempt to minimize the level of coercion to a certain degree. The first proposal merely requires disclosure as to policies for political discrimination. The second requires certain institutions to be used by those platforms that claim not to discriminate. And the third actually requires nondiscrimination for platforms that receive the “privilege” of immunity.

I am not sure whether any of these proposals are desirable overall. But unless the social media platforms were willing to state openly that they discriminated based on politics, these proposals would represent a significant change in the way the platforms behave. And under the third proposal, the reforms would be essentially mandatory (and therefore not all that soft), given the great value of the privilege of immunity.