
A Little Knowledge Is a Dangerous Thing

Many years ago, a microeconomics instructor of mine made what was at that time the unexceptional observation that more information is always better than less information, that “more information weakly dominates less information.” The “weakly” bit meant that, if the information were relevant to a choice, knowing it would improve one’s position. And if the information were not relevant to a choice, it could simply be ignored, with the choice remaining what it would have been without the added bit of information. Thus, more information never makes outcomes worse, and sometimes improves them. So more information is always to be preferred to less information.
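In textbook decision-theoretic terms (my gloss, not the professor’s or Sunstein’s), the claim is the familiar result that a costless signal can never lower an expected-utility maximizer’s prospects:

\[
\mathbb{E}_{s}\!\left[\,\max_{a}\ \mathbb{E}\big[u(a,\theta)\mid s\big]\right] \;\ge\; \max_{a}\ \mathbb{E}\big[u(a,\theta)\big],
\]

where \(\theta\) is the unknown state, \(s\) the free signal, and \(a\) the action. The inequality holds simply because the chooser can always ignore \(s\) and take the uninformed optimum. The question the rest of this essay presses is what happens once the signal itself carries costs, emotional and otherwise.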

Yet the claim’s generality did not seem quite right. There was, first, the issue of whether the value of the learned information was greater than the cost of acquiring it in the first place. The possibility of “rational ignorance” is well known and is not problematic. Our everyday lives provide any number of examples in which we choose to remain ignorant of something because the cost of acquiring the information exceeds its expected benefit. Yet even if information were free, it didn’t seem as though more information would always weakly dominate less information. The best I could do on the spur of the moment was this counterexample: a six-year-old who learns the information “Santa Claus does not exist” is not left weakly better off than he or she was before learning it. The professor smirked, and continued on to another topic.

The 2019 film The Farewell presents a more substantive, if contestable, counterexample. A Chinese family, with siblings and grandchildren scattered throughout the United States and Japan, learns that the family matriarch in China has lung cancer and has only a few months to live. The family chooses not to tell the grandmother of her diagnosis, opting instead, as one of her sons puts it, to “bear her burden for her.” The Americanized granddaughter resists the sentiment, arguing that it is not right to keep her grandmother in the dark about something so important, although she grumpily complies with the family’s corporate decision. The film presents several scenes in which family members discuss their contrasting cultural attitudes: informational disclosure as a means of promoting a person’s agency or autonomy versus corporate and paternalistic attitudes toward information revelation.

Cass Sunstein presents a surprisingly skeptical view of “more-information-is-always-better” claims in his new book, Too Much Information. Pertinent to the two cases above, for example, Sunstein, a law professor and head of the White House Office of Information and Regulatory Affairs for several years during the Obama administration, argues that the affective or emotional impact of information should be a fundamental consideration in crafting government disclosure policies. He writes that one of the “primary goals” of his analysis in the book “is to offer a plea for focusing intensely on the emotional effects of receiving information” in judging whether disclosure in fact leaves people “better or worse” off.

More broadly, however, against the view that more information always weakly dominates less information, Sunstein discusses the many different ways in which disclosing more information does not necessarily leave people better off.

Inefficient Disclosure Policies

Sunstein targets government disclosure policies more than academic analysis. (Indeed, few economists today would make the unqualified statement of my micro professor.) In particular, Sunstein takes aim at basing government disclosure policies on a non-utilitarian “right to know.” He argues that the right-to-know rubric leads policymakers to adopt disclosure policies when disclosure is at best useless and at worst downright counterproductive.

He writes, “The primary question in this book is simple: When should government require companies, employers, hospitals, and others to disclose information?” His answer, he writes, is simple, although perhaps deceptively simple: government should require disclosure “when information would significantly improve people’s lives.” The surprise is that the book focuses mainly on the argument that judging when disclosure “improves people’s lives” can be so complicated that government policymakers often should not attempt it except under carefully identified conditions.

To be sure, Sunstein nods at many cases in which disclosure policies provide information that improves people’s choices. Yet the surprising focus of his book is on the many cases in which more information simply wastes government and private time and resources, or actually worsens people’s lives.

Information can . . . improve people’s lives if it makes them happier. Unfortunately, some information does not improve people’s lives in any way. It does not improve their decisions, and it does not make them happier. Sometimes it is useless. Sometimes it makes them miserable. Sometimes it makes their decisions worse.

Examples abound in the book of how more information can sometimes worsen people’s lives. On the lighter side, Sunstein discusses a response to a policy he implemented while working for the White House that, among other things, would require the posting of nutritional information for popcorn in movie theaters. He was taken aback when a friend sent him an email with the subject line, “Sunstein Ruins Popcorn.” More seriously, Sunstein discusses false inferences that consumers sometimes draw from disclosures, resulting in worse decisions. For example, when a disclosure indicates that a food is a genetically modified organism (GMO), consumers infer that it is less safe than non-modified food. Yet the disclosure is intended for informational purposes only, and is not intended to communicate danger. Consumers may avoid purchasing a product they would otherwise prefer, perhaps buying a more expensive alternative, because they misperceive the purpose of the disclosure. The welfare losses from such misperceptions can be huge in the aggregate. So, too, the costs imposed on people in the process of the government acquiring and then disclosing information sometimes far exceed the minimal benefits derived from disclosure.

At the level of policymaking, Sunstein discusses many disclosures that work, such as warning labels on cigarettes. But he dwells on understanding the reasons in cases in which disclosure either didn’t seem to work or produced unintended consequences, such as those discussed above.

Sunstein’s Skepticism toward Disclosure Policies

Early in the book Sunstein writes that the volume’s purpose is to provide a framework “to clarify not only when mandatory disclosure is a good idea, but the form that mandatory disclosure should take.” Yet it turns out that he actually provides little leverage on those questions, aside from the broad point that more information does not always weakly dominate less information.

For example, after criticizing “willingness-to-pay” analysis, Sunstein punts on offering any policy framework for disclosures, arguing instead that government agencies need to do more research before imposing disclosure requirements:

In the future, it would be far better for agencies to make progress in answering difficult questions about the actual effects of information on people’s lives. Those effects might be strongly positive or strongly negative. The next generation of work on disclosure requirements—and regulatory benefits in general—should make it a priority to produce those answers.

I don’t disagree with Sunstein’s conclusion—and Sunstein should be commended for his principled ambivalence. Yet the conclusion that disclosure requirements might produce “strongly positive or strongly negative” effects on people’s lives does not provide the promised analytical leverage that would “clarify . . . when mandatory disclosure is a good idea.”

Similarly, at the end of the very next chapter, Sunstein also concludes his discussion with a call for additional study:

Further research is needed to gain a better understanding of when, why, and how disclosure requirements have intended or unintended consequences, as well as how policies can be improved. But one thing is clear: psychology changes everything.

Here, again, Sunstein’s conclusions counsel skepticism regarding disclosure requirements, except when drafted under very specific circumstances. No framework for policy here, either.

Sunstein further develops his skeptical theme in a chapter devoted to the costs of what he calls “sludge,” the administrative burden imposed by government requirements that people provide it with information. Sludge imposes “serious costs in terms of time, frustration, money, humiliation, and sometimes even health.” Sometimes the information is useful, but Sunstein is skeptical. He argues that the government often imposes informational requirements without considering whether the benefit of acquiring the information is worth the cost. He recommends that “in the future, it should be a high priority for deregulation and deregulators” to acquire information on whether the cost of “sludge” is worth the benefits. He again concludes skeptically, opining that “in many cases . . . acquisition of the relevant information will demonstrate that sludge is not worth the candle.”

Sunstein’s broad doubts and questions regarding mandatory disclosure policies are important in themselves. They are doubly notable, however, given his stature among the Democratic intellectual and governing elite. Indeed, the book reads almost as though Sunstein began with one hypothesis in mind, namely that he would develop a framework for crafting sensible government disclosure policies going forward, but became increasingly skeptical of his initial project as he worked through the research.

It merits stress that he does not deliver a broadside against any and all disclosure requirements. Yet he pointedly opposes broadly aimed disclosure requirements, preferring instead that they be drafted on a carefully identified, case-by-case basis.

Curious Textual Choices

Beyond his substantive points, there are several curious omissions of technical vocabulary that would have aided Sunstein’s analysis.

First, unless I missed a passing use of it, the phrase “rational ignorance” never appears in a book in which the concept plays a central role. It’s an odd omission, not least because the term is well known today. More importantly, the label is self-explanatory even to non-academics, and its use would have provided a helpful means of organizing and framing one of the fundamental concepts Sunstein deploys in the book.

Similarly, there are well-known alternative specifications of people’s preferences that would seemingly have helped explain some of the behavior that puzzles Sunstein. For example, Sunstein devotes an entire chapter to a puzzle about people’s behavior toward Facebook: people report being happier when they quit Facebook, yet they continue to use it. So, too, people who quit Facebook and report being happier as a result nonetheless also report that they want to keep using it.

The puzzle disappears, however, if we hypothesize that people are engaging in mini-maxing behavior. That is, people make decisions based on criteria with simple informational requirements, such as minimizing the maximum loss they face. Most people in cases like this are not attentive to finely changing probabilities or to less-than-maximum losses. For Facebook, people fear that they will miss out on something big if they quit. That mini-maxing fear induces them to stay on Facebook despite the non-maximum losses they incur by remaining on the social medium. The oddity of this omission is that at numerous other points in the book, Sunstein expressly invokes people’s use of “shortcuts” or “heuristics,” approaches that include mini-maxing, as explanations for otherwise puzzling behavior.
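To make the idea concrete, here is a toy sketch of a mini-maxing chooser. The loss figures are invented purely for illustration; nothing here comes from Sunstein’s book or from actual data on Facebook use.

```python
# Toy illustration of a mini-max (minimize-the-maximum-loss) decision rule.
# The loss figures below are invented for illustration only.

# The worst-case loss the chooser associates with each option.
worst_case_loss = {
    "stay on Facebook": 2,   # ongoing wasted time and mild unhappiness
    "quit Facebook": 10,     # the feared chance of missing out on something big
}

# A mini-maxer ignores average outcomes and expected values entirely,
# and simply picks the option whose worst case is least bad.
choice = min(worst_case_loss, key=worst_case_loss.get)
print(choice)  # -> stay on Facebook
```

On this rule the chooser stays on Facebook even though, on average, quitting might leave them happier, which is exactly the pattern Sunstein finds puzzling.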

Finally, Sunstein could have usefully resurrected the old distinction between “risk” and “uncertainty.” “Risk” applies to outcomes governed by a generally known and well-behaved probability distribution (a roulette wheel, or the weather), while “uncertainty” applies to outcomes whose underlying probability distribution is not known or not well behaved (the proverbial “black swan” event). Again, Sunstein invokes the underlying concept, but could have usefully drawn on the language to help organize parts of his argument.

These terminological quibbles aside, Sunstein’s book provides a host of reasons to approach disclosure requirements skeptically. That does not mean opposing such policies across the board. Rather, Sunstein seemingly would limit the crafting of disclosure policies to carefully identified, empirically rich cases. This would seem to be an abundantly sensible approach to government disclosure policies.