The mantra of "follow the science" ignores the reality that no one set of facts can dictate any political result.
In today’s Manichean political world, coronavirus policy disputes often get portrayed as a conflict between scientific expertise on the one hand and invincible ignorance on the other. This is yet another variation on the riff we heard earlier regarding global warming, vaccinations, and more. While journalistically tidy, dividing the world into objective, dispassionate experts versus the anti-scientific horde not only ignores what motivates public skepticism but actually contributes to the difficulty of leveraging expertise to address policy problems. The irony is that less would be more: a little more humility regarding the domain of scientific expertise would enhance the policy authority of experts rather than detract from it.
Science is most ill-served by its ostensible friends who declaim its authority the loudest. Most of the public gets little exposure to science beyond a smattering in elementary school and high school, and introductory surveys in college. Here, however, particularly in the lower grades, science is often communicated as a fixed set of unquestionably authoritative facts, with scientists presented as members of an all-but-magical authoritative clerisy. Even when science is taught as a process of investigation and knowing, the process is explained as a formalistic recipe of steps to follow.
Yet as a process of knowing, the scientific enterprise is, in principle, both epistemologically democratic and epistemologically conservative. The insistence of science on “replicability” means that scientific conclusions are open to all. To be sure, this is often true only in principle. Specialized language often develops as a necessary shortcut, and, as a result, scientific writing can be quite technical. Further, empirical tests can be costly and difficult to replicate. Nonetheless, science is in fact not a form of secret knowledge; it is open to all willing and able to understand the language.
So, too, canonical scientific investigation is epistemologically conservative. Empirical results must not only be replicable, they must be demonstrated, as it were, beyond a reasonable doubt. This usually means statistical significance at the 95 percent confidence level. This canonical approach, however, requires access to data in sufficient quantities, and of sufficient quality, to control adequately for relevant independent variables and still generate statistically significant results. This is often a demanding standard to meet, both theoretically and empirically.
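How demanding that standard is can be sketched numerically. The following is a minimal illustration, using the textbook sample-size formula for a two-sided z-test; the effect size and noise figures are invented for illustration, and `required_sample_size` is a hypothetical helper, not a reference to any statistics package:

```python
import math
from statistics import NormalDist

def required_sample_size(effect, sigma, alpha=0.05, power=0.80):
    """Observations needed to detect `effect` against noise `sigma`
    at the conventional 95 percent significance level (alpha = 0.05),
    with the usual 80 percent power. Standard two-sided z-test formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # about 0.84 for 80% power
    return math.ceil(((z_alpha + z_beta) * sigma / effect) ** 2)

# A modest effect buried in noisy data demands a large sample:
print(required_sample_size(effect=0.2, sigma=1.0))  # 197 observations
# Halving the noise, or a larger effect, shrinks the requirement sharply:
print(required_sample_size(effect=0.5, sigma=1.0))  # 32 observations
```

The point of the sketch is the essay's point: small or subtle effects, of the kind most policy questions involve, require a great deal of good data before the canonical standard is met.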
Beyond the actual openness of science to democratic deliberation, however, very practical problems understandably arise regarding public recognition of expertise in modern policy debates. Public skepticism is not merely a matter of doubting the usefulness of the scientific method. While “science” is often presented as a self-executing discipline that describes a raw, objective world outside the realm of human judgment, it is nonetheless performed and communicated by humans. And here, as in other areas of life, expertise necessarily reflects human temptations and constraints.
One can fully grant the existence and usefulness of experts and expertise, yet also recognize whence skepticism can derive. The problem is this: while credentialing experts solves one “cheap talk” problem, it creates another, less recognized cheap talk problem. It is the latter from which skepticism derives.
Nobel Prize-winning economist Michael Spence famously accounted for the way costly investment in credentials—in expertise—can solve the problem of cheap talk. I provide an example below because the formal definition of cheap talk in game theory is more subtle than it sounds. “Cheap talk” is canonically defined as “communication between players that does not directly affect the payoffs of the game.” The problem exists when people with different capabilities or expertise send the same “message”—or “pool” on the same message. While cheap talk can often credibly transmit information between people, a problem exists when interests are directly adverse—as between, say, a buyer and a seller.
For example, in Spence’s original article, the central example is an employer who wants to hire high capability individuals rather than hire low capability individuals. The individuals themselves know whether they are high or low capability, but the employer does not. If the employer asks, “Are you a high capability or low capability person?” all of the applicants will say, “I am a high capability person” no matter what type they actually are.
The solution to this problem in Spence’s article is educational credentialing. That is, high-capability workers can invest in generating a “credential” through what we call “education.” The cost of obtaining this credential is lower for high-capability individuals than for low-capability individuals, so high-capability individuals will invest in the credential while low-capability individuals will not. Observing whether an applicant holds the credential thereby allows the employer credibly to know whether the applicant is a high- or low-capability applicant, and then hire appropriately. (Controversially, this account of the educational process does not require that “education” actually add to a person’s human capital; education can be purely a credentialing mechanism.)
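Spence’s separating logic can be sketched in a few lines. The wage and cost numbers below are invented; all that matters is their ordering, namely that the credential costs the high-capability type less than the wage premium it earns, and costs the low-capability type more:

```python
# Hypothetical payoffs for a minimal sketch of Spence's signaling model.
WAGE_WITH_CREDENTIAL = 100
WAGE_WITHOUT = 60
COST_OF_CREDENTIAL = {"high": 20, "low": 50}  # credential is cheaper for the high type

def chooses_credential(capability):
    """A worker invests in the credential iff the wage premium exceeds its cost."""
    premium = WAGE_WITH_CREDENTIAL - WAGE_WITHOUT  # 40
    return premium > COST_OF_CREDENTIAL[capability]

def employer_inference(has_credential):
    """In the separating equilibrium, the credential credibly reveals type."""
    return "high" if has_credential else "low"

for worker_type in ("high", "low"):
    invests = chooses_credential(worker_type)
    print(worker_type, "invests:", invests, "-> inferred:", employer_inference(invests))
```

Run it and the two types separate: the high type invests and is inferred to be high, the low type does not and is inferred to be low. No one needs to be asked the cheap-talk question “are you high capability?” at all.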
Real-life examples abound. Teachers, for example, know that every student who comes in to discuss a low grade on a test will say that he or she studied hard for the exam. Undoubtedly some students did, and performed poorly nonetheless. Undoubtedly, however, other students did not study hard, notwithstanding the message to the instructor that they did study hard. All students “pool” on the message, “I studied hard,” irrespective of whether they did or not.
This pooling creates a problem for the hard-working (but still poorly performing) student. Because the “I-studied-hard” message of the slacking students “pools” with the message of the non-slacking students, instructors cannot easily tell them apart. As a result, instructors discount the likely truthfulness of the “I-studied-hard” message of non-slacking students who are in fact telling the truth.
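The discounting the instructor performs is just Bayes’ rule, and a small sketch shows why pooling destroys the message’s information content. The 60 percent prior below is invented for illustration:

```python
def posterior_studied_hard(prior, p_claim_if_true=1.0, p_claim_if_false=1.0):
    """Bayes' rule: P(studied hard | claims to have studied hard).
    The claim probabilities default to 1.0, i.e., full pooling."""
    numerator = p_claim_if_true * prior
    denominator = numerator + p_claim_if_false * (1 - prior)
    return numerator / denominator

# When both types send the same message, it is pure cheap talk:
print(posterior_studied_hard(0.6))            # 0.6 (no update: posterior equals prior)
# If slackers made the claim only half the time, the message would inform:
print(posterior_studied_hard(0.6, 1.0, 0.5))  # 0.75
```

When everyone sends the message, hearing it moves the instructor’s belief not at all; the truthful hard workers are discounted exactly as the essay describes.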
So, too, with people claiming to be experts. If we ask almost anyone proffering an ostensibly expert opinion whether they actually know what they are talking about, the vast majority will say “yes.” For many that message will be accurate. For others, however, it will not be accurate. As in the Spence signaling model, one solution to this problem is to require that putative experts invest in a costly credential—usually a doctorate in modern times—to separate those who are truly expert in an area from those who are not.
This division into “expert” and “non-expert,” however, results in a less-recognized secondary cheap talk problem. When one holds the credential of “expert,” the temptation exists to claim the mantle of expertise even when one opines on matters outside one’s domain of expertise, or on a matter that does not really admit an expert answer.
The public often senses something isn’t exactly right, but given the simplistic dualism of “expert or non-expert,” the public often has difficulty articulating its skepticism without seeming to attack the idea of expertise itself. This invites the response that skeptics are “anti-science” when they are in fact responding to the possibility of overreach by experts as a group.
Here are two temptations that lead experts to speak beyond their expertise and so invite the non-expert public to discount their messages.
The first is the experts’ fear that if they communicate scientific results with all the modesty the scientific process properly demands, the public and policy makers will be insufficiently motivated to take the action the expert believes is necessary. That is, the expert believes it necessary to present the issue in bold blacks and whites, rather than in more accurate shades of gray, in order to gin up what the expert believes is a necessary policy response.
While perhaps an understandable temptation, this is simply a higher-order version of the fable of “The Boy Who Cried Wolf.” If the habitual message is “we are on the very precipice of disaster,” and yet disaster does not occur, the public learns to discount the message despite the fact that the expert is a real expert and there is likely some policy problem that needs to be addressed.
Experts must realize that drawing policy problems in striking black-and-white clarity, a clarity their evidence does not support, does not in fact help their immediate cause. Rather, it invites broader discounting of expert messages in other policy domains as well. Further, given the dualism of “expert” and “non-expert,” overreach by some experts nonetheless tarnishes the reputation of even the more careful expert.
A related problem is that journalists often consider it their responsibility to present issues clearly and strikingly to their readers. Eliminating weasel words is an all-but-required discipline for editorial pages. This in essence requires that experts punt on acknowledging the conditionality and hedging inherent in accurately communicating scientific results.
Generalizing beyond the early data carries costs in both the short term and the long term, not because of ignorant restiveness in the American public, but because the experts themselves trained Americans to be overly skeptical that the experts actually understood what was going on. This is, for example, the problem with the “masking” message in the pandemic. Communication of the black-and-white “wear a mask” message today contrasts sharply with the black-and-white “do not wear a mask” message of earlier in the pandemic. The irony is that a more tepid message regarding not wearing masks earlier in the pandemic could very well have resulted in greater openness among the public to masking mandates today.
The second temptation, perhaps more significant for policy debates, is that experts often take advantage of their designation as “experts” to extend their authority beyond the science. In doing so they attempt to claim an authority for their personal value judgments that those judgments do not warrant.
Epidemiology, for example, is without doubt a real expertise. A part of this expertise undoubtedly includes recommendations on what may tend to mitigate the risk of infection. But just as there is a range of behavior that may mitigate the risk of infection, so, too, there is a range of costs to different mitigation strategies. While expertise informs the public of the choices available, the actual choice of the tradeoff between expected benefits and expected costs is a value judgment, not a matter of expertise. While experts can certainly hold their own opinions as to how they would balance different policy benefits and costs, those opinions do not derive from their expertise. Indeed, given that experts often choose domains in which they are passionately interested, experts are often tempted to overemphasize the significance of their own domain of expertise relative to other equally significant dimensions of life.
Much of the American public’s current skepticism toward expert opinion does not derive from a belief that these experts are not truly experts. Rather, it derives from the belief that experts are abusing the deference their expertise is due. The skepticism arises from the suspicion that experts are dressing up fully debatable value judgments as authoritative expert opinions.
Experts exist, and expert opinion merits due deference. But expert overreach—either on their own account, or as a result of allowing the media to paint expert insights in bolder colors than the science actually merits—carries with it its own cost. To be sure, there may be a portion of the American public who resist any recognition of expertise. But experts must also recognize that their own actions, and those of their ostensible friends, have themselves invited much of the skepticism now being manifested by the American public toward expert opinion.