Making Financial Regulation Antifragile

Nassim Nicholas Taleb, best known for his theory of the "black swan," is back with a new book, Antifragile: Things That Gain from Disorder. A sprawling, somewhat disorderly book (although it is far from clear that the book gains from its own disorder), Taleb's latest entry is also provocative and insightful, holding the potential to provide a new approach to many social and economic issues, particularly issues of finance and financial regulation.

Taleb’s central contribution in this book is to introduce a new analytical concept into our understanding of the world—antifragility—and to explore the way in which becoming aware of this analytical concept changes our understanding of systems, not just social and economic, but even physical (or so he believes). While he stretches the concept well beyond the breaking point in places, these excesses should not detract from the importance of the central insight.

Taleb argues that traditionally we have had only two ways of thinking about systems, classifying them as either "fragile" or "robust." "Fragility" is the property of being easily broken and collapsing under stress. For Taleb's purposes, fragility is most important in that it describes things that do not like volatility, and as such do not like randomness, uncertainty, disorder, or errors. For example, if you have a crate full of delicate china, you could be exceedingly careful with it for 23 hours and 59 minutes of the day. But if you drop it once, i.e., expose it to a single shock, the entire crate will shatter. Thus, even though the average handling of the china is very careful, that average is largely irrelevant if there is one volatile event that it cannot adapt to or resist.

"Robustness" is the quality of being largely indifferent to randomness or extremes. For example, if I am carrying a box full of plastic plates instead of china, the contents of the box will be largely unaffected by how it is treated. Carefully, roughly, or somewhere in between, it is all the same to plastic plates.

Antifragility, by contrast, is the property of a system that gains from disorder and volatility—i.e., exposure to stresses improves the operation of the system and makes it stronger. In other words, there is a sort of positive feedback loop from extreme events that makes the system stronger and more resistant to future shocks in the long run. This third category, antifragility, is the one Taleb says has been overlooked.

Underneath the concept of antifragility lies an apparent paradox—for a system to be more antifragile (and thus more resistant to failure), it will often be the case that particular elements within the system must be allowed to fail. Failure of constituent elements is important for two reasons. First, it weeds out the "weak" links in the system, replacing them with stronger constituent elements. Second, the failure of some elements of the system provides information feedback loops to other elements of the system as to what works and what does not.

For Taleb, what "works" appears to be defined largely by the survival value of certain constituent elements, not some external measure of "efficiency." For example, Taleb observes that antifragile systems are often rich in redundancy, such as the human body, which has two kidneys (though a person can survive with just one) and a variety of neural and other networks and systems that appear to be redundant, and thus apparently wasteful and inefficient. But redundancy is useful in the face of extreme pressures. As Taleb puts it (p. 45), "Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens—usually." (Note that although Taleb uses the idea of two kidneys as an example of the relationship between redundancy and antifragility, it seems that this particular redundancy actually should be classified as robustness, not antifragility—having one kidney fail, and thus increasing the stress on the other kidney, does not actually seem to increase your overall fitness; it leaves it marginally unaffected.)

The primary analytical value of the different categories is that fragile systems (medical, economic, social planning) are those "in which the benefits are small and visible, and the side effects potentially severe and invisible." (p. 10). Antifragile systems, by contrast, are those in which the costs are small and visible, while the side effects are contained or even welcomed. In terms of probability, fragile systems have long left-tail distributions but small right tails: there is little upside risk but much downside risk. Antifragile systems have long right-tail distributions but small left tails. (Robust systems are concentrated around the mean, with minimal right or left tails.) Taleb also notes that a focus on preventing small ongoing risks tends to build an "interventionist" bias into systems, one that seems to reward constant tinkering while ignoring the larger risks that might be building up under the surface.
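This asymmetry can be made concrete with a quick simulation (a toy illustration of my own, not from the book; the payoff sizes and probabilities are invented): a fragile strategy collects small, visible gains while bearing a rare, severe loss, while an antifragile one pays small, visible costs in exchange for a rare, large gain. The two have similar averages but opposite tail exposure.

```python
import random

random.seed(0)

def fragile_payoff():
    # Small, visible gains most of the time; rare, severe loss (long left tail).
    return -100.0 if random.random() < 0.01 else 1.0

def antifragile_payoff():
    # Small, visible costs most of the time; rare, large gain (long right tail).
    return 100.0 if random.random() < 0.01 else -1.0

fragile = [fragile_payoff() for _ in range(100_000)]
anti = [antifragile_payoff() for _ in range(100_000)]

# Both have similar (near-zero) means, but opposite extremes.
print(f"fragile:     mean={sum(fragile)/len(fragile):+.3f}  "
      f"worst={min(fragile):+.1f}  best={max(fragile):+.1f}")
print(f"antifragile: mean={sum(anti)/len(anti):+.3f}  "
      f"worst={min(anti):+.1f}  best={max(anti):+.1f}")
```

Looking at the average payoff alone (the "careful handling" of the china crate), the two strategies are indistinguishable; only the tails reveal which one is exposed to ruin.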

The hard and unintuitive part of antifragility is that for a system to be more antifragile (and thus more resistant to failure) it is necessary for certain elements of the system to be subject to failure, in order to weed out the weaker elements. Antifragility “is largely about the errors of others—the antifragility of some comes necessarily at the expense of the fragility of others. In a system, the sacrifices of some units—fragile units, that is, or people—are often necessary for the well-being of other units or the whole. The fragility of every startup is necessary for the economy to be antifragile, and that’s what makes, among other things, entrepreneurship work: the fragility of individual entrepreneurs and their necessarily high failure rate.” (p. 65).

He identifies the restaurant industry as a useful example of an antifragile system. Why is it, Taleb asks, that restaurants in the United States are generally of high quality, in terms of food, service, and ambience? The reason, he says, is that the overwhelming majority of restaurants fail—the restaurant system is antifragile because individual restaurants are fragile. As a result of this ruthless process of variation and selection, poorer restaurants are weeded out of the system, making room for higher-quality ones. "Restaurants are fragile; they compete with each other, but the collective of local restaurants is antifragile for that very reason. Had restaurants been individually robust, hence immortal, the overall business would be either stagnant or weak, and would deliver nothing better than cafeteria food—and I mean Soviet-style cafeteria food. Further, it would be marred with systemic shortages, with, once in a while, a complete crisis and government bailout. All that quality, stability, and reliability are owed to the fragility of the restaurant itself."

So how might Taleb's insights on antifragility be used to improve, among other things, government policy-making? In particular, what about the recurring disasters of the banking system and the apparent inability of policy-makers to do anything about them? Indeed, few believe that Congress's 2,400 pages of legislation in Dodd-Frank and the hundreds of regulations implementing it will prevent the "next" financial crisis; they will merely affect its shape. Is there a better way?

Although Taleb mentions financial regulation a few times in the book, he doesn't develop a model of what antifragile financial regulation might look like. His one proposal (not developed at all) is that banks considered too big to fail would be permitted to pay their managers no more than a senior government civil servant earns. But his insights suggest that when it comes to financial regulation, we should reorient our views in two ways: first, with respect to our attitudes regarding the prevention of bank failures, and second, with respect to the resiliency and adaptability of the financial system to bank failures after the fact.

Roughly put, the animating myth of the past century is that with scientific management and enough data it is possible to prevent bank failures and to "manage" the economy so as to forestall unexpected shocks. But as Taleb accurately observes, simply having more information and data has not, and will not by itself, do anything to staunch bank failures. We had more data about the financial system in the period preceding the 2008 financial panic than at any time in human history, yet our Washington and Wall Street solons were still blindsided; indeed, one suspects their blindness was in part hubris born of having access to so much data. Still, the pretense of Dodd-Frank is that somehow getting "more information" to a centralized panel of wise regulators (the Financial Stability Oversight Council) will enable them to take the correct actions to stave off the next crisis. The problem that caused the 2008 panic was not a lack of information, which was plentiful, but a lack of the sense to interpret the information—and, in particular, the reality that at the time there was no single correct way to interpret it. Yet the myth persists that with enough information and wise regulation we can prevent bank failures.

Taleb’s analysis suggests a different approach, however: rather than focusing on trying to prevent individual banks from failing (i.e., making individual banks less fragile) perhaps instead we should think about how to make the financial system as a whole less fragile, and ideally, antifragile. In other words, perhaps the key to preventing recurring failures of the financial system is by permitting more frequent failures of individual banks (as with the restaurant business). More bank failures may mean fewer systemic failures.

Reorienting our focus from the stability of individual banks to the stability of the financial system would have important implications for how we think about regulation and systemic risk. The implicit assumption that the goal should be to minimize individual bank failures implies a variety of regulatory recommendations. Consider, for example, the regulatory overlay of the Basel II financial regulations, which purport to provide a mathematically rigorous system for measuring financial risk and thus calculating capital requirements. Under Basel's logic, there is essentially a "correct" answer to the question of the optimal bank balance sheet for regulatory purposes—indeed, it is implied that in the ideal world, all banks would have essentially the same balance sheet and asset compositions, and that if they did so they should be immune to failure.

But Taleb's insight turns this approach on its head. For the downside of a centralized approach such as Basel II is that in purporting to describe an ideal balance sheet it also tends to increase the homogenization of bank balance sheets. Thus, if there are any errors in the way in which capital reserves are calculated (for example, if Greek sovereign debt is really not AAA quality and thus its risk is underpriced relative to other AAA-rated bonds), then the error tends to be replicated among other banks in the system and the initial miscalculation is amplified. Moreover, making bank balance sheets more homogeneous also makes them more vulnerable to a common shock of the type that marked the onset of the 2008 financial crisis, when uncertainties about the value of mortgage securities swept through all of the major banks (as Peter Wallison has argued). Regulations such as Basel, therefore, tend to increase the collinearity of the risk among different banks, increasing the systemic exposure to the same negative common shocks.
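The amplification mechanism can be sketched numerically (a toy example with invented numbers, not anything drawn from the Basel rules themselves): give every bank the same "ideal" balance sheet, apply a single common shock to one mispriced asset class, and every bank breaches its capital buffer at once; with heterogeneous balance sheets, only the banks that happened to be overexposed fail.

```python
import random

random.seed(1)

N_BANKS, CAPITAL = 10, 0.08  # capital buffer as a fraction of assets

# A common shock: the supposedly safe sovereign bond loses 20% of its value.
shock = {"sovereign": -0.20, "mortgages": 0.0, "loans": 0.0}

def failures(portfolios):
    """Count banks whose losses from the shock exceed their capital buffer."""
    return sum(1 for p in portfolios
               if -sum(w * shock[a] for a, w in p.items()) > CAPITAL)

# Homogeneous system: every bank holds the regulator's "ideal" balance sheet.
ideal = {"sovereign": 0.5, "mortgages": 0.3, "loans": 0.2}
homogeneous = [dict(ideal) for _ in range(N_BANKS)]

# Heterogeneous system: each bank chooses its own asset weights.
def random_weights():
    raw = [random.random() for _ in range(3)]
    total = sum(raw)
    return dict(zip(["sovereign", "mortgages", "loans"],
                    [r / total for r in raw]))

heterogeneous = [random_weights() for _ in range(N_BANKS)]

print("homogeneous failures: ", failures(homogeneous))
print("heterogeneous failures:", failures(heterogeneous))
```

In the homogeneous case the mispricing is replicated on every balance sheet, so one shock takes down all ten banks simultaneously; in the heterogeneous case the same shock fells only the subset that chose heavy sovereign exposure, leaving the rest standing.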

Other regulatory innovations that might make sense when focusing on the stability of individual banks might also increase the fragility of the system as a whole. For example, the regulatory requirement of "mark to market" might further exacerbate the disruption caused by homogeneity-inducing rules such as Basel. When one bank has to dispose of its assets at fire-sale prices, this reduces the current market price of those assets. If other banks have to adjust their valuations of the same assets and, importantly, they hold many of the same assets themselves, then mark to market tends to amplify the size of the initial financial shock, essentially transmitting the disruption to other banks in the financial system.
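A back-of-the-envelope cascade shows how this feedback loop works (the 5% price impact per liquidation and the banks' balance sheets are invented for illustration): once every bank must mark the same asset to the same falling price, each forced liquidation pushes more banks underwater, which forces more liquidations.

```python
# Banks hold units of the same asset; equity = units * price - debt.
# A forced sale depresses the market price; mark-to-market then forces
# every other bank to write down its holdings, triggering further sales.

PRICE_IMPACT = 0.05  # each forced liquidation knocks 5% off the price

def fire_sale_cascade(banks, price):
    """banks: list of (units_held, debt). Returns failures per round and final price."""
    alive = list(banks)
    rounds = []
    while True:
        # Mark every balance sheet to the current market price.
        failed = [b for b in alive if b[0] * price - b[1] <= 0]
        if not failed:
            break
        rounds.append(len(failed))
        alive = [b for b in alive if b[0] * price - b[1] > 0]
        price *= (1 - PRICE_IMPACT) ** len(failed)  # fire sales depress the price
    return rounds, price

# Five banks, identical holdings, progressively thicker equity cushions.
banks = [(100, debt) for debt in (99, 96, 93, 90, 80)]

# A mere 2% initial price shock starts the cascade...
rounds, final_price = fire_sale_cascade(banks, price=0.98)

# ...and even the bank with a 20% cushion is eventually dragged under.
print("failures per round:", rounds)
print(f"final price: {final_price:.3f}")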

An antifragile system, by contrast, would place less focus on preventing individual banks from failing, in order to maximize the stability of the system as a whole. The failures of particular banks can make the system less fragile by weeding out ineptly managed banks and sending market feedback signals to the survivors. Thus, for example, in contrast to the homogenizing approach of the Basel regime, regulators might consider permitting more heterogeneous capital structures, so that different banks might hold different compositions of assets, different levels of reserves, and the like. This means accepting, on an ongoing basis, more individual bank failures, as the flame-outs of certain banks provide lessons for others. But these failures of particular banks would send feedback signals prompting other banks to adjust over time, rather than waiting for catastrophic systemic shocks. Admittedly, the presence of deposit insurance weakens these signals by insulating one large group of creditors from monitoring responsibilities. In fact, applying an analysis similar to that here, Lawrence White has held up the historical systems of "free banking" as exemplifying the characteristics of an antifragile banking system.

An antifragile approach to financial regulation would also have implications for decision making within banks. For example, during the decades running up to the 2008 crisis, the modeling of risk within banks became increasingly mathematical and technical, to the point that it is often remarked that bank heads had no comprehension of the relevant investing models. PhD mathematicians commanded princely sums from Wall Street to construct these models, which produced intricate and specific calculations of the expected return on various investments.

But there was a problem with these models—they were extremely fragile. While they could produce predictions of exceeding precision, the predictions were only as good as the underlying data and assumptions that went into them. Thus, for example, quants produced precise valuations of mortgage securities based on decades of housing data. Yet it has been reported that because there had not been a sustained drop in nationwide housing values since the Great Depression, some of the models could not even accept the assumption that there might be a dramatic, multi-period drop in housing values nationwide. Thus, while the models could produce very specific outputs, they relied heavily on the assumptions that went into them, and small initial errors in those assumptions were hugely magnified by the models themselves. They really were elaborate, intricate castles built in the air by their well-paid architects.
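The fragility of such models to their assumptions is easy to demonstrate (a deliberately crude sketch of my own; the default rates, tranche attachment point, and slump probability are all invented): a senior tranche priced on the assumption that a nationwide housing slump is impossible looks essentially riskless, while admitting even a small probability of a slump changes its expected loss dramatically.

```python
import random

random.seed(42)

N_LOANS, ATTACH = 100, 0.10  # senior tranche absorbs losses above 10%

def tranche_loss(default_prob):
    """Loss to the senior tranche for one simulated pool of mortgages."""
    defaults = sum(random.random() < default_prob for _ in range(N_LOANS))
    return max(0.0, defaults / N_LOANS - ATTACH)

def expected_senior_loss(p_slump, trials=10_000):
    """Baseline 2% default rate; in a nationwide slump, defaults jump to 25%."""
    total = 0.0
    for _ in range(trials):
        p = 0.25 if random.random() < p_slump else 0.02
        total += tranche_loss(p)
    return total / trials

# The boom-era model: a nationwide slump simply cannot happen.
print(f"P(slump)=0%: expected senior loss = {expected_senior_loss(0.00):.4f}")
# The same model with a small chance of a slump admitted.
print(f"P(slump)=3%: expected senior loss = {expected_senior_loss(0.03):.4f}")
```

With independent 2% defaults, eleven or more defaults out of a hundred is a practical impossibility, so the model reports the senior tranche as safe to four decimal places; one small tweak to a single assumption destroys that conclusion, which is precisely the sensitivity the paragraph above describes.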

An antifragile approach would find a place for experience and common sense alongside financial models. Experience is antifragile (it avoids billion-dollar catastrophes even if it misses pennies and nickels) because it is constantly being refined by testing pressures, and those with the greatest adaptability will be those who tend to survive. Highly technical mathematical models that rely on exploiting small pricing and arbitrage errors are the quintessential example of a fragile system, providing small returns at the expense of exposure to massive downside risk. As all the firms in the market herd toward exploiting the same small arbitrage opportunities, they increase the fragility of the overall system. As White observes, "The alternative to 'overoptimization' in banking is the practice of traditional rules of thumb or heuristics that have stood the test of time." Yet one can only imagine the scene inside a Bank of America conference room if a veteran banker had tried to question the quants based only on his sense that their models didn't seem to capture the whole picture.

A second insight implied by Taleb's analysis would be to shift our regulatory framework away from a primary focus on trying to prevent bank failures toward a greater emphasis on the resiliency and adaptability of the system in the face of inevitable bank failures. This approach would aim at keeping bank failures from rippling through the system. Taleb describes this as shifting the focus from predicting the probability of failure in systems to a focus on the exposure to failure. He uses the example of the failure of the Fukushima nuclear reactor, which has led to a reexamination of reactor design: building reactors smaller and embedding them in the ground reduces the damage if one of them fails (which one suspects is likely to happen again in the future).

Building a financial system more adaptable to failure of its constituent parts points the observer to the value of redundancy. Consider the rationale for treating certain banks as being “too big to fail,” which is that certain banks are so “interconnected” with others in the system that they cannot be permitted to fail because of the ripple effect that their failure will have on other banks.

But while the idea of interconnectedness is theoretically plausible (albeit empirically unproven), it does not follow that certain banks must be too big to fail. In light of the inefficiencies and externalities imposed on the economy by TBTF banks, one must question whether the efficiency gains of having TBTF banks are sufficiently large to outweigh these obvious costs. To date the empirical evidence on that question is somewhat thin and unsettled. But that debate misses a central point—it is not inevitable that banks need be so interconnected. Counterparties can take self-help measures to protect themselves (and, by implication, taxpayers) by diversifying their portfolios among different banks. While the "fragilista" (Taleb's term) architect of the first-best world of financial regulation might bemoan the theoretical inefficiencies of this sort of self-help, by making the long-term system less fragile it may in the end be less expensive. The combination of small efficiency gains and huge catastrophic losses created by TBTF banks would seem to be the very exemplar of the fragile systems that Taleb criticizes. Retailers and manufacturers routinely source inputs from more than one supplier precisely to reduce their exposure to a breakdown by any one of them—although less efficient than relying on a single supplier, this redundancy makes sense as insurance against catastrophe.

While there is thus much to like in Taleb's book, from this reader's perspective there are a few points that cannot pass unmentioned. There is a startling amount of score-settling and seemingly unnecessary provocation throughout the text. Taleb, of course, has as valid a claim of "I told you so" as anyone with respect to the financial crisis. Still, leaving aside what one thinks of the practice of calling out egregious people (I'm sort of in favor of it, actually), it eventually comes to feel heavy-handed and obtrusive and detracts from the narrative flow of the book.

Among Taleb's enemies are copy-editors, whom he feels create bland, unimaginative writing. Be that as it may, Taleb's book might have benefited from some sort of editor to rein in some of his more extended digressions and elaborations on his theories. For example, Taleb applies his theory to provide advice on everything from financial regulation to exercise to diet and nutrition. Some of his myriad examples appear to have little to do with his overarching theory, and it is not even clear why some of them actually matter. Moreover, given that the book clocks in at 426 pages of text (plus appendices and extensive footnote material), this reader felt that the book would have been better had it been substantially shorter and, well, edited by a copy-editor who could have helped distinguish the wheat from the chaff.

Finally, one irony that hangs over the whole book is Taleb's visceral dislike for "theories," which he sees as the font of all fragility. Instead Taleb sings the virtues of "empiricism," in medicine, economics, and anywhere else. But isn't the central purpose of the book to expound a theory of antifragility? And isn't his goal in the end to provide prescriptive advice about how to make systems, from medicine, to economies, to nutrition, more antifragile? In the end it seems that Taleb's quarrel is not with theory per se but rather with bad theories—hubristic theories that fail to recognize their own limits and vulnerabilities. His book is a theory about theories and about which elements distinguish good theories from bad. There is nothing wrong with that, but it is not clear that Taleb realizes that in dismissing most theories he is actually proposing a theory as to why they should be dismissed.

But those quarrels do not take away from the power of Taleb’s central insight about the importance of understanding fragility within systems. I have focused on Dodd-Frank and antifragile financial regulation, but the rolling disaster of Obamacare’s effort to remake the American healthcare system is yet another example of what happens when these insights are ignored.