
Less Would Be More for Tech Antitrust

Kristian Stout’s insightful “Big Tech and the Regressive Project of the Neo-Brandeisians” is timely and a must-read for anyone involved in antitrust or information technology policy. He understands that the so-called neo-Brandeisians lack intellectual foundations for their ideas. He also sees that, properly applied, today’s best practices in antitrust would not lead to a ramp-up in antitrust actions, as some would wish.

I’ll add three things to Stout’s work. First, I’ll explain that the neo-Brandeisians embrace little of the thinking of their namesake, Justice Louis Brandeis, so the moniker is a misnomer. Second, I’ll explain that today’s antitrust tools are unable to adapt to Big Tech. Finally, I’ll argue that the dynamic and adaptive nature of Big Tech reveals fatal flaws in our antitrust theories and that a more appropriate intellectual foundation would imply less, not more, antitrust action.

The Non-Brandeisians

Stout is right that while some progressives and conservatives seem to have found common cause in attacking Big Tech, they differ in their reasons. Both groups have errors in their thinking, but I will focus on the progressives’ arguments.

The progressives are led by people calling themselves neo-Brandeisians. This group’s animating bias appears to be an aversion to everything “big,” which they claim is anticompetitive. But they leave the meaning of “big” to people’s imaginations, or, worse, their prejudices. They claim that their anti-bigness is a rebirth of Brandeis’s golden age of antitrust. But the reality is more gilded than golden: Their anti-bigness actually embraces big government, something Brandeis opposed. On that and other issues, the neo-Brandeisians go in directions that are heretical to their namesake’s worldview.

Brandeis was indeed critical of big business. He railed against large businesses in his writings and attacked them in his private legal practice. He did so for several reasons. One was that he believed people thrive when leading self-directed lives running enterprises, not when working within an organization. So his ideal world would have been made up of small, main-street businesses charging high prices, with little government involvement. He also believed that large organizations were inherently inefficient because, in his view, no one had the mental capacity to run one. And he was suspicious of other people’s motives, so he wanted to limit others’ scope of influence.

These beliefs about human nature led him to view large businesses as illegitimate and unnatural, except in rare cases. This view of human nature also led him to be suspicious of big government.

So the neo-Brandeisians are wrong to think that their anti-bigness aligns with Brandeis’s views. He was not a fan of competition: He supported small-business exemptions from antitrust so that small firms could collude on price and not waste money competing. He thought it was unproductive for customers to comparison shop, something that is critical for competitive markets, because he didn’t think they were capable of doing it well and because it took time away from more productive pursuits.

The neo-Brandeisians are non-Brandeisians because they deviate so much from Brandeis’s views: They tend to embrace big government, want extensive oversight of individual decision making, and seek to reduce opportunities for entrepreneurs.

Shortcomings of Today’s Antitrust Tools

Stout explains that properly applying today’s antitrust tools would not result in a ramping up of antitrust cases. I agree, but with the qualification that I am doubtful that it is possible to properly apply today’s tools to digital markets.

Defining markets is a fundamental step in today’s antitrust enforcement. The framework is simple and intuitive. Given that the goal is to see if there is an abuse of market power, the first step is to identify the relevant market. The second is to determine if there is power in said market. And the third is to see if that power is being used to harm consumers. Stout correctly points out that the tech companies engage in rivalry across traditional market boundaries. And they compete for markets and not just in markets, so antitrust officials should look at markets in more dimensions than they currently do.

I would add that even if antitrust regulators could overcome these problems, they would still fail because the markets are constantly evolving. The underlying problem is data decay. By necessity, regulators use historical data to identify market boundaries. Relying on historical data is appropriate because it anchors our analyses in reality. Without an anchor in reality, conjecture and bias would dominate antitrust work even more than they do today. And as Alfred Kahn observed many years ago, regulators’ mindsets are biased towards finding things to regulate.

But in fast-changing industries, yesterday’s data tell us little about today’s realities, and even less about the future. The future is the relevant time period for antitrust because, by definition, antitrust acts to affect the future. Fast-changing circumstances cause data decay, so much so that using the data can lead to erroneous conclusions. This was the case in the AOL merger with Time Warner: Regulators examined the past and concluded that instant messaging was a market and that AOL dominated it. They then extrapolated from the past and formed the belief that AOL would leverage that market position to dominate advanced instant messaging. As a result, the regulators placed controls on the merged company that would weaken its intertemporal network effects. But in reality, technology changed and made AOL’s instant messaging service largely irrelevant. The data had decayed before it was put to use, but the regulators failed to notice.

Flaws in the Foundation

I would add the following to Stout’s concerns about today’s antitrust practices: Tech industry dynamics are revealing inherent flaws in today’s antitrust theories.

One flaw is that today’s antitrust theories are generally based on economic models that conflate business success with market power. For economists, this pattern began with Abba Lerner’s seminal 1934 paper on market power. In it he defined a firm with monopoly power as one that can price above its marginal production costs and developed an index that purports to measure the degree of market power. For a firm with no monopoly power, the index is zero because the firm’s prices are exactly equal to its marginal costs.
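In symbols, the index Lerner proposed is the gap between price and marginal cost, expressed as a fraction of price:

$$ L = \frac{P - MC}{P} $$

It equals zero when price equals marginal cost, as under perfect competition, and it rises toward its maximum of one as marginal cost shrinks relative to price.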


The economic model underlying Lerner’s index is the Cournot model, which says that firms compete by choosing how much they will produce, and that each firm’s choice takes into consideration how much every other firm is producing. In this model, a firm that prices significantly above marginal production costs also has a large market share. To get this result, the economic analyst has to impute to the firm some unique characteristics that give it advantages over its rivals, such as lower costs or innately higher quality. So the model gives the appearance of market power and market share going hand in hand, with the market share falling to the firm in the form of a gift. Despite several attempts by economists to show that a firm can have a high market share and not have market power, the metric and its implications have remained prominent in antitrust economics.
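The textbook statement of this Cournot result makes the link between markup and share explicit. With a homogeneous product, each firm’s equilibrium markup satisfies

$$ \frac{P - MC_i}{P} = \frac{s_i}{\varepsilon} $$

where s_i is firm i’s market share and ε is the market elasticity of demand. A larger measured markup necessarily comes with a larger share, and the share itself traces back to whatever cost or quality advantage the analyst built into the model for that firm.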

Lerner’s index consistently gives false positives in the tech world because marginal production costs appear to be very close to zero. Consider Microsoft’s Windows operating system, for example. Windows serves about 1.5 billion PCs, giving it a 78% share of all PC operating systems. Users pay about $60 for Windows if they do not buy it bundled with other software. How high is this price relative to Microsoft’s marginal production costs? Suppose that the number of users went up to 1.7 billion. These increased sales would have almost no impact on Microsoft’s costs because the company doesn’t have to make additional copies: The software is either installed by the equipment manufacturers or downloaded from the internet. So the marginal production cost is effectively zero: The markup over marginal cost is effectively infinite, and the Lerner Index sits at its maximum value of one, the polar opposite of the zero value that implies perfect competition.
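A quick back-of-the-envelope calculation, using the article’s own Windows figures, makes the point concrete. The small nonzero marginal cost and the niche competitor below are hypothetical placeholders, added only to show that the index reports the same extreme value regardless of market share:

```python
# Back-of-the-envelope illustration of the Windows example above.
# The $60 price comes from the article; the tiny marginal cost and the
# hypothetical niche OS are illustrative assumptions, not real data.

def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner (1934) index: (P - MC) / P.

    Zero when price equals marginal cost (perfect competition);
    approaches its maximum of one as marginal cost goes to zero.
    """
    return (price - marginal_cost) / price

windows = lerner_index(price=60.0, marginal_cost=0.01)   # the article's Windows figures
niche_os = lerner_index(price=20.0, marginal_cost=0.01)  # hypothetical OS with a tiny share

print(f"Windows:  {windows:.4f}")   # ~0.9998 -- essentially the maximum
print(f"Niche OS: {niche_os:.4f}")  # ~0.9995 -- the same signal despite a tiny share
```

Because marginal cost is effectively zero for any packaged software, the index flags every vendor the same way, which is the false-positive problem discussed next.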

A superficial analysis might conclude that the index is correctly identifying monopoly power because, after all, Windows has a 78% market share. But the index would be the same for any operating system: Apple’s iOS, Linux, and Alphabet’s Android all have Lerner Indices at or near the maximum. In other words, the index makes it appear that all market participants have extreme market power. Any indicator that does this is fundamentally flawed.

Another flaw in using the standard antitrust models to analyze market power is that market power has to be assumed in the model, which means that applications of the models can tell us nothing about the nature of market power. The two most prominent developers of modern economics—Adam Smith and John Stuart Mill—identified market power as an undeserved ability to avoid competition. Both were concerned about sources of market power and identified government protection as the primary source, the other being a unique, necessary resource that only one firm could use. This sense of market power is generally missing from today’s economic models.

Most economic models of market power do not worry about why one firm seems to fare so much better than its rivals. In effect, they assume that the differentiator falls from heaven and that the advantaged firm then exploits its gift.

The assumption of gifted advantage never made sense as a true reflection of reality, but as a simplifying assumption it seemed to do little harm when analyzing stable, long-enduring large firms in industries with few close substitutes. This assumption runs strongly counter to how tech industries work, however. Tech firms and their offerings are always changing, which makes gift-based analyses yield bad results. Consider, for example, an analysis that was prominent and important in the US government’s case against Microsoft. The analysis began by assuming that there existed a firm holding a significant market share in operating systems and then asked whether this share could be used to advantage the large firm in its competition for future generations of operating systems and related products. The analysis never asked why the firm was so significant, whether the share actually conferred an undeserved advantage, or what the impact would be of a policy that penalizes a firm for gaining significant market share through superior enterprise and effort. Had these questions been analyzed, the conclusion would likely have been that the government’s case against Microsoft was going to harm customers.

Implications for Exclusionary Conduct Claims

It is common today for antitrust advocates to hold that tech firms are engaging in exclusionary conduct, i.e., acts by a firm with monopoly power that disadvantage and harm competitors. Examples of such claims include complaints that Google biases its search results in favor of its own enterprises and that Amazon unfairly promotes its own products. Unfortunately for the advocates, Big Tech doesn’t have monopoly power and, even if it does bias searches and listings, that doesn’t necessarily harm rivals.

I have already explained why it is erroneous to conclude that a large digital firm necessarily has market power. So I will turn my attention to the effects of bias, using Google and Amazon as examples. I do not know whether they bias their results to favor their own products, but many people believe they do, so I will accept it for the sake of argument in this article.

Platforms such as Google and Amazon work hard to attract users, and advertisers (in the case of Google) and third-party sellers (in the case of Amazon) benefit from that work. In fact, the more a platform provider profits from attracting users, the harder it works to draw them in. Biasing search or product lists can increase the profitability of attracting users to a platform. These higher profits incentivize the platform to attract more users. This benefits advertisers and third-party sellers if enough of the new users buy from them.
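A stylized sketch with made-up numbers (not a model of Google’s or Amazon’s actual economics) shows the direction of this mechanism: when bias raises the profit a platform earns per user, the platform can justify a larger user-acquisition budget, and third-party sellers gain whenever enough of the additional users buy from them.

```python
# Stylized illustration of the incentive argument above. All numbers are
# hypothetical; the point is only the direction of the effect, not its size.

def users_attracted(acquisition_budget: float, cost_per_user: float) -> float:
    """Users the platform can afford to attract with a given budget."""
    return acquisition_budget / cost_per_user

COST_PER_USER = 1.0     # hypothetical cost of attracting one additional user
SELLER_BUY_RATE = 0.4   # hypothetical share of new users who buy from third-party sellers
BASE_USERS = 1_000_000  # hypothetical existing user base generating the profits

for label, profit_per_user in [("without bias", 2.0), ("with bias", 3.0)]:
    budget = profit_per_user * BASE_USERS  # platform reinvests per-user profit in acquisition
    new_users = users_attracted(budget, COST_PER_USER)
    seller_buyers = new_users * SELLER_BUY_RATE
    print(f"{label}: {new_users:,.0f} new users, "
          f"{seller_buyers:,.0f} of whom buy from third-party sellers")
```

Under these assumptions, the biased platform attracts more users, and more of those users end up buying from third-party sellers than in the unbiased case.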

Apparently this is happening. Search and e-commerce are growing, especially during the pandemic. So Google and Amazon appear to be adding users. And advertisers and third-party sellers continue to flock to these platforms, implying that they find the platforms valuable relative to the alternatives. There are examples of advertisers and sellers being unhappy that they have to compete with the platforms, but that is probably a result of the WYSIATI bias (What You See Is All There Is). The platforms’ rivals do not experience the platforms’ efforts to attract users and so naively act as if the users would be there even if the platforms did not bias their results.

This does not mean that all advertisers and third-party sellers benefit from platform bias. But given their numbers, it seems likely that some do, whether they realize it or not. At the least, these advertisers and third-party sellers, and their customers, could be made worse off if government officials prohibit bias.

The current truism, held by the neo-Brandeisians and many journalists, that big is bad will harm the US economy if it takes hold in actual antitrust actions. Even if it does not take hold, consumer welfare is being put at risk by current antitrust practices because they are not up to the task of identifying and addressing market power in tech. Today’s procedures and theories give false positives, and probably few false negatives, resulting in an overabundance of antitrust restrictions that limit what consumers can enjoy in digital markets.