
The Limitations of AI

A few weeks ago Professor Barry Smith delivered a lecture in Turin, based on his book Why Machines Will Never Rule the World. It was a Monday night. During his talk, Smith casually observed that Nvidia’s strong stock performance was a symptom of AI hype and compared it to the infamous tulip mania. In 2023, he remarked, the AI industry spent 17 times more on chips than it earned in revenue. The following Thursday, and then again on Friday, Nvidia’s stock dropped substantially in value (it has since regained much of the loss).

Stock-picking is not a game philosophers necessarily want to play, and that applies to Smith too. But the Nvidia swing perhaps came as less of a surprise to his audience than to others. The AI debate is a battleground between those who expect unprecedented productivity gains and those who see AI as a harbinger of the world of Terminator, with the latter camp being the larger army. In these heated discussions, Smith and his coauthor, Jobst Landgrebe, may be alone in claiming the middle ground.

Smith, a renowned authority in ontology, and Landgrebe, a polymath and former AI entrepreneur, do not deny the progress of narrow (or weak) AI and its impact. But they take a contrarian approach to both the extreme enthusiasm and the over-the-top fears that seem to mark the AI debate. For Smith and Landgrebe, machines will not think, nor will they aim to take our place and, indeed, rule the world. Plots that make for good science fiction are not necessarily a good description of technological developments, nor should they guide public policy, as they did in the European Union and the State of California, both of which rushed to regulate artificial intelligence on the assumption that it poses an existential threat to humanity.

To understand Smith and Landgrebe’s argument, it is important to dig into some of the categories they use.

Narrow AI refers to those artificial intelligence systems that are designed to perform specific tasks within what Smith and Landgrebe call a “logic system.” In their realistic understanding, “system” is a word that applies to both organic and artificial realities. In basic terms, a system is “a set of elements standing in inter-relations.” But the dynamics of such inter-relations can vary sharply.

A “logic system” is one whose behaviour can be predicted using “propositions of mathematics linked together by logical relations.”

In particular, a logic system satisfies four conditions, clearly articulated by Smith and Landgrebe:

  1. “The system behaviour can be explained by reference only to one of the four fundamental interactions of gravity, electromagnetic force, and the weak and strong nuclear force.”
  2. “The system behaviour of interest is dominated by a single homogenous and isotropic force in such a way that the effects of the other interactions are so small, in the context of the modelled aspect, that they can be neglected.”
  3. “In each system there are groups consisting of elements of the same type” which interact with each other in identical manner. “For example, in the solar system, the sun and the planets can be seen as a group of elements (of type: lump of matter) which interact via gravitation.”
  4. “The boundary conditions of the system can be assumed to be fixed without invalidating the model.”

This suggests that a logic system has its own equilibrium and is not evolutionary. It resembles a closed environment, in which experiments can be performed with the confidence that each observed effect can be traced back to a clear cause.

The “application of the differential calculus in physics and engineering” has been so successful that it made people forget that “the class of systems with all four of these properties is rather small.” Indeed, “the overwhelming majority of systems in the universe, and even of systems that we encounter in our daily lives, are what we shall learn to identify as complex systems.”

For so-called “general AI” to exist, that is, for computers to emulate and go beyond the sort of intelligence humans display, we would need to be able to model “complex systems,” such as the human brain.

“All complex systems are such that they obey the laws of physics,” and yet “for mathematical reasons we cannot use these laws to analyse the behaviours of complex systems because the complexity of such systems goes beyond our mathematical modelling abilities.” Physical systems such as the weather can be seen as complex, and so can organic systems such as our digestive or neurological systems. Smith and Landgrebe also consider “hybrid” systems such as the New York Stock Exchange or traffic. Such systems cannot be modelled to yield the kind of clear-cut, mathematical forecasts that can be used in technological applications.
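A loose numerical illustration (a standard one from chaos theory, not an example taken from the book) may help convey why such systems defeat precise forecasting. The Lorenz equations are a deterministic, three-variable toy model of atmospheric convection; the Python sketch below uses the textbook parameter values, while the step size, step count, and the size of the initial perturbation are arbitrary choices for the demonstration. It integrates the system twice from nearly identical starting points and measures how far apart the two runs end up.

```python
# Minimal sketch: the Lorenz equations, a toy model of atmospheric convection,
# integrated twice from nearly identical starting points. The tiny initial
# difference grows until the two runs disagree completely, one illustration of
# why systems such as the weather resist precise long-range forecasts.
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one explicit Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # perturb one coordinate by a billionth

for _ in range(10_000):
    a, b = lorenz_step(a), lorenz_step(b)

print("separation after 10,000 steps:", np.linalg.norm(a - b))
# The billionth-sized perturbation has grown by many orders of magnitude.
```

This is only an adjacent illustration: Smith and Landgrebe’s argument concerns the failure of the four logic-system conditions rather than chaos as such, but even this tiny deterministic model shows how quickly precise prediction can break down.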


One case in point is the long-awaited promise of self-driving cars. “In 2010, at the Shanghai Expo, General Motors had produced a video showing a driverless car taking a pregnant woman to hospital at breakneck speed and, as the commentary assured the viewers, safely.” But predictions of driverless cars flooding the market and putting all taxi and Uber drivers out of business have so far proved over-optimistic. The enthusiasts for self-driving cars neglect the difference between logic and complex systems highlighted by Smith and Landgrebe:

Consider … the case of models for self-driving cars. Algorithms used here are adequate where the software is able to model the sensory input deriving from traffic events through sensors (camera, radar, lidar, sonar) in such a way that it reacts to this input, given the destination, at least as well as (or, realistically, better than) the average human; otherwise self-driving cars will cause more accidents than cars driven by humans, and this will be deemed unacceptable.

Complex systems often consist of layers of different interrelated sets of elements, that is, of systems within systems: the human body, for instance, can be seen as a series of interrelated systems. Complex systems are those in which it is not easy to connect one effect to one specific cause; they are the theatre of multi-causal events. Attempts to model them typically end up treating them as if they were simple systems, and if the elements to be modelled are carefully identified and properly accounted for, such models can be helpful for limited and modest predictions. But this is difficult to do, and a model of a complex system is often a paradigmatic case of the “pretence of knowledge.”

Smith and Landgrebe’s arguments appeal to readers of Mises, Hayek, and the Austrian school. The two authors are themselves well acquainted with the so-called economic calculation debate and the teachings of the Austrian economists. The economy is, in their view, a quintessentially complex system.

Economics can be used to make pattern predictions, of the kind that are useful to decision makers, but not to forecast specific events with exactness. For example, setting a price ceiling usually causes the supply of the good to decline. Yet “no economist can quantify such effects exactly in advance (and it is even hard to do this in hindsight, given the many mixed effects in the real-world economy). This is because no economic model can exactly predict any single economic quantity for any selected time or time interval in the future, whether this be the price of a good or the excess capacity of a production method.” This does not depend on the amount of data available to sketch such a prediction, but rather on the nature of the economic system itself, which is a complex one, affected by a multitude of actors who cannot, to quote Adam Smith, be arranged with as much ease as the hand arranges the different pieces upon a chess-board.

In this regard, Smith and Landgrebe speak the same language as the Austrian school. Indeed, “the first economist to realise this was Ludwig von Mises in his ‘economic calculation argument.’” There are clear echoes of the grand debate that pitted Mises and Hayek against Oskar Lange and Abba Lerner, to mention only the most relevant combatants. The Austrian economists emphasised the cognitive limits of the human mind: the market allows all actors to take advantage of dispersed knowledge and to adapt to change, but it does so in a piecemeal way, with hardly any promise of “perfection” in its working. The social scientist is at best an observer of human interactions, able to take note of them and make qualified and precarious predictions based upon certain regularities. Yet human behaviour and preferences change and adapt in ways that cannot be predicted with rigorous certainty.

Their socialist opponents acknowledged some of the virtues of the market, but maintained that these could be replicated and improved upon by central planning boards that could learn from experience while also setting clear targets for themselves. Markets could be mimicked without the means of production being in private hands and, hence, without the sweeping inequalities that capitalism brought with it. This of course assumes that real markets can be described through a series of equations, and that replicating those equations would suffice to achieve what a market economy achieves, without its shortcomings. Hayek’s famous rebuke was to point out that what we tend to call “data,” and assume to be given, are not “given” at all. Competition is a discovery procedure in which new knowledge emerges continuously, and prices convey bits of that knowledge to economic actors, affecting their actions and being affected by them at the same time.

The fact that Hayek’s opponents seemed not to understand his argument was itself something of a puzzle to him, and he strove to understand the workings of the constructivist mind. In his later work, Hayek distinguished between “simple phenomena,” understood as those in which the outcomes generated by applying a stimulus to a system can be predicted, and complex phenomena. The latter are those in which the elements that make up the system do not interact in a linear fashion, and in which the elements and the ways they interact are too many and too varied to be comprehended by scientific observers. At best, in complex phenomena, the scientist can grasp some general principle that governs the interaction of the various elements, but cannot make a rigorous forecast. This is evident in the case, evoked by Smith and Landgrebe, of setting a price ceiling.

The Turin audience’s questions to Smith were not that different from those Mises would have received some eighty years ago. Smith made the point that general AI is impossible, because we cannot effectively model the human intelligence it is supposed to mimic. He emphasised that it is not just a matter of feeding ChatGPT more material, and that the current practice of feeding it more AI-generated material is hardly improving its answers. He frequently resorted to the example of chatbots, not a happy experience of AI for most of us. Indeed, he and Landgrebe write that “conversation machines are doomed to failure” because “productive language is a creative act which cannot be emulated mathematically because mathematical models of natural processes represent in every case stable and repetitive laws.” In his talk, Smith stressed that wanting something is crucial to all human conversations (be it the student seeking knowledge from the professor or two traders who want to exchange something for something else), and that there is no way to teach a machine to want something.

Yet the audience repeatedly tried to get him to concede that if only we had more calculating capacity, or could feed AI more or “better” knowledge, general AI could be achieved. Similarly, Mises would have been asked whether the problem was not simply that planners lacked a computer powerful enough to connect all the bits of partial knowledge that could be collected throughout the economy.

Why Machines Will Never Rule the World is a complex book (though Smith expresses its core arguments in lively talks). It was published two years ago, and its reception was not as enthusiastic as it deserved, largely because its message disappoints everyone.

The book indirectly challenges the current conventional wisdom according to which no investment in AI is too big, because the potential for development is quasi-infinite. If the game is not developing general AI, but improving narrow AI, one suspects that such investments signal irrational exuberance more than careful calculation. We may be in the AI equivalent of the dot-com era: plenty of useful applications will come out of it, but at some point the bubble has to burst.

It also shows that the AI scare is missing the point. Narrow AI will certainly improve productivity in some areas and will displace jobs, as innovation typically does. But ChatGPT won’t be the next Saul Bellow, nor will Perplexity be the new Allan Bloom. Machines, tomorrow as today, are tools activated and programmed by humans, who own the creative part of the process. Of course, if the only thing Hollywood screenwriters can do is recycle 1960s comic book heroes sprinkled with contemporary political correctness, they may fear AI will take their jobs. But machines’ creativity will never top their programmers’ creativity.

Such a sober account is bound to disappoint, as humans tend to divide into those who believe in miracles and those who enjoy being frightened by everything else.