
Should We Welcome Our AI Overlords?

This year marks the 10th anniversary of what was initially hailed as a milestone in the history of artificial intelligence. In 2011, IBM’s question-answering computer Watson soundly defeated Ken Jennings and Brad Rutter, champions of the television game show Jeopardy!—Watson’s three-day take was $77,000, compared to Jennings’ $24,000 and Rutter’s $22,000, garnering it a $1 million prize. This IBM victory followed by just 14 years a similar computer-human showdown between the company’s Deep Blue supercomputer and world chess champion Garry Kasparov, in which the computer triumphed 3.5–2.5. With the 2011 outcome already decided, Jennings included in one of his Final Jeopardy! responses, “I, for one, welcome our new computer overlords.” Though Jennings’ remark was undoubtedly at least partly tongue in cheek, it raises a momentous question: did Watson’s victory represent a threat to human liberty?

Just a day after Watson’s victory was televised, IBM began a marketing blitz, declaring that it was “already exploring ways to apply Watson’s skills to the rich, varied languages of health care, finance, law, and academia.” Within a few years, the company was touting Watson as “a revolutionary approach to medicine and health care that is likely to have significant social, economic, and political consequences.” The implication was clear—Watson could take in information and cross-reference cases, clinical series, and bench research findings in much greater numbers and at a far higher rate than any human physician. It seemed only a matter of time—and a short period of time at that—before Watson would be calling the shots. IBM marketing hype notwithstanding, even AI’s most ardent proponents must admit that its ascent to world domination has taken longer than predicted.

At MD Anderson Cancer Center in Houston, Texas, for example, Watson was installed in 2013 with the expectation that it would revolutionize the care of cancer patients. IBM and MD Anderson announced that the partnership would leverage the ability of cognitive systems to “understand” the context of users’ questions, uncover answers from Big Data, and improve performance by “continuously learning from experiences.” IBM Watson’s general manager declared that “Data need no longer be a challenge, but rather a catalyst, to more efficiently deploying new advances into patient care.” MD Anderson regarded Watson as a central player in their “Apollo program,” a “moonshot” technology-driven adaptive learning environment. One can imagine poor human hematologists and oncologists quaking in their white coats as the IBM trucks rolled up to the facility.

Yet within just four years, MD Anderson “benched” Watson in a bellwether setback for AI in medicine. It turned out that Watson’s voracious appetite for data was not matched by its omnivory. Fed patient charts, articles from the medical literature, and research findings, Watson could not ingest the information the way a physician would. It had difficulty comparing one patient with others who came before and after. In order to get Watson to assign significant weight to clinical studies with small numbers of participants, physicians had to make up “synthetic” cases. More troubling, some of Watson’s recommendations for patient care turned out to be useless—irrelevant in one way or another to the patient at hand. And in other cases, Watson’s prescriptions could only be described as hazardous. IBM and MD Anderson forgot that medicine is both science and art. A scathing report by auditors found that, at a cost of $62 million, Watson had failed to achieve its goals.

Alarm bells should have been ringing from the moment of Watson’s triumph. The Watson that competed and won at Jeopardy! had been built specifically to excel in that context. Simply put, it was very good at combing databases and formulating answers to Jeopardy!-style questions. During breaks, Ken Jennings and Brad Rutter could make small talk with the host, take a sip of water or a bathroom break, or reach down to tie a loose shoelace. Watson was completely incapable of such tasks, having been programmed to do one thing and one thing only—answer trivia questions. IBM’s bold predictions aside, Watson even at its best amounted to little more than a so-called idiot savant—able to perform at a high level in one area but otherwise incompetent at a variety of ordinary human activities.

In fact, Watson could not even “understand” what it was doing. It could produce answers, but it did not experience playing the game and had no relationship with the host or fellow contestants on the program. Watson could not have said in any genuine sense that it was nervous, that it enjoyed the competition, or that it rejoiced in its victory, the proceeds of which went to a non-profit that helps children in poverty and World Community Grid, IBM’s “humanitarian” supercomputer. Watson did not decide where the proceeds would go or take any satisfaction in knowing that at least a portion of its winnings were going to a good cause. When we talk about Watson as understanding, having feelings about, or savoring any event or experience, we are doing so only metaphorically, imputing human characteristics to a machine. Watson is not and never can be human, in part because it lacks emotion.


Watson is not sentient because Watson is not alive. It made a terrific blunder in Final Jeopardy!, when the category was “US Cities” and the clue was, “Its largest airport is named after a World War II hero and its second-largest airport is named after a World War II battle.” Watson’s response? “What is Toronto?” Having committed such a gross error, Watson did not blush or feel any embarrassment, because Watson has no body and cannot feel anything. The special-purpose AI could not grasp the larger context in which it was operating. Likewise, Watson could not take responsibility or bear any adverse consequences for its blatant failure. Perhaps it would not make the same mistake again, but this is merely response modification and not true learning. Watson cannot truly bear responsibility at all because Watson is not a moral agent. In fact, Watson is a soulless machine that can be programmed to mimic any number of human operations, but agency in the human sense is far beyond its capacity. As such, Watson could never function as a tyrant.

Consider the game of baseball. Watson can analyze all the statistics in this most statistics-laden sport. It might well out-Sabermetric Bill James and the Society for American Baseball Research, uncovering previously unsuspected correlations between obscure aspects of the game. This was the focus of Michael Lewis’ book Moneyball, which recounted how the low-budget Oakland Athletics were able to win games by focusing not on historically prized player attributes such as speed and ball contact but on neglected metrics such as on-base percentage, slugging percentage, and how many pitches a batter draws from the opposing pitcher. But there is more to baseball than statistics. A baseball game is also an aesthetic phenomenon, to many a thing of beauty involving skill, strategy, and the will to win. Data alone can never produce an understanding of what baseball is, why human beings care about it, or what it would take to coach a team to play its best.
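The metrics Lewis describes are, in the end, simple arithmetic—exactly the kind of computation any machine can do tirelessly. A minimal sketch of the two the Athletics prized, using the standard sabermetric formulas (the player line below is hypothetical, for illustration only):

```python
def on_base_percentage(hits, walks, hit_by_pitch, at_bats, sac_flies):
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    return (hits + walks + hit_by_pitch) / (at_bats + walks + hit_by_pitch + sac_flies)

def slugging_percentage(singles, doubles, triples, home_runs, at_bats):
    """SLG = total bases / at-bats, weighting each hit by the bases it earns."""
    total_bases = singles + 2 * doubles + 3 * triples + 4 * home_runs
    return total_bases / at_bats

# Hypothetical season line
obp = on_base_percentage(hits=150, walks=70, hit_by_pitch=5, at_bats=500, sac_flies=5)
slg = slugging_percentage(singles=90, doubles=35, triples=5, home_runs=20, at_bats=500)
print(f"OBP: {obp:.3f}  SLG: {slg:.3f}")
```

That a few lines of code reproduce the numbers is precisely the essay’s point: the statistics are the easy part, and everything a machine can compute about baseball leaves the game itself untouched.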

Shakespeare could have composed his plays on a word processor, Rembrandt might have produced his paintings using a computer graphics program, and Mozart could have employed music production software for all his compositions. Yet there was far more to their art than the means of production. Shakespeare was not just a statistical freak whose works would have eventually been produced by any text-generating bot, Rembrandt was doing something vastly more significant than any digital image technology could ever produce, and Mozart was engaged in a creative process qualitatively different from the scraping of horses’ hairs across sheep gut. To put Watson’s limitations in more pedestrian terms, you could program it to write jokes, but you would need a human being to tell if they were any good.

We are, after all, not naturally digital creatures—data are tools we use, but they do not define us. Likewise, computers are useful instruments, but they make rather poor metaphors for human cognition, feeling, hopes, and fears. What Watson’s programmers did was remarkable, but we must never forget what Watson didn’t do and in fact cannot do—compared to Ken Jennings, Brad Rutter, Alex Trebek, or any other human being, Watson is a narrowly focused question-answering machine that, as programmed for the game of Jeopardy!, is largely incapable of anything else. Jennings’ comment about our “new overlords” elicited smiles, but to those who know better, it is ultimately misleading. Watson is and always will be not a lord but a servant—and not even really a servant, but merely a tool, always subject to the will of its programmer, who might just dream of imposing tyranny. It is up to every lover of liberty to resist the ill-conceived metaphor of the computer as master.
