
Transcending Technopessimism

In the world today, technopessimism is reaching a fever pitch. Everywhere you look there are headlines like “Meta’s AI internet chatbot starts spewing fake news,” “Self-driving Uber Car Kills Pedestrian in Arizona,” and “Artificial Intelligence Has a Racial and Gender Bias Problem.” Artificial intelligence can be sexist, racist, or just profoundly stupid. But substitute a human for the AI in those headlines, and they become completely mundane. The knee-jerk reaction to such sensational headlines is to call for limits and constraints on AI. Yet the voices calling for the demise of AI should pause and realize that to err is both human and AI. AI misconduct garners far more attention than human misbehavior, but that is because human transgressions are taken for granted, not because the technology is actually worse. In many cases, even the most egregious AI errors can be audited and corrected. In extreme cases, AIs can be shut down. Society generally frowns on “shutting down” humans whose behavior is stupid or insulting.

Consider, for example, proposed new NYC legislation requiring AI to be audited for bias before being used to make hiring decisions. Proponents argue that AI can be biased against certain classes of applicants. This is just one of many cases in which concerns have been raised about AI making biased decisions, on issues ranging from loan approvals to granting parole. Of course, these biases often originate in the training data, which records decisions made by biased human agents; the AI is just perpetuating its creators’ biases. Likewise, AIs can indeed miss qualified candidates with atypical résumés, but so can humans. And human biases are often deeply rooted and resistant to correction. When the algorithm is a jerk, we can fix it. We can change the training data. We can check that it is actually fixed before deploying it into the wild. Control systems can be imposed on its decisions, including allowing a human to audit and override them. And if we can’t fix it, we can shut it off (this is discouraged for humans).
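To make the “we can check that it is actually fixed” point concrete, here is a minimal sketch of the kind of pre-deployment audit such legislation contemplates: compute each group’s selection rate and flag any group failing the classic four-fifths rule. The audit log, group labels, and threshold here are all hypothetical; real audits use richer statistics, but the shape is the same.

```python
# Minimal disparate-impact audit (hypothetical data and groups).
# The four-fifths rule flags any group whose selection rate falls below
# 80% of the best-treated group's rate -- a common first screen for bias.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs -> {group: hire rate}."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def four_fifths_violations(decisions, threshold=0.8):
    """Return {group: rate ratio} for groups failing the threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Hypothetical audit log of (applicant group, model said "hire").
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

print(selection_rates(log))         # approx {'A': 0.67, 'B': 0.33}
print(four_fifths_violations(log))  # {'B': 0.5} -> fails the 80% screen
```

Run this before deployment, after every retraining, and the “check that it is actually fixed” step becomes a regression test. No equivalent test exists for a human hiring committee.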

Similarly, people worry about a lack of transparency in the reasoning behind AI’s recommendations. Indeed, the best-performing algorithms are often those that offer the least clarity into how a decision was made. But humans are opaque too: an entire field of psychology studies the introspection illusion, documenting how bad we are at explaining our own decisions. (A tween debugger would probably be The Killer App.) For AI, suites of interpretability tools exist for understanding what an algorithm did, and why. Even where that kind of transparency is not possible, the models remain deterministic and reproducible: give the algorithm a sample input, and the output is fully determined. Algorithms can be black boxes, but they are remarkably clear compared with the mushy black boxes inside humans. Even if human recruiters were audited under this legislation (they are not), it is not clear they would perform better. Given a résumé, you know how the AI is going to respond, even if you do not know why. The same is rarely true of a human, who may not even respond the same way to the same résumé at different times of day.
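To see what “the output is determined” buys you, consider this toy stand-in for a frozen résumé-screening model. The features and weights are invented; the point is only that a fixed model is a pure function you can probe, regression-test, and re-audit at will, which no human reviewer permits.

```python
# A frozen model is a pure function: same input, same output, every time.
# These features and weights are invented stand-ins for a real screener.

RESUME = {"years_experience": 4.0, "skills_match": 0.5, "typos": 2.0}
WEIGHTS = {"years_experience": 0.25, "skills_match": 2.0, "typos": -0.5}

def score(features, weights=WEIGHTS):
    """Toy linear resume score; a real model is just a bigger function."""
    return sum(weights[k] * v for k, v in features.items())

# Probe it a thousand times: the answer never drifts with mood, hunger,
# or time of day, which is more than can be said for a human reviewer.
assert all(score(RESUME) == score(RESUME) for _ in range(1000))
print(score(RESUME))  # 1.0, today and every day
```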


The manipulation done by social media algorithms is another popular source of apprehension. Social media platforms maximize the time users spend on the platform by serving them content that interests them. The judgmental can argue about whether people should want the content they consume. But these platforms are just the latest iteration of advertising manipulating us, a practice that dates back at least to papyrus. Humans have manipulated other humans since time out of mind. Social media manipulates, but perhaps less than forces like family, religion, government, and other media.
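Mechanically, there is little mystery to the “maximize time on platform” loop. A schematic sketch, with an invented engagement predictor and invented items, looks roughly like this:

```python
# Schematic engagement ranking: show whatever the model predicts the user
# will spend the most time on. Items and predictor are invented placeholders.

def predicted_watch_seconds(user, item):
    # Stand-in for a learned user-item affinity model.
    return sum(user["interests"].get(tag, 0.0) for tag in item["tags"])

def rank_feed(user, candidates, k=3):
    return sorted(candidates,
                  key=lambda item: predicted_watch_seconds(user, item),
                  reverse=True)[:k]

user = {"interests": {"cats": 30.0, "politics": 45.0, "cooking": 5.0}}
candidates = [
    {"id": 1, "tags": ["cats"]},
    {"id": 2, "tags": ["politics", "outrage"]},
    {"id": 3, "tags": ["cooking"]},
]
print([item["id"] for item in rank_feed(user, candidates)])  # [2, 1, 3]
```

Everything contentious hides inside the predictor, but the objective itself is no darker than a newspaper editor picking front-page stories people will actually read.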

Many people fear AI’s inability to understand ethics, quite apart from whether it does, in fact, act ethically. My favorite meme applies the Trolley Problem to self-driving cars, which presumably could not decide whom to kill in an unavoidable accident. But humans have this problem too. Most humans are not applying utilitarianism, duty-based ethics, or any other deep moral reasoning in that situation. They are thinking, “Holy crap, I’m about to hit that thing. Must swerve.” Or at best, “Seems like fewer people to the right.”

There remain real concerns about AI. It was always possible for the neighborhood gossip to smear your reputation around town, but now your secrets can be leaked to the whole world. Questions of informed consent become genuinely complicated with complex systems and novel use cases. While creative destruction has always fueled progress, the velocity of AI innovation has made it particularly hard for certain segments of the population to adapt. Even so, our concerns with AI should always be viewed through the lens of comparison to human failings.

Sometimes the alternative to AI isn’t human, but nothing. In many cases AI provides a service, often due to the scale of data that must be processed, that simply could not be matched by humans. While there are human translators, there is no way humans could match the functionality of Google Translate. Likewise, no human is going to curate all of the Spotify playlists or scan every credit card transaction looking for fraud. Even in areas where human experts are clearly better equipped to provide the service, such as psychotherapy, AI lets it scale, expanding the number of people who receive any care at all, as it has done since the days of ELIZA.


Often the same AI technologies are used both for evil and for good. Computer vision is used for surveillance, often encroaching on people’s privacy. But it is also employed in wildlife conservation efforts, such as monitoring endangered species and preventing poaching. AI algorithms analyze camera trap images, acoustic data, and satellite imagery to identify and track animals, assess population dynamics, and detect illegal activities. AI is a tool, and like all tools, it is its application that we should be judging, not the tool itself.
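For a flavor of the camera-trap pipeline, the sketch below runs a stock pretrained ImageNet classifier from torchvision over a single frame. This is plumbing only: real conservation systems use detectors fine-tuned on wildlife data, and the image path here is hypothetical.

```python
# Sketch: classify one camera-trap frame with a stock pretrained model.
# Real pipelines use detectors fine-tuned on wildlife imagery; this only
# shows the shape of the plumbing. The image path is hypothetical.

import torch
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # pretrained ImageNet weights
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()             # matching resize/normalize

img = read_image("camera_trap_frame.jpg")     # hypothetical input file
batch = preprocess(img).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = model(batch).squeeze(0).softmax(0)

top = probs.argmax().item()
print(f"{weights.meta['categories'][top]}: {probs[top].item():.1%}")
```

The identical loop, pointed at a different camera, is surveillance. The code does not change; the application does, which is exactly the point.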

It can be difficult for many people to put AI transgressions into perspective because many lack a real understanding of the technology, and mysterious, complex systems scare people. The problem is exacerbated by sensationalized portrayals from Hollywood and deliberate fearmongering by media outlets eager to keep the public consuming news. Taken together, these factors lead some people to assume the problem must be even worse than the headlines suggest: the so-called slippery slope. There is also Frédéric Bastiat’s problem of the “Seen vs. Unseen.” A headline tells us that a self-driving car hit a pole, but how many accidents by distracted humans would widespread use of AI cars prevent? People see the wrong person sent to jail by a racist AI, but how many are sent by judges, some of whom have even more nefarious motives? Without comparing AI misdeeds with the human alternatives, the default reaction is to hinder AI. In many cases, that is short-sighted and counterproductive.
