AI in a Humane Culture?

It’s become very difficult for non-experts to keep up with the latest developments in artificial intelligence. Chatbots are improving at a dizzying pace, and it’s hard to predict how the world as we know it will change. Will AI replace executive assistants and writers? Will every child have a personalized AI friend and assistant? 

None of us knows exactly how AI will change things—not even the engineers who create it. But most agree that it will be transformative. Some changes will be structural and beyond individual control, but others will be at the discretion of communities, schools, families, and individuals. And various organizations, industries, and professional sectors must determine whether and how to integrate it into their work. So as AI offers to insinuate itself into new aspects of life, people must decide when to adopt it and when to pass it up. But we need time to deliberate these matters—which is why the recent calls to put a pause on development are sound.

What’s also needed in deciding how to use AI is nimble thinking. Unfortunately, our current instincts about technology are inadequate. Some have an unrealistic distrust of technology, while others treat tech as the key to unlocking utopia. For example, traditionalist conservatives tend to make blanket denunciations of technology, suggesting that modern tech and human flourishing are either incompatible or inevitably in tension with one another. Others in Silicon Valley uncritically embrace technological progress, even to the point of aspiring to transhumanism. Most of us are skeptical of technology in some instances and grateful for it in others, but we don’t quite have a vocabulary for explaining why inventions are sometimes good and sometimes bad.

One way to refresh our thinking about technology is to break it into two categories: tools and mechanisms. Tools, I’d argue, are things that support human beings in their creative or intellectual endeavors. They serve as extensions of our bodies or minds. They don’t usurp a person’s focus or effort while doing or making something; rather, they help a person perform a task, complete a project, or create something. Tools can be very simple, everyday things like forks, toothbrushes, bicycles, even cars. Digital inventions can also be tools: Microsoft Word, for example, is a writing tool. Tools may streamline an activity, make a task more efficient, and sometimes involve automation, but the user is the main driver of the action. Microsoft Word simplifies writing because it bypasses the need to write out each letter or to erase errors by hand.

By contrast, mechanisms automate the creative process in such a way that the person is rendered a passive beneficiary of its work. The human role in the activity is subordinate to the mechanism’s, which is essential and primary to task completion. Think of a GPS: it tells the driver when to turn in real time; the driver obeys the GPS, which is directing the movement.

Perhaps these definitions seem to imply an endorsement of tools and a denunciation of mechanisms. But the “tool” and “mechanism” labels don’t determine the technology’s moral content: mechanisms can be good, and tools can be bad. For example, dishwashers and laundry machines are mechanisms. The appliance is the primary agent in performing the task (washing dishes and clothing), and human beings merely arrange for the appliances to complete those tasks. The person who would otherwise be doing the cleaning doesn’t seem to lose out on much by handing over the task to the machine. 

Not only can mechanisms be good or neutral, but tools can be bad. Many weapons, such as knives and guns, are tools, but they are often used to cause unjustified harm to others. When used to harm or manipulate other people or ourselves, tools are obviously malign. So rather than substituting for moral analysis, the tool vs. mechanism distinction offers a starting framework for making a more thoughtful, nuanced, and rigorous moral analysis of our technology, one that gets beyond instinctive distrust or techno-utopian faith.

There is of course some ambiguity and overlap built into these definitions. Plenty of things can fall into both categories, depending on how they are used. For example, because of their range of functionality, smartphones can function as both tools and mechanisms. Many people use their phones as creative tools: they can write on their phones and generate various kinds of content. But people also use their phones as mechanisms, mindlessly consuming content populated by an app’s algorithm. So while smartphones are a hybrid technology, they’re more mechanism than tool, because of how much they automate for us and because of their tendency to sedate us.

Even though mechanisms can be good, they pose more complex problems than tools do. Mechanisms are more alienating because they entail memory loss unaccompanied by new skills, streamlined action, or efficient creation. Mechanisms erode habits that sometimes belong to longstanding traditions. To return to an earlier example: relying on GPS usually means forgetting, or never learning, how to get around town. Or think of the example that Jon Askonas highlighted in Compact last year: the transition in American agriculture from crop rotation to the use of chemical fertilizer, which sent production skyrocketing. The fertilizer functioned as a mechanism in this case because it automated and streamlined what was formerly a very complex process that demanded discipline and skill. Fertilizer, too, required knowledge, but this knowledge was largely external to the land’s natural rhythms and demands, having more to do with manipulating the soil through chemistry and managing risk. Askonas wrote, “The new agriculture shared some virtues with the old but discarded careful attention to the land as a whole, self-reliance, thrift, and adaptive re-use. Instead, it became of paramount importance to master the relationship between soil, fertilizer, water, and other inputs.”

The question mechanisms raise is whether the memories, knowledge, and habits that they erase are too precious to lose. If a mechanism would damage culture, human wellbeing, or communities, should we abandon it or place significant limits on it? Could we create pockets of life that are completely free from that mechanism, and that are therefore tasked with preserving memory? In the aggregate, mechanisms pose a political question: as a community, we consider how a mechanism is altering society, the economy, and culture, and then decide whether and how we want to circumscribe it.

Let’s apply this principle to a contemporary example, one that’s been widely discussed: social media, like smartphones, can function as tools or mechanisms. But because of their algorithms and strong behavioral nudges, I think most social media are overwhelmingly mechanistic: they erase social knowledge and habits that come with other, more active kinds of communication. Why call your college roommate when you can see details about her life on Instagram? Why email someone about an interesting article when you can tweet about it? Too much is lost when social media become the predominant form of socializing and communicating. This is why individuals, communities, and even governments should enforce limits on their usage. Such limits don’t have to look like a top-down ban (though in TikTok’s case they probably should, for national security reasons). For other platforms, limits can be a combination of common-sense policies, like age restrictions, and social norms, like stigmatizing smartphone use at certain times, as I’ve argued before.

Tools raise different questions, but they are more straightforward: is the activity supported or facilitated by the tool good? Is the thing it helps us create good? In many cases, it depends on the person using it. A cocktail shaker is a good tool for the temperate drinker, but of course not for the alcoholic. But some tools are inherently disordered and should rarely, if ever, be used. For example, while some genetic engineering aims at curing disease, the ability to alter human beings at the genetic level is too much power for people to have over one another.

Understanding tools and mechanisms won’t give us answers about how and how not to incorporate AI into our lives. But it can help us ask better questions about AI and other emerging technologies with revolutionary potential. Many have predicted that any work that can be done remotely will be AI-replaceable: coding, administrative support, and writing. In these ways, AI would be mechanistic. If AI does begin to take on these skills, human coders, writers, and administrators will lose economic and cultural value, and these skills will eventually be forgotten by many. Perhaps we can stomach putting administrative tasks on autopilot, though administrative workers will likely object to being put out of a job.

But what about writing? Are we comfortable outsourcing writing to AI, or do we want a civilization with a high number of people who are skilled at putting together sensible sentences? Would we become passive consumers of AI’s work? Or would new kinds of human cognition emerge if AI writes for us—and better than we can? If AI synthesizes and analyzes information more efficiently than human beings can, perhaps using its work could improve our own thinking and unleash new talents in us.

When AI is used as a support for larger efforts, it functions as a tool, even as it is obviously a mechanism too. AI might empower new kinds of human activity. For example, some in the AI community predict that it will unleash unprecedented medical advances. Of course, we must consider whether these kinds of advances are humane and ethical; after all, many of us question the direction in which reproductive technology is heading. But it’s not hard to imagine AI empowering us to do good things, like accelerating cancer treatment research or advancing research on energy efficiency. Even when it’s used as a tool, though, we cannot stop asking whether the project for which we’ve enlisted AI’s help is compatible with our flourishing or is an attempt to alter our nature.

These questions will take time to answer because, as noted at the opening of this essay, we know so little about what current AI software is capable of. But if we manage to keep AI from racing beyond our ability to manage it, we should consider whether the skills and knowledge it replaces are worth losing, both in our own lives and on a social scale. And when it’s a tool, we must also consider whether the new powers that it unlocks will better us—or make us monstrous.
