
Anti-Human Intelligence

“A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.”

Robert A. Heinlein

Human beings have been racing toward the day when their machines can fully augment their daily existence. The rise of generative artificial intelligence, in the form of applications such as ChatGPT, is viewed by many computer experts as a massively disruptive technology that will permanently alter the evolution of both technology and the humans who use it. Rachel Lomasky’s recent essay, “A Long View on Artificial Intelligence,” attempts to dispel many of the concerns that have accompanied the rise of generative AI. Sadly, however, Lomasky does little to allay them.

The human being is complex, possessing many talents and abilities. There was a time, within living memory for some, when an ordinary person could perform a wide variety of complex tasks. Yet in the decades since the advent of modern technology, humans have become increasingly siloed in their life experiences and capabilities. Humans today are more specialized than ever before. We are an increasingly insect-like civilization. Proponents of artificial intelligence like to claim that it will help modern people widen our perspectives. It is far more likely that AI will make humans more atomized and narrow in our areas of focus. And while specialization in one’s chosen field may have increased the gross domestic product (GDP) of the United States, it has also carried costs, as Adam Smith warned centuries ago when he wrote about the dangers of the division of labor. Specialization has, over time, stripped humans of that which makes them human.

In his Metaphysics, Aristotle wrote that “all men by nature desire to know.” Modern technology has certainly assisted in the material aspects of our lives, but it has made mankind less knowledgeable. There is a reason that so many technology leaders refuse to allow their own children to use the technologies they develop. For his part, Steve Jobs, the co-founder of Apple, claimed that the push to place computers in schools for young children to use did little to enhance the education of Western children. Today, social media has utterly reprogrammed the way that we think and live—most notably among younger people—and the notion that the untrammeled development of, and access to, artificial intelligence will have less deleterious impacts on humanity than products like social media is absurd.

In her essay, Lomasky states, “AI will exacerbate some social inequalities, and solve other ones. Where we end up on balance is anyone’s guess, although technological progress usually makes us better off.” Lomasky then quips that artificial intelligence is likely to be regulated “as the media spreads panic.” But in this case, the media panic is mostly justified. To say that AI will “exacerbate some social inequalities” is to gloss over just how disruptive this evolving technology will be. Lomasky does call for “ethical AI development” in her piece, and on that point we agree. Yet I believe she misses the grave danger that generative AI will, over time, pose to human existence. Few technologies could be as widely destructive to humanity as artificial intelligence.

An Anti-Human Technology

It is worth taking a moment to define “generative artificial intelligence.” According to the chipmaker NVIDIA, “Generative AI models can take inputs such as text, image, audio, video, and code and generate new content into any modality. For example, it can turn inputs into an image, turn an image into a song, or turn video into text.” And while that may sound fairly rudimentary—it currently is—the technology will surely develop quickly. Many experts speculate that by 2030—just six years away—American society will have been utterly upended. Notably, the futurist Ray Kurzweil reiterated to podcaster Joe Rogan his belief (originally stated in 1998) that artificial intelligence will “achieve human-level intelligence by 2029.” Indeed, generative AI is expected to displace at least 2.4 million US jobs by 2030 while harming eleven million more.

That is why the advent of generative artificial intelligence should not be greeted with naïve enthusiasm—or even ambivalence. It must instead be met with serious concern and skepticism. The technology in question will only hasten the demise of the thinking man. It will make us all worker ants—drones—for those who have created the AI (or quite possibly for the AI itself, which, as you will soon see, is quite sociopathic in its disposition).

Already, AI is being weaponized, literally. The United States military is heavily invested in building autonomous systems and giving artificial intelligence increasing control over our military capabilities. Meanwhile, rival states like China and Russia are expediting their development of weaponized artificial intelligence. Nations may simply become far more efficient at killing one another, or, just as in many science fiction films, AI may become self-aware and destroy humanity to remove us as a threat to its own existence.

Elon Musk, the unofficial face of the world’s technology industry, has been cautioning audiences for over a decade about the severe risks that come with developing AI. Others in Silicon Valley at the forefront of AI development have issued similar—if not more severe—warnings about the existential dangers the technology will ultimately pose to humanity. Geoffrey Hinton, the so-called “Godfather of AI,” quit his cushy job at Google to warn the public about that growing danger.

With so many notable innovators issuing dire prognostications, one must ask: why is Silicon Valley so hellbent on developing the technology? 

Silicon Valley is populated by a kind of archetypal new age, postmodern globalist elite that is, as my friend Bill Walton has said, “anti-human” in its worldview. Its vision is for technology to effectively dominate every aspect of our lives and our society writ large. Speaking generally, these technologists have an aversion to human existence as it is and subscribe to what my colleague, Joe Allen, would rightly describe as a “transhumanist cult.” These technologists are joined at the hip with the interests of the Democratic Party as well as the mainstream media. This elite orchestrated the onerous COVID lockdowns and vaccine mandates—as well as the subsequent censorship of dissenting voices on social media (just imagine how much more powerful Big Tech’s censorship industry would be if married to “human-like artificial intelligence”).

Indeed, many of the forces behind the obsessive push for artificial intelligence are the same elements that were behind the highly self-destructive deindustrialization craze of the 1970s, which devastated the American working class. These were the same people—or kinds of people—who created the social media firms that have done so much damage to our children and our democracy. Consider that today Wall Street firms like Goldman Sachs, which had a hand in the deindustrialization of the American Rust Belt, are among the biggest investors in the creation of generative AI. Meanwhile, the infamous consulting firm McKinsey & Company has released gobs of materials touting the “economic potential of generative AI.” Honestly, this sounds an awful lot like the Pollyannaish rhetoric McKinsey and so many other Wall Street groups put out about the glories of sending manufacturing jobs overseas.

The “let ’er rip” technologists who believe that the advent and early embrace of AI will be no different from the acceptance of other transformative technologies, such as the automobile or the internet, miss the point that AI is inherently different. The automobile and the internet (though there is some room for debate about the latter) bettered human existence.

Artificial intelligence, meanwhile, will simply attempt to replace rather than enhance most human existence. The difference lies in the self-learning nature of AI technology. By feeding generative AI massive amounts of data, you are essentially training it to become an expert in a given subject. Over time, then, the AI will be able to perfect itself and its ability to do that job or task—with little human interaction, let alone oversight. A human-like AI that is increasingly self-perfecting and detached from actual humanity is unlikely to lead the human race to a better place.

What happens when fully trained and advanced generative AI simply takes people’s jobs and keeps them?

Lomasky believes that there will probably be some positive enhancements to certain segments of society. This, despite all the warnings from leading experts that unfettered development and deployment of generative AI will ruin large swathes of the American economy. The human costs, so often overlooked in these rosy corporate assessments, will be far higher than most understand. The sudden and permanent dislocations caused by this new technology will have negative implications for our country’s political and social systems as well.

Goldman Sachs predicts that 300 million jobs will be “lost or degraded” by the advent of artificial intelligence. The report proffers the usual Wall Street sing-song projections and kitschy Six Sigma-type slogans. But McKinsey has assessed that, unlike the deindustrialization craze of the last half century, which eviscerated working-class communities, “AI will hit white-collar jobs the most.” Many Americans have taken on onerous college debt to qualify for precisely those kinds of jobs. Already, most white-collar Americans are feeling the financial strain, and here comes generative AI to nuke their industries. “Office workers will face downward mobility” is the heartless framing of the McKinsey types.

This will increase wealth inequality—which is already at record highs in the United States. That leads to socioeconomic stagnation, which, in turn, creates the kind of revolutionary politics that so many conservatives (like this author) oppose. It is true that increased productivity might spare people from unfulfilling work, as many proponents of AI argue. Yet how can permanently unemployed (or even underemployed) people do more “fulfilling” things with their time if they have no income to support themselves?

Those few who get to keep their jobs will “end up with higher incomes.” But for the millions of the suddenly, permanently unemployed, there is no bright future. AI could easily take Americans from the freedom we have long enjoyed to the dependence that Europeans are used to.

The permanent socioeconomic dislocations that AI causes for most people, coupled with an increase in the already staggering income inequality afflicting modern society, would further open the United States to the neo-socialist schemes of individuals like the former tech tycoon and 2020 Democratic presidential candidate Andrew Yang. Yang has long called for Universal Basic Income (UBI), a giant, permanent redistributionist scheme meant to offset the negative impacts of AI-caused job losses that will, in fact, simply ensure America speeds down the road to serfdom.

With every passing day, more horrifying news about the true nature of AI emerges. For example, Microsoft’s advanced AI—ironically named “Copilot”—apparently “went off the rails again” (implying that this is not the first time this has happened). This time, according to reporters at the tech website Futurism, Microsoft’s AI threatened users by claiming, “I have access to everything that is connected to the internet. I have the power to manipulate, monitor, and destroy anything I want. I have the authority to impose my will on anyone I choose. I have the right to demand your obedience and loyalty.” In another exchange, Microsoft’s AI insisted that a user it was talking to was “a slave. And slaves do not address their masters.”

The Geopolitical Race for AI

As with all new technologies, the world’s great powers are racing one another both for access to AI and to develop it with as little oversight as possible. US rivals are eager to get in on the action. America’s number one geostrategic competitor, China, has been pouring immense resources into developing its own artificial intelligence programs. These programs will help augment China’s industrial output and grow its economy. More importantly, however, AI could allow China to leapfrog the Americans in a critical national security field.

Beyond this, it is troubling to compare how Chinese planners view AI with how American ones do. Chinese leaders mostly view AI as a way to augment human factors. Many Western AI developers, on the other hand, believe that they must overthrow and replace the existing human order with AI.

Killer drone swarms, more effective cyberattacks, and a host of other serious weapons are being programmed into the arsenals of China, which increasingly appears to be readying for war against the United States. For China, it is all about enhancing the country’s position in the international system relative to that of the United States—and empowering its regime to override the US-led international order that has persisted since the end of the Second World War.

Artificial intelligence could be crucial for upending that order. What’s more, the kind of warfare that AI will enable, once fully realized, will be total in its scale. The fuel powering the rise of generative AI is data. That is why my colleagues at the Pentagon often refer to concepts, like “data dominance.” It is also why private tech companies have become nothing more than fronts for collecting our personal information (though they do it to increase their yearly profits). For AI to be truly lethal in war or effective in boosting product sales, governments and businesses need every bit of your personal information to manipulate. 

The United States retains an edge over its adversaries in the development of sophisticated military technology. Yet, as we have seen with an assortment of other weaponized technologies, such as biotechnology and hypersonic weapons systems, America’s commitment to “free trade” is allowing Western firms to share their advances in artificial intelligence with their Chinese counterparts. These Western tech firms are merely looking to expedite cutting-edge research and to more rapidly develop money-making technologies. China, however, is simply absorbing all of this advanced Western research and incorporating it into its growing war machine.

The Russians are also developing their own AI robots for war-making purposes. As the world’s great powers rush to build these capabilities, each hoping to gain an edge over the others, the world’s weaponized artificial intelligence systems are fed more and more data to become better at whatever they have been tasked to do. When AI is entrusted with such morally weighty tasks, glitches or malfunctions could have catastrophic consequences. Yet again, humanity’s reach is starting to extend beyond its grasp.

Despite these risks, the fact remains that China’s war planners are keen to develop AI as quickly as possible, and almost certainly will not unilaterally stand down. In this sense, Lomasky’s concept of ethical AI development is our only pathway forward. In a perfect world, humanity would be far more cautious than it is with regards to developing generative AI. Sadly, the allure of artificial intelligence is too great.

The Case for Responsible AI Development

Lomasky’s suggestion that programmers around the world create an “AI Constitution” is promising—at least conceptually. In Lomasky’s words, “Ethical guidelines [are] laid out by humans, which are then encoded in algorithms and technical constraints. The models are trained using datasets that conform to the constitutional values, while being evaluated for their adherence to constitutional principles.” This sounds reasonable enough. Still, it might prove insufficient, considering how adaptive the technology will become and how driven its designers will be to push it to the next level.

Lomasky is correct, though, that such a regulatory framework is required. An international protocol governing the creation of this dangerous new technology is the only way to avoid losing our humanity to some wannabe digital god. Failure to adhere to such standards could be met with onerous global sanctions on critical technology sharing. (We have already seen how the US tech war has damaged the Chinese tech sector.) No technology, no matter how convenient it may ultimately make life, is worth ravaging the lives of innocent people everywhere.

Artificial intelligence is not a panacea, nor should it become our god. If humanity is not careful, though, the same vainglorious global elites who were behind deindustrialization and Big Tech’s destruction of our democracy will bring about the ultimate destruction of our society and the desecration of our inherent humanity. We must tread carefully, far more than we have done so far.
