AI in a Free Society

In A Long View on Artificial Intelligence, my friend Rachel Lomasky makes the points I have long tried to make, and far better than I have done. She notes that the innovation and efficiency gained from AI will be like nothing we have ever seen, and I agree with her completely. Even Alan Turing, a genius by all accounts, had no concept of the wealth of benefits AI would bring to humanity.

As Lomasky, Asheesh Agarwal, Vinay Agrawal, and I all agree, AI will create innovation and abundance. People will use AI at work to find new and better ways to do their jobs or, indeed, to create new jobs. AI will make business-to-business manufacturing and ecosystems run better and more efficiently. Personally, I’m excited about making supply chains and infrastructure more effective, not because I want my Amazon package delivered on time, but because international trade becomes ever easier and more invisible to the customer. Even now, the US military is using AI to monitor and report on hazards and issues in the Red Sea, a major shipping thoroughfare, in order to ensure the safety and security of vessels transiting the area. And, yes, AI will allow for gains on the battlefield too, but it may also save lives as unmanned warfare becomes the norm (if we have to have war at all).

We are living in an age of abundance, not just in AI but in all technologies. Did we think we would carry a computer in our pocket every day, or have widespread satellite communication? Did we know we would have driver-assisted cars and delivery drones (even if we don’t have jetpacks yet)? Drones aren’t just for fun, either: they monitor difficult-to-reach places, support agricultural production, and even film news events. Did we know ten years ago that any of this would happen?

Even techno-pessimists will admit that AI will improve healthcare, among many other things, through new diagnostic tools and research. In their view, this still does not outweigh the loss of jobs or the potential for irresponsible or criminal activity online. But thankfully humans have the capacity to think and act in different ways, with different solutions. As Lomasky notes, competition between open-source and proprietary systems will become apparent as both kinds of companies and organisations enter the market. There will be a chance to evaluate and promote ethical frameworks. Companies with different codes of conduct and different approaches will compete for consumers of both machine learning and generative AI. Just as when one is shopping for clothes or cars, different values will be embedded in different organisations, and that will allow for differentiation and consumer choice.

However, none of this will be possible without a light-touch regulatory regime and human-led, rather than government-led, AI innovation.

Software entrepreneur Marc Andreessen probably has the best description of AI:

The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other—it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.

The human component is key here. AI isn’t autonomous or separate from us as humans. We made it and will use it to make the world a better place.

In his 1988 book The Fatal Conceit, Hayek describes the most critical error of socialism: the belief that “man is able to shape the world around him according to his wishes” without relying on dispersed knowledge. In today’s world of emerging AI technologies, I think that those who are trying to regulate, or over-regulate, this nascent technology are committing the same “fatal conceit”: trying to design the outcome of AI before it has even developed. In other words, this is the precautionary principle put into practice.

The subject of AI and freedom has come up often lately, in a number of discussions, articles, panels, and even recent testimony before the House Judiciary Select Subcommittee on the Weaponization of the Federal Government. I would suggest that there are three ways AI and freedom can be thought of together.

There is the freedom to innovate—to make things, create things, work with others, contract, develop, and buy and sell all kinds of technology and products, in this case as it relates to AI. The threat to this freedom is heavy-handedness, excessive regulation, and the use of the precautionary principle.

There is the freedom of speech, which includes the First Amendment, but also the freedom to talk, debate, agree, disagree, and have discussions. It is also the freedom to engage in knowledge creation, something humans have been doing for a long time. The threat to freedom of speech is regulation that determines “right” and “wrong” speech, as well as speech limited by bias.

Finally, there is the freedom of choice: the freedom to use, engage with or ignore, and buy or sell different products related to AI. In effect, this is the freedom to determine what one wants to use, see, or do with AI, and it protects the autonomy of the individual. The major threat here is the prevention of innovation and a market so limited that it offers only limited choices.

Andreessen has commented widely on the use and future of AI precisely because he is investing in new and emerging technologies and businesses. He believes that AI will be the gateway platform of the future. It will help solve problems, from civil engineering to cancer, but it will also save time and free up human capacity to do other things. Those things might be of all sorts: some productive and fruitfully challenging, others leisurely and restful.

Right now, there is widespread moral panic about AI. There is always a moral panic when new technologies emerge. Virginia Postrel noted this in her 1998 book, The Future and Its Enemies. In it, she describes the tension between dynamists, who support change, creativity, and exploration in the pursuit of progress, and stasists, who believe that progress must be controlled in a top-down manner through careful and cautious planning. The tension is bipartisan, and the US did not become the world’s leading digital economy by taking the stasist route. The Internet had, and still has, mostly light-touch regulation.

Adam Thierer of the R Street Institute recently noted an even older and more prescient book, Technologies of Freedom by Ithiel de Sola Pool. Written in 1983, this book offers four guidelines for “electronic” speech:

1. The First Amendment, which applies fully to all media.

2. A legal and social understanding that anyone may publish at will.

3. Recognition that enforcement must be after the fact, not by prior restraint.

4. Regulation as a last recourse, with the burden of proof on the would-be regulator, so that a free society maintains the lowest possible regulation of communication.

The choices America makes with respect to AI will either reinforce or undermine our commitment to freedom. The techno-panic we see today is in many ways normal, but it is important to make sure that AI isn’t stifled in its early stages, so that the market can offer more AI options to choose from. Of course, I personally want a classical liberal AI chatbot, but that is just me.

In the US, prosecution under criminal law will address crimes such as sexually explicit images of children generated through AI, and other criminal offenses that arise can be prosecuted under pre-existing law. International agreements will be made, most likely through coordination and communication with allies. Engagement in warfare will change, and rules of engagement will be outlined, though they may not always be adhered to. The best and brightest will work together, create guidelines and codes of ethics, and enable the production of even more abundance.

AI isn’t new, and neither is the human creation of technology or the panic that accompanies it. We need to ensure that the three freedoms I’ve outlined are protected so that a thousand flowers may bloom in an AI world.