
The Chess God and Its Implications for Our Legal and Political Future

For law professors and political enthusiasts, it is easy to assume that the most important event of the year was political, like Neil Gorsuch’s confirmation or the tax reductions that just passed. But in our age of technological acceleration, it is almost certain that some technical innovation will have far more profound effects in the long run. And this year the most important technological event was the performance of AlphaZero, a computer chess program that beat the best previous programs without losing a game.

Of course, computer programs have been beating humans ever since IBM’s Deep Blue beat Garry Kasparov. But all these computer programs depended on incorporating human knowledge into the content of their programs—either algorithms incorporating human evaluations of positions and pieces or neural networks that learn from previous games played by humans. These programs had the edge over humans in large part because they could use computing power to calculate many more moves and thus see farther into the possibilities on the board. What is extraordinary is that AlphaZero was simply given the rules of the game and played thousands of games against itself, getting better through a process of reinforcement learning over neural networks. It thus depended on neither human evaluations nor human data. Yet it achieved its dominance after only a day’s learning and without looking as far ahead as other programs.
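The self-play idea can be sketched in miniature. The toy below is an illustrative sketch only, not DeepMind's method (AlphaZero itself combines deep neural networks with Monte Carlo tree search); it learns a small game of Nim purely by playing against itself, given nothing but the rules:

```python
import random

# Toy self-play reinforcement learning on Nim: a pile of stones, each
# player takes 1-3 per turn, and whoever takes the last stone wins.
# Both "players" share one value table and improve only by playing
# against themselves -- no human games, no human evaluations.

ACTIONS = (1, 2, 3)

def train(pile_size=10, episodes=50000, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = {}  # (stones_left, action) -> value from the mover's perspective
    for _ in range(episodes):
        pile, history = pile_size, []
        while pile > 0:
            legal = [a for a in ACTIONS if a <= pile]
            if rng.random() < eps:           # explore occasionally
                a = rng.choice(legal)
            else:                            # otherwise play greedily
                a = max(legal, key=lambda m: q.get((pile, m), 0.0))
            history.append((pile, a))
            pile -= a
        # The player who made the final move won; walk backward through
        # the game, alternating the reward's sign between the two sides.
        reward = 1.0
        for state_action in reversed(history):
            old = q.get(state_action, 0.0)
            q[state_action] = old + 0.1 * (reward - old)
            reward = -reward
    return q

def best_move(q, pile):
    """Greedy move from the learned value table."""
    legal = [a for a in ACTIONS if a <= pile]
    return max(legal, key=lambda m: q.get((pile, m), 0.0))
```

After training, the learner rediscovers the classic winning strategy of always leaving the opponent a multiple of four stones, without ever being told it.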

In my view, the emergence of this form of AI has many implications, including for law and politics.

  1. The architecture of the human mind has inbuilt limitations compared to AI. One of the striking facts about AlphaZero was that its understanding of chess was much deeper than that of humans. For instance, it was consistently willing to sacrifice pieces for positional advantage when even a grandmaster would not have done so, because he would have been impeded by the heuristic anchor of the value assigned to the pieces. Even the best humans seem to need such anchors because some subtleties of position are beyond their powers of pattern recognition. AlphaZero, in contrast, is a Chess God, or as one grandmaster said, it played not like a machine but like an alien who came to earth. For those who know chess, I recommend watching this game to get a sense of AlphaZero’s divine style.
  2. In an increasing number of domains, the trial and error of many minds will lose out to machines and more centralized forms of learning. AlphaZero gained more knowledge in a day than did all players collectively in a thousand years.
  3. We can look forward to new drugs and other medical breakthroughs, because similar programs can discern medically rewarding patterns that humans cannot. For instance, those who produced AlphaZero are planning to use similar machine learning to work on protein folding, which can help in the discovery of new drugs.
  4. Law may not be a completely formal system, but it has some formal subfields. Think about the rule against perpetuities and some areas of tax. Lawyers in these areas may well be machined away, as it were. Lawyers would be well advised to go into areas where law is fuzzy and politically driven. Rules promulgated by administrative agencies may be a good area.
  5. In a previous essay, I suggested that in most areas of law a computer and a human would be better than just a computer, because even in the formal domain of chess, computers plus very good humans could beat a computer alone, since they complemented one another. But it is not at all clear that humans could much improve on AlphaZero, and some domains of law may become almost wholly the province of AI.
  6. The implications of the increasing power of AI go far beyond law to any other kind of job that requires following formal rules. This kind of job displacement may affect white-collar workers more than blue-collar workers. Politics thus may be in for a bumpy ride. Not, I hasten to add, because there will be no jobs left for humans to do. Humans like dealing with other humans; there is a great future in personal services. But these kinds of jobs may require very new kinds of skills and training. Much of the disruption of politics comes in transitions from one kind of economy to the next, and innovations like AlphaZero suggest we are on the brink of one of the greatest and fastest transitions in human history.
Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

on December 26, 2017 at 10:32:55 am

An Alpha Go law program would be able to trace precedent, with results depending on how previous cases are uploaded into it, narrowly or broadly. You're at the mercy of the programmers there. Would it be able to find cases that are from different areas but legally analogous? The only thing an Alpha Go law version would remove is not lawyers but the need for as much legal research, inasmuch as you trusted its reading of a case. Lawyers' jobs are not so much in jeopardy as those of paralegals and clerks.

Further, chess is static; the law is not. Could an Alpha Go law version take previous precedent and apply it to a new circumstance? Assuming AI is capable of creating legal analogies, we would always be stuck (if we just chose to use the AI's recommendations) with using old approaches to deal with new problems, which potentially might need new solutions.

"I say, the first thing we do, is unplug all the lawyers!" - Richard the Butcher

mad_kalak
on December 26, 2017 at 10:54:42 am

The practice of law already involves numerous "fill-in-the-blanks" tasks. These jobs are being rapidly taken over by computer technology and/or, for the near term, by low-cost foreign labor. This trend can only accelerate with the advancement of AI, which suggests that the recent decline in law school enrollments is a smart response of the spontaneous market order and may be a welcome long-term trend. For decades law has drained away, misused, and underemployed too much of the country's superior talent, talent that would have been better steered toward occupations more personally and socially rewarding.

Joint law-business and similar programs may be a way for new lawyers to counter the trend. Older lawyers will need to embrace fields of practice which require superior "people skills," such as negotiation, mediation and litigation. "People skills" do not come naturally, for the most part, to the personality-types who have been attracted to law schools since law became a big business rather than a personal profession, i.e., the practice of law as we have known it since the 1970's.

The economics of medicine, already severely damaged by dirigisme and insurance monopoly, may be thoroughly revolutionized and upended by AI, so that doctors go the way of pharmacists and nurses and become administrative middle-men, if not functionaries, in executing the superior decisions of AI in diagnosing illness and managing treatment.

timothy
on December 26, 2017 at 11:06:55 am

Hadn't heard about Alpha Go. Pretty cool.

That said, I'd generally read this sort of essay as a kind of Swiftian parody--except then I'd expect it to be funnier. If McGinnis is sincere, I'd like to see an example of how AI could displace lawyers.

Quite often, the problem with law is not a lack of knowledge of constitutions, statutes, ordinances, rules, and precedent: the problem is deciding which should apply to a novel context. Since no set of facts exactly matches another set of facts, judgment is required. I don't see how AI would replace judgment.

More generally, it is hardly surprising that computers would excel in highly artificial worlds with arbitrarily constrained rules. In the real world of real complexities--including complexities you cannot anticipate at the time you're designing your optimization strategies--it is far from clear that AI has the advantage.

nobody.really
on December 26, 2017 at 13:22:39 pm

Like Twain's death, reports of AI's sophistication are greatly exaggerated. I do not doubt, and I certainly admire, the engineering brilliance that went into fabricating Alpha Zero. But such an achievement is at best a 2-dimensional rendering of the 3-dimensional figure that is the human mind, because two dimensions are all that can be seen in the Flatland of AI.

Computers calculate; they don't think. To the extent a process can be reduced to a sequence of calculations, no matter how complex, then AI looks like it can think, and it certainly can out-calculate the human mind. But the entire AI and neural research industries are based on assimilating the understanding of human thought to a model of complex, rule-based calculation, because that is a function they can replicate in machines. Researchers are like the drunk looking for his keys under the streetlight because the light is better there.

Call me when Alpha Zero wins at Fizzbin.

QET
on December 26, 2017 at 14:09:57 pm

I watched every game ever released by AlphaZero. It is truly amazing. It will have the greatest short-term impact where the “rules” are more well known. But long term, it won't be the law that the computers are analyzing, it will be the judges: knowing what kinds of arguments the judges will find most attractive and therefore being able to predict the outcome of the case from the briefs. That will happen before it can create the briefs itself.

Devin Watkins
on December 26, 2017 at 14:17:17 pm

For past AI, you are mostly right. But AlphaZero is a whole other type of creature. It beat the world's best at Go, a game that requires intuition more than any other game in the world. No other computer program could even beat the most basic pro. It is light-years beyond anything you have ever seen before in AI research. Fizzbin is far too easy for it to even be a challenge.

There is NO human understanding that it is creating rules based off of; that is the beauty of it. It is literally teaching itself, starting from tabula rasa.

Devin Watkins
on December 26, 2017 at 15:03:52 pm

How is it different in the aspect that QET is speaking about? Instead of being taught the rules by programmers and using its superior computing power to calculate the odds that one chess move is superior to another (and thereby beat a human opponent), it instead taught itself the optimum way to play... but how? Well, by using its superior computing power to play again and again to calculate the optimum move in order to beat its opponents. In that respect, it's the same. All you're doing, if anything, is removing the bias of the programmers in a game with no subjectivity. The OP even alludes to this. Is it an advance? Sure. Practical applications exist, I am sure. In law and beauty pageants, not so much.

mad_kalak
on December 26, 2017 at 15:31:59 pm

The advantage is in its ability to adapt to new situations. Previously each new game or set of rules had to be independently coded by human programmers with their detailed, complex knowledge of strategy and tactics applicable in the specific situation. Now it learns by itself, without human intervention besides some very simple rules. This means it can be reapplied in new situations with little modification. They developed this AI to win at Go, not chess. And yet it was still the best at chess with very few modifications. The only modifications made were making it even more generic so it didn't cheat with a few of the special rules in Go (like its symmetrical nature or lack of draws).

There is a set of AI tasks known as AI-complete, because to solve them is to have AI that is generally as intelligent as humans. One classic example is natural language understanding. This is the largest advance toward solving those kinds of problems, due to how generic the solution is. It is a more general-purpose AI than has existed before, and what is the human brain but a general-purpose thinking machine? Think about the idea of computers reading judges' opinions, briefs, and contracts and really understanding the meaning. The possibilities are endless.

It is simply NOT creating rule-based understandings of existing human knowledge. It is truly creating new knowledge through inductive reasoning. All prior computer AI was based on deductive reasoning from a set of known rules.

Devin Watkins
on December 26, 2017 at 17:33:21 pm

This may be nothing more than the "apparent," yet illusory, effect of employing an Intel Core i7-8700K as opposed to an old Intel 8080 processor. YEP, it can "compute" a zillion times more possible outcomes than can the old 8080, BUT it is STILL computing serially, and its effectiveness is limited, as nobody.really and others have said, by the "static" rules of the challenge.

Consider McGinnis' claim: "For instance, those who produced AlphaZero are planning to use similar machine learning to work on protein folding, which can help in the discovery of new drugs."

This assumes that the human microbiological system is static. This is doubtful, at best. As we do not understand all of our biochemistry, and certainly not at the level of sub-cellular interactions / chemistry, it is presumptuous to make such a claim. As an example, for decades many had argued (Richard Dawkins among others) that human evolution was filled with erroneous starts / stops / ERRORS. They made this claim based upon *their* (mis)understanding of "junk DNA" (DNA fragments and / or chains that do not appear to serve a function). Lo and behold, as instrumentation and techniques advanced, we found that rather than "junk," a fair number of these are quite functional and serve as response agents to illness, disease, etc. (Not all, but the number is growing.) How then will Alpha Go address this? How will it create new drugs if it is a) unable to *compute* what junk DNA will do, and b) unable to compute what the multitude of interactions with other protein / peptide chains and sub-cell chemical interactions will be?

Now we can assume that, as in chess, Alpha Boy will simply compute its bloody head off and do it far more rapidly than anything we can presently imagine. BUT - a Nobel Laureate has calculated that in order to create a 100-element peptide chain in the human body and NOT have it be destructive of the DNA / body, the odds against it are 1 in 10 to the 40,000+ power. That number is larger than all the microseconds in the universe since the Big Bang. Hmmmm - Alpha is gunna be a busy boy.

Moreover, without an understanding of the interactions between all the elements of human biochemistry, it is doubtful how much superior to standard human drug research it will be. YEP, Alpha Boy can process things faster - and in a number of instances, this will work to the advantage of both drug companies and patients - but how far should one go when operating in a void of knowledge? How will Alpha Boy prove out its new drug concoctions?

Nope, unless, and until *human* understanding of human biochemistry is greatly increased, Alpha Boy, may serve only to eliminate a fair number of PhD Biochemists.

One wonders how Alpha Boy can perform inductive reasoning in the study of DNA when there is an apparent dearth of known and (at this point) knowable / observable premises.

There is of course, the "alchemists" option.

gabe
on December 26, 2017 at 23:33:58 pm

Actually that isn't the case at all. In fact, AlphaZero computes a thousand times slower than the previous best computer program, and still went undefeated. It thinks better, not faster.

Devin Watkins
on December 27, 2017 at 05:47:20 am

They say AlphaZero already understands Justice Roberts' Sebelius decision better than Roberts himself. So I don't think it's too long before AlphaZero can write its own opinion upholding the individual mandate or any other law or tax. The taxing clause, and the necessary and proper clause, were almost written with AI in mind to be able to understand and enforce them. The Chinese claim they have AI that can enforce Chinese democracy, and the Saudis claim their AI can enforce Sharia without any human intervention.

Smith
on December 27, 2017 at 10:36:26 am

Devin:

I suppose it depends upon what "is" is or what thinking is.

It would be interesting to see what the "code" is, what instructions were given to Alpha Boy.

I am reminded of an experiment conducted by (and cited by both Dawkins and Stephen Jay Gould) that purported to demonstrate that *blind* evolution is not only possible but undeniable in which a computer program accelerated a hypothetical set of evolutionary mutations and "selected" only those that were, shall we say, proper.

Turns out it all came down to the programmers' coding and the *limits* placed upon the "discretionary" THINKING of the computer.

I'll wait and see what Alpha Boy comes up with. If he comes up with a drug to fix my blown discs, then he has my vote. Until then.......

gabe
on December 27, 2017 at 11:25:14 am

I was talking in terms of speed of processing: Stockfish (the prior computer chess champion) processed 70 million positions per second, while AlphaZero looked at only 80,000 positions per second. It's not just about who can calculate faster.
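Taking those quoted figures at face value, the gap in raw search speed is nearly three orders of magnitude:

```python
# Quick arithmetic on the per-second search rates quoted above.
stockfish_nps = 70_000_000  # positions per second (as quoted)
alphazero_nps = 80_000      # positions per second (as quoted)
ratio = stockfish_nps / alphazero_nps
print(f"Stockfish examined roughly {ratio:.0f}x more positions per second")
```

In other words, AlphaZero won while examining roughly 875 times fewer positions per second, which is the point: the quality of the evaluation, not the speed of calculation, carried the day.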

Devin Watkins
on December 27, 2017 at 21:54:27 pm

QET,

I think you are pretty close to the mark. At its most fundamental level (I mean, above the level of manipulation of binary strings), AI is some form of pattern recognition. This is not a knock against AI; pattern recognition is very important to cognition. It enables classification, permits the identification of surrogate data when primary data is missing or corrupt, and is much more efficient than item-by-item analysis. It is what allows humans to read efficiently without sounding out every syllable. But it is not in itself thinking. One would suspect that there are some animals that have very sophisticated pattern-recognition capacities but do not otherwise reason.

The ability of AI applications to recognize patterns in complex data, and to optimize outputs according to some criterion, makes AI very useful for some applications. One of the inherent strengths of data-trained pattern-recognition algorithms is that they inherently evaluate the information content of data. Humans vary widely in their ability to do this, and as a result tend to rely too much on some data and ignore other data, leading to such undesirable outcomes as overuse of antibiotics, lousy investments, and counterproductive regulations. But, as you point out, pattern recognition is not identical to thinking. Nor is the ability of an AI application to perform simulations with alternative data sets or optimize some prescribed outcome.
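A bare-bones illustration of what "pattern recognition without thinking" looks like in code (an illustrative sketch, not any production system): a nearest-neighbour classifier that labels a new point by matching it to the closest known example, with no reasoning involved at all.

```python
# 1-nearest-neighbour classification: label a new point with the label
# of its closest known example. Pure matching -- no rules, no inference.

def classify(examples, point):
    """examples: list of ((x, y), label); returns label of nearest example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(examples, key=lambda e: dist2(e[0], point))[1]

# A handful of labelled points standing in for "training data".
known = [((0, 0), "low"), ((0, 1), "low"), ((5, 5), "high"), ((6, 5), "high")]
```

A point near the "low" cluster gets labelled "low" simply because it resembles those examples, which is exactly the distinction drawn above: classification by resemblance, not judgment.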

Of course, AI has some challenges before it. One is that it does pattern recognition extremely well, and one particular type of pattern recognition is the socially and politically reviled activity known as "profiling." Assume that you used an AI application to determine where and how best to deploy law enforcement assets. One can imagine that the outcome of such objective data analysis might be objectionable in some circles.

Another limitation is that AI cannot "want" something that is abstract, therefore it cannot have abstract values. One might wonder what an AI application would have recommended at the Wannsee conference, or in dealing with the issues that preceded the Holodomor. What would AI have advised Churchill regarding the French fleet at Oran?

Perhaps the worst attribute of AI is that, being an objective processor, it has no moral responsibility. There is a group of thinkers who subscribe to the concept of "human biodiversity" and believe that traits of human intelligence, personality, and behavior are largely genetic, and therefore subject to prediction. It is quite easy to see how AI would legitimize the argument that we can predict which persons will be prone to criminal or antisocial behavior. We do this already in whispers. Do we want it to be a thing, to have some very proficient and perhaps quite accurate computer algorithm single out those who might commit crimes but have not yet done so? One of the risks of AI is that, like so many other concepts in human history, it might serve as a convenient excuse for humans to do things to each other that they should not.

z9z99
on December 28, 2017 at 13:32:44 pm

Devin:
“The adva[n]tage is in its ability to adapt to new situations.”

Z:
Z’s comments on pattern recognition are spot-on and they reminded me of an incident from commercial aviation where the deficiencies of AI’s pattern recognition were exposed.

During the introductory phase of the A330 (as I recall it was the A330), Airbus decided to show off the new plane and arranged for a quick fly-by at a small airport near Toulouse. The pilots, highly experienced Airbus veterans, decided to perform a routine known as Alpha Max, in which the aircraft simulates a landing by approaching the runway at very low speed, at very low altitude (200 feet approx.), in a nose-high attitude (not unlike a Lipizzaner stallion on its hind legs, but not as severe). The pilots executed this maneuver, as they had done countless times before, only to find that the A330's "pattern recognition," its AI, if you will, reduced engine power at the *critical* moment when the Alpha Max technique required a full and immediate power upsurge.
It turns out that the A330's AI "recognized" a pattern. It thought that the plane was landing, as almost all factors would indicate that it was doing so. BUT it was not; it was "showing off," or, more accurately, the pilots were showing off. Consequently, the A330's AI *refused* to allow the pilots to rapidly increase engine thrust, and the plane crashed into a hillside, killing a number of invited guests.

What does this tell us about AI and its vaunted pattern recognition? (BTW: there are 3 additional examples of similar "pattern recognition failures" that I shall not cite here.)

1) AI may recognize a pattern, BUT it may not be the correct pattern.
2) In instances where it may recognize the correct pattern, it may not be able to make the *correct* choice, as it is, by definition, and absent human input of *values*, value-free. Consider a case where a self-driving automobile has somehow *learned* that accidents are sub-optimal outcomes and adversely affect the performance of the vehicle. Confronted with a situation where the pattern indicates an imminent collision into another vehicle, for example a large semi-truck, or into a crowd of pedestrians, what does the "pattern" itself tell the AI to do?
3) Absent values, could it not choose to hit the pedestrians?
The above would indicate to me that this accelerated, yet still *serial*, processing of information falls somewhat short of *thinking*.

Now I’ll go and get in my 22-year-old, 4-cylinder Ford Ranger in the hopes that even in the snow I will be able to both recognize patterns and presumably make the right choices.

gabe
on December 28, 2017 at 19:53:32 pm

Better a computer than a green law clerk.

Alana
on December 28, 2017 at 23:53:26 pm

Yes, to an extent the computer AI is just pattern recognition. But to an extent, all intelligence is just pattern recognition. In the plane example, the pilots thought one pattern was occurring, and the AI thought another was. The AI was wrong in this case. It should have recognized that, being that far off the ground, it couldn't have been a landing. It failed at identifying the proper pattern. And no doubt AIs make mistakes or errors, but so do people. There should always be a way of shutting off the AI, just in case. But eventually we will get to the point that computer AIs are better at identifying the right pattern than the humans are. They will never be perfect, but all they need to do is be better than humans. We are not there yet, but one day....

There are only two types of reasoning, inductive and deductive. Computers are very good at deductive reasoning (give them the logic and they will find the answer). They are very bad at inductive reasoning (identifying the pattern or "rule" within a partial set of data). The better they get at identifying patterns generically, the closer they are to true AI. Previous AIs have done it for things like faces, but in a very controlled environment. We need AI that is more generalized. Allow it to derive things like the principles of mathematics or physics. These are just pattern-recognition problems at their heart. But we don't have an AI yet that can derive quantum mechanics from a set of experimental data.
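The two modes can be contrasted in a toy sketch (illustrative code under simple assumptions, not drawn from any real AI system): deduction applies a rule that is already given, while induction recovers a hidden rule from examples alone, here by a least-squares fit.

```python
# Deduction vs. induction in miniature. The "rule" here is a simple
# linear law; real inductive problems are noisier and far less obvious.

def deduce(rule, x):
    """Deduction: apply a known rule to new input."""
    return rule(x)

def induce(points):
    """Induction: infer slope/intercept from (x, y) examples
    via ordinary least squares."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

examples = [(x, 3 * x + 2) for x in range(5)]  # hidden law: y = 3x + 2
slope, intercept = induce(examples)            # recovered from data alone
```

Given only the five example points, `induce` recovers slope 3 and intercept 2 exactly; the inductive leap is precisely the part that, as noted above, computers have historically found hard outside controlled settings.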

Obviously an AI designed for driving a car is going to be put in situations like that during training and testing. And as long as the programmers inform the AI that hitting the pedestrians would be a mistake (that is something that would have to be added by humans), then the AI will learn from that and not do it.

Devin Watkins
on January 11, 2018 at 13:42:59 pm

[…] C. Dix Professor of Constitutional Law at Northwestern University’s Pritzker School of Law, posted a short article about the booming power of AI and its potential impact […]

Who Decides AI's Role In Human Governance? | Copy Paste Programmers
