
Algorithmic Fortune-Telling

Inoculation against smallpox was introduced to Europe and North America in the early 1700s. It was met with considerable skepticism (if not outright hostility), and explicitly banned in some countries. The process, however, was finally redeemed through statistics. In 1766 the Swiss mathematician Daniel Bernoulli extracted the relevant rates of susceptibility and fatality from census records and predicted an average improvement of 14 years of life expectancy for those inoculated against smallpox. Admittedly, Bernoulli’s data was flawed, and his calculations were based upon a number of questionable assumptions, such as simply ignoring the potential risks of inoculation. But it provided the model for many of the processes that were to come—abundant new sources of data (originally collected for very different purposes), fed through the latest mathematical developments in probability and statistics, and resulting in a definitive shift in social perception and public policy. It is perhaps also notable that Bernoulli was roundly criticized at the time for reducing the all-too-human realities of sickness and death into an abstract calculation of costs and benefits.

Bernoulli, of course, only had access to a limited range of information, collated over several years, and largely for the purposes of calculating government annuities. Today, our data is harvested from a bewildering variety of sources, including smartphones, internet searches, social media posts, and GPS tracking. In their new book, The Age of Prediction: Algorithms, AI, and the Shifting Shadows of Risk, Igor Tulchinsky and Christopher Mason—the former the CEO of a quantitative investment firm, the latter a geneticist and computational biologist—take us on a tour of this brave new world of unprecedented predictive power. Yet, much as in Bernoulli’s day, it is again epidemiology that most vividly illustrates our dawning reality. During the recent pandemic, predictive algorithms were deployed not only to fast-track vaccine development, but also to stimulate market prices and reboot the economy. As Tulchinsky and Mason put it:

Stocks and markets have been around since the early seventeenth century and vaccines since the eighteenth century, yet never before have the algorithms used in drug and vaccine discovery and the algorithms that shape much of the market had so much in common. These algorithms, fed by oceanic amounts of data and increasing computer power, define the Age of Prediction.

Granted, none of these tools actually helped us to predict the COVID-19 outbreak; the point is rather that—for better or worse—these tools are now involved in almost every aspect of our lives.

Unsurprisingly, therefore, the best parts of The Age of Prediction are the ones covering recent advances in genetic sequencing. The first human genome project took over a decade to complete; the same work is now routinely done in a matter of hours. Additional information can also be gleaned from traces of the various micro-organisms infesting the human body (the microbiome), and from the collective genetic material of entire microbial communities (the metagenome). On the one hand, this leads to more bespoke options for consumers, tailored to their individual genome:

Insurance is also being personalized, much like health care. No two customers are alike, and differences among them will only grow. Insurers will want that stream of data coming from your wearable or your regular genetics test. Gained a few pounds over the holidays? That information, along with your cholesterol level and blood pressure, could be transmitted to your insurer, which may contact you about a healthier diet or offer the number of a nearby gym while slightly increasing your premium.

On the other hand, leave some hair at a crime scene, and scientists can pinpoint the city where you were born; calculate your probable height, weight, and physical appearance; predict your tone of voice, the thickness of your beard, and what you last ate; and even make a pretty good stab at how you felt when you committed the crime. And the sheer volume of genetic information potentially available in public spaces raises significant questions of privacy, ones that current legislation is utterly unequipped to address.

As prediction improves, the risks associated with uncertainty decline; yet at the same time, these improvements open up new possibilities and technologies, creating new risks and spawning further uncertainties of their own. In the end, it seems we can only make predictions about our predictions, trust the science, and hope for the best.

Our current age is not the first to suppose that it is on the brink of finally mastering uncertainty. The problem tends not to be gathering enough data, but rather gathering the right sort of data, something best appreciated in hindsight (it was the unprecedented array of data driving Newton’s calculations that convinced the scientific establishment of his time that there was nothing left to discover). Tulchinsky and Mason provide an engaging overview of just how rapidly our predictive powers are expanding, but have little to say on what makes the Age of Prediction epistemically privileged in this respect. What is clear, however, is the extent to which these constantly evolving predictive tools are becoming a ubiquitous and invasive element of daily life. And this raises social and political questions, not just of surveillance and privacy, but also of power and control.

It is no accident, therefore, that tech companies are suddenly bewailing the potential dangers of artificial intelligence and demanding strict government regulation—not because they seek wise and benevolent constraints on their own research, but because the additional financial and bureaucratic hurdles will make it more difficult for smaller competitors to enter the market. Moreover, some predictions can become self-fulfilling, in the way that opinion polls build momentum for the front-runner, or the way we all end up watching the “most popular” Netflix recommendations because it’s easier than navigating the rest of the catalog. Under the tyranny of choice, prediction becomes a welcome substitute for decision. One wonders if this may be a feature rather than a bug, since for all our data-crunching algorithms, the best way to predict someone’s behavior is to determine their decisions in advance.


None of this, however, seems to trouble Tulchinsky and Mason. The Age of Prediction is light reading, with the Reader’s Digest approach to history and philosophy that one has come to dread in contemporary popular science. Much like the data-crunching algorithms at the heart of the narrative, the book offers plenty of surface detail but shows little interest in the underlying depths. The authors repeatedly announce their interest in the unintended consequences of ever greater prediction—the sort of moral hazard whereby seat belts encourage reckless driving, or a low genetic risk of heart disease justifies unhealthy eating—only to return again and again to the general public’s “paradoxical” resistance to sharing their personal data. They write:

Even today, the feedback loop created by individuals reacting to demands for intimate data or real-time monitoring has spawned a wide range of new risks, which may include people not only refusing to share their own data but also rejecting a hold on the very realities that underlie that risk. Ironically, this can limit predictions and increase risk, but still satisfy a petulant need for some to feel they have more control.

The problem it seems is not that these new technologies are invasive or disruptive; the problem is rather that people may impede the ability of these new technologies to better govern almost every aspect of their lives.

Throughout, one encounters the polite bemusement of those who have never expected to find themselves at the sharp end of a predictive algorithm. The authors conclude their discussion of forensic profiling with the prediction that “many people will accept a loss of privacy for the actual reduction in risk as long as someone is watching how these algorithms and data are being used,” delivered with the absolute confidence of those who will be doing the watching.

As with many of the recent books on the emerging field of artificial intelligence, therefore, there is often more to be learnt from what the authors do not discuss. Walter Lippmann famously made the case that ordinary people lack the expertise to rule themselves. Tulchinsky and Mason rightly take issue with his naked elitism, but nevertheless conclude that:

Lippmann was essentially right about a democratic citizenry uninformed on many important matters, in particular economics, science, and foreign policy. But he may have misjudged the potency of his solution, which was to find experts to tackle issues that in some cases might have no clear-cut solutions, that run roughshod over popular conceptions of fair play or morality, or that require sacrifices from voters. (Think of the difficulties of doing anything about a relatively straightforward predictive problem such as climate change.)

Such presuppositions underlie almost every discussion in the book. Since everybody agrees on how to solve these “straightforward predictive problems,” this is just further evidence that we were right all along, and not the consequence of the inevitable feedback loop that arises when prediction begins to drive the very data upon which it is based. And the more that social and political problems become amenable to the all-powerful algorithm, the less we will need to waste our time asking the opinion of an uninformed electorate.

For instance, there is no question for Tulchinsky and Mason that the ethics of self-driving vehicles is a matter of utilitarian calculation: how best to balance the preservation of perfectly fungible human lives—perhaps modified by regionally specific variables, depending upon how the locals feel about their pets and their old people (I look forward to the car dealer explaining how my new hatchback will willingly sacrifice my family for the greater good). Likewise, they touch on the potential impact of this Age of Prediction on the economy—it is estimated that the full implementation of those same self-driving vehicles will lead to the loss of 4.5 million jobs in the United States—but seem confident that some form of universal basic income will allow the displaced to spend more time with their families and engage in community service (the only concrete prediction offered concerning the possibility of new jobs supporting our high-tech future is that they will be outsourced to developing countries).

And that is when one finally grasps the genius of the book. There is something so inoffensively superficial about the discussion, such a pronounced lack of interest in its social and political consequences, something so comfortably predictable about its overall perspective, that perhaps the book itself has been written by an algorithm: a brilliantly postmodern deconstruction of its own central thesis, one that will undoubtedly be celebrated by all the cognoscenti in the publishing community who have not yet been replaced by robots. It will probably be a great success.
