
The Dangers of Policing by Algorithm

The 2002 science fiction action film Minority Report, based on a short story by Philip K. Dick of The Man in the High Castle fame, depicted a form of policing with the capacity to predict, with certainty, who would commit murder. As told in the film, the use of the system in Washington, D.C. successfully reduces the murder rate to zero, encouraging federal officials to consider extending it nationwide. Just one problem: the system was subject to tampering and misuse for political and criminal purposes, putting the inconvenient innocent away as well as the pre-guilty.

It turns out “big data” and sophisticated algorithms are bringing us closer to Dick’s “pre-crime” law enforcement than we might have imagined, helping to erode the presumption of innocence in the name of crime prevention. Law enforcement agencies across the country are increasingly turning to vast data-mining programs and algorithms for something called “intelligence-led policing” (ILP) that ostensibly helps departments predict not only where crime might occur, but also the identities of potential criminals.

ILP builds upon principles first used in the 1990s as part of “CompStat” and other data-driven policing practices. CompStat, credited with helping to break the back of New York City’s crime problems under then-Mayor Rudy Giuliani, directed police resources to crime “hot-spots” where individuals engaged in chronic offenses against property and persons could be arrested in the act. The program, since replicated in jurisdictions across the country, was intended to improve public safety through data-informed policing practices that disrupted patterns of criminal behavior.

ILP sounds a lot like CompStat but isn’t. Rather than using aggregated data to direct police to areas of high criminal activity, ILP identifies individuals who, on the basis of their criminal records, socioeconomic status, neighborhood, and other factors gleaned from social media activity, might represent an increased risk to the community. Those individuals, including minors, are then targeted for increased scrutiny by police that, in some cases, borders on or becomes actual harassment. ILP thus goes beyond CompStat by monitoring not crime itself but the threat of crime from specific individuals, inferred through the analysis of vast stores of data.
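To make the contrast concrete, here is a minimal, purely hypothetical sketch (in Python) of the shape such person-level risk scoring might take. Every feature name, weight, and threshold below is invented for illustration; no actual vendor’s model or department’s criteria are represented.

```python
# Hypothetical illustration only: every feature, weight, and threshold here
# is invented to show the *shape* of person-level ILP risk scoring, not any
# real vendor's or department's logic.
from dataclasses import dataclass

@dataclass
class Subject:
    prior_arrests: int               # criminal-record factor
    neighborhood_crime_rate: float   # area-level factor (offenses per 1,000 residents)
    flagged_social_posts: int        # posts matching watchlist terms
    on_probation: bool

def risk_score(s: Subject) -> float:
    """Combine individual and area-level factors into a single score."""
    score = 2.0 * s.prior_arrests
    score += 0.5 * s.neighborhood_crime_rate
    score += 1.0 * s.flagged_social_posts
    if s.on_probation:
        score += 3.0
    return score

def flag_for_monitoring(s: Subject, threshold: float = 8.0) -> bool:
    # Anyone at or above the invented threshold lands on a monitoring list,
    # whether or not any new offense has occurred.
    return risk_score(s) >= threshold
```

Note what even this toy version makes plain: nothing in the score requires a new offense. Area-level and associational factors alone can push a person over the threshold, which is exactly the shift from policing crime to policing the threat of crime.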

So, what does intelligence-led policing look like in practice?

The Tampa Bay Times conducted an investigation into the use of ILP by the Pasco County Sheriff’s Department to increase the monitoring of individuals determined to be at higher risk of committing criminal offenses. What accompanied the monitoring were frequent, usually unannounced visits by officers, and increased fines and arrests for petty crimes, such as allowing a teenager to use nicotine at home or keeping chickens in a backyard. A number of individuals targeted by ILP and their families experienced this as harassment, opting to leave the county entirely to escape police monitoring. Even if the strategy worked as hoped, it seems clear this would be a matter not of preventing crime but shifting potential crime elsewhere.

Fresno, California’s police department has gone further by embedding intelligence-led practices in the city’s 911 call system. When an emergency call comes in, operators consult a program that gives them a “threat score” for the address and residents involved to prepare officers and other first-responders for possible problems. The civilian parties may not be, and almost always are not, aware of the threat level assigned to them, ramping up the chances of miscommunication and misunderstanding between them and the police. A Fresno city councilman who asked in an open hearing for his own threat assessment learned that while he himself had been rated “green” (low-risk), his residential address received a “yellow” (medium-risk) score. The department representatives could not say for certain why the councilman’s address was rated a higher risk but speculated it may have had to do with the actions of a previous resident.
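A rough sketch may help show how a dispatch-time lookup of this kind could produce the councilman’s split result. The address history, resident flags, and color thresholds below are all invented; the actual products keep their logic proprietary.

```python
# Hypothetical sketch of a dispatch-time "threat score" lookup. The address
# history, resident flags, and color thresholds are all invented.
ADDRESS_HISTORY = {
    # History is keyed to the *address*, so it persists across occupants.
    "123 Elm St": ["noise complaint", "weapons call (previous resident)"],
}

RESIDENT_FLAGS = {
    "J. Smith": [],  # the current occupant has no flags of his own
}

def color(n_flags: int) -> str:
    """Map a flag count to the green/yellow/red scheme described above."""
    return "green" if n_flags == 0 else ("yellow" if n_flags < 3 else "red")

def dispatch_screen(address: str, resident: str) -> dict:
    """What a 911 operator might see before officers are sent."""
    return {
        "resident_level": color(len(RESIDENT_FLAGS.get(resident, []))),
        "address_level": color(len(ADDRESS_HISTORY.get(address, []))),
    }

print(dispatch_screen("123 Elm St", "J. Smith"))
# {'resident_level': 'green', 'address_level': 'yellow'}
```

Because the history is keyed to the location rather than the occupant, a prior resident’s record raises the score for whoever answers the door, just as the department representatives speculated.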


Beyond the potential for abuse, error, and increased use of aggressive policing practices, ILP raises basic issues of fairness in the way it is used, as well as serious constitutional questions of due process and the presumption of innocence.

The first issue that has to be addressed is whether such programs are intelligible to the police forces that use them. As has been noted in the context of artificial intelligence more generally, it is often difficult for the programmers of sophisticated algorithms to understand how these systems reach their conclusions. If the programmers don’t know (and often won’t share what they do know in an effort to protect proprietary knowledge), police users of the information cannot possibly know, much less independently assess, the validity and reliability of the reports they receive.

This challenge is reminiscent of the broader debate over the use of algorithm-driven risk-assessment programs in the criminal justice system. Civil liberties watchdogs and criminal justice reform advocates have argued that such programs are inevitably biased on matters of race, neighborhood, and class, leading to biased decisions about whether to detain a charged subject and about sentence lengths for those convicted. ILP further raises the stakes in this debate: potential bias based on individual characteristics, criminal history, and other factors is being extended not to matters relating to crimes that have actually been committed (as in pre-trial or sentencing decisions) but to crimes that might be committed.

Criminologists sometimes describe the criminal justice system as being “sticky”—once you have a criminal record, your chances of re-encountering the police and courts because of probation/parole violations or petty crimes, either of which can send you back to prison, increase. Criminal records also make it far more difficult to get a job or find stable housing. The criminal justice system, then, becomes part of a self-fulfilling prophecy of future criminal infractions. ILP may be becoming part of this ever-denser web of law enforcement and prosecution that drives recidivism and re-incarceration.

Another problem with ILP is that citizens are unaware of the breadth and quantity of the data being “scraped” from public sources and social media accounts that is then used to build the risk profiles. How many people are aware that the terminology or images they use on popular sites like Facebook and Instagram may wind up informing police attitudes and behaviors toward them without their knowledge or consent—and without an opportunity to challenge the conclusions being drawn? Negative data or inferences from that data threaten to become a kind of extra-judicial indictment that risks sweeping up unsuspecting citizens and exposing them, without their knowledge, to future, higher-risk encounters with law enforcement.
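As a hypothetical illustration of how scraped posts might be reduced to a “risk” feature, consider the following sketch; the watchlist terms and the matching rule are invented, and real systems disclose neither.

```python
# Hypothetical sketch: reducing scraped public posts to a single "risk"
# feature. The watchlist terms are invented for illustration.
WATCHLIST_TERMS = {"beef", "opps", "strapped"}

def flagged_post_count(posts: list[str]) -> int:
    """Count posts containing any watchlist term: no context, no appeal."""
    return sum(
        any(term in post.lower().split() for term in WATCHLIST_TERMS)
        for post in posts
    )

# A song lyric, a joke, or a quotation counts the same as a threat:
posts = ["quoting my favorite track: no beef tonight", "dinner was great"]
print(flagged_post_count(posts))  # 1
```

A bare keyword match like this cannot distinguish irony, quotation, or lyrics from intent, yet its output can feed a profile the subject never sees and cannot contest.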

Much has been made of how the Communist Party leaders in China are using big data and video monitoring to develop and apply “social credit scores” that determine ideological and behavioral reliability and distribute social and economic privilege accordingly. To my mind, there’s a real question whether ILP shares some of the characteristics of that system, subjecting individuals to a kind of predictive analytics that is meant to tell us who is likely to commit a crime and who isn’t. Such a system threatens to turn core elements of American constitutional freedom like the presumption of innocence and due process on their heads: henceforth, those with criminal records are at risk of being accused of crimes they have not yet committed (and might never commit) and then denied the opportunity to understand where and how the accusation originated or to challenge their risk status.

Given these potential pitfalls, it’s a wonder more questions haven’t been raised about ILP and its implications for freedom and justice. I suspect the answer lies in two factors. First, law enforcement occupies a privileged position in American society as the “thin blue line” that stands between civilization and chaos. What law enforcement asks for, it generally gets, and what it’s been asking for lately are resources that make it look more and more like the military, with ever-increasing levels of armament. ILP is another technique adapted from the military and foreign intelligence that further blurs the line between a security force that defends the nation against foreign aggressors and a police force with a mandate to serve and protect. ILP is one more step along a path that may end up turning citizens into enemies.

Second, the reality is that ILP is mainly a tool for combating the most serious crimes, and the communities most afflicted by these kinds of crime are populated mainly by low-income and minority groups. When ILP abuses occur, they are most likely to occur in neighborhoods far removed from the oversight and concern of the majority of Americans. This is another one of those problems that mostly affect “others” rather than ourselves, and it contributes to and exacerbates other inequities in communities that already bear the brunt of discrimination, social disadvantage, over-policing, and excessive incarceration. If we want a better understanding of the kinds of practices that exacerbate tensions between police and communities, examining the use of ILP might be a good place to start.

Reader Discussion

Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.

paladin on January 19, 2021 at 12:17:45 pm

This is most alarming and portentous of grave danger! We are a nation divided by political class enmity. We are newly inaugurating an administration controlled by a political party of revolutionary ideologues which has heretofore demonstrated its righteous predisposition and moral will to combine political animosity with law enforcement, technology and science to target and quell opposition by ferreting out, harassing, cancelling, de-platforming, banning, quarantining and crushing its opponents, one group and one person at a time. One need only look back on the last four years of information technology in service of Democrat Deep State lawlessness, spying and election control, and of scientism-cum-science in service of Blue State police control to be terrified of the Orwellian potential for ILP to render the USA a Red China wannabe.

N.D. on January 19, 2021 at 13:45:28 pm

The problem with all data, including sophisticated algorithms, is that it depends upon the character of the content one inputs. “Garbage in; garbage out” is not for naught; in all things, Truth begets Truth, while error begets error. Error cannot beget Truth, but it can certainly, like a light that “shines in the darkness,” help to illuminate that which is true.

z9z99 on January 19, 2021 at 14:51:46 pm

Mr. Orrell raises a valid concern, although on inspection it is not novel.

The subject of Mr. Orrell's essay might reasonably be termed "automated prejudice," or if one wished to sound more savvy, "e-prejudice," or "cyber prejudice." By whatever term used, the significance lies in the noun rather than the adjective; the problem is that of pre-judging, not the mechanism of accomplishing it. The objections are not new. What Mr. Orrell describes in his essay has previously been labeled "profiling," and this is not a novel source of controversy.

Some years ago there was talk of “criminal genes” that were associated with antisocial behavior. As seen from the discussion of this finding, there was a great deal of anxiety, qualification, and warning attached to the idea that crime might be as much fate as choice. Describing this situation demands mixed metaphors: a can of worms inside of Pandora’s box sitting on a third rail. The idea that observable traits, or identifiable genetic ones, might be associated with behaviors, or outcomes, or attitudes is too easily co-opted to validate notions that intelligence and personality are likewise determined. This would undermine a number of facile yet fashionable theories regarding crime, poverty, drug use, etc., such as that these are determined by “culture,” or the effects of “white supremacy,” or “systemic racism,” or misguided government policy. The variety of such arguments suggests both that there is no single class of determinants of an individual’s circumstances (and that arguing for one is futile), and that such notions, being facile, are seductive. The observation that traits are associated with other traits is inconvenient to political narratives, even though this concept is central not only to human cognition but also to day-to-day life.

Human beings are able to identify dogs on sight because an observable combination of traits is characteristic of dogs. Doctors make diagnoses by associating clinical traits with diseases. Actuaries are able to compute insurance premiums because a discrete combination of traits is associated with risk; the relatively nascent sciences of genomics and proteomics assume that observable combinations of genes or blood proteins are associated with specific health states and outcomes. “Deep learning” is an automated method to find patterns in large amounts of data and identify predictor variables that are associated with outcomes of interest. All of these are examples of profiling of one kind or another, and are kin of the “intelligence-led policing” discussed by Mr. Orrell. The problem arises from the idea that there are acceptable and unacceptable uses for profiling, without a clear rule for determining which is which. University admissions decisions are based on profiling of various sorts, and there is no clear consensus as to which category, acceptable or unacceptable, this belongs in. New York experienced significant benefit from a type of profiling in neighborhood policing, yet activists succeeded in abolishing this as a means of crime control. The argument wasn’t that it didn’t work but that it was bad in itself. One may also note that profiling is the business model of Twitter, Facebook, Google, and Amazon. Profiles are constructed from internet and online buying habits to target advertising, and there is increasing concern that these technologies are capable not only of predicting, but also of influencing, behavior.

The difficult issues arise because profiling is (1) intrinsic to human cognition, (2) wildly beneficial in some contexts and subject to abuse in others, and (3) capable of vast expansion with modern technologies. There is a conundrum as to whether this presents a policy issue, a political issue, a moral issue, or something else, perhaps a combination of multiple issues. Would, for example, the FISA Court issue a warrant based on an ILP assessment? I would offer that the situation is analogous to the conundrum of what to do with Nazi medical research. Should this research, assuming that there is some scientific usefulness in it (as, for example, the classic anatomy illustrations of Eduard Pernkopf, which used political prisoners executed by the Nazis as their subjects), be disregarded on moral grounds? I suspect that the issues raised by Mr. Orrell will not be resolved by political log-rolling, or legal hand-waving, or academic chin-stroking, or hysterical “activism” unless and until we undertake a serious moral examination of the issue.


Law & Liberty welcomes civil and lively discussion of its articles. Abusive comments will not be tolerated. We reserve the right to delete comments - or ban users - without notification or explanation.
