
Scientifically Undermining the Rule of Law

Before he turned murderously religious, one of the Belgian bombers had been a bank robber. He fired a Kalashnikov at the police when they interrupted him in an attempted robbery, for which crime, or combination of crimes, he received a sentence of nine years’ imprisonment. Of those nine years he served only four, being conditionally discharged. The principal condition was that he had to attend a probation office once a month: about as much use, one might have supposed, as an igloo in the tropics.

No doubt he underwent various assessments before release establishing his low risk of re-offending; he probably also said before his release that he now realised that shooting policemen with Kalashnikovs was wrong, that he was sorry for it, etc. One of the causes, then, of terrorism in Europe is penological frivolity. A forty-year sentence would have been more appropriate.

In any case, penology is increasingly opposed to the rule of law: it favours the arbitrary and the speculative over the predictable and the certain. A good instance of this tendency was to be found in a paper recently published in the Lancet, one of the most important and prestigious medical journals in the world. Its title was ‘Prediction of violent reoffending on release from prison: derivation and external validation of a scalable tool’, and its authors were British and Swedish.

There is nothing intrinsically wrong with trying to predict the likelihood of a person reoffending after release from prison, of course; it is the use to which such prediction is put that may be wrong. The authors tell us in their introduction, for example, that such predictions may be used to help courts in their sentencing of those convicted: and though they do not actually say so, this must mean that the higher the assessed risk of reoffending, the longer or more severe the sentence. It is difficult to see how else such prediction could help the court in its sentencing.

The authors also tell us that the prediction of reoffending may help authorities to decide on dates of release of prisoners. This can mean only that prisoners deemed at low risk of reoffending are to be released sooner than those deemed at high risk, even if their crimes that were proved beyond reasonable doubt were similar or the same. In other words, prisoners are to be punished (or relatively rewarded) not for what they did do, but for what they might do in the future.

This would not be so arbitrary and contrary to the rule of law if the predictions were 100 per cent accurate, or very nearly so: but of course they are not. We find the following self-congratulatory sentence in the summary of the paper: ‘We have developed a prediction model in a Swedish prison population that can assist with decision making on release by identifying those who are at low risk of future violent offending…’; yet there were numerous false positives and false negatives among their predictions (more than a third of the prisoners placed at the high end of the risk spectrum did not reoffend, and more than a tenth of those placed at the lower end did reoffend).
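To make the scale of these errors concrete, here is a minimal sketch in Python of how the two quoted error rates fall out of a simple two-by-two table of predicted risk against actual outcome. The counts are hypothetical, chosen only to be roughly consistent with the proportions quoted above; they are not the paper’s figures.

```python
# Hypothetical counts, not the Lancet paper's actual figures: they merely
# illustrate how the two error rates quoted above are read off a 2x2 table.

high_risk_reoffended = 600   # flagged high risk, later reconvicted (hypothetical)
high_risk_clean = 400        # flagged high risk, not reconvicted (hypothetical)
low_risk_reoffended = 150    # flagged low risk, later reconvicted (hypothetical)
low_risk_clean = 850         # flagged low risk, not reconvicted (hypothetical)

# Share of "high risk" prisoners who did not in fact reoffend (false positives)
false_positive_share = high_risk_clean / (high_risk_clean + high_risk_reoffended)

# Share of "low risk" prisoners who did in fact reoffend (false negatives)
false_negative_share = low_risk_reoffended / (low_risk_reoffended + low_risk_clean)

print(f"High-risk group who did not reoffend: {false_positive_share:.0%}")  # 40%
print(f"Low-risk group who did reoffend:      {false_negative_share:.0%}")  # 15%
```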

The term ‘reoffend’ in this context is itself an indication that the researchers either did not understand the phenomena that they were researching, or deliberately donned rose-tinted spectacles: for they took reconviction as being coterminous with reoffending, which of course it is not. The Swedish police may be more efficient than most in the elucidation of crime, but they can hardly elucidate every crime. The authors tell us that within two to three years of release, 11,263 of 47,326 released prisoners (24 per cent) had reoffended violently and 21,739 (46 per cent) had reoffended in other ways, making a total of about 70 per cent.

On the generous supposition that nine out of ten violent offences committed by the released prisoners led to conviction, more than 26 per cent of released prisoners would have reoffended in violent fashion. On the generous supposition that seven out of ten of the released prisoners’ other offences led to conviction, 92 per cent of released prisoners would have reoffended.
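The adjustment being made here is simple arithmetic, sketched below in Python from the reconviction counts quoted above. The detection rates (nine in ten for violent offences, seven in ten for other offences) are the essay’s suppositions rather than measured values, and the two categories of reconviction are treated as non-overlapping.

```python
# Reconviction counts as quoted above; detection rates are the essay's
# generous suppositions, not data from the paper.

released = 47_326
violent_reconvicted = 11_263     # reconvicted of a violent offence
other_reconvicted = 21_739       # reconvicted of a non-violent offence

violent_detection = 0.9          # supposed share of violent reoffences leading to conviction
other_detection = 0.7            # supposed share of other reoffences leading to conviction

violent_rate = violent_reconvicted / released                 # ~24%
other_rate = other_reconvicted / released                     # ~46%

est_violent_reoffending = violent_rate / violent_detection    # ~26%
est_total_reoffending = est_violent_reoffending + other_rate / other_detection  # ~92%

print(f"Violent reconviction rate:     {violent_rate:.1%}")
print(f"Estimated violent reoffending: {est_violent_reoffending:.1%}")
print(f"Estimated total reoffending:   {est_total_reoffending:.1%}")
```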

Whether the prisoners reoffended or failed to reoffend was counted in binary fashion: yes or no. One, two or a hundred offences counted as one. Swedish criminals may be less productive of crimes than their British counterparts; but even so, it is likely that the true recidivism rate was more than 100 per cent, that is, more than one new crime per released prisoner, as measured by crimes committed and not by convictions. Of this the authors of the paper showed no awareness whatever.

This is not to say that the paper was altogether without interest or value. It informs us that, of the violent offences, 1 per cent were homicide, that is to say, 112 in total. This is probably at least 50 times the rate expected for the age-adjusted population as a whole, which perhaps is not surprising; but when the authors tell us, more or less en passant, that shorter prison sentences are associated with higher rates of violent crime than longer ones, we may wonder – though the data do not allow us to say so definitively, because the strength of the association is not stated – whether some homicides, at least, would have been prevented by longer prison sentences. (More than half of prison sentences in Sweden are of less than six months.)
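As a rough check on the ‘at least 50 times’ figure, the homicide count can be converted into an annual rate and compared with a background rate. In the sketch below, the follow-up window (taken as roughly two and a half years) and the general-population rate of about one homicide per 100,000 per year are assumptions made for illustration only; adjusting for the age and sex profile of released prisoners would bring the ratio down, which is consistent with the more cautious ‘at least 50 times’ above.

```python
# The released-prisoner and homicide counts are from the paper as quoted
# above; the follow-up window and the background homicide rate are
# illustrative assumptions, not figures from the paper.

released = 47_326
homicides = 112                  # 1 per cent of violent reoffences
follow_up_years = 2.5            # assumed midpoint of the 2-3 year window

prisoner_rate = homicides / released / follow_up_years * 100_000   # per 100,000 per year
background_rate = 1.0            # assumed general-population rate per 100,000 per year

print(f"Released prisoners: ~{prisoner_rate:.0f} homicides per 100,000 per year")
print(f"Roughly {prisoner_rate / background_rate:.0f}x the assumed background rate (before age adjustment)")
```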

This association between sentence length and violent reoffending, however, is interesting for two other reasons. First, it flies in the face of the assumption that prison is a school of crime, for the supposed education seems to be in inverse proportion to the length of the sentence. Second, since longer prison sentences are given to more serious or repeat offenders, the association is exactly the reverse of what would be expected – unless prison exerted a corrective effect (though there is also the effect of aging to be considered).

The authors found, perhaps not surprisingly, that there was a strong association between violent offending and alcohol or drug abuse. This led them to suggest that released prisoners with a history of either should be offered assistance to overcome it, a reasonable enough suggestion if such assistance is effective. On that question I am agnostic but sceptical.

The authors were proud that the easily administered scale they developed, using relatively few variables, was about as predictive as the scales doctors use to predict heart attack and stroke. No doubt this confirmed them in their underlying belief that crime is a disease, on the basis of the following syllogism:

Heart attacks are a disease.

Heart attacks are moderately predictable.

Reoffending is moderately predictable.

Therefore, reoffending is a disease.

That is why, after all, they published in the Lancet: but no belief undermines the rule of law more thoroughly.

Reader Discussion


nobody.really on April 18, 2016 at 11:59:32 am

Physical scientists forever face the wrath of creationists/biblical literalists, because the analysis of the former offends the world views of the latter. Similarly, social scientists forever face the wrath of moralists, for the same reasons.

The Inspector Javerts of the world look upon the legal system as a version of divine justice. The suggestion that human behavior might follow predictable (and manipulable!) patterns offends their sense of free will, and thus of culpability. But the rest of us look upon the judicial system as simply one more social invention for making the world slightly better than it would be in the absence of such a system.

There is nothing intrinsically wrong with trying to predict the likelihood of a person reoffending after release from prison, of course; it is the use to which such prediction is put that may be wrong. The authors tell us in their introduction, for example, that such predictions may be used to help courts in their sentencing of those convicted: and though they do not actually say so, this must mean that the higher the assessed risk of reoffending, the longer or more severe the sentence.

Yes, if you subscribe to an instrumentalist model, you would want to consider likelihood of re-offense as one component of clemency. But to clarify one point: No, predicted recidivism wouldn’t have to be the sole criterion. Auto thieves have a higher rate of recidivism than murderers, but that needn’t drive us to conclude that the penalty for grand theft auto must be larger than for murder.

Also, someone practicing civil disobedience, or a journalist refusing to disclose a source, may pose almost no threat to the public but may be highly likely to “reoffend” – indeed, they often pledge to do so. These prisoners pose a real challenge for instrumentalists, because a sentence that would be sufficiently long to discourage reoffending would almost certainly seem disproportionate to the crime.

nobody.really on April 18, 2016 at 12:00:17 pm

The authors also tell us that the prediction of reoffending may help authorities to decide on dates of release of prisoners. This can mean only that prisoners deemed at low risk of reoffending are to be released sooner than those deemed at high risk, even if their crimes that were proved beyond reasonable doubt were similar or the same. In other words, prisoners are to be punished (or relatively rewarded) not for what they did do, but for what they might do in the future.
* * *
[S]horter prison sentences are associated with higher rates of violent crime than longer….

[S]ince longer prison sentences are given to more serious or repeat offenders, the association is exactly the reverse of what would be expected – unless prison exerted a corrective effect (though there is also the effect of aging to be considered).

Exactly!

Last I checked, there was evidence that violent crimes are committed primarily by men under the age of 35 (different studies cite different ages, but generally around 35). Thus, an instrumentalist might have greater reluctance to give parole to someone who committed a violent crime at 20 than at 30, for the simple reason that he might want to get the prisoner closer to the age of 35 before releasing him.

Nope, this analysis does not reflect the individual’s degree of culpability. But then, the fact that a prisoner grew up in a poor household lacking a father figure doesn’t reflect the individual’s degree of culpability either, yet may bear heavily on the individual’s likelihood to engage in violent crime. In short, the world isn’t fair. The criminal justice system isn’t primarily aimed at fairness; it’s aimed at making the best of an unfair world.

nobody.really on April 18, 2016 at 12:01:31 pm

This would not be so arbitrary and contrary to the rule of law if the predictions were 100 per cent accurate, or very nearly so: but of course they are not. We find the following self-congratulatory sentence in the summary of the paper: ‘We have developed a prediction model in a Swedish prison population that can assist with decision making on release by identifying those who are at low risk of future violent offending…’; yet there were numerous false positives and false negatives among their predictions (more than a third of the prisoners placed at the high end of the risk spectrum did not reoffend, and more than a tenth of those placed at the lower end did reoffend).

True, the authors’ model does not correlate perfectly with the data – a familiar, indeed almost universal, dynamic of science. And thus Dalrymple recommends that we discard the model in favor of a more perfect alternative … such as our flawless judges and juries? Our omniscient legislators? What, exactly, is the model that Dalrymple finds to be error-free?

Or, alternatively, perhaps Dalrymple is not offended by the magnitude of the errors created by the authors’ model, but rather by the explicitness of the errors. After all, we have rituals for picking legislators and judges and juries, and for conducting trials. So long as we perform these rituals perfectly, then surely the outcomes of the process must be flawless – for what other standard can there be to judge the outcome than the standard of the perfection of the ritual? Social scientists – by impudently suggesting that we might judge social systems by a rational standard rather than a ritualized one, and measuring the deviation from this standard – offend this worldview.

nobody.really on April 18, 2016 at 12:05:52 pm

[T]he researchers … took reconviction as being coterminous with reoffending, which of course it is not. The Swedish police may be more efficient than most in the elucidation of crime, but they can hardly elucidate every crime. The authors tell us that within two to three years of release, 11,263 of 47,326 released prisoners (24 per cent) had reoffended violently and 21,739 (46 per cent) had reoffended in other ways, making a total of about 70 per cent.

A fair point: If we want to predict an ex-con’s likelihood to commit a crime, we might not want to look solely at conviction rates.

But does not the same analysis apply to people who are NOT ex-cons? If we would keep a convict in prison longer due to concern that law enforcement detects and secures convictions for less than 100% of crimes, wouldn’t the same rationale justify putting NON-convicts in prison?

If the goal is to minimize the number of crimes, arguably parole is not the biggest obstacle; the presumption of innocence is. Thus, we have a simple method to reduce crime: Incarcerate everyone!

All that said, Dalrymple makes one salient point:

[The] paper was [not] altogether without interest or value. It informs us that, of the violent [re]offences, 1 per cent were homicide, that is to say, 112 in total. This is probably at least 50 times the rate expected for the age-adjusted population as a whole….

I have argued for taking the analysis Dalrymple offers for convicts and extending it to the rest of the population. I expect Dalrymple would object on ritualized grounds: We should subject prisoners/ex-cons to scrutiny, but not everyone else, because our rituals say we can. That is, some people occupy a different status than others, and thus we should judge them by different standards.

I am not especially impressed with the ritualized argument. However, it is not a big leap to say that society developed these rituals to reflect an underlying reality, to wit: Violent crimes are committed by a small portion of the population. While we might expect a certain number of murders to arise within any population, we may have statistical reason to expect more murders to arise among ex-cons. Thus, we have cause to deny parole to such people.

(The US has incarcerated a large portion of its population, and violent crime has plummeted. Cause? Coincidence? It’s a hotly contested topic.)

I find this a compelling argument (with some refinements). But observe: The argument rests on the idea that we’re deciding how long to incarcerate a prisoner not based on the crimes for which he has been convicted, but based on the crimes we anticipate he’d be likely to commit if he were released. Yet this is the kind of reasoning that Dalrymple claims to reject.

Jean Valjean may have posed a low risk of recidivism, but that was of no concern to Javert. The law prescribed the length of Valjean’s sentence, and thus, forcing Valjean to serve that length was indisputably the correct course of action – simply because Javert recognized no other standard by which to judge correctness. When Javert comes to the conclusion that there might be a different standard, his world is undone. So we can hardly be surprised that, before that point, he would feel threatened by any suggestion to the contrary.

