
You will commit a crime in the future

Inside the new science of predicting violence

By Leon Neyfakh
February 20, 2011


The ability to predict what someone will do in the future would be a seriously handy superpower. And it’s one that companies like Netflix and Amazon, by crunching the massive trails of data most of us leave behind these days, have come pretty close to acquiring. Surely, though, there is something more ambitious to be done with our dazzling modern technology than trying to guess what kind of microwave someone’s going to want next. Something like preventing murders.

It’s a seductive notion, that we could know who will and who won’t commit a crime in the future. And while it may call to mind the science-fiction world of “Minority Report,” making judgments about people’s potential to be dangerous is in fact an essential — and routine — part of how the American justice system works. It is what parole boards do, and what sentencing hearings are for. The consequences of getting such high-stakes decisions wrong can be devastating, as was made tragically plain last Christmas, when police say an officer from Woburn was shot and killed by 57-year-old Domenic Cinelli, a career criminal who had been paroled in 2008 while serving three concurrent life sentences for armed robbery.

What if we had a better method for reliably identifying threats like Cinelli? That has been a dream in criminal justice going back at least as far as the 19th century, when the Italian criminologist Cesare Lombroso claimed he could pick out delinquents from an early age based on physical defects and the shapes of their skulls.

Today, ideas like Lombroso’s, tinged with phrenology and eugenics, have largely been discarded. But over the past 40 years or so, the pursuit of mechanical crime prediction methods has taken off, and given rise to a competitive industry that involves not only specialists in criminology and psychology but also computer scientists and for-profit companies. Most of the tools that have come out of their research, known as actuarial risk assessment instruments, are essentially checklists that examine a range of character traits and biographical facts about an individual, crunch the answers, and use them to estimate that person’s likelihood of returning to crime. There are more than 120 such tools in existence now, some applicable to all kinds of offenders and others intended for specific populations, like juvenile delinquents or sex offenders.
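
To make the “checklist” idea concrete, here is a minimal sketch, in Python, of how such an instrument adds up answers into a risk estimate. The items, point values, and cutoffs are invented for illustration and are not drawn from any actual tool.

```python
# Minimal sketch of a checklist-style actuarial instrument.
# Items, point values, and cutoffs are invented for illustration,
# not taken from any real risk assessment tool.

RISK_ITEMS = {
    "three_or_more_prior_convictions": 2,   # past crime is the strongest predictor
    "first_arrest_before_age_18": 1,
    "history_of_drug_abuse": 1,
    "never_held_a_job_for_a_year": 1,
    "misconduct_while_incarcerated": 1,
}

def risk_score(answers):
    """Add up the points for every item answered 'yes'."""
    return sum(points for item, points in RISK_ITEMS.items() if answers.get(item))

def risk_band(score):
    """Translate the raw score into a coarse risk category."""
    if score >= 4:
        return "high"
    if score >= 2:
        return "moderate"
    return "low"

example = {"three_or_more_prior_convictions": True, "history_of_drug_abuse": True}
print(risk_band(risk_score(example)))   # -> "moderate"
```

Real instruments use many more items and map scores to recidivism estimates from follow-up studies, but the basic crunch is similar.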

“There has been a strong move toward the use of risk assessment instruments in the criminal justice system in recent years,” said John Monahan, a psychologist at the University of Virginia Law School who has been studying models of prediction since the 1970s. “The science of risk assessment is much better now than it was 20 years ago.” The instruments have gained traction not only as a public-safety measure, Monahan said, but because they allow for more efficient allocation of resources: When prison budgets are stretched thin, it makes sense to try to focus more funds on those inmates who pose a greater risk.

For all their refinements over the years, these surveys tend to be built around fairly intuitive connections between past and future behavior. Most are based on sample populations of convicted criminals, and work by determining whether a given individual more closely resembles the people in the sample who returned to crime, or those who stayed clean. It should come as no surprise that the best predictor of future crime has always been past crime, and that older people are less likely to reoffend than younger ones. But some of the tools do suggest an array of surprising insights: that people whose victims were women should be considered lower risk than those whose victims were men, for example. Other tools reflect research which says that, although prison therapists might not like to hear it, a criminal’s emotional health — whether he or she is depressed, anxious, or suffers from low self-esteem — does not help predict future behavior.
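
In rough outline, that resemblance test comes from comparing reoffense rates in the follow-up sample. The sketch below, with fabricated records, shows the kind of comparison involved: if sample members who share a trait went back to crime at a higher rate than those who do not, the trait earns weight in the instrument.

```python
# Toy illustration of checking a trait against a follow-up sample.
# The records are fabricated; real studies use thousands of cases.

sample = [
    # (has_prior_record, reoffended)
    (True, True), (True, True), (True, False), (True, True),
    (False, True), (False, False), (False, False), (False, False),
]

def reoffense_rate(records):
    return sum(1 for _, reoffended in records if reoffended) / len(records)

with_priors = [r for r in sample if r[0]]
without_priors = [r for r in sample if not r[0]]

print(reoffense_rate(with_priors))      # 0.75 -> trait gets weight
print(reoffense_rate(without_priors))   # 0.25
```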

“The philosophy is to cast the net out and catch as many fish — that is, variables — as you can,” said Richard Berk, a professor of criminology and statistics at the University of Pennsylvania who is working on even newer approaches to crime prediction. “The computers are finding relationships that are unanticipated.”

The Massachusetts Parole Board was not using any of these risk assessment tools when its members voted to free Cinelli in 2008. Since then, the board has adopted one called COMPAS, created by a Michigan-based private company called the Northpointe Institute for Public Management, which assigns a numerical risk level to every parolee. The parole board now considers that number, along with other information, in making its decisions.

As predictive tools promise to grow more precise, and are increasingly woven into our criminal justice system, however, some critics are questioning whether they’re as effective at reducing crime as proponents claim — and, on an ethical level, whether it should be acceptable to assess people as members of a statistical group rather than as individuals. Fundamentally, the debate pits the desire for more efficient crime prevention against our instinctual discomfort with the notion of putting or keeping people behind bars not for things they’ve done, but for who they are.

The modern approach to risk assessment can be traced back to the 1920s, when a University of Chicago sociologist named Ernest Burgess built a primitive prediction tool for use by parole boards. As University of Chicago professor Bernard Harcourt traces in his 2007 book, “Against Prediction,” Burgess conducted a study of 3,000 inmates who’d been released from prison some years earlier, and classified them according to 22 variables, including personality, nationality, and what he termed “social type.” Each classification was associated with a recidivism rate derived from the sample. Within social type, for instance, Burgess distinguished between “hobos,” “ne’er-do-wells,” “farm boys,” “drug addicts,” “gangsters,” and “recent immigrants.”
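
The bookkeeping behind the Burgess scheme can be sketched roughly as follows; the categories come from Harcourt’s account, but the rates are invented placeholders, and averaging is only one simple way such a lookup table could be turned into an estimate.

```python
# Sketch of Burgess-style bookkeeping: each classification carries the
# recidivism rate observed for that group in the follow-up sample.
# The rates below are invented placeholders, not Burgess's figures,
# and averaging is just one simple way to combine them.

recidivism_rate_by_social_type = {
    "hobo": 0.40,
    "ne'er-do-well": 0.35,
    "farm boy": 0.10,
    "drug addict": 0.50,
    "gangster": 0.45,
    "recent immigrant": 0.20,
}

rate_tables = {"social_type": recidivism_rate_by_social_type}

def burgess_estimate(classifications):
    """Average the observed rates across an inmate's classifications."""
    rates = [rate_tables[table][value] for table, value in classifications]
    return sum(rates) / len(rates)

print(burgess_estimate([("social_type", "farm boy")]))   # -> 0.1
```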

A version of the Burgess method was used in Illinois throughout the ’30s, ’40s, and ’50s. Most states, however, relied instead on “clinical judgment” — imagine experts in lab coats sitting around a table discussing each case. This began to change in the 1980s, by which point the US Parole Commission, which considers parole cases involving federal offenders, had adopted a seven-factor test to assess likelihood of recidivism. Called the Salient Factor Score, this tool was quite basic but still marked a significant improvement over the Burgess method, if only because it didn’t rely on ill-defined social categories or punish people for having foreign blood.

Over the next two decades, available instruments proliferated as academics, prison officials, and private companies turned their attention to the problem. Meanwhile, numerous studies were published indicating that using the tools was a more effective way to identify high-risk offenders than just trusting the conclusions of experts. (None of the tests are perfect: a tool is considered top-of-the-line if 75 percent of the time, a randomly chosen inmate who did return to crime receives a higher risk score than one who stayed out of trouble.) Today a majority of US states, and at least three-quarters of those with an active parole system, have incorporated a risk assessment tool into their parole procedures, according to Matthew DiMichele, a researcher at the American Probation and Parole Association.
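
The 75 percent benchmark is, in effect, a concordance statistic: the chance that a randomly chosen inmate who did reoffend scores higher than a randomly chosen one who did not (equivalent to the area under the ROC curve). A minimal sketch of that calculation, using made-up scores:

```python
# Concordance check: how often does a randomly chosen reoffender
# outscore a randomly chosen non-reoffender? Ties count as half.
# The scores below are made up for illustration.

reoffender_scores     = [7, 9, 4, 8]
non_reoffender_scores = [3, 6, 5, 7]

def concordance(pos, neg):
    pairs = [(p, n) for p in pos for n in neg]
    wins = sum(1 for p, n in pairs if p > n)
    ties = sum(1 for p, n in pairs if p == n)
    return (wins + 0.5 * ties) / len(pairs)

print(concordance(reoffender_scores, non_reoffender_scores))   # -> 0.78125
```

A value of 0.5 would mean the tool does no better than a coin flip; the best current instruments sit around 0.75.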

The way the tools actually work depends on who’s using them and for what purpose, but generally speaking they take the form of a questionnaire administered by a prison official or clinician — essentially a list of factors that have been shown to correlate one way or another with criminality. How old was the subject at the point of first contact with the criminal justice system? Has he or she ever held down a job or a long-term relationship? Is there a history of drug abuse on file? What about gang activity? Did the person stay out of trouble while incarcerated?

Cinelli’s record on these points was almost cartoonishly alarming. He was a heroin user starting at the age of 15 and had committed multiple armed robberies by the time of his arrest in 1976. He also tried to escape from prison, not once but twice, robbing a jewelry store and shooting a security guard the first time and stealing a gun from a sheriff the second. According to the state’s review of the case, COMPAS, the tool now used by the Massachusetts parole board, would have assigned him a risk level of nine out of ten.

But the business of risk assessment isn’t all common sense. Sometimes there are surprises. The 1993 study that formed the basis of a widely used tool known as the VRAG, for instance, found that in a sample of offenders who’d been locked up for serious violent crimes, those whose victims had been women were less likely to reoffend than those whose victims were men. The study also found that murderers were less likely to commit more crime upon release than people who’d merely injured their victims.

How to explain these puzzling results? To a surprising extent, the authors of the tests don’t try: They’re measuring probabilities, they say, not probing the criminal mind.

“It is not a theory of crime,” said VRAG creator Vernon Quinsey, a psychologist at Queen’s University in Canada. The factors on the questionnaire, he explained, aren’t necessarily causes of criminal behavior: They just match certain patterns. “Some of the offenders we followed had killed their wives. Most of these guys are not career criminals and they have relatively low recidivism rates,” he said. “Some of the offenders were homosexual pedophiles — these guys have relatively high recidivism rates. These observations likely explain why murder is good and male victims bad in terms of recidivism.”

There are those in the risk assessment field who object to Quinsey’s approach — in part because they hope to use the tools to help treat offenders, and argue that if we don’t know what actually causes crime, it’s impossible to know what treatments might keep criminals on the right side of the law. Some of the current risk instruments — including the COMPAS test now used in Massachusetts — consider so-called dynamic factors such as a person’s attitudes, values, and social connections, as well as the more basic factual details of their lives. The two types of tools represent poles of a fierce, ongoing debate in the field: On one side are people who believe an individual’s risk level does not change over time, and on the other are those who think that with proper treatment — and the right kind of assessment tool — even the most dangerous people can be rehabilitated.

“They don’t promote treatment — we do,” said Paul Gendreau, a Canadian psychologist who is a leading proponent of the “dynamic” approach. The belief of researchers in his cohort, he says, is that “offenders can change, and that if we do something constructive using the appropriate kinds of therapies, someone could have a law-abiding life.”

As the risk-assessment industry has grown, some critics have begun raising a deeper objection. The University of Chicago’s Harcourt argues that the entire practice is misguided: Just because we have the capability to statistically predict who is likely to commit crime, he says, does not mean we should. Harcourt, a professor of law and criminology, says that doing so is tantamount to letting technology dictate our conception of justice, and perverts our belief that people who have committed the same crime should receive the same punishment. And although the current tests scrupulously avoid taking a criminal’s race into account, Harcourt argues that their long lists of factors are essentially a proxy for race and class, and only reinforce structural inequality in our society.

Even harder questions may lie ahead. Berk, the University of Pennsylvania professor, said that as the data available to researchers get better, and the algorithms used to analyze them improve, we may find ourselves staring at uncomfortable predictions that leave us at a loss as to what to do with them. Berk’s method is to take into account as much data about people as is available — even if there’s no reason to think it would correlate with crime — and let massively powerful computers figure out what’s useful and what isn’t. Conceivably, these computers could discover that predictions can be made using someone’s shoe size and the kind of car their parents drove when they were kids.
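
Here is a minimal sketch of that “cast a wide net” approach: hand a flexible algorithm many variables with no theory of which ones matter, and let it report back which turned out to be useful. The random forest below is one common choice for this kind of problem; the article does not name Berk’s actual algorithm, and the data are random placeholders rather than real records.

```python
# Sketch of a "cast a wide net" forecasting approach: hand the model
# many variables with no theory of which matter, and let it find the
# useful ones. A random forest is one common choice for this; the
# data below are random placeholders, not real records.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_people, n_variables = 500, 40          # many variables, relevance unknown
X = rng.normal(size=(n_people, n_variables))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=n_people) > 1).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The fitted model reports which variables it actually leaned on.
top = np.argsort(model.feature_importances_)[::-1][:5]
print("most useful variables:", top)

# Risk estimate for one (synthetic) person.
print("estimated risk:", model.predict_proba(X[:1])[0, 1])
```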

“This is the nightmare that I have,” Berk said. “Supposing I am able to tell a mother that her 8-year-old has a one in three chance of committing a homicide by age 18. What the hell do I do with that information? What do the various social services do with that information? I don’t know.”

Leon Neyfakh is the staff writer for Ideas. E-mail lneyfakh@globe.com.