We live in an age of algorithms. Nowhere is this truer than in the way we consume media: from the new music we discover on Spotify, to the next original series produced by Netflix, to the advertisements and opinion pieces that populate our newsfeeds, our experience of the outside world is increasingly shaped by computer programs that silently and powerfully predict—and deliver—exactly what we want to see and hear.

The reason most of us willingly outsource much of what we could call taste is that the algorithms so frequently and wonderfully get it right. It’s deeply satisfying when a website or app exhibits fine-tuned attention to our every preference—even a little flattering. When they work correctly, algorithms and the technologies they support offer us the experience of feeling known, and perhaps more importantly, understood. Of course, we know algorithms aren’t perfect; but even when a recommendation misses the mark, the stakes are usually fairly low. Scroll down, swipe left, skip to the next song, and the sub-par prediction is in the past.

Increasingly, however, algorithms are being used not only to predict our preferences, but also to predict our behavior; and these predictions are being used to make decisions about credit scores, housing policy, job applications—even criminal sentencing. This week, we highlight two articles on the use of algorithms to supplement (or supplant) human judgment in the criminal justice system.

The first is a brief Wall Street Journal piece that introduces the controversy around “predictive policing,” an increasingly common tactic in which algorithms comb law enforcement databases to try to predict where and when crimes are most likely to occur, and direct police to intervene preemptively. While proponents of this tactic say it will increase police departments’ efficiency and restrain the biases of individual officers, critics are concerned that it will reinforce systemic biases by increasing police attention on minority communities. By juxtaposing both perspectives in the form of brief op-eds—one from the director of a data analytics program at Johns Hopkins, and one from a senior attorney at a civil liberties advocacy group—this piece reveals a shared desire to reduce discriminatory bias in policing, but a deep disagreement about how that goal is best pursued.


One objection to predictive policing—raised recently against a pilot program being used by the Chicago Police Department—is that its effectiveness is still unproven. However, there can be little doubt that predictive algorithms are improving, and will continue to improve. A more substantive concern focuses on the inherent limitations of using historical data to predict future behavior: Can the data upon which predictive algorithms are based ever be “neutral”? For example: Is it fair to consider factors outside a defendant’s control—such as their age, sex, and the criminal histories of their parents and friends? Is it fair to consider a defendant’s history of arrests, rather than merely their convictions? To what extent are racial biases already coded into this kind of data? And perhaps most fundamentally, as a major study by The Marshall Project puts it: “Is it fair to look at the behavior of a group when deciding the fate of an individual?”

This point is well illustrated by a recent ProPublica study on “risk scores,” algorithmically generated predictions of criminal defendants’ likelihood of committing future crimes. Though initially developed to help judges assign treatment services, risk scores are quickly becoming a staple of the criminal sentencing process as well. Defendants’ scores are supplied to judges as part of their case files, and those deemed “likely to reoffend” by a risk assessment algorithm are more likely to receive longer sentences. The problem is that even sophisticated, “race-blind” algorithms can reinforce discrimination. As Julia Angwin reports, ProPublica’s study of a major risk assessment algorithm used in Florida found that “the formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants”—even though race was not one of the inputs considered by the formula.
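
To make the disparity concrete, here is a minimal, purely hypothetical sketch in Python (with invented records and group labels, not ProPublica’s data, code, or the actual risk formula) of how a false positive rate is compared across two groups for a score that never sees group membership directly:

    # Hypothetical illustration: synthetic records, not ProPublica's data or the actual risk formula.
    # Each record is (group, flagged_high_risk, reoffended); the "score" never sees the group label.
    records = [
        ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
        ("B", True, False), ("B", False, False), ("B", False, False), ("B", True, True),
    ]

    def false_positive_rate(rows):
        # Share of people who did NOT reoffend but were still flagged as high risk.
        non_reoffenders = [r for r in rows if not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

    for group in ("A", "B"):
        rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(rows), 2))
    # Prints A 0.67 and B 0.33: group A's non-reoffenders are flagged twice as often.

The sketch only shows what “falsely flagged at almost twice the rate” measures; why such gaps emerge from historical data that encodes proxies for race is precisely the question the articles above debate.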

This turn toward predictive algorithms in criminal justice represents a confluence of two strong cultural currents: first, the widespread recognition of implicit bias and the accompanying debate over the need for law enforcement reform; and second, the growing role of algorithms in nearly every aspect of contemporary life. Predictive policing raises the stakes for both. When predictive algorithms are used by a streaming music service, aggregate effectiveness is good enough; most of us, most of the time, are willing to forgive Spotify for the song or two in our weekly playlists that miss the mark. But the same can’t be said for predictive policing. When it comes to criminal justice, aggregate effectiveness isn’t enough; our judges and police (and the lines of code they increasingly rely upon) must strive for justice for each and all. This isn’t to say algorithms can’t play an invaluable role in the criminal justice process—only that we should be aware of their limitations. The reality is that neither human judgment nor algorithms can fully circumvent implicit bias. Justice is best served when the two work together.

For those interested in tracking ProPublica’s ongoing investigation into the role of algorithms, see their “Machine Bias” homepage. Questions, comments, concerns, or kudos on this week’s briefing? Contact Sam Speers at sspeers@newcitycommons.com.