The growing use of predictive algorithms in courtrooms and human services offices is raising concerns over their lack of transparency

Since the inception of predictive algorithm software in US courtrooms, more than a million Americans have been analyzed using the technology.

Predictive algorithms have been aiding us for years (see several of Google’s products, for instance) under the guise of making our lives easier by helping us make decisions.

But when do predictive algorithms cross the ethical line? Recently, more jurisdictions at the state and local levels in the US have been buying software from companies that use such algorithms, along with data mining, to make decisions that could have irreversible impacts on individuals and communities, such as determining jail sentences and shaping public policy.

This growing power has several people in the tech space worried, including Hany Farid, a professor of computer science at Dartmouth College. Farid recently co-published a paper in the journal Science Advances examining one such predictive tool, which is used to estimate recidivism risk, or the likelihood that a defendant will commit another crime. His co-author was Julia Dressel, an undergraduate at Dartmouth who based her thesis on the research.

Farid says that since the inception of these algorithm programs in courtrooms, more than a million people have been analyzed with them. Although Farid does not object to artificial intelligence (AI) as a whole, he does raise a red flag over the lack of transparency in the predictive process, which these private companies are currently allowed to keep hidden under the veil of proprietary information.

“I think perhaps one of the most important things, before we talk about efficacy, accuracy and fairness, is that this is a commercial entity and the details of the algorithm are very tightly kept secret,” Farid says. “So, these decisions are being made inside of a black box where we don't know what is actually happening and I think, … given the stakes here, that should be a little concerning to us."

Farid and Dressel found the program they examined had an accuracy rating of 65 percent, which, to their surprise, was only as good as the rate achieved by polling 400 people online using Amazon’s Mechanical Turk crowdsourcing marketplace. The participants were paid a dollar to answer 50 questions after seeing a short paragraph about each defendant.

“They know nothing else,” Farid says. “And they're basically as accurate as the software."

The researchers began their study by building the simplest possible learning classifier for this setting, one that takes into account just two factors: how old defendants are and how many prior convictions they have. Probably not surprisingly, defendants who are young and have several convictions land at the high-risk end of the spectrum, while older defendants with fewer convictions fall at the other end.
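
The article doesn't specify which learning method the researchers used for that two-factor classifier, so the sketch below is only a rough illustration: a logistic regression in Python trained on made-up data, with age and number of prior convictions as the only inputs. All numbers and variable names here are assumptions, not the researchers' actual model or data.

```python
# A minimal, hypothetical sketch of a two-feature risk classifier.
# The synthetic data and the logistic-regression choice are assumptions
# for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Made-up training data: one row per defendant, [age, prior convictions].
ages = rng.integers(18, 70, size=500)
priors = rng.integers(0, 15, size=500)
X = np.column_stack([ages, priors])

# Made-up labels that loosely follow the pattern described above:
# younger defendants with more priors are more often labeled as re-offending.
y = ((3 * priors - 0.5 * ages + rng.normal(0, 5, size=500)) > -10).astype(int)

model = LogisticRegression().fit(X, y)

# Estimated recidivism risk for a 22-year-old with 4 prior convictions
# versus a 55-year-old with 1 prior conviction.
print(model.predict_proba([[22, 4], [55, 1]])[:, 1])
```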

What was a bit more eye-opening was that the number of prior convictions serves as a proxy for race, dispelling any notion that such algorithms could be race-blind, Farid says.

“There should be more understanding of what the algorithm does so that we can give a proportional weight to it, so that we can say, ‘You know what? This is a pretty simple thing. I've got two numbers: how old they are, how many prior convictions. I'm aware of prior convictions having racial asymmetry. I'm going to use those numbers in a way that is proportional to my confidence in this estimation,’" Farid says.

Research found that one predictive algorithm used in US courtrooms had an accuracy rating of only 65 percent. Credit: Douglas Palmer/CC BY 2.0

“Once we understand [the algorithms] then we can deal with the limitations and the strengths, and look, in the end — 10, 20 years from now — they may do better than humans. They may eliminate some of the biases that we know exist, but I don't think we are there yet and in the process I think people are suffering because of failures of these algorithms."

In matters regarding human services and public policy, algorithms are also being used for several determinations, including detecting homes at high risk for child abuse and assessing the abilities of teachers. These two areas alone have generated a large number of lawsuits, says Ellen Goodman, professor of law and co-director of the Rutgers Institute for Information, Policy & Law at Rutgers University.

“None of these decisions — not human, not algorithmic — are perfectly accurate,” Goodman says. “Therefore they are attuned one way or another to privilege certain policies. So, in the criminal justice context, we may want more false positives than false negatives, right? We may want to be conservative about sentencing so that we make sure even people who pose a lower risk are locked up rather than let them go and have them commit a crime. So that's a policy choice. … Those preferences are built into the algorithm, but we don't know what they are.”
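
To make that policy choice concrete, here is a small, entirely synthetic sketch in Python: the same set of risk scores yields more false positives and fewer false negatives as the decision threshold is lowered, and the reverse as it is raised. The scores, labels and variable names are assumptions for illustration, not drawn from any real tool.

```python
# A made-up illustration of the threshold choice described above: the same
# risk scores produce different mixes of false positives (low-risk people
# flagged) and false negatives (risky people missed) depending on where
# the cutoff is set. All scores and labels here are synthetic.
import numpy as np

rng = np.random.default_rng(1)
reoffended = rng.integers(0, 2, size=1000)               # hypothetical ground truth
risk_score = 0.3 * reoffended + 0.7 * rng.random(1000)   # noisy hypothetical score

for threshold in (0.3, 0.5, 0.7):
    flagged_high_risk = risk_score >= threshold
    false_positives = int(np.sum(flagged_high_risk & (reoffended == 0)))
    false_negatives = int(np.sum(~flagged_high_risk & (reoffended == 1)))
    print(f"threshold {threshold}: {false_positives} false positives, "
          f"{false_negatives} false negatives")
```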

Last August, Goodman co-wrote an academic paper in the Yale Journal of Law & Technology documenting the lack of information being shared about predictive algorithm software. Goodman and her colleague sent requests to 42 jurisdictions at the state and local levels. They did not expect to receive the software itself, but they hoped to glean some insight into the high-level objectives or policies incorporated into it. They received little to no response; the few replies that did arrive came with the actual contracts with the software companies.

“And so we think that the claims were essentially either that they had no information because cities are not bargaining for this information or that they were protected by trade secret,” Goodman says.

Goodman says there is a misconception that this technology will be a cheaper solution than traditional human decision-making. She says that, from an ethics standpoint, people should want their jurisdictions to demand pertinent data, public records and checks. Then there is additional auditing to consider.

“We’re going to need a lot more transparency and it's going to cost money,” Goodman says. “If it's done on the cheap, I think ultimately it's not going to be cheap because we're going to have a lot of litigation around it because these due process concerns are not going to be handled well."

This article is based on an interview on PRI’s Science Friday with Ira Flatow.
