Miscalculation of justice: Glenn Rodríguez was denied parole because of an inaccurate risk assessment that was nearly impossible to challenge.

One day in early January, a letter appeared on my desk marked DIN92A5501, an inmate’s identification number from the Eastern Correctional Facility in upstate New York. The author, Glenn Rodríguez, had drafted it in upright, even letters, perfectly aligned. Here, in broad strokes, is the story he told:

Rodríguez was just sixteen at the time of his arrest, and was convicted of second-degree murder for his role in an armed robbery of a car dealership that left an employee dead. Now, twenty-six years later, he was a model of rehabilitation. He had requested a transfer to Eastern, a maximum-security prison, in order to take college classes. He had spent four and a half years training service dogs for wounded veterans and eleven volunteering for a youth program. A job and a place to stay were waiting for him outside. And he had not had a single disciplinary infraction for the past decade.

Yet, last July, the parole board hit him with a denial. It might have turned out differently, but a computer system called COMPAS, the board explained, had ranked him “high risk.” Neither he nor the board had any idea how this risk score was calculated; Northpointe, the for-profit company that sells COMPAS, considers that information to be a trade secret. But Rodríguez may have been stuck in prison because of it.

Proprietary algorithms are flooding the criminal justice system. Machine learning systems deploy police officers to “hot spot” neighborhoods. Crime labs use probabilistic software programs to analyze forensic evidence. And judges rely on automated “risk assessment instruments” to decide who should make bail, or even what sentence to impose.

Supporters claim that these tools help correct bias in human decisionmaking and can reduce incarceration without risking public safety by identifying prisoners who are unlikely to commit future crimes if released. But critics argue that the tools disproportionately harm minorities and entrench existing inequalities in criminal justice data under a veneer of scientific objectivity.

Even as this debate plays out, the tools come with a problem that is slipping into the system unnoticed: ownership. With rare exceptions, the government doesn’t develop its own criminal justice software; the private sector does. The developers of these new technologies often claim that the details about how they work are “proprietary” trade secrets and, as a result, cannot be disclosed in criminal cases. In other words, private companies increasingly purport to own the means by which the government decides what neighborhoods to police, whom to incarcerate, and for how long. And they refuse to reveal how these decisions are made—even to those whose life or liberty depends on them.

The issue has been percolating through criminal proceedings for years. I work for the Legal Aid Society of New York City defending criminal cases that involve computer-derived evidence. I regularly see defendants denied information that they could use to cross-examine the evidence against them because it’s a trade secret.

Right now, in Loomis v. Wisconsin, the U.S. Supreme Court is deciding whether to review the use of COMPAS in sentencing proceedings. Eric Loomis pleaded guilty to running away from a traffic cop and driving a car without the owner’s permission. When COMPAS ranked him “high risk,” he was sentenced to six years in prison. He tried to argue that using the system to sentence him violated his constitutional rights because it penalized him for being male. But Northpointe refuses to reveal how its system weighs sex in calculating the score.

We do know certain things about how COMPAS works. It relies in part on a standardized survey where some answers are self-reported and others are filled in by an evaluator. Those responses are fed into a computer system that produces a numerical score. But Northpointe considers the weight of each input, and the predictive model used to calculate the risk score, to be trade secrets. That makes it hard to challenge a COMPAS result. Loomis might have been scored more harshly because of his sex, and that penalty might have been unconstitutional. But as long as the details are secret, his challenge can’t be heard.
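To see concretely why the secrecy matters, here is a minimal sketch, in Python, of how a scoring instrument of this general kind works: survey answers go in, a hidden weighted sum comes out, and the sum is mapped onto a ten-point scale. Every question, weight, and cutoff below is invented for illustration; COMPAS’s real ones are precisely what Northpointe will not disclose. The point is structural: without the weights, a defendant who sees only the final number cannot tell whether an input like sex moved his score at all, let alone by how much.

```python
# Illustrative only: a toy risk instrument with invented questions and weights.
# COMPAS's real inputs, weights, and model are trade secrets.

# Hypothetical survey responses: some self-reported, some filled in by an evaluator.
answers = {
    "sex_male": 1,
    "prior_arrests": 2,
    "evaluator_sees_issues": 1,   # the evaluator's subjective impression
    "stable_housing": 1,
}

# Hidden weights: the defendant never sees these, so he cannot tell
# which input is driving the final score, or by how much.
hidden_weights = {
    "sex_male": 2.0,
    "prior_arrests": 1.0,
    "evaluator_sees_issues": 4.0,
    "stable_housing": -3.0,
}

def risk_decile(answers, weights):
    """Clamp a weighted sum of the answers onto a 1-to-10 'risk' scale."""
    raw = sum(weights[q] * value for q, value in answers.items())
    return max(1, min(10, round(raw) + 3))

# The defendant sees only this final number, not the weights behind it.
print(risk_decile(answers, hidden_weights))                     # 8 with these invented weights

# Whether a single input such as sex moved the score, and by how much,
# is unknowable without access to the weights.
print(risk_decile({**answers, "sex_male": 0}, hidden_weights))  # 6 with these invented weights
```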

What surprised me about the letter from Eastern was that its author could prove something had gone very wrong with his COMPAS assessment. The “offender rehabilitation coordinator” who ran the assessment had checked “yes” on one of the survey questions when he should have checked “no.” Ordinarily, without knowing the input weights and predictive model, it would be impossible to tell whether that error had affected the final score. The mistake could be a red herring, not worth the time to review and correct.

Glenn Rodríguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. “This one guy, everything was the same except question 19,” he said. “I thought, this one answer is changing everything for me.” Then another inmate with a “yes” for that question was reassessed, and the single input switched to “no.” His final score dropped from 8 to 1 on a ten-point scale. This was no red herring.

So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”

Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York’s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.

That’s what happened to Glenn Rodríguez. “It took a lot of energy and effort to maintain a clean record for the duration I had,” he told me. Looking at his fellow inmates’ COMPAS reports, he realized that some guys who had engaged in violent behavior within the past two years, but whose evaluators had checked “no,” had gotten a low prison misconduct score.

Rodríguez went before the parole board last July. “This panel has concluded that your release to supervision is not compatible with the welfare of society,” the board explained. Of significant concern was his “high COMPAS risk score for prison misconduct.”

Trade secrets are a form of intellectual property for commercial know-how that is both stronger and weaker than a patent. When the government grants a patent, no one is allowed to use your invention, period—but only for twenty years. After that, it’s open season. By contrast, a trade secret lasts as long as you can keep it, well, secret. If rivals obtain the secret through “misappropriation”—that is, lying, spying, or fraud—you can sue them, and they can even face criminal charges. But if they reverse-engineer your product or simply come up with the same idea on their own—or if you just do a bad job hiding it—then it’s too bad for you. Software developers like trade secrecy because their technology is not always patentable, and because patents are expensive to acquire and enforce.

There is debate among legal scholars about why the law recognizes trade secrets. Some even argue that it shouldn’t. But the most commonly accepted rationale is that granting protections for information that may not be patentable, like an abstract idea or a mathematical formula, will incentivize new intellectual creations.

What’s alarming about protecting trade secrets in criminal cases is that it allows private companies to withhold information not from competitors, but from individual defendants like Glenn Rodríguez. Generally, a defendant who wants to see evidence in someone else’s possession has to show that it is likely to be relevant to his case. When the evidence is considered “privileged,” the bar rises: he often has to convince the judge that the evidence could be necessary to his case—something that’s hard to do when, by definition, it’s evidence the defense hasn’t yet seen.

Based on state appellate court opinions, the invocation of trade secrets to prevent criminal defendants from accessing evidence against them didn’t start happening frequently until the 1990s, when companies began refusing to disclose details about DNA testing kits that were being adopted by forensic labs around the country. There was some early pushback by judges and experts, but eventually most courts ruled that DNA test kit manufacturers were entitled to keep aspects of their methods secret—and that prosecutors could still use the results as evidence to convict. In the past five years, courts in at least ten states have ruled this way for DNA analysis software programs.


But, like any technology, DNA testing can be flawed. In 2016, Michael Robinson, a death penalty defendant in Pennsylvania, tried unsuccessfully to subpoena the source code for a probabilistic genotyping software program called TrueAllele. A thirty-year-old “family guy” with no prior criminal history, Robinson had been charged with murdering two people. TrueAllele matched his DNA to a bandanna found near the scene of the crime.

Probabilistic genotyping software results are not gold standard DNA evidence. The programs were developed to test tiny amounts and complicated mixtures of DNA, and their accuracy is disputed. Last September, President Obama’s Council of Advisors on Science and Technology found that more testing is needed to establish the validity of programs like TrueAllele.

Robinson sought to evaluate the TrueAllele code and check whether it worked the way its developer claimed. The judge denied his request. One reason she gave stands out: TrueAllele’s developer, Mark Perlin, had said that ordering the code disclosed to the defense could “cause irreparable harm to the company, as other companies would be able to copy the code and put him out of business.” As a result, the judge decided that compelling disclosure would be unreasonable. “Dr. Perlin could decline to act as a Commonwealth expert,” she wrote, “thereby seriously handicapping the Commonwealth’s case.”

Robinson was forced to defend himself without access to the code. Despite the DNA test results, in February he was acquitted on all counts. While the outcome was ultimately a happy one for Robinson, he had to sit through the trial knowing that the jurors were weighing evidence that he was unable to fully scrutinize and contest. How many future defendants will be wrongfully convicted based on misleading “proprietary” DNA software that they couldn’t see or challenge?

Recently, companies have begun invoking proprietary secrets in the context of police investigatory tools. Accessing information about how these technologies work can be critical to a defendant’s case. He or she might want to argue that they violate privacy rights, or aren’t reliable enough to justify an arrest. In our adversarial legal system, these claims by individual defendants are often the main way to hold police to account.

That’s what happened with Stingrays. A Stingray is a military surveillance device that masquerades as a cell phone tower in order to suck up information from your phone. When the manufacturer, the Harris Corporation, applied for certification by the Federal Communications Commission, it requested that information about the technology be kept secret, both for law enforcement purposes and to protect its commercial “competitive interests.” As a result, police departments around the country signed non-disclosure agreements promising to conceal details about how the technology works—and even its mere existence—from defendants, courts, legislatures, and the public.

One man undid the secrecy scheme. When Daniel Rigmaiden was arrested for wire fraud and identity theft in 2008, he insisted that police must have used a secret device to beam “rays into his living room” and gather data about his location. After years of litigating pro se from a prison cell, he proved that he was right. He noticed a handwritten note with the word “Stingray” buried in his own 14,000-page court file. Internet searches for the term yielded a Harris Corporation brochure and a purchase order for the device from the police department that had arrested him. Some courts have since found that warrantless use of Stingray devices violates the Fourth Amendment—holdings that would have been impossible without Rigmaiden’s efforts. That means police spent years getting away with potentially unconstitutional Stingray searches and hiding their tracks with non-disclosure agreements.

Of course, not all secrecy is driven by profit. Some investigative methods must be kept under wraps to be effective. If anyone could predict IRS audits or airport security screenings, fraudsters and terrorists could avoid getting caught. The trouble is that the flip side is also true: excessive secrecy can let police evade accountability for illegal or unconstitutional methods. Proprietary technology makes this too easy: First, outsource policing techniques to private companies. Then, claim those techniques are trade secrets.

The use of predictive policing tools shows how this can occur. These computer systems use machine learning to forecast where crimes are likely to be committed. One leading vendor, PredPol, has refused for years to reveal certain details about how its system forecasts future crimes. Wanting to protect that information is understandable: the tool took six years to develop and now generates an estimated $5–6 million in annual revenue.

In January 2016, PredPol finally responded to public pressure and published a general description of its algorithm. That allowed independent researchers from the Human Rights Data Analysis Group to re-implement and test it. They showed that applying the algorithm to police records could exacerbate past racially biased policing practices: even when crimes were spread evenly throughout a city, PredPol would home in on areas that were overrepresented in police databases, intensify policing in those same areas, then use the foreseeable spike in crime reports to justify its earlier predictions. The problem lies in the data, not the algorithm itself; but only because PredPol described its algorithm publicly could researchers demonstrate the issue empirically.
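The dynamic is simple enough to reproduce in a toy simulation. The sketch below is not PredPol’s algorithm, and every number in it is invented; it is a minimal model of the feedback loop the researchers identified, in which patrols follow past reports and new reports follow patrols.

```python
import random

random.seed(0)

# Toy model of the feedback loop described above; NOT PredPol's algorithm.
# Two neighborhoods have identical true crime rates, but neighborhood "A"
# starts out overrepresented in the historical police database.
true_crime_rate = {"A": 0.5, "B": 0.5}     # chance of a crime per day, identical
recorded_reports = {"A": 60, "B": 30}      # biased historical records

def patrol_allocation(reports, total_patrols=10):
    """Send patrols in proportion to past recorded reports (the 'prediction')."""
    total = sum(reports.values())
    return {hood: total_patrols * count / total for hood, count in reports.items()}

for day in range(365):
    patrols = patrol_allocation(recorded_reports)
    for hood, rate in true_crime_rate.items():
        crime_happened = random.random() < rate
        # A crime only enters the database if police are there to record it,
        # so more patrols mean more reports regardless of the true rate.
        chance_of_recording = min(1.0, 0.1 * patrols[hood])
        if crime_happened and random.random() < chance_of_recording:
            recorded_reports[hood] += 1

# The recorded gap between "A" and "B" widens even though the true rates
# never differ: the system ends up "predicting" its own patrol patterns.
print(recorded_reports)
```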

That doesn’t mean we should never use machine learning systems; researchers are developing methods to audit, simplify, and try to reduce bias in predictive models. But it does mean that if courts allow these systems to be cloaked in secrecy, we may not be able to find the flaws, much less fix them. Exactly what kind of transparency we need most is a matter of technical debate. Trade secrets should play no role in determining the answer.

Once Glenn Rodríguez had figured out that a single survey response had swung his “prison misconduct” score from low to high, he sent a written complaint to a supervisor at Eastern. The “yes” response to question 19, he argued, was at odds with his exemplary behavioral record and was likely to hurt him at his next parole hearing, in January. Without COMPAS, his case for parole was nearly perfect. Question 19 was standing between him and freedom.

Rodríguez got farther than most people in his position. Thanks to the network of fellow inmates who shared their COMPAS scores, he was able to convince the rehabilitation coordinator that his score was inaccurate. “The question surrounding your disciplinary should be changed,” she wrote in a letter to Rodríguez last September. “Since you will be going to the Board in less than a year, we need to make sure the original one isn’t used.”

For Rodríguez, the next step was to wait. And wait. Despite the written assurances, no new COMPAS was provided. He sent letters to attorneys. (One arrived on my desk.) New Year’s came and went. He filed a formal complaint with the Inmate Grievance Resolution Committee. The score was never fixed.

There’s no question that we could use some innovation in the criminal justice domain. New technologies could help us find and convict criminals, exonerate the innocent, reduce human bias, and incarcerate fewer people. But recognizing the benefits of innovation does not require permitting developers to withhold their secrets from individual defendants. It’s one thing to argue that forcing companies to disclose trade secrets in public would hurt business and derail technological progress. It’s another to claim that making them share sensitive information with the accused and their defense team, in the controlled context of a criminal proceeding, would do the same.

The most common justification for withholding proprietary information from a defendant is that without that guarantee, innovative companies will be deterred from investing in new criminal justice technology or from selling existing products to the government. But it isn’t always clear that this concern is legitimate. Take TrueAllele, the probabilistic DNA testing software. Mark Perlin, who developed it, did not answer my requests for an interview. But he has submitted declarations to courts across the country explaining that allowing defendants, their attorneys, and defense expert witnesses to see his “source code would enable the reverse engineering of the TrueAllele technology, allowing others to learn the trade secrets that keep [his company] solvent.” Prosecutors have fallen in line with Perlin’s view. One warned a court that ordering the code disclosed to a defense team would be “financially devastating.”

But Perlin’s more transparent competitors appear to be doing just fine. TrueAllele’s main rival, a program called STRmix, which claims a 54 percent U.S. market share, has an official policy of providing defendants access to its source code, subject to a protective order. Its developer, John Buckleton, said that the key to his business success is not the code, but rather the training and support services the company provides for customers. “I’m committed to meaningful defense access,” he told me. He acknowledged the risk of leaks. “But we’re not going to reverse that policy because of it,” he said. “We’re just going to live with the consequences.”

And remember PredPol, the secretive developer of predictive policing software? HunchLab, one of PredPol’s key competitors, uses only open-source algorithms and code, reveals all of its input variables, and has shared models and training data with independent researchers. Jeremy Heffner, a HunchLab product manager and data scientist, explained why this makes business sense: only a tiny amount of the company’s time goes into its predictive model. The real value, he said, lies in gathering data and creating a secure, user-friendly interface.

HunchLab is not alone. In March, another start-up in the field, CivicScape, published its source code and examples of its input variables online. Publishing models and the actual data used to train them would be even better, but the disclosure was a step in the right direction.

The fact that competitors of products like TrueAllele and PredPol have no problem revealing details about their methods should make courts less willing to take a company’s word when it comes to the need for total secrecy. It’s too easy for claims about financial devastation to mask a more troubling motive: avoiding scrutiny. Developers of tools that a jurisdiction has already purchased may decide that they have nothing to gain from letting the defense poke holes in their software.

That explains why even governmental bodies have claimed trade secret protection. The New York City Office of the Chief Medical Examiner has argued for years that the source code for a forensic software program that it developed itself, using public funds, should be privileged. This is absurd: the government has no legitimate commercial interest in keeping details about forensic technology from the defense. Yet the agency has won. Whether the motive is winning or profit, trade secrets can be abused as a way to keep the defense in the dark. (Last year, a federal judge finally ordered the city to turn over the program’s source code to one defendant; expert witnesses promptly discovered an undisclosed code function likely to aid prosecutors.)


Even when revealing a company’s techniques could spell economic ruin, there are already ways to protect them without barring the defense from examining the information. In the business world, companies working on a deal sign non-disclosure agreements promising not to misuse any valuable information that gets revealed as part of negotiations. In civil lawsuits, judges often order that proprietary information be shared subject to a protective order, which is like a non-disclosure agreement but with extra sanctions for a violation, such as being held in contempt of court. A federal judge in Delaware once even ordered Coca-Cola to hand over its secret formula in a contract dispute.

The same approach should work in criminal cases. It’s true that the protective order solution can fail. People cheat, especially in a cutthroat industry where only a few people know the technology and the opposing party’s expert witness could be a competitor. As some measure of these anxieties, Coca-Cola chose to concede certain disputed facts rather than comply with the Delaware court’s order.

But even in cases where legitimate business risks exist, withholding information from the accused is the wrong answer. Where a company’s business strategy falls on the secrecy-transparency spectrum should not limit the full array of arguments available to a criminal defendant. The law already lets businesses sue if someone steals their trade secret. While that might not be a foolproof solution, making up for any imperfections on the backs of the accused is unfair.

The Supreme Court has an opportunity in Loomis v. Wisconsin to rule that the Constitution forbids the government from taking life or liberty based on proprietary secrets. If the Court declines, state legislatures should lead the way by passing laws that direct criminal courts to safeguard valid trade secrets with a protective order and nothing more. It’s time to make clear that no one owns the means of decisionmaking in the criminal justice system.

It took weeks for me to get Glenn Rodríguez on the phone. Coordinating collect calls, dialing restrictions, and scheduling from a maximum-security prison takes work. A few times, we narrowly missed each other. Finally, in mid-March, a colleague emailed to say he was on the line and I dashed into her office to speak with him.

The promised COMPAS reassessment had never been done, he said. He had gone in front of the parole board again in January 2017 with the same “high risk” score. “I went in there feeling confident about my accomplishments,” he told me. “I said, even though the score is high for my prison misconduct, I know that score doesn’t represent who I am. I was a different person now, and that would shine through.” The hearing started out like a trial, with the commissioners focused in uncomfortable detail on the crime he had committed a quarter century before. Then, Rodríguez recalled, it shifted. “You’re forty-three,” one commissioner said. “You were sixteen. You’re still young. You still have the opportunity to rebuild your life. We’d like to give you that opportunity.”

Rodríguez made parole. He would leave Eastern with 110 college credits from Bard College and a plan to finish his degree and go to graduate school for social work or public health. I asked him if he had any final words on his experience with COMPAS.

“Guys are seeing that pretty much one question can skew the whole thing,” he said. “Why is it that there’s so much secrecy surrounding this? This is evidence that’s being used against you. They are making a determination on a person’s life on the basis of this evidence. So you should have a right to challenge it.”


Rebecca Wexler is a Yale Public Interest Fellow at the Legal Aid Society of New York City, a lawyer in residence at the Data & Society Institute, and a visiting fellow at the Yale Information Society Project.