Major tech companies want to remain anonymous while criticizing the automation of criminal justice

A probation officer sorts through automated risk scores in Cleveland.
Image: AP Photo/Dake Kang

The Partnership on AI, a trade organization founded by Apple, Amazon, Facebook, Google, IBM, and Microsoft to set best-practice standards on AI research and implementation, released a new report today (April 26) condemning the use of algorithms to assess bail in criminal justice.

But despite the report's recommendation that governments stop using these automated tools in criminal justice, the large tech companies behind the Partnership are choosing to remain anonymous about their support, with the exception of Microsoft.

“As research continues to push forward the boundaries of what algorithmic decision systems are capable of, it is increasingly important that we develop guidelines for their safe, responsible, and fair use,” Andi Peng, an AI resident at Microsoft Research, says in the report.

Quartz reached out to Apple, Amazon, DeepMind, Facebook, Google, and IBM to ask whether they supported the report. DeepMind, Google, and IBM declined to comment on why they were not publicly supporting it. The rest weren’t immediately available to comment.

“Though this document incorporated suggestions or direct authorship from around 30-40 of our partner organizations, it should not under any circumstances be read as representing the views of any specific member of the Partnership. Instead, it is an attempt to report the widely held views of the artificial intelligence research community as a whole,” the report reads.

The Partnership on AI operates under the Chatham House Rule, meaning ideas shared within it cannot be attributed to any one company, a representative, who asked not to be identified because of that rule, told Quartz.

This leaves the Partnership’s report on awkward footing. The Partnership on AI derives its credibility and expertise on large-scale technology deployment from the backing of major tech companies, yet those same companies are unwilling to publicly endorse the potentially contentious work the Partnership does.

“Though supported and shaped by our Partner community, the Partnership is ultimately more than the sum of its parts and makes independent determinations to which its Partners collectively contribute, but never individually dictate,” a spokesperson for PAI told Quartz.

But without knowing which companies make up that “partner community,” it’s difficult to judge what the sum of those parts represents.

As Quartz wrote when the Partnership on AI added civil rights groups in 2017, the organization’s best practices and suggestions are non-binding for members. That means a company could belong to an organization that disavows the use of algorithms in pre-trial risk assessment while also working on projects involving them, without the public or investors necessarily knowing. Pre-trial risk assessment algorithms weigh an accused person’s history (in some cases their criminal record, their demographics, or even their education) against historical data to recommend whether they should be released on bail.
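To illustrate the general shape of such a tool, here is a minimal, hypothetical sketch. The feature names, weights, and threshold below are invented for illustration only and do not correspond to any deployed system mentioned in this article.

```python
# Hypothetical sketch of a pre-trial risk assessment score.
# Feature names, weights, and the release threshold are invented
# for illustration; they do not reflect any real tool.

def risk_score(defendant: dict) -> float:
    """Combine a defendant's history into a single risk number."""
    weights = {
        "prior_arrests": 0.30,            # count of prior arrests
        "prior_failures_to_appear": 0.40, # count of missed court dates
        "age_under_25": 0.20,             # 1 if under 25, else 0
        "unemployed": 0.10,               # 1 if unemployed, else 0
    }
    return sum(weights[f] * defendant.get(f, 0) for f in weights)

def recommend_release(defendant: dict, threshold: float = 1.0) -> bool:
    """Recommend release on bail when the score falls below a cutoff
    that, in a real system, would be derived from historical outcome data."""
    return risk_score(defendant) < threshold

# Example: a defendant with one prior arrest and no other listed risk factors.
print(recommend_release({"prior_arrests": 1}))  # True under these invented weights
```

Real systems are far more elaborate, but the basic pattern is the same: personal and demographic features are scored against patterns in historical data, and the score drives a release recommendation.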

Many of the organization’s founding tech companies have close ties to law enforcement and criminal justice. Microsoft, Google, and Amazon all regularly compete for government contracts in the space. Many workers at these companies have organized against those contracts, saying the AI technology they build should not be used for surveillance or military purposes. IBM holds a large portfolio of government contracts as well, and has provided AI-powered tools for the criminal justice system. In 2017, the company partnered with a Dayton, Ohio judge to test machine-learning software for analyzing documents in juvenile court cases. As Quartz reported April 25, IBM is also working to implement predictive policing algorithms in cities around the US. In Lancaster, California, for instance, law enforcement officials are using IBM systems to help deploy police forces in areas where IBM’s algorithms predict criminal activity is likely.

Update: This article has been updated to add that IBM declined to comment.