Scientists push for algorithms to make judicial decisions as MIT economist suggests AI could help improve trial outcomes

Researchers have suggested giving algorithms power over a crucial backbone of American society - the justice system.

Scientists from MIT proposed the tech could be used to make pre-trial bail decisions fairer after their study found human judges are systematically biased.

The team analyzed more than one million cases in New York City and found that 20 percent of judges based their decisions on the defendant's age, race or criminal history.

The paper found that the decisions of at least 32 percent of judges were inconsistent with defendants' actual ability to post a specified bail amount and the real risk of them failing to appear for trial.

A new paper found that New York judges sometimes make mistakes based on their own biases when setting bail for a defendant. The researchers said it might be useful to replace the judges' decisions with an algorithm.

Before a defendant is even tried for their crime, a judge holds a pre-trial hearing to determine whether they should be allowed out into the world before their court case begins, or whether they're liable to flee and need to be held in custody.

If the judge decides to let someone go free, they set a price that the person has to pay to be let out - their bail.

How a person's bail is set, and whether they should be allowed out of custody at all, is up to the individual judge. That is where human bias comes in, according to study author Professor Ashesh Rambachan.

The paper, which was published in the Quarterly Journal of Economics, combed through 1,460,462 court cases heard in New York City between 2008 and 2013.

It found that 20 percent of the judges made decisions that were biased based on someone's race, age or prior record.

This resulted in a mistake in about 30 percent of all bail decisions. 

This could mean that someone was allowed out of jail and tried to flee, or that someone who wasn't a flight risk was kept in custody.

Professor Rambachan therefore argues that using an algorithm to replace or improve a judge's decision-making in pre-trial hearings could make the bail system fairer.

This, he wrote, would depend on building an algorithm that fits the desired outcomes accurately - something that doesn't yet exist.

This might sound far-fetched, but AI has been slowly making its way into courtrooms around the world. In late 2023, guidance issued to the British judiciary said judges could use ChatGPT to help write legal rulings.

Earlier that same year, two algorithms successfully mimicked legal negotiations, drafting and settling on a contract that lawyers deemed sound. 

But elsewhere, the weaknesses of AI have been on full display.

Earlier this year, Google's image-generating Gemini AI was called out for churning out diverse, yet historically inaccurate, pictures for users.

For example, when users asked it for a picture of a Nazi, it generated an image of a black person in an SS uniform. Google, in response, admitted its algorithm was 'missing the mark' of what it was built to do.

Other systems, like OpenAI's ChatGPT, have been shown to commit crimes when left unattended.

When ChatGPT was asked to act as a financial trader in a hypothetical scenario, it committed insider trading 75 percent of the time.

These algorithms can be useful when designed and applied correctly. 

But they are not held to the same standards or laws that humans are, scholars like Christine Moser argue, which means they shouldn't make decisions that require human ethics.

Professor Moser, who studies organization theory at Vrije Universiteit Amsterdam, in the Netherlands, wrote in a 2022 paper that allowing AI to make judgment calls could be a slippery slope.

Replacing more human systems with AI, she said, 'may substitute human judgment in decision-making and thereby change morality in fundamental, perhaps irreversible ways.'