AI & Jurisprudence
"AI" Rest My Case
Does Technology Make
A Better Judge than Man?
Bias may be more visible in machines than in people, but machines may be less transparent on a case-by-case basis.
It’s interesting to think that there are faces behind the Law, and that there was once a debate, a vote, and a signature behind every policy we have in the United States. Court cases can become landmark decisions, and with the legal system so dependent on precedent, data, and consistency with the past, Artificial Intelligence has easily begun to seep into it. Legal Generative AI can organize and analyze large-scale data, allowing lawyers to “focus on the information that matters most,” according to Bloomberg Law. Precedence Research estimates the market will grow by about 31 percent, some 1.3 billion dollars, by 2034. It’s no surprise to see growth where AI is concerned; but where does it sit… behind the bench?
To answer this question, a metric of comparison must be established; the ideals of the United States justice system, embodied in the Code of Conduct for United States Judges, serve well. Though judges must maintain impartiality under that Code, judicial decisions are inevitably tied to who the judge is, which means verdicts can vary from judge to judge. Theodore Schroeder, in “The Psychologic Study of Judicial Opinion,”[1] argues that there can be no judge without prejudice, and that judicial reasoning will always combine “both conscious and subconscious elements.” AI, however, has a bias of its own, introduced by the datasets and programming it is trained on, especially where racial and gender-based discrimination is concerned. With increased transparency, that bias could at least be more easily disclosed. And because it relies on past data to function, Generative AI is also less equipped to handle the newer, less historically-backed elements of a case.
Timothy Capurso discusses the idea that every judicial decision can be broken into three parts: one, the rule of law; two, the facts of the case; three, the decision of the judge. The first two AI can replicate easily; the third necessitates analysis. According to Aaron J. Rappaport’s Unprincipled Punishment, judges follow two moral frameworks in decision-making: the first is “utilitarian,” based on the consequences for society; the second is anti-consequentialist, which isolates the case from its effect on the world around it. Utilitarianism would arguably be the natural resort for an all-factor-calculating AI, but bias is the biggest concern for today’s experts. In another Bloomberg report, an AI model sorted white male candidates toward higher-prestige positions. In a ProPublica study, an AI risk tool rated a black first-time offender, who had stolen a child’s bike, more likely to reoffend than a white ex-convict who had served time for armed robbery. To be fair, this wasn’t an AI raised on skewed or “racist” data; it weighed economic status rather than race, but the unfortunate correlation between the two subjected black defendants to discrimination all the same.[2] If trained on data aligned with a certain moral code, such discrimination and “inconsideration” might be mitigated; the issue, then, is the new bias under which the system would operate. Bias in Artificial Intelligence lives in the code and datasets it learns from; its outputs, like the consciously and subconsciously shaped reasoning of human judges, are inevitably compromised in that regard.
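To make that proxy effect concrete, here is a minimal, hypothetical sketch; it is not the tool ProPublica audited, and every feature name and number in it is fabricated for illustration. A toy risk model trained with no race feature at all can still score groups differently once a feature it does see, a synthetic “economic status” variable, is correlated with group membership.

```python
# A toy illustration of "proxy bias" in a recidivism-style risk model.
# Entirely synthetic; not the system ProPublica audited.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)                        # never shown to the model
econ = rng.normal(np.where(group == 1, -1.0, 1.0))   # status correlates with group
priors = rng.integers(0, 4, n)                       # prior convictions

# Recorded rearrest in the training data depends on priors AND on economic
# status (for example, via heavier policing of poorer neighborhoods).
logit = -1.0 + 0.8 * priors - 1.0 * econ
rearrest = rng.random(n) < 1 / (1 + np.exp(-logit))

# Race is absent from the feature matrix; only econ and priors go in.
model = LogisticRegression().fit(np.column_stack([econ, priors]), rearrest)

def risk(e, p):
    """Predicted probability of rearrest for one hypothetical defendant."""
    return model.predict_proba([[e, p]])[0, 1]

# A first-time offender from the poorer group can outscore a repeat
# offender from the wealthier group, with race never in the features.
print(f"group-1 first-timer (econ=-1, priors=0): {risk(-1.0, 0):.2f}")
print(f"group-0 repeat offender (econ=+1, priors=2): {risk(1.0, 2):.2f}")
```

The point of the sketch is only that dropping a protected attribute from the inputs does not remove its influence; any sufficiently correlated stand-in carries it back in.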
Unsurprisingly, the Code of Conduct doesn’t take the moral element of judging into account; morals vary from person to person, and sit uneasily with the image of the faceless judge, guided by justice, not scruples. The Law, however, is irrevocably moral. Samuel E. Stumpf’s “The Moral Element in Supreme Court Decisions” argues this best, stating that “Law has been shaped to fit the contours of moral conviction.” How capable AI is of taking moral codes into account, codes that would have to be programmed in, again raising the question of bias, is up for debate. The true solidifier of law is consistency; under it, judgments are validated by other cases in assent, whether past, present, or future. In an experiment Kieran Newcomb describes on recidivist sentencing, which assesses the likelihood that a convict or ex-convict will reoffend, the AI algorithm reached 65% accuracy, while the human test group reached 66.7%. He goes on to argue that AI, ever improving, will ultimately surpass human ability. The question that remains, then, isn’t when that change will occur; it’s when humans will recognize it, and redefine themselves, their Law, and their roles.
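For reference, the accuracy figures in comparisons like Newcomb’s reduce to simple arithmetic: the share of cases where a prediction matched the recorded outcome. A minimal sketch, using fabricated stand-in data rather than Newcomb’s:

```python
# Accuracy = correct predictions / total cases. The vectors below are
# fabricated stand-ins; 4 of 6 match, giving 66.7% by construction.
predicted = [1, 0, 1, 1, 0, 1]   # 1 = predicted to reoffend
actual    = [1, 0, 0, 1, 0, 0]   # 1 = actually reoffended

accuracy = sum(p == a for p, a in zip(predicted, actual)) / len(actual)
print(f"accuracy: {accuracy:.1%}")   # -> 66.7%
```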
Both judges and AI are inevitably partial. Both can consider every aspect of a case, though judges may grasp newer and more circumstantial elements more easily. Depending on the programming, as on the person, the verdict can change. The similarities between what leads a judge and what leads an engine to a verdict should not be ignored. Though AI isn’t holding the gavel yet, with growth, conditioning, and adaptation, it is a long-term possibility.
[1] “Psychologic,” not “Psychological,” presumably reflects the grammatical acceptance of the former in the year of publication, 1927.
[2] What should be examined are the ethics of “cleansing” AI of discriminatory bias by training it on “falsified,” idealistic information (which would inevitably produce a different bias of its own) for the sake of developing a more “equal” perception.
Angwin, Julia. “Machine Bias.” ProPublica. Last modified March 23, 2016. Accessed October 19, 2024. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
The comments on this article, as well as the writing in its entirety, are incredibly helpful for gauging opposing viewpoints in the AI-bias debate.
Bloomberg Industry Group. “AI for Legal Professionals.” Bloomberg Law. Accessed October 23, 2024. https://pro.bloomberglaw.com/insights/technology/ai-in-legal-practice-explained/#what-is-artificial-intelligence.
Capurso, Timothy. “How Judges Judge: Theories on Judicial Decision Making.” University of Baltimore Law Forum 29, no. 1 (1998). Accessed October 17, 2024. https://scholarworks.law.ubalt.edu/lf/vol29/iss1/2/?utm_source=scholarworks.law.ubalt.edu%2Flf%2Fvol29%2Fiss1%2F2&utm_medium=PDF&utm_campaign=PDFCoverPages.
This was a very useful, and interesting, source regarding judicial decision-making.
Chinn, Stuart. “The Meaning of Judicial Impartiality: An Examination of Supreme Court Confirmation Debates and Supreme Court Rulings on Racial Equality.” Utah Law Review 2019, no. 5 (2020). Accessed October 18, 2024. https://dc.law.utah.edu/ulr/vol2019/iss5/1/.
Judicature. “AI in the Courts: How Worried Should We Be?” Judicature (Durham, NC) 107, no. 3 (2024). Accessed October 23, 2024. https://judicature.duke.edu/articles/ai-in-the-courts-how-worried-should-we-be/.
Newcomb, Kieran. “The Place of Artificial Intelligence in Sentencing Decisions.” University of New Hampshire Inquiry Journal. Accessed October 22, 2024. https://www.unh.edu/inquiryjournal/blog/2024/03/place-artificial-intelligence-sentencing-decisions.
Polonski, Vyacheslav. “Can We Teach Morality to Machines? Three Perspectives on Ethics for Artificial Intelligence.” Oxford Internet Institute. Last modified December 19, 2017. Accessed October 22, 2024. https://www.oii.ox.ac.uk/news-events/can-we-teach-morality-to-machines-three-perspectives-on-ethics-for-artificial-intelligence/.
Stumpf, Samuel. “The Moral Element in Supreme Court Decisions.” Vanderbilt Law Review 6, no. 1 (1952): 43-65. Accessed October 20, 2024. https://scholarship.law.vanderbilt.edu/cgi/viewcontent.cgi?article=4485&context=vlr.
Vasdani, Tara. “Robot Justice: China’s Use of Internet Courts.” The Lawyer’s Daily, 2020. Accessed October 24, 2024. https://www.lexisnexis.ca/en-ca/ihc/2020-02/robot-justice-chinas-use-of-internet-courts.page.