In the United States, the use of artificial intelligence technology in the criminal justice system is immoral
A preliminary guide for debaters
The two potential resolutions for Lincoln-Douglas debate for September/October have been released.
The plea bargaining topic has been debated before, while the AI topic is current and timely, so hopefully it will be the one selected.
To support student efforts to debate the topic, I produced this basic backgrounder using ChatGPT o3 with Deep Research and Gemini 2.5 Pro Deep Research.
A few notes -
(1) I don’t usually put debate things here, but Substack is easy to edit and I thought it would help my normal readers see what the Deep Researchers can produce with minimal editing.
(2) I haven’t checked every citation, but I did a good amount of spot checking and I didn’t find any hallucinated citations. As I have more time, I’ll be reading some of the resources and doing further spot checks. This is something debaters will have to do anyhow, so there isn’t really any downside risk of an erroneous citation at this point.
(3) This should really demonstrate the power of debate as an instructional method. Yes, AI can write pretty good papers. Assuming the citations are accurate, it would be hard to argue this isn’t a decent paper. But for debaters this is just the foundation. To win debates, debaters will have to read the articles, assemble “cards”/quotes, and debate. They will need a much greater depth of engagement to be successful.
AI should be the foundation, boosting students up to do more. We shouldn’t evaluate students by assessing the things AI can already do better than they can.
__
Executive Summary
The integration of artificial intelligence (AI) into the United States criminal justice system (CJS) represents one of the most significant and morally fraught technological shifts of the modern era. This report provides a comprehensive analysis of the resolution that the use of AI in the CJS is immoral. It begins by establishing foundational concepts, defining the technology in question—primarily Artificial Narrow Intelligence (ANI)—and distinguishing it from the theoretical Artificial General Intelligence (AGI). It outlines the three core components of the CJS (Law Enforcement, Courts, and Corrections) and introduces the primary ethical frameworks—Utilitarianism, Deontology, and Virtue Ethics—that guide the moral evaluation.
The analysis first presents the arguments for AI integration, which are predominantly utilitarian in nature. Proponents champion AI for its potential to drastically improve efficiency, reduce costs, and operate at a scale unattainable by humans. They argue that AI can enhance objectivity by mitigating human biases, leading to more consistent and accurate outcomes. Furthermore, AI is posited as a tool to expand access to justice, empowering under-resourced public defenders and providing legal information to underserved communities.
These purported benefits, however, are weighed against a formidable set of moral objections. The report details the specific perils of AI within the CJS, which are rooted in deontological and virtue-based concerns. The most critical failure is algorithmic bias, where AI systems ingest historically biased data and produce discriminatory outputs that perpetuate and amplify systemic inequities, particularly against racial minorities. The "black box" nature of many AI tools creates an opaque system that fundamentally undermines the right to due process, as defendants cannot meaningfully challenge the algorithmic evidence used against them. These issues culminate in a direct collision with constitutional protections, including the rights to equal protection and freedom from unreasonable searches. Beyond legal rights, the report argues that AI's deployment risks the dehumanization of justice, replacing essential human virtues like empathy, discretion, and moral judgment with amoral, statistical calculations.
The moral inquiry is then broadened to consider the "lifecycle" ethics of AI, examining the unresolved intellectual property issues in training data, the significant and often unjust environmental toll of AI data centers, and the potential de-skilling of the legal profession. These factors create a compounding moral hazard, where the technology's very creation and operation are fraught with ethical problems.
1. Foundational Definitions
Artificial Intelligence (AI) – Narrow vs. General: Artificial Intelligence broadly refers to computer systems capable of performing tasks that typically require human intelligence, such as learning or problem-solving.
Modern AI predominantly exists as Artificial Narrow Intelligence (ANI) (or “weak AI”), which can excel at specific tasks but cannot generalize beyond its narrow domain. For example, ANI systems like virtual assistants or legal chatbots can process language or analyze data far faster than humans, but they remain confined to their trained functions.
In contrast, Artificial General Intelligence (AGI) (or “strong AI”) denotes a hypothetical machine with human-level cognitive abilities across any task or domain. AGI does not yet exist; it is a theoretical concept of an AI that could understand or learn any intellectual task a human can, applying skills to new contexts without additional human training. Many of the field’s leading figures believe we will have at least digital AGI as early as 2026, and probably no later than approximately 2030. Some believe it will take longer.
U.S. Criminal Justice System – Scope and Components
The United States criminal justice system is not a single, monolithic entity but a vast and complex network of government agencies and processes at the local, state, and federal levels. Its primary purpose is to enforce laws, adjudicate guilt, and administer punishment and rehabilitation for criminal offenses. For the purpose of this analysis, the system is best understood through its three core components.
Law Enforcement: This is the most visible component and the typical entry point into the system. It consists of agencies like city police departments, county sheriffs' offices, state patrols, and federal agencies such as the FBI and DEA. Their primary functions include maintaining public order, preventing and investigating crime, collecting evidence, and apprehending suspects. AI is most prominently used in this component for surveillance, investigation, and predictive analysis.
The Courts (Judiciary): This component is responsible for the adjudication of legal cases. Its role is to ensure that individuals accused of crimes receive fair and legal proceedings, to determine guilt or innocence, and to impose sentences on those found guilty. The court system includes prosecutors, defense attorneys, judges, and juries, and operates within state, federal, and tribal jurisdictions. AI is used in this component for legal research, case management, and, most controversially, for risk assessment tools that inform decisions on bail and sentencing.
Corrections: This component is responsible for carrying out the sentences imposed by the courts. This includes the management of incarcerated individuals in jails (typically for shorter sentences or pretrial detention) and prisons (for longer sentences), as well as the supervision of individuals in the community through probation and parole. The goals of corrections are multifaceted, encompassing punishment, public safety, and the rehabilitation of offenders through programs like education, vocational training, and counseling. AI is used in this component for inmate monitoring, facility management, contraband detection, and assessing rehabilitation needs and recidivism risk.
The resolution—that AI use in the CJS is immoral—is overly simplistic because the moral calculus changes dramatically depending on which component of the system is being analyzed. The CJS is not a monolith; its three components have different functions, ethical obligations, and degrees of impact on individual liberty. Using AI for a purely administrative task, such as automating visitation schedules in a correctional facility, primarily raises questions of efficiency and data security. The direct moral stakes are relatively low. In contrast, using AI in law enforcement for predictive policing raises significant moral questions about surveillance, privacy, and group-based discrimination, directly engaging Fourth and Fourteenth Amendment concerns. Finally, using AI in the courts to inform a judge's sentencing decision has the most direct and profound impact on an individual's liberty, raising the most acute moral questions about due process, fairness, and the nature of judgment itself. Therefore, a blanket moral judgment is inadequate. An expert analysis must disaggregate the CJS and evaluate the morality of specific AI applications within the unique context of each component. The immorality of a "robot judge" does not necessarily render an AI-powered legal research tool for a public defender immoral.
Morality and Ethical Frameworks: Morality refers to principles and values distinguishing right from wrong behavior. It encompasses societal norms, individual beliefs, and philosophical theories about what actions are good or evil, just or unjust. To evaluate whether using AI in criminal justice is “immoral,” we must clarify moral criteria. Several mainstream ethical frameworks provide guidance.
To evaluate the resolution, it is essential to establish a clear framework for moral reasoning. This report distinguishes between morality, which refers to the principles concerning the distinction between right and wrong or good and bad behavior held by individuals or societies, and ethics, which is the branch of philosophy that studies, systematizes, and evaluates these moral principles. The resolution "the use of AI in the CJS is immoral" is a moral claim. This report will analyze that claim using three major, mainstream ethical frameworks.
Utilitarianism (Consequentialism): This framework, primarily associated with philosophers Jeremy Bentham and John Stuart Mill, judges the morality of an action based solely on its consequences. The core principle is to choose the action that produces the greatest amount of good or happiness for the greatest number of people—a concept known as maximizing utility. In the CJS context, utility could be measured by outcomes like reduced crime rates, lower system costs, enhanced public safety, or fewer wrongful convictions. A key feature of utilitarianism is that it is aggregative; it can potentially justify actions that harm a minority if they produce a sufficiently large benefit for the majority.
Deontology (Duty-Based Ethics): This framework, most famously articulated by Immanuel Kant, judges the morality of an action based on whether it adheres to a set of rules, duties, or moral laws, irrespective of the consequences. Deontology posits that certain actions are inherently right or wrong. For example, lying or violating a person's rights would be considered wrong even if it led to a good outcome. Key concepts include the Categorical Imperative, which holds that one should only act according to maxims that could be made into universal laws, and the principle that one must always treat humanity, in oneself and others, as an end and never merely as a means. In the CJS, a deontological analysis would focus on whether AI use violates fundamental rights (like due process or equal protection) or fails to respect the inherent dignity of individuals.
Virtue Ethics: This framework, with ancient roots in the philosophies of Plato and Aristotle, shifts the focus from actions or consequences to the character of the moral agent. It asks what a virtuous person would do in a given situation. It is concerned with the cultivation and expression of virtues—stable character traits like justice, courage, wisdom, compassion, and temperance. In the CJS context, a virtue ethics analysis would not ask "Is this outcome efficient?" or "Does this action violate a rule?" but rather, "Does the use of this AI tool promote or erode the virtues essential for the administration of justice?" It would question whether reliance on AI fosters or diminishes the qualities of a just, prudent, and compassionate judge, lawyer, or police officer.
The debate over AI in the CJS is not merely a technical or legal dispute; it is a fundamental clash of competing, and often irreconcilable, ethical worldviews. The conclusion one reaches about the morality of AI is largely predetermined by the ethical framework one prioritizes. Proponents of AI in the CJS almost exclusively use utilitarian arguments. They champion efficiency, cost savings, accuracy, and crime reduction—all measures of aggregate social good. Their moral justification rests on achieving better overall outcomes. Opponents, conversely, ground their arguments primarily in deontology and virtue ethics. They decry the violation of individual rights to due process and equal protection (deontology) and lament the replacement of human empathy, wisdom, and discretion with cold calculation (virtue ethics). This creates a scenario where the two sides are arguing past each other. A utilitarian can concede that an AI tool is biased against a minority group but argue it is still moral if it reduces overall crime significantly. A deontologist would find this justification abhorrent, as the violation of rights for even one person is a moral failure. Therefore, this report cannot simply list pros and cons. It must explicitly frame the entire debate as a conflict between these ethical systems. The ultimate moral judgment will depend on which framework is deemed most appropriate for a system whose purpose is "justice," a concept that is itself philosophically contested.
2. Applications of AI in the U.S. Criminal Justice System
AI technologies are already being deployed or tested in various facets of American criminal justice, from police work to court administration to corrections.
A Popular Legal AI Tool: Harvey.ai
To understand how ANI can be effectively and responsibly applied in a legal context, it is useful to examine its role in the broader legal industry, where it primarily serves as a powerful support tool. Legal tech startups like Harvey AI provide a clear example of ANI's capabilities when properly constrained.
Harvey AI is a generative AI platform built on OpenAI's GPT-4 architecture but has been extensively customized for the legal profession. Its development involves a multi-layered training process. It begins with the general knowledge of the base GPT model and is then refined with a vast corpus of legal-specific data, including case law, statutes, and legal treatises. When a law firm adopts the platform, it can be further fine-tuned on that firm's own internal documents, templates, and past work product, allowing it to learn the firm's specific practices and style.
The platform's functions are designed to augment, not replace, the work of human lawyers. Its primary use cases include:
Legal Research: Rapidly searching and synthesizing legal information from authoritative sources.
Contract Analysis: Reviewing large volumes of contracts to identify key clauses, risks, and inconsistencies.
Due Diligence: Assisting in the review of documents for corporate transactions.
Drafting Support: Generating initial drafts of legal documents and communications.
A key element of Harvey's design that addresses a major weakness of general-purpose AI is its effort to ground its outputs in verifiable sources. Through a strategic partnership with LexisNexis, Harvey can integrate authoritative legal content and citations directly into its platform, significantly reducing the risk of "hallucinations"—the tendency of AI to generate false information. This allows users to receive answers to legal questions that are supported by and linked to primary law sources.
The Harvey AI example demonstrates that ANI can function effectively and add significant value in the legal field by increasing efficiency and accuracy for specific, well-defined tasks. However, its successful implementation relies on its role as a sophisticated assistant under the direct supervision of a human expert who remains ultimately responsible for the work product.
Below we survey the major current and emerging applications of AI in this domain:
Legal Research and Case Management: One burgeoning use of AI is in automating legal research, brief writing, and case management for lawyers and courts. Advanced language models (like those behind Harvey AI and other startups) can rapidly sift through legal databases, draft documents, or answer complex legal questions. For example, Harvey AI – launched in 2022 – provides custom large language models to assist attorneys with tasks such as document review, contract drafting, and legal research, boasting the ability to find and summarize relevant authorities with speed and accuracy
In early 2023, global law firm Allen & Overy announced it would roll out Harvey’s AI assistant to thousands of its lawyers, signaling a shift in how attorneys perform research and draft work product
Other AI-driven legal tools include Casetext’s CoCounsel (an AI legal assistant acquired by Thomson Reuters), Spellbook (which uses AI to review contracts), and analytics platforms like Lex Machina
These tools aim to increase efficiency in case preparation – for instance, automating search through millions of case files or transcripts to find relevant precedents, or managing case schedules and dockets. In courts, AI is also piloted for workflow and caseflow management: automating the classification of incoming cases, scheduling hearings, and balancing judge caseloads
The National Center for State Courts notes that AI can support case tracking and scheduling to optimize the use of court resources
In principle, such tools could reduce delays and administrative burdens by intelligently routing cases, prioritizing urgent matters, and even predicting how long a trial might take for scheduling purposes. Overall, AI in legal research and case management promises faster processing of information and routine tasks, potentially lowering costs and freeing up human lawyers and staff for higher-level analytical work
Predictive Policing (Crime Prediction and Prevention): Predictive policing refers to using algorithms to analyze large datasets (e.g. historical crime records, incident reports, geospatial data) to forecast where and when crimes are likely to occur, or who might be involved
Police departments in several major U.S. cities have experimented with these tools in the past decade. Place-based predictive policing algorithms (like the infamous PredPol, now rebranded as Geolitica) ingest past crime incident locations and times to identify “hotspots” – small geographic areas and windows of time with elevated risk of crime
The output might say, for example, that a particular block has a high likelihood of a burglary between 5–7pm, prompting commanders to deploy extra patrols there. Person-based predictive policing (e.g., Chicago’s now-abandoned “Strategic Subject List”) analyzes individuals’ criminal records, social ties, and other data to flag people at high risk of violence – either as perpetrators or victims
In theory, predictive policing is meant to allocate police resources more efficiently and proactively, anticipating crime more accurately than traditional patrol patterns
Proponents argue these algorithms can uncover hidden patterns in data, helping police intervene before crimes occur and making policing more objective by relying on data rather than gut instinct
For example, Los Angeles police used a system called LASER to identify areas likely to see gun violence and to compile a list of chronic offenders to monitor.
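To make the mechanics concrete for debaters, here is a deliberately simplified sketch (in Python, with invented incident data) of the core idea behind place-based prediction: count past recorded incidents by grid cell and time window, then direct patrols to the highest-count cells. Commercial systems such as PredPol/Geolitica use more sophisticated statistical models, so treat this only as an illustration of the logic, not any vendor’s actual method.

```python
from collections import Counter

# Hypothetical past incidents: (grid_cell_id, hour_of_day).
# A real system would ingest years of geocoded crime reports.
incidents = [
    ("cell_12", 18), ("cell_12", 19), ("cell_07", 2),
    ("cell_12", 18), ("cell_33", 14), ("cell_07", 3),
]

def hotspot_ranking(incidents, hour_window):
    """Rank grid cells by past recorded incidents within a time-of-day window."""
    lo, hi = hour_window
    counts = Counter(cell for cell, hour in incidents if lo <= hour <= hi)
    return counts.most_common()

# Rank cells for the 5-7 pm window; top-ranked cells get extra patrols.
print(hotspot_ranking(incidents, hour_window=(17, 19)))
# -> [('cell_12', 3)]
```

Note that the only input is where incidents were previously recorded, which is exactly why critics worry about the feedback loops discussed below.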
However, as we will detail, predictive policing has become a flashpoint in the AI ethics debate due to documented bias and feedback loop problems. Historical crime data reflects systemic biases (e.g., over-policing of minority neighborhoods), so predictions often simply reinforce those biases
Indeed, an official Brennan Center analysis explained that these systems risk “reproducing those very biases” under the veneer of objective math. Independent evaluations have found that certain predictive policing software was often inaccurate – one study of a tool used in Camden, NJ found it predicted crime correctly less than 1% of the time, flooding officers with false positives
Due to such concerns, some early adopters have pulled back: the Chicago Police Department disbanded its predictive “heat list” program amid complaints it stigmatized mostly young Black men who hadn’t committed violent crimes, and the LAPD ended its PredPol program in 2020 following an external audit’s critical findings
Notably, Santa Cruz, CA even banned predictive policing by law in 2020, citing bias and civil liberties issues
Despite these setbacks, variations of predictive policing persist or are being revived with supposedly improved algorithms. It remains one of the most controversial AI applications in law enforcement, directly raising questions of racial equity, community trust, and what constitutes “smart” policing.
Recidivism Risk Assessment (Algorithmic Risk Scores): Across the justice system, from pretrial bail decisions to sentencing to parole release, officials are using or considering algorithmic risk assessment tools that estimate a person’s likelihood of reoffending or failing to appear in court. These algorithms (often statistical models or machine-learning systems) take inputs like a defendant’s age, prior convictions, employment status, and sometimes location or family background, and output a risk score or category (low/medium/high risk). One of the most widely used is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a proprietary tool used in states like Wisconsin, Florida, and others.
COMPAS scores purport to predict a defendant’s 2-year risk of general or violent recidivism. Courts have used such scores to inform decisions such as whether a defendant can be safely released pretrial or how long a sentence to impose (with higher-risk individuals sometimes receiving longer or stricter terms)
Other tools include the Arnold Foundation’s Public Safety Assessment (PSA), used in some jurisdictions to help judges determine bail by predicting failure-to-appear and re-arrest rates, and various state-specific risk instruments for probation and parole boards. The appeal of risk assessment instruments is that they could bring data-driven consistency to decisions traditionally subject to human discretion and bias.
For example, judges have always informally considered a defendant’s risk to the community; an algorithm promises a more “evidence-based” and uniform evaluation of that risk, potentially reducing subjectivity. Proponents argue these tools can identify low-risk defendants who can be safely diverted from incarceration (thus reducing jail populations) and flag high-risk cases where more intervention is warranted
Indeed, after New Jersey implemented a risk-score-guided bail reform in 2017, the pretrial jail population fell sharply with no significant rise in crime, which advocates cite as a success of algorithmic triage in reducing unnecessary detention.
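Because tools like COMPAS and the PSA keep their exact formulas and weights proprietary, the following is a purely hypothetical sketch of how an actuarial instrument can turn a few inputs into a score and a risk band; every factor and weight here is invented for illustration.

```python
def risk_score(age, prior_convictions, employed):
    """Toy actuarial score with invented weights -- NOT any real instrument."""
    score = 0
    score += 3 if age < 25 else 1            # youth weighted as higher risk
    score += min(prior_convictions, 5) * 2   # prior record, capped at 5
    score += 0 if employed else 2            # unemployment adds points
    return score

def risk_band(score):
    if score <= 4:
        return "low"
    return "medium" if score <= 8 else "high"

score = risk_score(age=22, prior_convictions=3, employed=False)
print(score, risk_band(score))  # 11 high
```

The moral debates that follow are largely about what goes into such a scoring function, where its thresholds are set, and who gets to inspect it.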
However, as with predictive policing, serious concerns have been raised about accuracy, bias, and due process in risk assessments. The watershed moment was a 2016 ProPublica investigation titled “Machine Bias”, which found that COMPAS systematically mislabeled Black defendants as “high risk” at nearly twice the rate of white defendants who reoffended at similar rates
ProPublica’s analysis of over 7,000 cases revealed that the algorithm’s errors differed by race: Black defendants who did not reoffend were often falsely flagged as high risk (false positives), while white defendants who went on to reoffend were more often rated low risk (false negatives)
For example, one case highlighted a Black woman with a minor theft record whom COMPAS rated high risk (she did not reoffend), versus a white male with multiple armed robberies rated low risk (who did reoffend)
These disparities raise the specter of racial bias being embedded in ostensibly scientific scores – likely because the training data reflects existing biases (e.g. discriminatory policing and sentencing)
The makers of COMPAS disputed ProPublica’s findings, arguing the tool was equally accurate for all races by certain metrics, but the debate highlighted that “accuracy” itself can be defined in different ways (error rates vs. calibration) and that trade-offs exist between equality and predictive accuracy.
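The dueling claims turn on which error statistic you compute. The small sketch below, using invented counts, shows how a tool can be equally “accurate” for two groups in one sense (the same share of people flagged high risk go on to reoffend) while still producing a much higher false positive rate for one group; when the groups’ underlying re-arrest rates differ, these criteria generally cannot all be satisfied at once.

```python
def flag_metrics(flagged_reoffend, flagged_no, unflagged_reoffend, unflagged_no):
    """Return (precision of the 'high risk' flag, false positive rate, false negative rate)."""
    precision = flagged_reoffend / (flagged_reoffend + flagged_no)
    fpr = flagged_no / (flagged_no + unflagged_no)                       # non-reoffenders wrongly flagged
    fnr = unflagged_reoffend / (unflagged_reoffend + flagged_reoffend)   # reoffenders rated low risk
    return precision, fpr, fnr

# Invented counts for two groups with different underlying re-arrest rates.
group_a = flag_metrics(flagged_reoffend=30, flagged_no=20,
                       unflagged_reoffend=10, unflagged_no=40)
group_b = flag_metrics(flagged_reoffend=12, flagged_no=8,
                       unflagged_reoffend=8, unflagged_no=72)

print("Group A: precision %.2f, FPR %.2f, FNR %.2f" % group_a)  # 0.60, 0.33, 0.25
print("Group B: precision %.2f, FPR %.2f, FNR %.2f" % group_b)  # 0.60, 0.10, 0.40
```

In this toy data, the flag is right 60% of the time for both groups, yet group A's non-reoffenders are wrongly flagged roughly three times as often, while group B's reoffenders more often slip through as "low risk" -- the same qualitative pattern at the heart of the ProPublica/Northpointe dispute.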
Beyond bias, there’s a fundamental question of transparency and contestability: Many risk models like COMPAS are proprietary “black boxes.” Defendants have argued that if a score influences their sentence, they should have a right to examine how it was computed and challenge its validity
This came to a head in the Wisconsin Supreme Court case State v. Loomis (2016), where the defendant Loomis claimed his due process rights were violated by the court’s reliance on a secret algorithm he couldn’t interrogate
The court upheld COMPAS’s use but cautioned it should not be the determinative factor and should be accompanied by warnings about its limitations
Loomis illustrates the tension between efficient automation and individual rights: the judge received a number purporting to summarize Loomis’s character and future behavior, but neither the judge nor Loomis fully understood the basis of that number (since Northpointe, the company, keeps the formula confidential). This lack of explanation can undermine the perceived fairness and legitimacy of decisions. Despite these issues, risk assessments continue to spread. The U.S. federal Bureau of Prisons now uses an algorithm called PATTERN under the First Step Act to classify inmate recidivism risk and assign program credits
Numerous states use sentencing guideline tools or parole board scores. Thus, algorithmic risk assessment is already deeply embedded in criminal justice decision pipelines, and its moral implications (fairness, accountability, and the value judgments baked into defining “risk”) are central to the resolution at hand.
Evidence Review and Forensic Analysis: Another arena for AI is processing the massive volumes of evidence and data involved in investigations and trials. Modern criminal cases often include digital evidence (video footage from surveillance cameras or police body-cams, audio recordings of 911 calls or jail phone calls, cell phone data dumps, social media posts, etc.) as well as traditional forensic evidence (DNA, fingerprints, ballistics, etc.). AI tools are being developed to assist humans in reviewing and analyzing this evidence faster and more accurately than manual methods. For instance, companies like JusticeText provide AI-driven platforms for public defenders and prosecutors to automatically transcribe and analyze video or audio evidence from body cameras and interrogation tapes.
This can save countless hours of someone sitting and watching footage: the AI can generate a transcript, flag key words or events (e.g., “gun” or a Miranda warning), and allow attorneys to search within hours of video for specific moments
Public defender offices that have adopted such technology report it has “freed up nearly 50% of their time” on evidence review tasks, allowing lawyers to focus on strategy and client interaction
This is especially impactful in balancing resources, since historically prosecutors and police had more tools for analyzing evidence, whereas under-resourced defense teams struggled to keep up – AI can help level the playing field in discovery and investigations.
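As a concrete illustration of the kind of assistance described above (a simplified sketch, not JusticeText's actual implementation), an audiovisual review tool essentially pairs an automatic transcript with keyword flagging over timestamps:

```python
# Hypothetical transcript segments from body-camera audio: (seconds_from_start, text).
transcript = [
    (12.4, "Step out of the vehicle please"),
    (95.0, "You have the right to remain silent"),
    (310.2, "Is that a gun in the glove box"),
]

KEYWORDS = ["gun", "right to remain silent", "consent to search"]

def flag_segments(transcript, keywords):
    """Return (timestamp, keyword, text) for segments mentioning a keyword."""
    hits = []
    for ts, text in transcript:
        for kw in keywords:
            if kw.lower() in text.lower():
                hits.append((ts, kw, text))
    return hits

for ts, kw, text in flag_segments(transcript, KEYWORDS):
    print(f"{ts:7.1f}s  [{kw}]  {text}")
```

An attorney can then jump straight to the flagged moments instead of watching hours of footage end to end.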
In the forensic science domain, AI and machine learning are being applied to improve or automate analyses that used to rely on expert judgment. Examples include pattern matching tasks: AI algorithms can compare fingerprints or ballistic markings on bullets with large databases potentially faster and without some human errors
The Drug Enforcement Administration uses machine learning to classify the geographic origin of drug samples (heroin, cocaine) by analyzing chemical signatures, helping track trafficking patterns
“TrueAllele” is an AI-driven probabilistic DNA interpretation system that can tease apart mixed DNA samples with greater objectivity, used in some court cases to provide statistical likelihoods of a match (where earlier human analysis might have been inconclusive)
AI-based image analysis is also being tried for crime scene photos (to classify and label objects in images), and even in autopsies (e.g. some research on using AI in post-mortem imaging to aid cause-of-death determinations)
The touted benefit of AI in forensics is increased accuracy, consistency, and speed. Unlike a human examiner who might be influenced by cognitive bias or fatigue, an algorithm can apply the same criteria uniformly. For example, an AI might measure dozens of characteristics in a fingerprint minutiae pattern quantitatively, whereas two fingerprint experts might sometimes come to different conclusions on a borderline sample. In fact, a DOJ report notes that AI can make forensic methods more reproducible and help quantify uncertainties (yielding, say, a percentage match probability instead of a subjective “match/no-match” call)
AI might also catch patterns a human misses – such as scanning hours of CCTV from a city block to find a glimpse of a suspect’s face or vehicle, tasks at which modern video analytics have become increasingly adept. As one legal tech commentator noted, AI video forensics can now automate object detection, track individuals across frames, enhance poor-quality footage, and even employ facial recognition to identify persons of interest.
Similarly, AI text-mining tools can devour terabytes of electronic communications (emails, chats) in large white-collar crime cases, flagging suspicious messages or connections that investigators should examine
These abilities greatly accelerate investigations, which is crucial in complex cases – AI can sift through “terabytes of data swiftly,” whereas traditional teams might be overwhelmed
Despite these advantages, the use of AI in evidence analysis also raises chain-of-custody and reliability questions. In court, any scientific or technical evidence must meet standards of reliability and be explainable to a judge or jury. If a deep learning model enhances an image or matches a fingerprint, can an expert explain how it reached its conclusion? In many cases, the AI is used as an assistive tool and a human expert still testifies to the result, but as the technology advances, courts will face new questions of admissibility (some judges have already had to consider whether a probabilistic DNA software’s code must be disclosed to the defense)
There are also concerns about deepfakes – as AI makes it easier to fabricate remarkably realistic fake images or audio, the justice system will have to be vigilant that AI is not used maliciously to generate false evidence
Ironically, it may require other AI tools to detect deepfake artifacts to ensure authenticity of digital evidence.
We will revisit these issues of trust and verification in weighing the ethics of AI-based evidence.
Future Prospects: AI in Representation and Adjudication: Looking ahead, more radical applications of AI are being contemplated – for instance, AI as a legal advisor or even as an advocate for defendants, and AI assisting in judicial decision-making or mediation. While no U.S. court allows non-human representation today, 2023 saw an attempt at the “first AI lawyer in court”: the startup DoNotPay planned to provide a realtime AI legal assistant to whisper answers via an earpiece to a defendant in traffic court.
This stunt was aborted after state bar associations warned the company that an unlicensed “robot lawyer” in a courtroom could be considered unauthorized practice of law – even threatening the CEO with jail for contempt.
The episode highlighted that AI counsel is technically feasible (a chatbot can now answer many legal questions), but legally and ethically fraught. Nevertheless, as AI like GPT-4 grows more capable, we might soon see defendants using AI-powered apps to draft motions, find case law, or navigate court procedures without a human lawyer (particularly in low-level cases or where someone can’t afford counsel). There are already chatbots that help people contest parking tickets or fill out legal forms for eviction defense. Courts have begun providing AI-driven self-help tools: for example, some jurisdictions use online “litigant portals” with guided interviews (a series of questions that dynamically generates the proper forms or provides recommended next steps).
These can be seen as rudimentary AI legal aides aiming to improve access to justice for self-represented litigants. As natural language AI improves, one can imagine a more interactive “virtual public defender” that can advise a defendant of their rights, analyze the prosecution’s evidence, or even predict trial outcomes to inform plea bargain decisions.
The ethical issue is whether such AI advisors can be trusted to act in the client’s best interest and handle the nuances of legal strategy and ethics (for instance, how would an AI handle confidential client information, or what if it “hallucinates” a fake case precedent?). These questions are active areas of debate in the legal community. Even more controversially, discussions have arisen about AI in adjudication – could algorithms one day play the role of judge or jury, at least for minor cases? While no U.S. court has gone this far, other countries are piloting such concepts. In 2019, Estonia’s Ministry of Justice explored an AI “robot judge” to adjudicate small claims under €7,000, where parties would submit evidence online and an algorithm would issue a decision (subject to appeal to a human judge)
And in 2023, an AI-powered mediator (the Smartsettle ONE system) was used in the UK to settle a dispute over a few thousand pounds, with the algorithm learning each side’s bids and proposing a compromise – which the parties accepted, resolving the case in under an hour
These examples point to a future where certain high-volume, low-stakes cases (like small civil claims, traffic fines, or administrative hearings) might be partially or fully decided by AI to save costs and time.
The rationale is that many of these disputes boil down to straightforward parameters (e.g., contract amounts, whether a fine is legally owed) that an algorithm could evaluate consistently. Additionally, AI could assist human judges by providing recommendations or even drafting opinions. Indeed, forms of this are already happening: judges in some U.S. jurisdictions use software to generate standardized decisions or calculate guideline sentences.
In 2021, it was reported that a judge on the Pennsylvania Supreme Court experimented with an AI tool (developed by law professors) to help draft an opinion – essentially to test how well the AI would articulate the judge’s reasoning in a case
The judge still reviewed and finalized the text, but this shows AI encroaching even on judicial writing. The potential benefits of AI adjudication might be speed, consistency, and cost savings. For simple cases, it could clear backlogs and make justice more accessible (imagine an automated online small-claims judge available 24/7). However, the moral and legal implications are enormous: We would be removing human judgment – with its capacity for empathy, moral intuition, and discretion – from decisions that affect people’s rights. There are fears of an “algorithmic judiciary” that lacks transparency and accountability. If an AI judge made an error, who is responsible? Can a code-embedded bias or programming flaw violate someone’s constitutional rights? Many argue that certain qualities of justice are inherently human – such as mercy, empathy, and the capacity for moral judgment – and cannot be handed off to a machine without losing something essential.
Even augmenting judges’ decisions with AI can be problematic: studies in other contexts have shown that when an algorithm gives a recommendation (say, a risk score), human decision-makers often anchor on that suggestion and give it undue weight
If judges start deferring to an AI’s “opinion” of a case, this could undermine the independent reasoning we expect from the judiciary.
In summary, AI applications in criminal justice range from the mundane (automating paperwork) to the profound (influencing who is incarcerated or even replacing human decision-makers). These uses bring a host of promises and perils, which we will now explore through the lens of arguments for and against the resolution’s claim that such use is immoral.
3. Arguments in Favor of AI Use in Criminal Justice
Proponents of employing AI in the criminal justice system argue that it can address longstanding inefficiencies and human shortcomings, ultimately leading to a fairer, more effective justice system. Below are key arguments in favor of AI integration, along with supporting evidence:
(a) Increased Efficiency and Speed, Leading to Cost Reduction
Automation of Routine Tasks: AI can perform many tasks faster and with fewer resources than human workers, which promises significant efficiency gains in a justice system often bogged down by case backlogs and procedural delays. By automating repetitive processes – e.g., filing paperwork, searching legal databases, transcribing interviews, scheduling hearings – AI can dramatically speed up case processing. This efficiency directly translates to cost savings for taxpayers and justice agencies. For example, AI-driven e-discovery tools can review thousands of documents for relevant evidence in minutes, a job that would take a team of attorneys weeks, thus cutting down billable hours and court delays.
In courts, AI scheduling systems can optimize calendars so that judges and courtrooms are utilized at full capacity, reducing idle time and continuances
The Council on Criminal Justice notes that properly implemented AI tools could help allocate resources more efficiently, ensuring, say, that police patrols or judicial attention are directed where they are most needed.
This efficiency argument aligns with utilitarian reasoning: if AI enables the system to handle more cases in less time (without sacrificing accuracy), then more justice is delivered per unit of societal resources – a net gain in utility.
Faster Decision-Making for Public Safety: In areas like predictive analytics and risk assessment, speed can save lives or prevent harm. AI can analyze crime patterns or individual risk factors in real-time or near-real-time, helping officials make prompt decisions. For instance, if a bail algorithm quickly flags a defendant as very low risk, the person can be released in hours rather than spending days in jail awaiting a hearing, which is both more humane and saves incarceration costs. Conversely, a fast warning about a likely crime hotspot might allow police to intervene proactively. Proponents highlight that AI doesn’t sleep – it can crunch data 24/7, picking up subtle shifts (like a sudden clustering of certain incidents) far faster than manually compiling weekly crime stats. This can lead to more agile responses by law enforcement and courts. As an example, some jurisdictions using risk assessment tools saw reductions in jail populations not just because of fairness but because the tools operated quickly and objectively, avoiding lengthy subjective debates over bail.
Speedy processing also reduces costly side-effects: lengthy pretrial detention is expensive and destabilizes defendants’ lives, so faster bail decisions benefit all. The U.S. Department of Justice has explicitly pointed to improved efficiency and resource allocation as potential benefits of AI, indicating that automation might accelerate production, reduce volatility in decision-making, and even prevent systemic failures by catching issues early
Cost Savings and Economic Efficiency: The justice system’s inefficiency has a direct financial toll – court backlogs tie up taxpayer-funded resources, overcrowded prisons incur huge budgets, and wrongful or unnecessary detentions lead to costly lawsuits. AI’s champions argue that by streamlining processes and reducing errors, significant cost reductions will follow. A Goldman Sachs analysis estimated that up to 44% of legal work could eventually be automated by AI
If mundane tasks that currently require armies of clerks, paralegals, and junior attorneys are handled by software, public defender and prosecutor offices could operate with leaner staff or redirect human effort to more complex work. Similarly, if predictive policing or risk assessments optimize how officers and prison beds are used, law enforcement agencies could do more with smaller budgets. The Right on Crime initiative (a conservative reform group) notes that such innovations could “increase efficiency while reducing costs for taxpayers”, portraying AI as a tool for fiscally responsible justice
In corrections, AI-based monitoring of inmates (like automated camera systems or anomaly detection indicating potential violence) could augment or replace some prison guards, theoretically saving money in the long run (though this raises other concerns discussed later). The bottom line of the efficiency argument is that AI can process information and make routine decisions far faster than humans, without breaks or fatigue, leading to a more rapid throughput of cases and significant cost efficiency gains
These gains, advocates claim, free up funds and time that can be reinvested in other areas of need, such as rehabilitation programs or community policing.
(b) Enhanced Data Analysis and Informed Decision-Making
Augmenting Human Judgment with Big Data: AI systems are adept at finding patterns in vast datasets that humans simply cannot process in entirety. In the criminal justice context, this means AI can reveal insights that lead to smarter decisions. Proponents argue that data-driven decision-making is inherently superior to decisions based purely on anecdote, intuition, or outdated heuristics. For example, a judge or parole board armed with a validated risk assessment score has more information about relevant risk factors than one relying on gut feeling alone
A police chief using predictive models might allocate patrols based on a nuanced analysis of 10 years of crime data, weather, and events, rather than just last year’s crime map. In both cases, AI provides an empirical foundation for decisions, potentially improving accuracy. Indeed, advocates often note that humans have well-documented biases and errors in judgment (we misestimate risks, we have cognitive biases, we get influenced by emotion or irrelevant details). A well-designed algorithm, they claim, can correct for some of these – by consistently applying the same criteria to each case and focusing only on the factors that statistically matter, removing some arbitrariness from justice
One supporting example: A study in New York found that a machine learning model predicting failure-to-appear in court was significantly more accurate than judges’ decisions, and if implemented, it could have reduced the jail population without increasing crime.
This suggests that AI analysis can identify who really needs detention versus who doesn’t, more precisely than current practice – a win for both public safety and individual liberty.
Objectivity and Consistency: When properly designed, AI algorithms apply the same rules to everyone, in contrast to human actors who might show inconsistency or favoritism. For instance, two defendants with similar profiles ideally should receive similar pretrial release decisions, but in practice judges’ decisions can vary widely by county or courtroom or even time of day (studies have shown “morning vs. afternoon” leniency gaps, etc.). An algorithm doesn’t get tired or moody; it will output the same risk score given the same inputs, whether it’s Monday or Friday. This consistency is often framed as fairness – like cases treated alike – and as protection against certain human biases. The Council on Criminal Justice notes that one benefit of algorithmic tools can be “enhanced transparency and uniformity in decision-making processes”, as the criteria used are explicit and consistently applied.
For example, if a sentencing algorithm is used, it might transparently weigh factors like offense severity and prior record, thereby reducing disparities that arise from which judge one happens to get.
In policing, a data-centric approach might counteract an individual officer’s biased hunches – instead of patrolling only certain neighborhoods because “that’s where we’ve always gone,” an algorithm might reveal crime risks in other areas or times that were being overlooked, thus optimizing coverage without prejudices
Proponents highlight examples like Chicago’s use of a gun crime forecasting model that helped police remove hundreds of illegally possessed guns – attributing this success to objective analytics pinpointing the right intervention spots rather than officer bias
Furthermore, objective analytics can uncover system-level biases for reform. If an AI analysis of sentencing data shows certain groups getting consistently harsher outcomes, that data can spur corrective policy changes (in this sense, AI can function like an audit tool to promote justice).
Improved Accuracy and Outcomes: Ultimately, supporters claim that AI can improve the accuracy of criminal justice outcomes: more guilty people caught, more innocent or low-risk people released, and more appropriate sanctions applied. In forensics, for example, AI-enhanced techniques can increase the accuracy of evidence interpretation – leading to more correct verdicts. The DOJ has pointed out that AI in forensic science could reduce human error rates and help quantify the likelihood of matches or mistakes, potentially reducing wrongful convictions due to forensic misinterpretation.
A concrete instance is in DNA analysis: algorithms have made it possible to use DNA mixtures that were previously too complex for humans to interpret, thereby solving cases that would otherwise remain unsolved or ensuring evidence is not wrongly dismissed
In policing, while early predictive policing had issues, newer AI efforts (especially when coupled with community inputs and revised algorithms) aim to actually preempt crime or deploy resources to prevent victimization, which, if achieved, is a clear positive outcome ethically (fewer crimes mean fewer victims). Even if crime isn’t prevented, AI might help clear crimes faster: for example, facial recognition AI (when used under proper controls) has helped identify suspects in serious crimes that might have gone cold without the technology – supporters would cite cases like identifying the Boston Marathon bombers from surveillance videos or locating human trafficking networks via pattern recognition in ads. In corrections, proponents see AI helping tailor interventions to reduce recidivism: machine learning can identify which inmates would benefit most from certain rehabilitation programs, or predict which probationers are struggling (from GPS or check-in data) and trigger supportive measures. This precision rehabilitation could improve success rates, in a virtuous cycle of reduced re-offense and thus increased public safety in the long run. All these improvements rely on AI’s data analysis muscle – crunching through far more variables and records than a human can, to inform decisions that are empirically likely to yield better outcomes. To summarize the pro side: AI, when carefully implemented, has the potential to make the criminal justice system more efficient, more consistent, and more informed, thereby increasing its overall effectiveness and fairness. It can process more cases at lower cost (benefiting society economically), base decisions on comprehensive data rather than gut instinct (potentially reducing bias and error), and augment human capabilities to achieve better outcomes (solving crimes, avoiding undue detentions, tailoring sentences to risk).
These benefits represent the strongest moral justification for AI: if it saves lives, protects rights (by avoiding mistakes), and allocates justice more equitably and efficiently, then using AI would seem not only moral but a moral imperative, given the duty of the system to continuously improve justice delivery.
(c) Potential to Reduce Human Bias and Error
One of the most compelling arguments made in favor of AI is that it could help mitigate the well-documented biases and errors that plague human decision-makers in criminal justice. Humans, even well-intentioned, have unconscious biases related to race, gender, appearance, and socioeconomic status. They also suffer from fatigue, inconsistent judgment, and cognitive biases (like the tendency to be harsher right before lunch, as one famous study of judicial parole decisions showed). Advocates suggest that AI, being an unemotional and data-driven agent, need not replicate these biases and in fact can be designed to counteract them.
Removing Explicit Bias: An AI algorithm can be programmed to ignore factors like race, ethnicity, or name that might trigger prejudice. For instance, a pretrial risk assessment tool typically does not take a defendant’s race as an input. In principle, it focuses on objective criteria (age, criminal history, etc.) and thus “color-blinds” a part of the decision process that might otherwise be subject to racial bias by a judge or officer.
Some new algorithms are even being developed with bias-mitigation techniques – for example, adjusting thresholds to equalize false positive rates across races, or reweighting data to remove historical bias influences.
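One of the simplest versions of such a mitigation is choosing a separate score cutoff for each group so that false positive rates come out roughly equal. The sketch below uses invented scores and outcomes purely to show the arithmetic; real fairness-aware methods are more sophisticated, and, as noted below, deliberately using group-specific cutoffs is itself ethically and legally contested.

```python
def fpr_at_threshold(scores_labels, threshold):
    """False positive rate among people who did not reoffend (label 0)."""
    negatives = [s for s, y in scores_labels if y == 0]
    return sum(s >= threshold for s in negatives) / len(negatives)

def threshold_for_target_fpr(scores_labels, target_fpr):
    """Lowest threshold whose false positive rate does not exceed the target."""
    candidates = sorted({s for s, _ in scores_labels})
    for t in candidates + [max(candidates) + 1]:
        if fpr_at_threshold(scores_labels, t) <= target_fpr:
            return t

# Invented (score, reoffended) data for two groups.
group_a = [(9, 0), (8, 0), (7, 1), (6, 0), (5, 0), (3, 0)]
group_b = [(7, 0), (6, 1), (5, 0), (4, 0), (2, 0), (1, 0)]

for name, data in [("A", group_a), ("B", group_b)]:
    t = threshold_for_target_fpr(data, target_fpr=0.25)
    print(name, "threshold:", t, "FPR:", round(fpr_at_threshold(data, t), 2))
```

In this toy data, the two groups end up with different cutoffs (9 vs. 6) but comparable false positive rates (about 20% each).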
The goal of these “fairness-aware” AI designs is to ensure that the algorithm’s errors or outputs are equitable. If successful, this could directly reduce disparities – e.g., ensuring that Black and white defendants with similar backgrounds have the same likelihood of being flagged high risk or that police patrol recommendations are spread based on true crime rates, not skewed by over-policing patterns. In fact, some researchers argue that with careful constraints, an algorithm can be less biased than a typical human judge, because it can be tested and corrected, whereas human biases are difficult to even detect in individual cases.
The Harvard Journal of Law & Tech analysis of Loomis mused that while all parties assumed the algorithm might be less biased than judges (since machines aren’t intentionally racist), that isn’t automatically true – but it could be made true with explicit anti-bias training of the model.
This points to the possibility of “algorithmic affirmative action”: deliberately programming algorithms to identify and counteract bias in the data.
If we as a society value equality, we could imbue AI with that directive more consistently than we have managed to instill it in each human decision-maker.
Consistency Reducing Human Error: Humans are also prone to random errors and inconsistencies – a tired cop might overlook a clue, a rushed clerk might mishandle a form, a judge might forget a detail from a long case. AI systems excel at consistent, error-free repetition of tasks and meticulous attention to detail. For instance, an AI vision system reviewing footage won’t get bored and fast-forward through a segment where something important happens – it will dutifully scan every frame. An AI legal research tool won’t “forget” to include a relevant case; if it’s in the database and matches the criteria, the AI will find it. By minimizing lapses and slips, AI can improve the accuracy of justice. Consider the domain of forensic analysis: human forensic examiners have made high-profile mistakes (like misidentifying a fingerprint, e.g., the FBI’s mistaken identification of an Oregon man in the Madrid train bombing case). AI assistance in fingerprint matching or DNA analysis can flag when a human might be about to err or provide probability metrics that prevent overconfidence.
Additionally, algorithms don’t face the stress and heavy caseload pressures that cause human error. A public defender with 300 cases might inadvertently neglect an important motion for one client; an AI calendar system can ensure no deadline is missed, and an AI brief analyzer might catch an argument the overworked attorney didn’t think of. In these ways, AI serves as a safety net and a standardizer, nudging the system toward more uniform application of the law.
Transparency and Accountability (in Theory): Some proponents argue that well-designed AI can actually be more transparent than human decision-making, thus reducing the “black box” of human bias. An algorithm’s code and input factors can be examined – one can audit how often it flags people of each race, test it on hypothetical cases, and adjust it. By contrast, a judge’s internal reasoning or a police officer’s hunch is often opaque and unrecorded. Indeed, calls for using AI in areas like sentencing partly stem from frustration with human inconsistency and opacity. The idea is that an algorithm would document the reasons for a decision in a quantifiable way (e.g., “defendant scored 82 points due to 5 prior offenses and age 19”), which a defendant can then review.
This process could make it easier to spot if something like race was proxying in unintentionally, or if a factor is weighted in a way that society disagrees with. With a human, if bias influences a decision, it’s hard to prove – but with an algorithm, if bias is discovered, one can modify the algorithm and retroactively assess its past impacts. In this sense, AI offers the hope of measure and manage: you can measure bias in an algorithm and then manage (reduce) it, whereas measuring bias in human decisions is much trickier.
Positive Early Results: There are some anecdotal and initial empirical results where introduction of AI seems to have improved fairness outcomes. New Jersey’s bail reform using an algorithm resulted in significantly fewer people (disproportionately African American and low-income individuals) being detained pretrial, with no increase in crime – suggesting a fairer, more efficient system.
In Kentucky, a risk assessment led to increased release rates and an initial analysis found no increase in failures to appear, indicating more equitable treatment without sacrifice of safety.
These are sometimes cited as evidence that data-driven tools can correct for the previous overreliance on money bail (which disadvantaged the poor) and subjective judgments (prone to racial bias). Similarly, the use of JusticeText AI by some defender offices has revealed police misconduct or inconsistencies in video evidence that human attorneys might have missed, thereby holding law enforcement accountable and benefiting defendants’ rights.
Such outcomes align with the moral principle of justice: more guilty people held accountable, more innocent or low-risk people freed – which is a win-win in ethical terms. In sum, the pro-AI stance holds that while AI is not inherently free of bias (it reflects data), it offers tools to detect and reduce bias and error that surpass what humans alone can do. By enforcing consistency, eliminating explicit prejudicial factors, and allowing for continuous bias monitoring and recalibration, AI-based systems have the potential to make criminal justice more impartial and more accurate than the status quo. If realized, this directly serves moral values of equality before the law and accuracy in adjudicating guilt and punishment. The next section, however, will critically assess whether these potentials are being fulfilled – and at what cost – as we examine the arguments against AI in criminal justice.
4. Arguments Against AI Use in Criminal Justice
Despite its promised benefits, the incorporation of AI into the criminal justice system has drawn intense criticism on moral, legal, and practical grounds. Critics argue that far from curing biases or improving justice, AI can entrench or even exacerbate discrimination, undermine due process, and create new harms and inequities. We outline the major arguments against the use of AI in this context:
(a) Algorithmic Bias and Racial Discrimination
Perhaps the most prominent concern is that AI tools often exhibit algorithmic bias, leading to discriminatory outcomes – especially against racial minorities – thereby perpetuating injustice. This is rooted in the fact that AI systems are typically trained on historical data, which in criminal justice is rife with racial disparities due to long-standing structural racism (e.g., over-policing in Black communities, unequal sentencing, etc.).
As the adage goes: “garbage in, garbage out.” If the input data reflects biased practices, the AI will likely reproduce those patterns, giving a veneer of objectivity to what is essentially pre-existing prejudice. Evidence of Racial Bias in Practice: Multiple real-world studies confirm this danger. The aforementioned ProPublica investigation into the COMPAS risk score found that it systematically mislabeled Black defendants as high risk at almost twice the rate of white defendants, falsely flagging many Black individuals as likely reoffenders when they were not – a clear racial bias in error rates. Meanwhile, white defendants who did reoffend were more often rated low risk (false negatives), implying the tool was more lenient toward, and more often mistaken in favor of, whites.
This translates into tangible harms: Black defendants may have been kept in jail or given harsher sentences because of an inflated algorithmic score, while some white defendants may have been unjustly given a benefit of the doubt. The NAACP bluntly states that “mounting evidence indicates that predictive policing technologies do not reduce crime… Instead, they worsen the unequal treatment of Americans of color by law enforcement.”
They cite how predictive policing algorithms, by relying on historical crime data, inherently target Black communities that have been over-policed, leading to a vicious cycle of disproportionate surveillance and enforcement in those communities.
For example, if policing in the past was concentrated in minority neighborhoods (leading to more recorded incidents there), a place-based algorithm will send more patrols there in the future, causing even more minor offenses (like loitering or low-level drug possession) to be detected in those neighborhoods while ignoring others.
This feedback loop “can increase racial biases, resulting in disparate outcomes including disproportionate surveillance and policing of Black communities,” the NAACP warns.
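A toy simulation makes the feedback loop concrete. The numbers below are hypothetical and not drawn from any real deployment; the two neighborhoods have identical true offense rates, and the only asymmetry is the biased historical data that seeds the model.

```python
# Toy simulation of the predictive-policing feedback loop described above.
# Two neighborhoods with IDENTICAL true offense rates; "A" starts with more
# recorded incidents only because it was historically over-policed.
TRUE_OFFENSES = 100        # offenses per period, the same in both neighborhoods
DETECTION_RATE = 0.004     # fraction of offenses detected per patrol hour
PATROL_HOURS = 100         # total patrol hours to allocate each period

recorded = {"A": 60, "B": 40}          # biased historical seed data

for period in range(1, 6):
    # The "predictive" model sends most patrol hours wherever the recorded
    # count is currently highest (a winner-take-most allocation).
    top = max(recorded, key=recorded.get)
    hours = {n: (0.8 if n == top else 0.2) * PATROL_HOURS for n in recorded}
    # New recorded incidents track how hard you look, not true crime
    # differences, so the over-policed area keeps "confirming" the prediction.
    recorded = {n: TRUE_OFFENSES * DETECTION_RATE * hours[n] for n in recorded}
    share_A = recorded["A"] / sum(recorded.values())
    print(f"period {period}: share of recorded incidents in A = {share_A:.0%}")
```

Even in this crude model, a 60/40 artifact of past over-policing hardens into an 80/20 patrol pattern that the resulting data then appear to justify, despite identical underlying crime rates.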
Another instance: a 2019 study exposed bias in a widely used healthcare AI that assigns risk scores to patients. Because the model used healthcare spending as a proxy for health need, and historically less is spent on Black patients owing to access barriers, Black patients received lower risk scores than equally sick white patients.
While that’s healthcare, it underscores a pattern: whenever an algorithm uses a proxy or pattern from a biased society, it picks up those biases. Similarly, a recent audit of a predictive policing tool in Plainfield, NJ (by The Markup) found it was sending officers repeatedly to neighborhoods with predominantly non-white populations for crimes that were unlikely to occur, essentially creating digital “harassment” of those residents with no public safety benefit.
Opaque Reinforcement of Inequity: A key moral issue is that AI bias is often harder to challenge or even recognize than human bias. When a defendant faces a biased officer or judge, the defense can at least cross-examine the witness, present evidence of bias, or in extreme cases move for recusal. But when they face a biased algorithm, they often have no insight into how the decision was made and no straightforward way to challenge it.
Proprietary algorithms like COMPAS are trade secrets; in Loomis, the defendant argued it violated his rights that he couldn’t scrutinize how his score was computed, but the court still allowed its use while acknowledging this limitation.
This lack of transparency means biased outcomes can hide behind a facade of scientific impartiality. It is difficult for an affected individual to prove bias when the process is a black box – for example, if a predictive policing system keeps sending police to a Black neighborhood, community members might suspect bias, but the police can deflect by saying “the computer picked these locations.” The NAACP emphasizes that the proprietary nature of algorithms prevents public input or understanding, and that predictive tools are often deployed without communities even being aware of them.
This undermines accountability: if an AI systematically over-targets a racial group, who do we hold responsible? The usual answer is “the algorithm” – but you cannot put an algorithm on the witness stand or fire it for misconduct. Violation of Anti-Discrimination Principles: The discriminatory impacts of AI arguably violate the fundamental moral and legal principle of equal treatment under the law. U.S. civil rights law prohibits disparate treatment on the basis of race, and even practices with disparate impact can be unlawful in certain contexts (like employment, housing, etc.). In criminal justice, the Constitution’s Equal Protection Clause is supposed to guard against racially biased enforcement and adjudication. Critics argue that biased AI effectively automates unequal protection, which is a profound moral transgression. It might even entrench biases further by giving them a techno-rational gloss. For example, an AI might not explicitly use race, but use zip code or employment status or friend networks – which can be highly correlated with race – as proxies, thus hiding racism behind complexity
This is not hypothetical: in Loomis, even though race was not a direct input, the defendant argued that the score was nonetheless entangled with such characteristics; studies suggest algorithms can find proxies for race, such as area of residence or socioeconomic variables, and thus yield racially discriminatory outcomes even when “race-neutral” on paper.
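A small sketch, using synthetic data and invented variable names, shows the mechanism: a model that never sees race can still score groups differently through a correlated proxy such as zip code.

```python
# Sketch: a "race-blind" risk score can still disadvantage one group when an
# input (here, zip code) is correlated with race.  Data are synthetic.
import random
random.seed(0)

people = []
for _ in range(10_000):
    race = random.choice(["Black", "White"])
    # Residential segregation: zip code 1 is mostly Black, zip code 0 mostly White.
    zip_code = 1 if random.random() < (0.8 if race == "Black" else 0.2) else 0
    priors = random.randint(0, 3)          # same distribution for both groups
    # The model never sees race -- only priors and zip code.
    risk_score = 10 * priors + 25 * zip_code
    people.append((race, risk_score))

for group in ("Black", "White"):
    group_scores = [s for r, s in people if r == group]
    print(group, round(sum(group_scores) / len(group_scores), 1))
```

The score is formally race-neutral, yet the average gap between the groups comes entirely from the proxy – which is exactly the dynamic critics describe.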
From a deontological view, such proxy-driven judgment is unacceptable: individuals are not being judged solely on their own actions or merits but on correlations that reflect historical inequities – essentially treating people as means to an end (a statistical end) rather than as ends in themselves. Erosion of Trust and Community Harm: Racially biased AI tools can severely damage the already fraught trust between marginalized communities and the justice system. When Black communities, for instance, see that newfangled algorithms just mean more surveillance drones overhead or higher “risk scores” locking them up, it deepens alienation and perceptions (often accurate) of systemic racism. The NAACP notes that these practices “further erode trust in law enforcement” and marginalize entire communities.
The legitimacy of the justice system is at stake – if certain groups feel that an inscrutable machine has condemned them based on skewed data, they will reasonably question the moral authority of that system. In pragmatic terms, this can make policing harder (people won’t cooperate or report crimes if they expect biased treatment) and lead to unrest or withdrawal of consent to be governed. In summary, algorithmic bias presents a strong moral case against AI: It contradicts equality, produces unfair outcomes for vulnerable groups, and cloaks injustice in an unassailable algorithmic form. The concerns are not just theoretical but supported by multiple studies and real incidents
This argument suggests that injecting AI into criminal justice, given the current state of technology and society, may actually amplify the very biases the system is struggling to overcome, thus making its use immoral by perpetuating racial injustice.
(b) Lack of Transparency – The “Black Box” Problem
Another major critique is that AI systems often operate as black boxes, lacking transparency in how they reach decisions. This opaqueness conflicts with core principles of justice such as the right to a fair hearing, the ability to challenge evidence, and the need for accountability in governmental decision-making. In short, if neither the people affected nor even the decision-makers fully understand why an AI recommended a certain action, this undermines the moral legitimacy of that action. Opacity of Complex Models: Many AI algorithms, particularly those based on machine learning (neural networks, ensemble models, etc.), are highly complex. They may involve thousands or millions of parameters interacting in non-linear ways. Even the engineers who created them often cannot give a simple explanation for a single decision the model outputs. For instance, if a judge asks, “why did the risk score classify this defendant as high risk?”, a COMPAS representative or data scientist might only be able to say, “statistically, people with similar profiles reoffended at X rate, so the model gave a high score.” But the exact weighting of factors or interplay that led to that particular score isn’t readily interpretable.
This is even more true for deep learning systems like facial recognition or predictive models that consider dozens of variables – they are often a mystery even to their creators in terms of specific decisions. Denial of Due Process: In the American legal tradition, due process includes the right to know the evidence against you and to confront or rebut it. When an AI’s output is used in a decision against an individual – say, denying them bail or sentencing them more harshly – that individual has a right to challenge that basis. But if the algorithm is proprietary or too complex, they cannot effectively do so.
The Loomis case exemplifies this tension: Loomis argued that because he couldn’t inspect how COMPAS worked, he couldn’t challenge its accuracy or relevance in his sentencing, essentially depriving him of a meaningful chance to be heard on something influencing his punishment.
The Wisconsin Supreme Court recognized the concern but simply cautioned judges not to rely on the score exclusively.
This half-measure did not resolve the fundamental issue: secret algorithms make it impossible for defendants to exercise oversight. Civil liberties organizations like the ACLU and others have raised alarms about “algorithmic due process”, noting that mistaken or biased algorithmic decisions can go uncorrected because they’re inscrutable and carry an aura of objectivity that courts are too ready to accept. As one scholar put it, “words yield to numbers” in human psychology – decision-makers might give undue deference to a number an algorithm spits out, without demanding justification
Unaccountable Errors: Lack of transparency also means that when AI makes mistakes, it’s hard to identify the cause or hold anyone accountable. If a human police officer exercises bad judgment, we know whom to blame and potentially retrain or sanction. If an AI flags an innocent person as a likely criminal (as happened in a notorious facial recognition case where Detroit police wrongfully arrested Robert Williams, a Black man, because a face-matching AI mislabeled him), who is responsible? The company that made the software? The police for using it? The machine cannot be cross-examined to find the flaw in its logic. This creates a gap in accountability. Government agencies might even hide behind the tech: “We just followed the computer’s recommendation.” This dynamic is dangerous in a democracy; it can lead to a Kafkaesque scenario where citizens suffer consequences and no one can definitively tell them why – or correct the error. In the Williams case, for example, the error was only discovered because his alibi was rock solid and humans eventually looked closer; had Williams lacked an alibi, the AI’s misidentification could have led to his wrongful conviction with little chance of discovering the mistake. The Illinois v. Johnson litigation similarly showed this risk, with a man being kept in jail due to a risk score that was actually miscalculated – but proving that took arduous effort, and many defendants lack the resources to mount such a technical challenge. The NAACP and others push for algorithmic transparency laws precisely because they see the current secrecy as enabling unchecked errors and biases.
Public and Judicial Scrutiny: The complexity and secrecy of AI also mean these systems often escape the kind of rigorous public debate and judicial scrutiny that major criminal justice policies ordinarily undergo. A police department can quietly implement a predictive policing system without public hearings; a judge can use a risk score without fully understanding it. In contrast, if a new sentencing guideline is proposed, it goes through commissions, commentary, perhaps legislative approval. The “black box” nature lets AI slip under the radar of democratic oversight. U.S. Senators have taken note: in 2020, a group of Senators wrote to the DOJ expressing concern that predictive policing and risk assessment tools lack validation and transparency, calling for a halt to federal funding for such tools until their fairness is proven.
They remarked that there is “no real scientific validation” and that these systems may just be replicating bias under fancy branding.
When even policymakers cannot get straight answers about how these tools work (often because the vendors claim intellectual property protections), it’s a serious governance problem. From a moral standpoint, transparency is tied to respect for persons: treating someone as an autonomous individual means explaining the reasons for decisions affecting them, allowing them to contest those reasons. Black-box AI denies them that respect and agency, effectively saying, “Accept your fate because the computer said so.” This is fundamentally at odds with a system of justice that is supposed to be reasoned and dialogical. As one legal scholar wrote, “The use of COMPAS is morally troubling precisely because sentencing should not be easy” – it should involve moral reasoning and explanation, not abdication to a machine.
In conclusion, the lack of transparency in AI-driven decisions is a powerful argument against their use in criminal justice. It jeopardizes due process, inhibits challenges to unfair outcomes, and conflicts with the requirement that justice not only be done but be seen to be done (and understood). If people cannot understand or scrutinize how life-altering decisions are made, the moral legitimacy of those decisions is deeply undermined.
(c) Problems with Accountability and Due Process
Closely related to transparency, but deserving separate emphasis, are the issues of accountability and adherence to fundamental legal procedures (due process) when AI is used. These concerns argue that AI in criminal justice erodes the human responsibility and procedural safeguards that are vital to a moral and legal system. Diffusion and Absence of Responsibility: In traditional settings, every decision can be traced to a human agent – a police officer made the arrest, a prosecutor decided the charge, a judge pronounced the sentence. With AI, decision-making becomes diffused. For example, if an AI bail recommendation leads to someone’s detention and later it’s found to be a mistake, who is accountable? The judge could say they just followed the recommended risk score. The tool’s maker isn’t in the courtroom and often has disclaimed responsibility for ultimate decisions. This diffusion can lead to a responsibility vacuum, where injustices occur and no one is held answerable. This is morally problematic because justice systems are predicated on answerability. If a person is wronged by a decision, they should have recourse – perhaps an official to blame or an avenue of redress. AI muddles this. The Harvard JOLT article on State v. Loomis argues that the programmers and companies creating these tools should be held to the same standards as other actors in the system, implying that currently they are not. “Frankenstein’s creator is responsible for his actions,” the author quips, suggesting that tech creators can’t just unleash algorithms into courts and claim neutrality.
Yet in practice, that is what often happens: companies shield their algorithms from scrutiny, and the burden of any error falls on the defendant or the public, not on the creators or users of the tool. Erosion of Human Judgment and Mercy: Another accountability issue is the subtle way AI can cause human decision-makers to abdicate their role. If judges start trusting risk scores more than their own nuanced judgment, they might not fully consider mitigating circumstances or personal character – things an algorithm can’t quantify well. The Harvard piece noted that “COMPAS will let them [judges] sleep at night, knowing a computer reassured them it was correct.”
This is chilling: it suggests judges might feel less accountable for harsh outcomes (“the algorithm said he’s high risk, so what choice did I have?”). In moral terms, this is a dilution of personal responsibility – the judge is using the algorithm as moral cover. It may also mean less compassionate, individualized justice. A judge might otherwise hesitate to jail a young defendant on intuition that he’s turning a corner, but a high risk score could override that instinct, leading to a mechanical outcome. Virtue ethicists would argue this mechanization of judgment undermines the virtue of mercy and the moral development of judges themselves, who become like rubber stamps for the algorithm. Challenging or Contesting AI Evidence: Due process is about fair procedure – an accused should have notice of the case and an opportunity to contest the evidence. When AI is involved, this process often breaks down. For instance, if facial recognition identifies a suspect, how can the defense challenge the reliability? They might not get access to the algorithm’s inner workings (as was the case in one Maryland prosecution, where defense subpoenas for the face recognition algorithm details were denied). Even if they do, explaining to a jury why the algorithm might be wrong is extraordinarily difficult. Jurors (and even judges) can be overawed by technology, giving it undue weight – a phenomenon known as “algorithmic authority.” Thus, the defense’s ability to confront and cross-examine is nullified when the accuser is an algorithm with secret source code. A concrete example: In Houston, a court overturned a murder conviction in 2019 because the defense was not given the chance to review the software that analyzed DNA evidence; the court realized that denying access to the source code violated the defendant’s confrontation rights.
Many similar fights are playing out over breathalyzer software, DNA mixture software, etc. The overarching pattern is AI evidence tends to be introduced without the same scrutiny as human testimony, raising the risk of wrongful convictions or unfair hearings. Fundamental Fairness and Dignity: At a broader philosophical level, due process isn’t just a checklist; it embodies the idea that the state must treat individuals with a certain dignity – giving them a voice and a reasoned explanation when depriving them of liberty or rights. The use of inscrutable or unchallengeable AI decisions violates this dignity. It treats the individual as a data point rather than a participant in a reasoned moral discourse. Even if the outcome is “correct,” the person has been subjected to an opaque process where they couldn’t tell their side effectively or engage with the reasoning. This offends the concept of procedural justice, which holds that people are more willing to accept even unfavorable outcomes if they believe the process was fair and they were heard. AI-driven processes risk being perceived (rightly) as alien and unfair, reducing compliance and respect for law in the long term. In summary, critics argue that AI undermines accountability (no one to blame or correct when things go wrong) and due process (people cannot effectively challenge or even understand the decisions affecting them). These issues strike at the heart of a moral justice system. A system where defendants say “I was jailed because of some computer and I don’t know why” is inherently less just than one where they can say “I was jailed because the judge reasoned, based on evidence, that I’m a flight risk, and here’s how I can dispute that.” The moral weight of depriving someone of liberty demands a level of human responsibility and transparency that black-box AI currently cannot meet.
(d) Inequitable Access and Widening of Gaps
The introduction of AI into criminal justice could also exacerbate existing inequalities of resources and access to justice. Not everyone in the system has equal access to cutting-edge technology, which may create a two-tiered system: well-funded actors benefit from AI, while poorly funded ones lag behind, further skewing fairness. Additionally, tech solutions often focus on law enforcement and prosecution first, leaving defense and rehabilitation under-served. Prosecution vs. Defense Disparity: Historically, prosecutors (state or federal) and police departments have far greater resources than public defenders or indigent defense services. AI tools – being expensive to develop, license, and maintain – may widen this gulf. For instance, a District Attorney’s office might deploy an AI analytics platform to sort through digital evidence, identify incriminating materials, and even predict defense arguments, while the overburdened Public Defender’s office cannot afford similar tools.
This means the prosecution can build its case with AI-boosted efficiency and perhaps new insights, but the defense is stuck with traditional manual methods. The result is an even greater power imbalance. As one technology commentator noted, “public defenders face an uphill battle against well-funded prosecutors using AI” and without comparable tools, defenders risk being consistently outmaneuvered.
An ABA Journal piece reported on DoNotPay’s failed “robot lawyer” stunt by noting that such technology, if it had worked, might actually help level things for minor offenses, but its failure highlights how defense-side AI is trailing (and indeed, DoNotPay itself got sued for practicing law without a license, showing the system’s resistance to unconventional defense help).
Encouragingly, some startups are focusing on defense: JusticeText, mentioned earlier, is explicitly aimed at public defenders to help review video evidence, transcribing bodycam footage to save defense attorneys’ time. It reports freeing up 50% of attorneys’ time on evidence review. But currently, only about 70 public defense agencies have that platform, out of thousands nationwide. Many public defenders still lack basic tech like case management software, let alone AI analytics. Meanwhile, federal agencies and big-city prosecutors are investing in AI for things like crime forecasting, digital forensics, and data mining of social media for evidence (projects often funded by federal grants or private partnerships). The moral concern is that justice becomes even more unequal based on who can afford AI. It’s akin to a new armament in an arms race: if only one side has it, the contest (trial) isn’t fair. Rich Jurisdictions vs. Poor Jurisdictions: Similarly, within law enforcement, larger or wealthier jurisdictions can adopt AI, whereas smaller, rural, or underfunded ones cannot, leading to geographic disparities. It could become the case that in one county, you’re more likely to be denied bail because they have a risk tool flagging you, but in the next county without that tool, you might be released. Or vice versa: perhaps the tool would have flagged you as low risk but the county couldn’t afford it, so a judge’s bias kept you in jail. We already see something akin to this with DNA databases – a state that has familial DNA search might catch a suspect that another wouldn’t, based purely on tech adoption differences. Equal protection concerns arise if technology’s availability dictates outcomes. It’s arguably immoral for “justice” to depend on the luck of where the crime occurred or which agency is involved – but AI could accentuate such luck-of-the-draw. Access to AI for Rehabilitation and Reentry: There’s also inequity in the focus of AI development. Much funding and innovation goes into policing and surveillance tech (which potentially harms marginalized groups) rather than into tools that could help defendants or prisoners (which could help marginalized groups). For example, there’s far more investment in predictive policing startups than in AI tutors for inmates or job placement algorithms for parolees. If AI truly has benefits, they’re not being democratically distributed. Civil liberties advocates often point out that communities of color become testing grounds for new surveillance tech – like facial recognition in urban areas – without consent, while the benefits of tech (like improved safety systems or access to legal information) are not likewise deployed in those communities.
Digital Divide and Competence: Another aspect is that using AI effectively requires data infrastructure and expertise. Some jurisdictions may have the data quality to feed an AI tool, others may not (garbage data will yield garbage results, potentially harming those in a sloppy-data county with false risk flags). Also, defense attorneys or judges with less tech savvy may not know how to challenge or interpret AI evidence, whereas better-funded lawyers do. This “digital literacy” gap can lead to unequal outcomes. Imagine one defendant’s lawyer is equipped to cross-examine the state’s algorithm (maybe by hiring an expert witness), while another’s is not – the former defendant gets a better chance. This is an extension of longstanding inequities (rich defendants hire expert witnesses, poor ones rely on overworked public defenders), but AI adds a new, highly technical dimension that makes it even harder for laypeople to engage.
Public Perception of Legitimacy: If people perceive that AI is only being used to watch and catch them, not to help them, it further alienates disadvantaged communities. For instance, the idea of an AI judge in small claims might be sold as improving access to justice (you can get your dispute resolved online cheaply). But if that AI judge is only in poor people’s courts (because richer litigants go to real courts with lawyers), then it’s an inferior justice for the poor – a “second-class justice” scenario. Already, concerns exist that automated decision systems in government tend to be used on welfare decisions, unemployment claims, etc., affecting the poor with less recourse, while the affluent always get human attention. Similar could happen in criminal justice – e.g., perhaps petty offenses for the poor are handled by impersonal kiosks (leading to fines or jail via algorithm), whereas wealthier defendants with private attorneys get full hearings. In summary, the inequitable access argument holds that AI might widen gaps: between prosecution and defense, between big and small jurisdictions, between the tech-privileged and the tech-disadvantaged. Rather than democratizing justice, current trends suggest AI could amplify existing inequalities in power and resources, which is morally troubling. It challenges the utilitarian defense of overall benefit, because even if AI improved aggregate efficiency, if those gains accrue mostly to one side (e.g., law enforcement) and leave the other side worse off relatively, the fairness of the system deteriorates. Justice requires some parity of arms between prosecution and defense; AI threatens to upset that balance further.
5. Broader Moral Objections to AI Use (Relevant to Criminal Justice)
Beyond the direct justice system impacts, critics raise broader ethical issues with AI technology itself that carry over to its use in criminal justice. These include concerns about environmental sustainability, the means by which AI systems are developed (data sourcing and intellectual property), and wider societal effects like employment displacement and the erosion of human autonomy. While not unique to criminal justice, these issues form part of the moral calculus of whether embracing AI in this domain is ethical.
(a) Environmental Costs of Computation
High Energy Consumption and Carbon Footprint: Cutting-edge AI, particularly the machine learning models that underlie many advanced systems (like deep neural networks), requires enormous computational resources. Training large AI models consumes vast amounts of electricity and often results in significant carbon emissions unless fully offset by renewables. For example, training a single state-of-the-art language model has been estimated to emit hundreds of thousands of pounds of CO2. A 2019 study by UMass Amherst researchers found that training one deep learning model for natural language processing emitted over 626,000 pounds of CO2 (the equivalent of five cars’ lifetime emissions).
More relevantly, OpenAI’s GPT-3 (175 billion parameters), technology similar to what might power legal AI like Harvey or analytical tools in policing, was reported to consume 1,287 MWh of electricity in training and to emit around 502 metric tons of CO2 – roughly equal to 112 gasoline cars driven for a year.
Once deployed, AI models continue to draw power for inference; in fact, Google estimated that 60% of the energy AI systems use can be consumed in the post-training inference stage. GPT-3’s daily usage, for instance, was estimated at about 50 pounds of CO2 per day, or 8.4 tons of CO2 per year.
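Taking the training figures cited above at face value, the implied conversions are easy to check. The emission factor and per-car figure in the sketch below are rough, commonly used approximations, not values taken from the cited studies.

```python
# Back-of-the-envelope check of the GPT-3 training figures cited above.
training_mwh = 1287            # reported training electricity use
training_tco2 = 502            # reported training emissions, metric tons CO2

implied_factor = training_tco2 / training_mwh
print(f"implied grid emission factor: {implied_factor:.2f} t CO2 per MWh")
# ~0.39 t/MWh, roughly an average U.S. grid mix of that period

TCO2_PER_CAR_YEAR = 4.5        # rough figure for one gasoline passenger car per year
print(f"car-year equivalent: {training_tco2 / TCO2_PER_CAR_YEAR:.0f} cars")
# ~112 cars driven for a year, matching the comparison in the text
```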
If AI tools become widespread in criminal justice – from constant predictive policing computations, 24/7 surveillance analytics, large-scale data mining – the cumulative energy usage could be substantial. Data centers running AI for government can thus contribute to climate change if not managed sustainably. This raises a moral question: Is it right to deploy energy-hungry AI for marginal gains in efficiency, given the urgent need to reduce greenhouse gases? Many argue it is morally irresponsible to ignore AI’s carbon footprint, especially in public sector uses. The United Nations Environment Programme has warned that proliferating AI servers increase emissions, electronic waste, and even consume large amounts of water for cooling data centers
So, introducing AI extensively into another sector (like criminal justice) should factor in these externalities. If, for example, automated video analysis in all prisons leads to huge data center expansions, one must consider the trade-off: maybe improved safety in prisons, but at the cost of environmental harm that itself causes human suffering (through climate impacts). Consistency with Justice Ethics: Environmental justice advocates note that climate change disproportionately harms vulnerable populations (often poorer communities, frequently communities of color – the same groups often overrepresented in criminal justice). Thus, a policy that marginally improves some policing outcomes but contributes to climate harm could ironically hurt those communities in another way. There’s a moral tension: you might reduce crime risk slightly with AI, yet increase health risks via pollution or climate effects. A utilitarian might try to weigh these competing effects, but the calculation isn’t straightforward (how many tons of CO2 justify solving one extra burglary?). In short, the morality of sustainability comes into play. Governments have obligations to future generations and to global welfare beyond just their immediate goals. If the justice system’s AI adoption significantly adds to environmental degradation, one could argue it’s an immoral trade, especially if less energy-intensive solutions (like community programs or hiring more human staff) could achieve similar ends. Possible Mitigations Ignored: Critics point out that tech companies and agencies often adopt AI for its direct benefits without investing equally in green computing practices or offsets. For example, the energy usage of AI could be mitigated by using renewable-powered data centers or scheduling computations when renewable energy is abundant. But unless those steps are mandated or incentivized, they might not happen uniformly. The OECD has called for understanding AI’s environmental costs, emphasizing that “much of recent AI progress has come from significant increases in computation, which has environmental impacts.”
To the extent the criminal justice system is a public enterprise, it arguably has a duty to lead by example in sustainable tech use. If it doesn’t, that is a moral failing by neglecting the stewardship of our environment.
(b) Intellectual Property and Data Sourcing Concerns
Unethical Data Collection (Privacy and Consent): The performance of AI systems often depends on large datasets, which are not always obtained in ethically sound ways. For instance, many facial recognition or surveillance AI tools have been trained on images scraped from the internet without individuals’ consent. A notorious case is Clearview AI, which built a database of over 3 billion face images by scraping websites like Facebook, LinkedIn, etc., without permission.
Clearview then sold its facial recognition system to law enforcement. This raised serious issues: people’s photos (including many who were never suspected of any crime) were appropriated for police use without their knowledge, arguably violating privacy rights on a massive scale.
Clearview was sued in Illinois under a biometric privacy law, leading to a settlement in which Clearview agreed to restrictions, implicitly acknowledging the problematic nature of its data sourcing.
The moral point is that AI in criminal justice may be built on datasets collected through privacy infringements or even illegal methods, tainting the ethical legitimacy of the tools. If an AI system used by police relies on illicitly obtained personal data, is its use not an extension of that illicit act? The ACLU argues that companies like Clearview are wrong to claim a First Amendment right to scrape our faceprints, stressing that individuals did not consent to be placed in a perpetual police lineup just by posting a photo online.
Intellectual Property Theft: Another aspect is that some AI development involves copying copyrighted material without compensation or attribution – essentially intellectual property (IP) theft at scale. For example, OpenAI and other model developers have scraped books, articles, and legal texts to train language models. If the criminal justice system starts using AI tools that were trained on copyrighted case law or commentary taken without permission, that raises questions of IP ethics (and legality, as seen in lawsuits by authors against AI companies). Or consider risk assessment tools: the Arnold PSA model is public, but COMPAS is proprietary. Conversely, the training data behind COMPAS’s risk predictions (criminal records from various jurisdictions) could be considered public records – but the way Northpointe (its creator) uses and monetizes them is private. Some legal scholars have critiqued “data enclosures” where private companies take public data (like court records) and turn them into profit-making algorithms that then influence public decisions. This creates a bizarre scenario where the public’s own data (often contributions by the labor of court clerks, etc.) becomes locked behind vendor contracts and not transparently available to the public.
This seems morally questionable in terms of fairness and public ownership of law.
Bias in Data Sourcing: We touched on bias above, but it’s worth noting here: the provenance of data can encode biases beyond race – like outdated views, regional skew, etc. For instance, an AI legal research tool might rely heavily on certain publishers’ databases. If those sources had errors or omissions (say missing cases or over-representing certain jurisdictions), the AI’s advice could be systematically biased or incomplete, which could mislead attorneys. There is a moral responsibility to ensure data integrity. If agencies rush to use AI without auditing the training data, they may inadvertently propagate historical injustices or mistakes. Violation of Personal Privacy and Autonomy: Using AI often means mass surveillance or analysis that impinges on privacy. Predictive policing might monitor social media or cell phone locations en masse. Automated license plate readers, face recognition on city cameras, social network analysis of gang databases – these AI-driven practices can violate individuals’ reasonable expectation of privacy and chill their free association and speech.
Critics like the NAACP highlight how such surveillance disproportionately targets Black communities, effectively treating them all as potential suspects under ceaseless watch.
This is an affront to personal autonomy and liberty. The morality of a system that sacrifices the privacy rights of many (often innocent) for the potential to catch a few is highly contested. Is it ethical to compile giant databases of biometric data (faces, DNA) of largely law-abiding citizens just so an AI has a big haystack to search for needles? Many would say no – that it violates the principle of minimal intrusion and treats people as means to an end (catching someone) rather than ends themselves. Overall, these intellectual property and data sourcing issues suggest that the means by which AI is built and operates may themselves be immoral, regardless of the outcomes. Even if an AI tool were effective, if it was trained on stolen data or requires ongoing mass privacy invasion, a deontologist would object that the ends don’t justify the means. In the justice context, using “fruit of the poisonous tree” is typically disallowed (e.g. evidence from illegal searches). Analogously, using AI fruits of unethical data practices could be seen as tainting the moral legitimacy of convictions or decisions made with it.
(c) Societal Impact: Job Displacement and Autonomy Erosion
Beyond immediate case outcomes, AI integration can have ripple effects on society and the workforce of the justice system, raising moral and social concerns: Job Displacement and Dehumanization of Work: Introducing AI to automate tasks in criminal justice could eliminate or fundamentally alter jobs for many people – from paralegals and court clerks to analysts and possibly even patrol officers (if predictive systems reduce the need for routine patrol numbers). Some job loss or evolution is part of technological progress, but the moral worry is twofold: the justice system’s workforce might suffer unemployment or deskilling, and the justice process might lose the human touch in roles where it’s actually important. For instance, AI legal research assistants like Harvey AI might reduce the need to hire junior lawyers or legal researchers
Thomson Reuters acquiring such tech indicates law firms may operate with fewer young associates. Public agencies might not hire as many clerks if AI can sort filings. A utilitarian might argue cost savings are good, but from a social perspective, displacing skilled workers and cutting off entry-level legal jobs could be harmful – it might reduce the diversity or pipeline of experienced human lawyers (creating a future shortage of wise attorneys/judges who traditionally come up through those ranks). It can also be seen as a dignitary harm: turning functions of justice over to machines might make remaining human roles less meaningful or more menial (e.g., humans just rubber-stamping AI outputs). The American Bar Association and others have started discussing that lawyers have an ethical duty to stay competent with AI, but also caution that over-reliance might erode human judgment skills. Erosion of Human Autonomy and Decision-Making: Perhaps the deeper philosophical issue is that heavy AI use may erode human autonomy – both for decision-makers and for those subject to decisions. We touched on judges deferring to algorithms (losing autonomy in judging) and individuals feeling powerless against algorithmic decisions (losing autonomy in their lives). Expanding that, one can argue that a justice system reliant on AI moves away from the notion of individuals taking moral responsibility for choices. It feeds a technocratic mindset where everything is optimized by algorithms rather than deliberated in a human public square. As AI “guides” more decisions, human actors might become over-dependent, losing confidence in their own moral reasoning. The “AI judge” concept is a prime example – if society ever accepted that, it’s almost a resignation of human moral agency in determining justice, arguably a core aspect of collective autonomy and self-government. Even earlier, think of parole boards: if an AI says “prisoner X has a 90% chance to reoffend,” board members might feel they have no choice but to deny release, even if their gut, informed by interview, says the prisoner seems genuinely reformed. Their autonomy to give a second chance is overridden by a statistical dictate, which some would call a moral crutch or abdication
For the individuals on the receiving end, autonomy erosion occurs when they are reduced to numbers. If a person is kept in jail because an algorithm labeled them high risk, no amount of personal improvement or explanation can change that number in the moment. They are effectively stripped of the chance to exercise agency to affect the decision (except in long-term general ways that alter inputs). The decision process doesn’t hear them; it processes them. That’s dehumanizing, as many critics have noted. A quote from the Harvard JOLT piece summarizing this discomfort: “The use of COMPAS is morally troubling precisely because sentencing should not be easy… Actors in the system should have to grapple with the consequences of their work.”
AI makes it too easy to not grapple, thus undermining the moral growth and engagement of both officials and the subjects of the system. Public Trust and Alienation: There is also the society-wide impact on trust and legitimacy. A system seen as run by machines might alienate people, as mentioned. If those most affected (e.g., marginalized communities) view it as an Orwellian tool, they may withdraw cooperation (witnesses not coming forward, jurors unwilling to convict based on black-box evidence, etc.). This can weaken the social fabric and trust in the rule of law, which is a moral and practical negative. One might consider an analogy: citizens accept laws and verdicts in part because they trust the process (think of jury duty as a civic participation in justice). If AI short-circuits the participatory aspects (say, using AI instead of juries for efficiency), the communal sense of justice done by the people for the people is lost. Opportunity Costs: Another broad objection is the opportunity cost: resources spent on flashy AI could perhaps yield more justice if spent on human-centered reforms (like hiring more public defenders, providing mental health treatment, etc.). If AI is not demonstrably far better, then morally it might be better to first exhaust simpler, proven remedies for justice system issues. For instance, bail reform using simple checklists and more funding for pretrial services might reduce jail populations without needing an algorithm. If policymakers choose the high-tech route because it’s trendy, ignoring simpler fixes, that’s arguably irresponsible. This ties to a moral principle of prudence and not being seduced by technosolutionism when dealing with human problems that have social roots. In conclusion, these broader objections remind us that criminal justice doesn’t happen in a vacuum: it interacts with environment, economy, and society’s values. The moral critique is that adopting AI widely in this domain could compromise other important ethical commitments – environmental stewardship, respect for intellectual labor and privacy, ensuring equitable social progress, preserving human dignity and agency, and maintaining public trust. Those who say “AI is immoral in criminal justice” often bundle these wider impacts into their calculus, arguing the negatives far outweigh the touted positives.
6. Ethical Synthesis: Weighing the Pros and Cons through Moral Frameworks
Having laid out the arguments on each side, we now analyze the issue of AI in U.S. criminal justice using the ethical frameworks defined earlier – utilitarianism, deontology, and virtue ethics – to see how each evaluates the resolution that such use is immoral. This synthesis helps us understand which arguments carry the most weight under different moral lenses and whether, on balance, the use of AI in criminal justice can be justified or condemned.
Utilitarian Analysis (Consequences and Net Welfare)
A utilitarian would approach AI in criminal justice by asking: Does it produce more overall benefit (e.g., public safety, efficiency, cost savings) than harm (e.g., injustices, social costs, errors)? The focus is on aggregate outcomes for society. Potential Benefits to Aggregate Welfare: On the pro side, we have noted several utilitarian benefits: AI could make the system more efficient (resolving cases faster, at lower cost, freeing up public funds for other uses); it might improve accuracy in decisions (preventing some crimes via better resource allocation, avoiding some wrongful releases or unjust detentions through consistent risk assessment); and it could possibly reduce human biases and errors, leading to fairer outcomes for groups that were previously disadvantaged (thereby increasing overall justice, which utilitarians might include as part of social utility, since injustice often leads to unrest or underutilization of human potential).
If, hypothetically, AI reduced recidivism by a measurable amount and cut costs in the billions, a utilitarian would see a strong argument for its use: fewer crimes mean fewer victims (thus less suffering) and saved resources can be spent on other social goods (education, healthcare, etc.), increasing total happiness or preference satisfaction. For example, suppose predictive policing guided patrols to thwart a spree of burglaries, preventing dozens of people from the trauma and property loss of being victims – that’s a clear utility gain. Or if risk assessments allowed 20% more low-risk defendants to be released pretrial without endangering public safety, thousands of individuals avoid unnecessary jail time (which is a disutility), while society doesn’t suffer more crime – a net positive
Efficiency gains (like speeding up court dockets) reduce the time people spend entangled in legal uncertainty (which is a stress and thus disutility) and might allow more crimes to be processed or more time for judges to devote to serious cases, potentially improving quality of outcomes. All these are positive consequences. Quantifying and Comparing Harms: However, the con side reveals substantial negative consequences: algorithmic bias might lead to more incarcerations or police harassment of minorities, causing suffering to those individuals and their families and potentially stoking social discord
False positives by AI (like wrongful identification leading to wrongful arrest or conviction) directly cause immense harm to innocents – how many such errors are tolerable in exchange for capturing how many criminals? Utilitarianism demands a kind of cost-benefit calculus: e.g., is it acceptable if an AI policing tool reduces burglary by 10% (benefit to those would-be victims) but increases wrongful stops of innocent Black men by 30% (harm to those individuals and their community trust)? What if a risk assessment averts one violent reoffense (saving a potential victim) but wrongly keeps five low-risk people in jail who would not have reoffended (harming those five and their families)? One key utilitarian issue is that harms and benefits may fall on different groups – a form of distributive problem. Classic utilitarianism doesn’t care about distribution per se, only sum totals. But a sophisticated approach might consider that harms like unjust imprisonment are weighted more heavily (because they are severe infringements on well-being) than say the benefit of a marginally lower chance of being victimized for the average citizen. Some consequences are also long-term or diffuse: if AI erodes trust in justice among millions, that societal harm (fear, less cooperation with police, potential for unrest) might outweigh a marginal crime drop. Similarly, environmental harm from increased computing might not be felt immediately but is a cost spread globally and into the future
A utilitarian calculation could incorporate the carbon cost by converting emissions to estimated future harm (lost QALYs due to climate change, etc.). Do Benefits Outweigh Harms? Many critics doubt that the promised benefits have actually materialized to a degree that compensates for the demonstrated harms. For instance, real-world predictive policing programs have often not delivered clear crime reductions (some even saw no improvement or were scrapped due to inefficacy), meaning the benefit side may be smaller than advertised, while bias harms were clearly evidenced.
If AI doesn’t significantly improve accuracy (and some risk assessments are only slightly better than chance or than a simple checklist), then the efficiency and consistency benefits might not overcome the harm from wrongful outcomes. A utilitarian might conclude that at present, the harms (biased outcomes, wrongful deprivations of liberty, societal distrust) likely outweigh the relatively modest gains in efficiency or crime prevention that AI has shown. And even if AI could be improved to perform better, one must factor in implementation risks: any new complex system can have unforeseen bad consequences, so caution suggests not deploying widely until positive utility is convincingly demonstrated. Alternatively, some utilitarians might argue for a rule-utilitarian approach: if adopting the rule “use AI in criminal justice” tends to produce worse outcomes on balance, then it is immoral to adopt that rule. Given the evidence of how these systems have functioned, one might lean toward that conclusion. For example, if predictive policing effectively criminalizes whole neighborhoods and thus harms thousands to possibly prevent a handful of crimes, total suffering might increase – making it a bad rule. As for the future, a die-hard utilitarian could say that if all the bugs were fixed, perhaps AI would far outperform humans and then be clearly beneficial. But that is a speculative future. At present, utilitarian analysis likely counsels caution about, or outright rejection of, AI in justice, due to what John Stuart Mill might call misplaced calculation: lots of easily seen harm versus uncertain, small benefits. Indeed, as Timnit Gebru (an AI ethicist) and others have pointed out, in domains like hiring or criminal justice, the promised objective gains often haven’t panned out but did create new issues, thus reducing net utility.
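To make the kind of weighing described above concrete, here is a toy welfare ledger. Every number and harm weight is purely illustrative – none comes from the studies cited – and a real analysis would need defensible estimates for each input.

```python
# Toy utilitarian ledger for a hypothetical predictive-policing deployment.
# Every quantity and weight below is illustrative only.
burglaries_prevented = 50       # per year
wrongful_stops_added = 2000     # per year, concentrated in one community
wrongful_detentions  = 10       # per year

# Harm weights in arbitrary "welfare units"; severe liberty deprivations are
# weighted far more heavily than an averted property crime is valued.
VALUE_PER_BURGLARY_AVERTED  = 5
HARM_PER_WRONGFUL_STOP      = 1
HARM_PER_WRONGFUL_DETENTION = 200

benefit = burglaries_prevented * VALUE_PER_BURGLARY_AVERTED
harm = (wrongful_stops_added * HARM_PER_WRONGFUL_STOP
        + wrongful_detentions * HARM_PER_WRONGFUL_DETENTION)
print(f"net welfare change: {benefit - harm:+}")   # negative under these weights
```

Change the weights and the sign can flip, which is precisely why the distributional questions raised above, and not just aggregate totals, drive the utilitarian verdict.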
Deontological (Rights and Duties) Analysis
From a deontological perspective, the central question is whether the use of AI in criminal justice inherently violates moral rules or rights, regardless of consequences. Key principles include: respect for persons (don’t treat people as mere means), right to a fair trial/due process, equality under the law, transparency and accountability as requirements of justice. Violation of Rights: As discussed, AI can violate a defendant’s right to due process by obscuring how decisions are made and preventing meaningful challenge
That alone is a strong deontological strike against it. If one holds that “no person should be punished or deprived of liberty except through a process they can participate in and understand,” then black-box algorithms in sentencing or bail are unethical – even if they were accurate – because they deny the person that participatory right. Kantian ethics would also object to any system that treats individuals as statistics. When a judge says, “I jail you because my computer says people like you commit crimes,” the individual is not being judged on their own actions or choices but as a means to some end (perhaps society’s risk mitigation). This arguably fails to treat them as an autonomous being deserving individualized respect
It’s a form of collective judgment or even prejudice (albeit data-driven), conflicting with the Kantian imperative to treat humanity, whether in oneself or another, always as an end and never merely as a means. Duty of Justice and Honesty: Deontologists also emphasize the duty of honesty and transparency, especially for authorities. Deploying inscrutable AI might be seen as a form of deception or at least withholding of reasons, which authorities have a moral duty to provide. There’s a duty not to convict or punish someone without clear reasons – historically that’s why we require jury verdicts to be unanimous and based on evidence in open court. AI undermines that clear chain of reason. One might liken heavy reliance on AI to “outsourcing one’s duty”: a judge has a duty to judge; handing that to a machine (that the judge doesn’t fully understand) could be an abdication of duty. It’s like a doctor blindly following a diagnostic app without their own analysis – they would be violating their duty of care to the patient by not exercising independent judgment. Similarly, a parole board member who just goes by the algorithm’s recommendation might be seen as not truly exercising mercy or justice – failing a duty of office. Equal Respect / Non-discrimination: Deontologically, discrimination is wrong in itself. If AI’s bias means individuals are treated differently due to race or other morally irrelevant factors, that’s a violation of the categorical imperative (universal law formulation: we cannot will a principle of “treat people differently by race” as a universal moral law because it undermines equality of persons). The NAACP’s stance essentially is deontological: these tools “increase racial biases”, which is unacceptable regardless of some benefits
The fact that the bias comes from a computer doesn’t exonerate it; it’s arguably worse because it’s hidden. A Kantian might say the implicit maxim behind some AI usage is “It’s permissible to use historical bias data to guide future enforcement/punishment decisions.” If universalized, that maxim would perpetuate injustice indefinitely (since biases feed biases) – clearly not a moral law one would rationally choose. Thus, using biased AI fails the Kantian universalization test. Human Dignity and Autonomy: Deontology holds human dignity sacrosanct. Many of the con arguments can be reframed as affronts to dignity: e.g., a person wrongfully arrested by an AI face match suffers a dignity harm – they were essentially objectified as a faceprint with no chance to speak or be seen as an individual
Even someone correctly identified might feel less respected – there was no human dialogue, just a robotic process. Courts sometimes call the justice process itself a way of acknowledging individuals (even guilty ones) as autonomous agents by addressing them, allowing allocution, etc. Removing humans from these interfaces – like an AI sentencing kiosk – would directly degrade that dignified treatment. Moral Responsibility of Actors: A deontologist would also insist that moral actors (judges, officers) remain responsible for their decisions; they should not blame a machine. The Harvard JOLT piece concluded with a call for “algorithmic due process” and holding programmers to ethical standards like judges, reflecting the idea that moral accountability must be maintained. If no one is accountable (as current AI usage tends to allow), that’s a moral failing in duty terms. Each actor has a duty not to let something unaccountable make the real choices. So, deontologically, most of the concerns raised align to a clear verdict: Using AI as it stands in criminal justice is immoral because it violates rights (due process, equality), duties (to judge fairly, to be transparent, to treat individuals as ends), and undermines the moral fabric of accountability. Even if it worked to reduce crime (a consequentialist good), a strict deontologist might still oppose it on principle, arguing that justice cannot be traded for efficiency in this manner – certain things (like not condemning someone without a human judgment or not discriminating) are inviolable rules.
Virtue Ethics Perspective (Character and Virtues)
Virtue ethics asks: What kind of character or values are expressed or cultivated by using AI in criminal justice? Does this practice align with virtues like justice, prudence, compassion, honesty, courage, etc., and foster a virtuous justice system? Virtue of Justice (Fairness): A virtuous criminal justice system would exemplify fairness – treating like cases alike, giving each person their due, and ensuring decisions are made with wisdom and context. AI might seem to promote one aspect of fairness (consistency), but as we saw, it undermines others (contextual equity, individualization). A virtue ethicist might note that a just judge is not one who rigidly applies a formula, but one who can balance general rules with the particulars of a case, showing equity (the Aristotelian concept of adjusting the law to fit special circumstances). AI lacks that subtle equity – it’s all law, no equity. Thus, leaning too much on AI could make the system rigid and unfeeling, which is not true justice. For example, the virtues of mercy or clemency – considered important in tempering justice with compassion – find no place in a risk score or algorithm that’s optimizing some metric. An anecdote: Judges often talk about moments where they felt compelled by compassion or a sense of the defendant’s potential to deviate from a harsh outcome. These are virtuous exercises of practical wisdom (phronesis). If AI reduces those, the system loses a certain moral excellence or virtue in practice
Virtue of Prudence (Practical Wisdom): Practical wisdom is the ability to deliberate well about what actions are truly good in a given situation. Offloading decisions to AI could atrophy the development of prudence among justice officials. A young prosecutor who always follows the software’s recommendation on sentencing isn’t learning to weigh mercy against deterrence; a police chief who just goes by the predictive map isn’t developing community-sensitive judgment of their own. Virtue ethicists would worry that the system becomes less wise, even if more efficient. Deploying AI without fully understanding it or considering its ethical pitfalls might also be seen as imprudent – a rash embrace of novelty over wisdom. Many critics imply that rushing these tools out was a tech-industry-driven move, not one guided by the wisdom of seasoned legal practitioners. A virtuous approach would have been more cautious: pilot-testing ethically and involving diverse community input (the virtues of humility and open-mindedness), rather than imposing tools and then scrambling to address the fallout.
Virtue of Honesty/Transparency: In an individual, honesty is a virtue – one should be forthright. A justice system that hides its reasoning behind black boxes is not modeling honesty. It sends the message that the appearance of objectivity matters more than truth and openness, which could erode the trustworthiness of institutions. If we consider the character of the justice system, current AI usage makes it appear secretive and unaccountable – vices rather than virtues.
Compassion and Empathy: A critical virtue for those who work in criminal justice (especially judges, juries, and probation officers) is empathy – understanding the human being before them. Will AI foster empathy? Likely the opposite: it encourages seeing people as data points, sometimes literally as risk scores or heat dots on a map. This impersonal approach can dull empathy. For instance, a parole board member may deny parole with little hesitation when a chart says the risk is high, whereas looking the inmate in the eye and hearing their story might, through empathy, lead to a second chance. Over time, relying on analytics might leave practitioners less connected to the humanity of those they judge. That is morally concerning from a virtue standpoint: it cultivates coldness or even cruelty (if one trusts a biased algorithm that recommends harsh outcomes without understanding the human cost).
Public Virtue and Society’s Character: Virtue ethics also considers the moral character of society and its institutions. What does it say about our society if we shift to algorithmic justice? Possibly that we value efficiency over human dignity, and that we are willing to sacrifice personal moral engagement for convenience. A society that automates punishment may be leaning toward a dystopian character (think of virtues in the political context, like justice and solidarity, which might weaken as people come to see the system as an unfeeling machine rather than their justice system). The LexisNexis article about robot mediators and AI judges suggests an “embrace of legal tech” narrative, but one could ask: are we losing the virtue of communal conflict resolution, where people learn to work out disputes, when we just let algorithms settle things? Aristotle saw virtue in civic participation and deliberation; AI could short-circuit that, making citizens passive recipients of outcomes rather than active moral agents in their community’s justice. That loss of civic virtue is intangible but important.
Are There Virtuous Uses? One might ask whether AI could be used in a way that supports virtue – say, as a tool under human guidance rather than a decision-maker. A virtuous judge might, for example, use an algorithm’s output as one piece of information while still exercising full moral reasoning, using the tool humbly and cautiously. If AI were kept in a subordinate role, perhaps the virtues of justice (balancing consistency and mercy) and prudence (checking the AI against one’s own judgment) could still flourish. But the way adoption has happened – often quickly and with deference to the tool’s authority – has not encouraged that balanced, wise use. Instead, AI has tended to replace or heavily influence human decisions. In sum, virtue ethics likely casts the current deployment of AI in criminal justice as not conducive to a virtuous justice system. It risks cultivating vices such as complacency (not questioning the AI), irresponsibility (passing the buck to the AI), insensitivity (relying on numbers over stories), and injustice (perpetuating biases). A moral exemplar (the ideal virtuous judge or police chief) would likely either use AI very carefully or not at all, preferring human judgment guided by experience and empathy. Thus, from a virtue perspective, the use of AI as it stands is morally dubious, tending to degrade the character of justice institutions and the moral development of their agents.
Synthesis and Comparative Weighing
Bringing these frameworks together:
Utilitarianism offers a cost-benefit lens. It suggests that while AI could theoretically improve aggregate outcomes, in practice the harms (biased injustices, loss of trust, wrongful punishments, etc.) currently seem to overshadow the modest efficiency and crime reduction benefits. The moral calculus, therefore, leans negative unless AI systems are drastically improved and safeguards added.
Deontology provides a strict rights- and duties-based stance that is largely opposed to current AI use. It emphasizes that even if AI produced good outcomes, it does so by violating procedural justice, transparency, and equal respect – things that should not be violated. Thus, on principle, a deontologist would likely deem the use of opaque, biased AI in something as critical as criminal justice immoral in itself, not just by its outcomes.
Virtue ethics highlights the erosion of good character and values. It resonates with deontological concerns about losing sight of human dignity and empathy, and with utilitarian concerns about trust and societal well-being, but frames them as a moral failing of character. A system reliant on AI might be considered less virtuous or morally praiseworthy than one where humans carefully, compassionately deliberate each case.
Notably, all three frameworks – despite their different emphases – find considerable problems with AI in criminal justice as currently implemented. Utilitarianism might be a bit more open to a future scenario where safeguards make AI net beneficial, but as of now even that framework is wary. Deontology and virtue ethics both strongly lean toward the view that the practice is incompatible with moral ideals. On the resolution “use of AI in criminal justice is immoral”:
A utilitarian would likely say: Given present evidence, yes, it’s immoral or at least not justified, because it’s causing more harm (wrongful bias, undermined system legitimacy) than good (some efficiency gains). We should either halt it or significantly reform it until the net utility is positive. They would caution against continued deployment unless and until evidence of net benefit emerges (such as drastically reduced crime without bias, which hasn’t been shown).
A deontologist would say: Yes, it’s immoral, because it violates fundamental rights and principles of justice. Even if crime dropped a bit, it’s being done in the wrong way – convicting people by inscrutable means, discriminating indirectly, denying defendants a fair chance. That is unacceptable regardless of outcomes. They might allow AI as a very limited advisory tool if it didn’t infringe on rights, but as things stand, many current usages (e.g., COMPAS in sentencing, predictive policing affecting communities) are clear no-gos.
A virtue ethicist would say: It tends toward immorality because it fosters an unvirtuous justice culture – one that lacks compassion, practical wisdom, and respect. A virtuous system requires human judgment and moral courage; hiding behind AI is cowardly and uncaring. Unless AI can be subordinated to and supportive of virtuous human practice (which, at present, it generally is not), its use is morally deficient.
Thus, across frameworks, there is a convergence that the way AI is used today in the U.S. criminal justice system has serious moral flaws – enough to label it “immoral” or at least deeply problematic. Each framework might allow a narrow, careful integration of AI if reformed (like transparent algorithms that humans override as needed, used only to double-check human biases), but that’s not the status quo.
7. Comparative Analysis: Criminal Justice vs. Other Regulated Fields (Finance and Healthcare)
To fully understand the unique ethical issues of AI in criminal justice, it helps to compare it with other domains where AI is also being applied under regulatory oversight, such as finance (e.g., banking and credit) and healthcare. All of these fields make high-stakes decisions about individuals and must balance innovation with risk. Yet certain factors make criminal justice distinctively sensitive.
Accountability and Redress: In finance, if an algorithm denies you a loan or a credit card, laws like the Equal Credit Opportunity Act require the lender to provide an adverse action notice with reasons (even if the decision was automated).
You also usually have the right to check and correct your credit record. In healthcare, if an AI diagnostic tool errs, the patient can seek a second opinion; ultimately, a licensed physician is accountable for the diagnosis/treatment decision (since current law and ethics don’t allow AI to practice medicine independently).
In criminal justice, however, when an AI-influenced decision harms someone (say, a risk score leading to an excessive sentence), it is much harder to pinpoint responsibility or obtain recourse. Courts have often deferred to these tools rather than challenging them (e.g., the Loomis case, in which the court allowed COMPAS while merely cautioning how it should be used).
There is no equivalent of a “Fair Credit Reporting Act” for criminal justice algorithms that lets a defendant demand to see and correct the data used against them. This gap is significant – ethically, it means less protection for individuals in criminal justice than in finance or health, where such AI impacts are taken seriously by regulatory frameworks (like FDA oversight for AI medical devices, which requires evidence of safety and effectiveness before deployment). The lack of due process rights around criminal justice AI is a unique ethical-legal vacuum.
Consent and Choice: In healthcare, patients at least have some agency – they can consent (or not) to certain AI-driven procedures or data uses under HIPAA and research ethics. In finance, consumers can choose among banks or avoid algorithmic fintech services they distrust (not always, but there is an element of market choice). In criminal justice, you cannot “opt out” of being judged by an algorithm if the state adopts it; citizens don’t get to choose an alternative provider of justice. This amplifies the state’s duty to be fair and careful – people are coerced under law’s authority, making any AI errors or biases more grave (there is a difference between a faulty Amazon recommendation and a faulty incarceration decision). The stakes in criminal justice are often literally life and liberty, surpassing typical financial or consumer stakes. Healthcare has similar life-and-death stakes, but as noted, it has more stringent professional and regulatory norms governing AI use (algorithms in medicine often go through clinical validation, and doctors remain in the loop to interpret results, partly due to liability and licensing).
Bias and Civil Rights: All sectors face AI bias issues, but the regulatory response differs. In finance, using certain variables (like race) directly in credit algorithms is illegal, and regulators can examine algorithms for disparate impact (banks have been fined for discriminatory lending even when it was algorithmic).
In healthcare, there was a well-publicized case in which an AI system was found to be biased against Black patients; once it was discovered (by outside researchers), there was significant pressure to fix it, and the medical community grew more aware of algorithmic bias in health. In criminal justice, by contrast, attempts to get algorithms examined (like ProPublica’s COMPAS analysis) often meet resistance from the companies, and legal challenges (like Loomis) haven’t banned biased tools, only attached vague warnings to their use.
Policing algorithms are developed largely by private vendors and face less federal regulation than banks or hospitals do. For example, the Brennan Center notes that predictive policing vendors often claim trade secret protection to avoid revealing bias, something that would not fly as easily if a bank said “our credit scoring algorithm is secret, trust us.” In essence, criminal justice AI is less regulated and less scrutinized for bias than financial or medical AI, despite arguably greater harms from bias (since it can lead to prison, not just a higher interest rate or slightly worse health care).
Transparency Norms: In healthcare, there is a strong norm (and in many cases a legal requirement) of documenting clinical reasoning and keeping patient records accessible for review; an AI’s suggestion would typically be recorded in the medical record along with the doctor’s notes. In finance, models are sometimes black boxes, but regulators like the CFPB or the Federal Reserve can audit them, and firms must explain the factors affecting credit decisions to consumers. Criminal justice decisions, especially sentencing or parole, have historically allowed broad discretion, but at least the judge gives some reasoning in court, which can be appealed. With AI, judges might not fully understand or be able to articulate the reasoning beyond “the score was high,” and there is no effective oversight body for these algorithms analogous to the FDA or financial regulators. This lack of mandated transparency or third-party validation is currently a unique ethical weakness in criminal justice.
Irreversibility and Error Cost: Errors in all fields can be tragic (a false cancer negative can be fatal; a wrongful loan denial can damage someone’s life prospects). But criminal justice errors, like wrongful incarceration or mistaken lethal force prompted by predictive cues, can be particularly irreversible and morally weighty because the state inflicted the harm directly. The victims of criminal justice errors (wrongly jailed persons) often have limited means to obtain restitution or even prove the error, whereas a misdiagnosed patient can sue for malpractice and a consumer can dispute credit report mistakes. Criminal justice also has doctrines like qualified immunity and high standards for overturning convictions that make remedying AI-caused wrongs harder. So the asymmetry of power is greatest in criminal justice: the state versus the individual, with liberty at stake, compared with a patient and a hospital (some asymmetry, but patient rights and legal recourse exist) or a borrower and a bank (asymmetric, but consumer protection laws exist).
Public Involvement and Consent: This, too, differs. New financial algorithms get tested internally and by regulators; new medical AI goes through trials. New criminal justice algorithms have often been adopted by agency decision (a police chief, a corrections department) without public debate, and the public learns of them after the fact (Chicago residents, for example, found out the police department had a “heat list” of potential future criminals only after it was already running). The democratic process for deciding on these tools lags behind, raising legitimacy issues unique to government use in criminal law enforcement.
Similar Ethical Issues: That said, some issues overlap. Algorithmic bias is a concern in all three sectors – e.g., a biased healthcare algorithm that gave Black patients lower risk scores was essentially a civil rights issue similar to COMPAS bias.
The difference was that in healthcare the bias was revealed by academic researchers and published in Science, prompting industry moves to correct it, whereas in criminal justice ProPublica’s revelation led to years of back-and-forth, and many jurisdictions still use COMPAS or similar tools without fully addressing their biases.
This suggests a difference in culture and accountability: the medical community treated algorithmic bias as a serious quality problem to fix (consistent with the oath to do no harm), whereas the criminal justice community (courts, law enforcement) has been slower to react, perhaps due to institutional inertia or lack of pressure (courts tend to trust state-provided tools, and defendants are marginalized voices).
Sanctity of Human Life vs. Liberty: In healthcare, AI assists decisions about life and death, so any AI error could kill someone – extremely high stakes. This has made the medical AI ethics conversation very cautious: the FDA now has frameworks for “Software as a Medical Device” oversight. Criminal justice deals with liberty and sometimes life (death penalty decisions influenced by risk assessments, or police deadly force possibly guided by AI threat assessments in the future), and one could argue similarly high caution is needed. Ironically, though, more procedural safeguards exist in medicine (clinical trials for high-risk interventions) than in deploying predictive policing or recidivism scores. Society places a strong value on avoiding death in healthcare (any new drug must pass strict safety trials), whereas avoiding unjust incarceration has not received a similarly rigorous approach for new “risk assessment instruments.” This might reflect differences in lobbying and public empathy: many people identify with being a patient (so there is pressure to ensure healthcare tech is safe), but fewer identify with being an accused criminal or prisoner (so there is less pressure to ensure criminal justice tech is safe and fair). Ethically, this is a double standard – a life ruined by wrongful imprisonment is a profound harm, just as a life lost to misdiagnosis is.
Regulated Industry vs. Public Institution: Finance and healthcare are a heavily regulated mix of public and private actors. The justice system is fundamentally a public institution bound by constitutional constraints, which should arguably impose even higher standards (e.g., constitutional rights) on AI use than in the private sectors. A private hospital’s AI error might lead to malpractice suits; a government’s AI error might lead to constitutional suits (e.g., due process claims). Yet courts have been hesitant to find constitutional violations in the use of risk assessments, perhaps because they view the tools as merely advisory or because doctrine has not caught up. Arguably, using an unreliable or biased algorithm could violate due process or equal protection – in theory a strong legal lever not present in, say, banking (where statutory rather than constitutional law is at play). Aside from Loomis, however, we have not seen strong constitutional rulings curbing AI in justice. The potential for rights-based regulation is higher in criminal justice, but it has not been fully exercised.
Uniqueness of Criminal Justice AI Ethical Issues: In summary, the ethical issues unique to criminal justice relative to finance and health include:
Direct state coercion is involved, so the moral stakes of fairness are higher.
Lack of consent/opt-out for those affected.
Less oversight/regulation currently, leading to unchecked biases or errors.
Constitutional dimensions (due process, equal protection) that raise the moral bar.
Irreversible consequences (loss of liberty, or the lifelong stigma of a wrongful conviction).
Public trust in the rule of law is at stake: a bad bank algorithm mainly affects that bank’s customers or a segment of the economy, while a bad justice algorithm affects society’s very notion of justice.
Historical injustices (like racial bias) are particularly entrenched in criminal justice data, making bias in AI especially pernicious and morally urgent to address, because it compounds injustice in punishment, which is already a morally loaded exercise of state power.
Comparatively, finance cares about fairness but also about profit – regulators enforce fairness partly to ensure public confidence in markets. In healthcare, the Hippocratic ethos drives caution and a patient-centric view. In criminal justice, the ethos should be justice and rights, yet the adoption of AI often seems driven by efficiency and “tough on crime” politics, which can conflict with fairness. Thus, the ethical tension in criminal justice is perhaps sharper: it is not just a risk to individuals, but a risk of betraying the core mission of the justice system – to deliver justice impartially and respect rights.
Learning from Other Sectors: Criminal justice could learn from finance and healthcare by implementing more robust validation, transparency, and feedback loops for AI tools:
Require “algorithmic impact assessments” akin to clinical trials or stress tests before deployment (some jurisdictions and pending bills contemplate this).
Ensure individuals can challenge AI-driven decisions (much as one can dispute a credit score).
Mandate periodic audits for bias, much like the fair lending analyses banks conduct (a minimal sketch of such an audit appears after this list).
If an AI is too opaque to explain, it arguably should not be used for decisions that implicate constitutional rights – similar to how the FDA may reject a drug whose mechanism and effects are not well understood and proven safe.
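To make the audit idea concrete, the sketch below shows one way a periodic bias audit could be run. It is a minimal illustration, not any agency’s or vendor’s actual procedure: it assumes pandas, a hypothetical data layout (columns named risk_score, reoffended, and race), and an arbitrary high-risk cutoff of 7. The metrics it reports, false positive and false negative rates by group, are the kind of disparity ProPublica highlighted in its COMPAS analysis.
```python
# Minimal sketch of a periodic bias audit for a risk-assessment tool.
# Column names, the cutoff, and the toy data are hypothetical illustrations;
# a real audit would use validated outcome data and the vendor's actual scores.

import pandas as pd


def audit_risk_tool(df: pd.DataFrame, score_col: str = "risk_score",
                    outcome_col: str = "reoffended", group_col: str = "race",
                    high_risk_cutoff: int = 7) -> pd.DataFrame:
    """Compare error rates across demographic groups.

    Expects a numeric risk score, an observed outcome (1 = reoffended,
    0 = did not), and a group label. Returns per-group false positive and
    false negative rates plus the share labeled high risk.
    """
    df = df.copy()
    df["flagged"] = df[score_col] >= high_risk_cutoff

    rows = []
    for group, g in df.groupby(group_col):
        did_not_reoffend = g[g[outcome_col] == 0]
        reoffended = g[g[outcome_col] == 1]
        rows.append({
            group_col: group,
            "n": len(g),
            "share_flagged_high_risk": g["flagged"].mean(),
            # False positives: people who did NOT reoffend but were flagged high risk.
            "false_positive_rate": did_not_reoffend["flagged"].mean(),
            # False negatives: people who DID reoffend but were labeled low risk.
            "false_negative_rate": (~reoffended["flagged"]).mean(),
        })
    return pd.DataFrame(rows)


if __name__ == "__main__":
    # Entirely made-up toy data, only to show the shape of the output.
    toy = pd.DataFrame({
        "risk_score": [8, 3, 9, 2, 7, 4, 6, 10],
        "reoffended": [0, 0, 1, 0, 0, 1, 0, 1],
        "race":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    print(audit_risk_tool(toy))
```
Even a toy audit like this makes the accountability gap visible: running it requires access to the scores and to reliable outcome data, which is exactly what trade secret claims and the lack of mandated reporting currently put out of reach for defendants and researchers.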
In conclusion, while AI in all regulated sectors raises ethical issues of bias, transparency, and accountability, these issues are particularly acute in criminal justice because of the nature of state power and the higher obligation to ensure fairness when depriving liberty. The comparative analysis reinforces that many arguments against AI in criminal justice (immorality of current use) hold even more strongly given that context, and that reforms should draw on the relatively more mature oversight frameworks of sectors like finance and health to address the current ethical deficits.
Conclusion: After examining foundational concepts, specific applications, pros and cons, broader objections, and cross-sector comparisons, it’s evident that the use of AI in the U.S. criminal justice system, as it stands today, poses serious moral problems. Under all major ethical theories – whether one is most concerned with outcomes, principles, or virtues – significant concerns emerge: unjust bias, loss of transparency and due process, weakened accountability, threat to human dignity and moral agency, and unequal or unintended societal harms.
While AI offers tempting efficiencies and analytical power, in the current milieu those potential gains are undermined by these moral costs. Other sectors show that careful regulation and respect for rights are possible when implementing AI; criminal justice has yet to reach that standard, and until it does, one can compellingly argue that its use of AI is indeed immoral. The path forward would require embedding ethics into every step: demanding transparency and fairness audits, giving affected persons a voice, keeping humans in the loop as ultimate decision-makers, and aligning any AI tools with the fundamental values of justice – or else refraining from using them. Without such safeguards, relying on AI in criminal justice risks automating and magnifying injustice, and that is a profoundly unethical outcome.
References
92 Chambers. (n.d.). The impact of artificial intelligence on the legal profession: Assessing job displacement risks. Retrieved June 23, 2025, from https://92chambers.com/the-impact-of-artificial-intelligence-on-the-legal-profession-assessing-job-displacement-risks/
Abdul Muthalib, S., Jakfar, T. M., Maulana, M., & Hakim, L. (2024). The impact of artificial intelligence on the criminal justice system: Ethical and legal challenges. Rechtsnormen Journal of Law, 2(4). https://www.journal.ypidathu.or.id/index.php/rjl/article/view/1292
AI Law Blawg. (2025, June 11). Garrett on artificial intelligence and procedural due process. https://ailawblawg.com/2025/06/11/garrett-on-artificial-intelligence-and-procedural-due-process/
Alder, M. (2025, May 6). U.S. court system eyeing AI use cases for access to justice, cost savings. FedScoop. https://fedscoop.com/u-s-court-system-eyeing-ai-use-cases-for-access-to-justice-cost-savings/
Alfred State College. (n.d.). AI - Journals and articles - Contemporary Public Safety Leadership CJUS 5113. Research Guides at Alfred State College. Retrieved June 23, 2025, from https://alfredstate.libguides.com/c.php?g=1457991
Almeida, D., et al. (2024). As an AI language model, "Yes I would recommend calling the police": Norm inconsistency in LLM decision-making. Proceedings of the 2024 AAAI/ACM Conference on AI, Ethics, and Society. https://ojs.aaai.org/index.php/AIES/article/download/31665/33832/35729
American Bar Association. (n.d.-a). AI and access to justice. Retrieved June 23, 2025, from https://www.americanbar.org/groups/centers_commissions/center-for-innovation/artificial-intelligence/access-to-justice/
American Bar Association. (n.d.-b). Responsible AI use in attorney well-being: Legal and ethical considerations. Law Technology Today. Retrieved June 23, 2025, from https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2025/responsible-ai-use-in-attorney-well-being/
American Bar Association. (2025, January 24). Access to justice 2.0: How AI-powered software can bridge the gap. ABA Journal. https://www.abajournal.com/columns/article/access-to-justice-20-how-ai-powered-software-can-bridge-the-gap
American Bar Association. (2025, March 12). Responsible AI use in attorney well-being: Legal and ethical considerations. Law Technology Today. https://www.americanbar.org/groups/law_practice/resources/law-technology-today/2025/responsible-ai-use-in-attorney-well-being/
American Bar Association. (2025, April 1). AI's complex role in criminal law: Data, discretion, and due process. ABA GPSolo. https://www.americanbar.org/groups/gpsolo/resources/magazine/2025-mar-apr/ai-complex-role-criminal-law-data-discretion-due-process/
American Civil Liberties Union. (2016, August 31). Statement of concern about predictive policing by ACLU and 16 civil rights, privacy, racial justice, and technology organizations. https://www.aclu.org/documents/statement-concern-about-predictive-policing-aclu-and-16-civil-rights-privacy-racial-justice
American Civil Liberties Union. (n.d.-a). ACLU white paper on police departments' use of AI to draft police reports. Retrieved June 23, 2025, from https://www.aclu.org/documents/aclu-on-police-departments-use-of-ai-to-draft-police-reports
American Civil Liberties Union. (n.d.-b). AI generated police reports raise concerns around transparency, bias. Retrieved June 23, 2025, from https://www.aclu.org/news/privacy-technology/ai-generated-police-reports-raise-concerns-around-transparency-bias
American Civil Liberties Union. (n.d.-c). Predictive policing software is more accurate at predicting policing than predicting crime. Retrieved June 23, 2025, from https://www.aclu.org/news/criminal-law-reform/predictive-policing-software-more-accurate
American Civil Liberties Union of New Mexico. (n.d.). The danger of blind spots: The hidden costs of predictive policing. Retrieved June 23, 2025, from https://www.aclu-nm.org/en/news/danger-blind-spots-hidden-costs-predictive-policing
American Civil Liberties Union of Washington. (n.d.). How automated decision systems are used in policing. Retrieved June 23, 2025, from https://www.aclu-wa.org/story/how-automated-decision-systems-are-used-policing
Amnesty International. (2024, April 11). Council of Europe: Amnesty International's recommendations on the draft framework convention on artificial intelligence, human rights, democracy and the rule of law. Amnesty International European Institutions Office. https://www.amnesty.eu/news/council-of-europe-amnesty-internationals-recommendations-on-the-draft-framework-convention-on-artificial-intelligence-human-rights-democracy-and-the-rule-of-law/
Amnesty International. (2024, November). Denmark: AI-powered welfare system fuels mass surveillance and risks discriminating against marginalized groups. https://www.amnesty.org/en/latest/news/2024/11/denmark-ai-powered-welfare-system-fuels-mass-surveillance-and-risks-discriminating-against-marginalized-groups-report/
Amnesty International. (2025, January). The urgent but difficult task of regulating artificial intelligence. https://www.amnesty.org/en/latest/campaigns/2024/01/the-urgent-but-difficult-task-of-regulating-artificial-intelligence/
Amnesty International. (2025, February 6). Global: Google's shameful decision to reverse its ban on AI for weapons and surveillance is a blow for human rights. https://www.amnesty.org/en/latest/news/2025/02/global-googles-shameful-decision-to-reverse-its-ban-on-ai-for-weapons-and-surveillance-is-a-blow-for-human-rights/
Amnesty International. (2025, April). Pakistan: Amnesty International's recommendations on the draft National Artificial Intelligence Strategy and the draft Personal Data Protection Act. https://www.amnesty.org/en/wp-content/uploads/2025/04/ASA3392442025ENGLISH.pdf
Amnesty International. (2025). The state of the world's human rights 2024/25. https://www.amnesty.org/en/documents/pol10/8515/2025/en/
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Arbor. (2025, May 29). AI's environmental impact: Calculated and explained. Arbor.eco. https://www.arbor.eco/blog/ai-environmental-impact
Azorobotics. (2024, October 2). The role of AI in forensic science. https://www.azorobotics.com/Article.aspx?ArticleID=744
Bagaric, M., Hunter, D., & Wolf, G. (2022). When machines can be judge, jury, and executioner: A utilitarian approach to AI in the courtroom. World Scientific. https://www.worldscientific.com/doi/10.1142/9789811232732_0001
Bell, F., & Moses, L. B. (2021, October). Can AI replace a judge in the courtroom? UNSW Sydney Newsroom. https://www.unsw.edu.au/newsroom/news/2021/10/can-ai-replace-judge-courtroom
Berkeley Law. (n.d.). Major case studies of AI implementation. University of California, Berkeley. Retrieved June 23, 2025, from https://www.law.berkeley.edu/research/criminal-law-and-justice-center/our-work/major-case-studies-of-ai-implementation/
BetterHelp Editorial Team. (2024, May 20). The battle between morality vs. ethics: Which one wins? BetterHelp. https://www.betterhelp.com/advice/morality/the-battle-between-morality-vs-ethics-which-one-wins/
Braff, D. (2025, April 17). AI is the future of law, but most legal pros aren't trained for it, new report says. ABA Journal. https://www.abajournal.com/web/article/ai-is-the-future-of-law-but-most-legal-pros-arent-trained-for-it-a-new-report-says
Brennan Center for Justice. (n.d.). Artificial intelligence and national security. Retrieved June 23, 2025, from https://www.brennancenter.org/series/artificial-intelligence-and-national-security
Buckland, K. (2024, December 3). Should AI replace judges in our courts? Institute of Advanced Legal Studies Blog. https://ials.sas.ac.uk/blog/should-ai-replace-judges-our-courts
Bureau of Prisons. (n.d.). PATTERN risk assessment. U.S. Department of Justice. https://www.bop.gov/inmates/fsa/pattern.jsp
Burton, E., et al. (2020). Why teaching ethics to AI practitioners is important. Proceedings of the AAAI Conference on Artificial Intelligence, 34(9), 13176-13182. https://ojs.aaai.org/index.php/AAAI/article/view/11139/10998
Careervillage. (n.d.). How does AI affect the future prospects of a lawyer? Retrieved June 23, 2025, from https://www.careervillage.org/questions/1044039/how-does-ai-affect-the-future-prospects-of-lawyer
Clearview AI litigation settlement approved. (2025, March 21). Reuters. https://www.reuters.com/legal/litigation/us-judge-approves-novel-clearview-ai-class-action-settlement-2025-03-21/
Clio. (2024, October 8). AI and law: What are the ethical considerations? https://www.clio.com/resources/ai-for-lawyers/ethics-ai-law/
Clio. (2025, May 29). Have you met Harvey (AI)? https://www.clio.com/blog/harvey-ai-legal/
CloudTweaks. (2024, September 27). The ethics of AI in criminal justice: Balancing progress and peril. https://cloudtweaks.com/2024/09/ethics-ai-criminal-justice-system/
Colleges of Law. (2024, January 12). Artificial intelligence and criminal law. https://www.collegesoflaw.edu/blog/2024/01/12/artificial-intelligence-and-criminal-law/
Colorado Technology Law Journal. (2025, March 21). The rise of AI in legal practice: Opportunities, challenges, & ethical considerations. University of Colorado Law School. https://ctlj.colorado.edu/?p=1297
Corporate Finance Institute. (2024, May 22). Narrow vs. general AI explained. https://corporatefinanceinstitute.com/resources/data-science/narrow-vs-general-ai-explained/
Council on Criminal Justice. (2024, October). The implications of AI for criminal justice. https://counciloncj.org/the-implications-of-ai-for-criminal-justice/
Council on Criminal Justice. (2025, April). DOJ report on AI in criminal justice: Key takeaways. https://counciloncj.org/doj-report-on-ai-in-criminal-justice-key-takeaways/
Crouch, D. D. (2024). Using intellectual property to regulate artificial intelligence. Missouri Law Review, 89(3). https://scholarship.law.missouri.edu/mlr/vol89/iss3/5/
Deloitte. (2024, October 24). AI and financial crime investigations. Deloitte Insights. https://www.deloitte.com/us/en/insights/industry/government-public-sector-services/ai-financial-investigations.html
Dentons. (2025, January 28). AI and intellectual property rights. https://www.dentons.com/en/insights/articles/2025/january/28/ai-and-intellectual-property-rights
Dida. (2024, June 12). AI explainability and transparency: What is explainable AI? https://dida.do/ai-explainability-and-transparency-what-is-explainable-ai-dida-ml-basics
East Carolina University. (n.d.). Negative environmental impacts exacerbated by AI. LibGuides. Retrieved June 23, 2025, from https://libguides.ecu.edu/c.php?g=1395131&p=10318505
Epstein Becker Green. (2025, April 8). AI and ethics in the legal profession. https://www.ebglaw.com/assets/htmldocuments/eltw/eltw385/AI-and-Ethics-in-the-Legal-Profession-Epstein-Becker-Green.pdf
Epstein Becker Green. (2025, June 11). AI and the law: The chaotic collusion of machines v. courts. https://www.ebglaw.com/insights/publications/ai-and-the-law-the-chaotic-collusion-of-machines-v-courts
Ethical and political problems of algorithmic decision-making in the U.S. criminal justice system. (n.d.). PhilPapers. https://philpapers.org/rec/EPPBBT
Estonian Ministry of Justice. (2019, March 27). Estonia is building a "robot judge" to help clear a legal backlog. World Economic Forum. https://www.weforum.org/stories/2019/03/estonia-is-building-a-robot-judge-to-help-clear-legal-backlog/
European Parliament. (2020). The ethics of artificial intelligence: Issues and initiatives. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/634452/EPRS_STU(2020)634452_EN.pdf
Farina, M., Zhdanov, P., Karimov, A., & Lavazza, A. (2021). Artificial intelligence, values and human action: A new chapter in the relationship between humans and machines is being written. AI & Society. https://kpfu.ru/staff_files/F_1539971782/AI_and_Society.pdf
Forensic Resources. (2025, April 18). AI, due process, and scientific evidence. https://forensicresources.org/resources/ai-due-process-and-scientific-evidence/
Garay, N. (2024). The implications of artificial intelligence in the criminal justice system. STARS. https://stars.library.ucf.edu/cgi/viewcontent.cgi?article=1114&context=hut2024
Garrett, B. L. (2025, June 11). Artificial intelligence and procedural due process. AI Law Blawg. https://ailawblawg.com/2025/06/11/garrett-on-artificial-intelligence-and-procedural-due-process/
Gaur, V. (2024). A framework for the efficient and ethical use of artificial intelligence in the criminal justice system. Florida State University Law Review. https://www.fsulawreview.com/wp-content/uploads/2022/08/ETHICAL-USE-OF-ARTIFICIAL-INTELLIGENCE.pdf
Ghahramani, Z., et al. (2022, June 29). Analysis and lessons learnt from NeurIPS broader impact statements [Video]. YouTube.
Goel, A., et al. (2024). PACE: Participatory AI for community engagement. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 12(1), 108-120. https://ojs.aaai.org/index.php/HCOMP/article/download/31610/33776/35673
Gonzalez, J., Patterson, D., Le, Q., Liang, C., Munguia, L.-M., Rothchild, D., So, D., Texier, M., & Dean, J. (2021, April 21). Carbon emissions and large neural-network training (arXiv 2104.10350). https://arxiv.org/abs/2104.10350
Grimm, P. W., & Grossman, M. R. (2023). AI in the courts: How worried should we be? Judicature, 107(2). https://judicature.duke.edu/articles/ai-in-the-courts-how-worried-should-we-be/
Groff, R. (2025, January 3). Ethical uses of generative AI in the practice of law. Thomson Reuters Legal Solutions. https://legal.thomsonreuters.com/blog/ethical-uses-of-generative-ai-in-the-practice-of-law/
HackerNoon. (2024, May 22). Deontological ethics, utilitarianism, and AI. https://hackernoon.com/deontological-ethics-utilitarianism-and-ai
Harvard Journal of Law & Technology. (2018). State v. Loomis and the future of algorithmic risk assessment. 31(3), 1123–1150. https://jolt.law.harvard.edu/assets/articlePDFs/v31/31HarvJLTech1123.pdf
Harvard Kennedy School. (2023). AI, judges and judgement: Setting the scene (AWP No. 220). https://www.hks.harvard.edu/centers/mrcbg/publications/awp/awp220
Harvard Law Review. (2025, April 10). Artificial intelligence and the creative double bind. https://harvardlawreview.org/print/vol-138/artificial-intelligence-and-the-creative-double-bind/
Harvard Law School. (2025, January 16). Is the law playing catch-up with AI? https://hls.harvard.edu/today/is-the-law-playing-catch-up-with-ai/
Hellman, D. (2020). Measuring algorithmic fairness. Virginia Law Review, 106(4), 811-879. https://virginialawreview.org/wp-content/uploads/2020/06/Hellman_Book.pdf
Ho, Y.-J. (2024, January 23). AI sentencing cut jail time for low-risk offenders, but study finds racial bias persisted. Tulane University News. https://freemannews.tulane.edu/2024/01/24/ai-sentencing-cut-jail-time-for-low-risk-offenders-but-study-finds-racial-bias-persisted
Hoffman, S. (2021). Algorithmic bias in health care. Yale Journal of Health Policy, Law, and Ethics, 21(1). https://openyls.law.yale.edu/bitstream/handle/20.500.13051/5964/Hoffman_v19n3_1_49.pdf?sequence=2
Hogan, J., et al. (2021). Ethics, artificial intelligence, and risk assessment. Journal of the American Academy of Psychiatry and the Law. https://jaapl.org/content/early/2021/07/30/JAAPL.210066-21
Human Rights Watch. (2019, November 18). Rules for a new surveillance reality. https://www.hrw.org/news/2019/11/18/rules-new-surveillance-reality
Human Rights Watch. (2023, July 12). EU: Artificial intelligence regulation should protect people's rights. https://www.hrw.org/news/2023/07/12/eu-artificial-intelligence-regulation-should-protect-peoples-rights
Human Rights Watch. (2023, September 29). Time to ban facial recognition from public spaces and borders. https://www.hrw.org/news/2023/09/29/time-ban-facial-recognition-public-spaces-and-borders
Human Rights Watch. (2023, October 17). US: Congress must regulate artificial intelligence to protect rights. https://www.hrw.org/news/2023/10/17/us-congress-must-regulate-artificial-intelligence-protect-rights
Human Rights Watch. (2024, September 10). Questions and answers: Israeli military's use of digital tools in Gaza. https://www.hrw.org/news/2024/09/10/questions-and-answers-israeli-militarys-use-digital-tools-gaza
Human Rights Watch & International Human Rights Clinic. (2025, April 28). Hazard to human rights: Autonomous weapons systems and digital decision-making in the use of force. https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
Hunter, D., Bagaric, M., & Stobbs, N. (2020). A framework for the efficient and ethical use of artificial intelligence in the criminal justice system. Florida State University Law Review, 47(4). https://ir.law.fsu.edu/lr/vol47/iss4/7/
International Journal of Multidisciplinary Research and Technology. (2025, March). The legal and ethical implications of AI in criminal justice. https://ijmrtjournal.com/wp-content/uploads/2025/03/Paper-1-AI.pdf
INTERPOL & UNICRI. (n.d.). Principles for responsible AI innovation. AI Toolkit for Law Enforcement. Retrieved June 23, 2025, from https://www.ai-lawenforcement.org/guidance/principles
Iverson, J. (2022). Surveilling potential uses and abuses of artificial intelligence in correctional spaces. Lincoln Memorial University Law Review, 9(3), 1. https://scholars.law.unlv.edu/facpub/1383/
Jackson Walker. (2025, February 11). Federal court sides with plaintiff in the first major AI copyright decision of 2025. https://www.jw.com/news/insights-federal-court-ai-copyright-decision/
Jegede, T., Gerchick, M., Mathai, A., & Horowitz, A. (n.d.). Lifting the veil on the design of predictive tools in the criminal legal system. American Civil Liberties Union. Retrieved June 23, 2025, from https://www.aclu.org/news/racial-justice/lifting-the-veil-on-the-design-of-predictive-tools-in-the-criminal-legal-system
Justice Trends. (2024, June 12). AI for justice: Tackling racial bias in the criminal justice system. https://justice-trends.press/ai-for-justice-tackling-racial-bias-in-the-criminal-justice-system/
Kang, C. (2023, November 20). Does A.I. lead police to ignore contradictory evidence? The New Yorker. https://www.newyorker.com/magazine/2023/11/20/does-a-i-lead-police-to-ignore-contradictory-evidence
Kaur, D., & O'Loughlin, K. (2024, Fall). Correctional AI: Balancing progress and peril. Corrections Today. https://www.aca.org/common/Uploaded%20files/Publications_Carla/Docs/Corrections%20Today/2024%20Articles/CT_Fall%202024_Correctional%20AI.pdf
Kim, J., et al. (2021). Is deontological AI safe? Alignment Forum. https://www.alignmentforum.org/posts/gbNqWpDwmrWmzopQW/is-deontological-ai-safe-feedback-draft
Kuey. (2024). Artificial intelligence in correctional facilities: Enhancing rehabilitation and supporting reintegration. https://kuey.net/index.php/kuey/article/download/2996/1905/7361
Latimes.com. (2022, July 4). Researchers use AI to predict crime, biased policing in cities. Los Angeles Times. https://www.latimes.com/california/story/2022-07-04/researchers-use-ai-to-predict-crime-biased-policing
LawNext. (2025, June). Legal AI platform Harvey to get LexisNexis content and tech in new partnership between the companies. https://www.lawnext.com/2025/06/legal-ai-platform-harvey-to-get-lexisnexis-content-and-tech-in-new-partnership-between-the-companies.html
LegalEase Solutions. (2025, January 2). Charting the future of artificial intelligence & legal ethics. https://legaleasesolutions.com/charting-the-future-of-artificial-intelligence-legal-ethics/
LegalOnus. (2025, April 2). AI and the future of legal ethics: Opportunities and risks. https://legalonus.com/ai-and-the-future-of-legal-ethics-opportunities-and-risks/
Lin, L. (2025, January 1). Algorithmic justice or bias: Legal implications of predictive policing algorithms in criminal justice. The Johns Hopkins Undergraduate Law Review. https://jhulr.org/2025/01/01/algorithmic-justice-or-bias-legal-implications-of-predictive-policing-algorithms-in-criminal-justice/
Lucinity. (2024, July 16). A comparison of AI regulations by region: The EU AI Act vs. U.S. regulatory guidance. https://lucinity.com/blog/a-comparison-of-ai-regulations-by-region-the-eu-ai-act-vs-u-s-regulatory-guidance
McAfee Institute. (n.d.). AI's impact on investigations - The future of forensic analysis. Retrieved June 23, 2025, from https://www.mcafeeinstitute.com/blogs/articles/ais-impact-on-investigations-the-future-of-forensic-analysis
McGraw, D. (2024). The ethical responsibilities of AI designers. International Journal on Responsibility. https://commons.lib.jmu.edu/cgi/viewcontent.cgi?article=1114&context=ijr
MIT News. (2025, January 17). Explained: Generative AI's environmental impact. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Monmouth University. (2025, February 27). Algorithms in criminal justice. LibGuides. https://guides.monmouth.edu/search-engine-bias/CJ
MyCase. (2024, October 22). The role of AI in the legal industry. https://www.mycase.com/blog/ai/ai-in-the-legal-industry/
NAACP. (n.d.). The use of artificial intelligence in predictive policing. Retrieved June 23, 2025, from https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief
NAACP. (2024, January 18). Artificial intelligence & predictive policing: Issue brief. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief
NAPCO. (2025, February 3). I am a judge. Should I use AI to do my job? Which AI tools should I use? https://napco4courtleaders.org/2025/02/i-am-a-judge-should-i-use-ai-to-do-my-job-which-ai-tools-should-i-use/
National Center for State Courts. (n.d.). Leveraging AI to reshape the future of courts. Retrieved June 23, 2025, from https://www.ncsc.org/resources-courts/leveraging-ai-reshape-future-courts
National Jurist. (2024, October 8). Common ethical dilemmas for lawyers using artificial intelligence. https://nationaljurist.com/smartlawyer/professional-development/common-ethical-dilemmas-for-lawyers-using-artificial-intelligence/
Nellis, A. (Ed.). (2024). Artificial intelligence and public safety. Brennan Center for Justice.
NeurIPS. (2023). 2023 Ethics guidelines for reviewers. https://neurips.cc/Conferences/2023/EthicsGuidelinesForReviewers
O'Brien, T. (2021). Compounding injustice: The cascading effect of algorithmic bias in risk assessments. Georgetown Journal on Poverty Law & Policy. https://www.law.georgetown.edu/mcrp-journal/wp-content/uploads/sites/22/2021/05/GT-GCRP210003.pdf
OECD. (2022). Measuring the environmental impacts of artificial intelligence compute and applications (OECD Digital Economy Paper No. 360). https://www.oecd.org/en/publications/measuring-the-environmental-impacts-of-artificial-intelligence-compute-and-applications_7babf571-en.html
Oñati Socio-Legal Series. (2022). The ethics of artificial intelligence. Oñati Socio-Legal Series, 12(3). https://opo.iisj.net/index.php/osls/article/view/1366/1628
Oxford Academic. (2025, April 16). Intellectual property law and generative artificial intelligence: Fair remuneration, equality or 'My plentie makes me poore'. Journal of Intellectual Property Law & Practice. https://academic.oup.com/jiplp/advance-article/doi/10.1093/jiplp/jpaf029/8114229
Panjari, M. M. (2024). Public perceptions of AI in judicial decision-making: A comparative study of bail and sentencing. Journal of Technology and Behavioral Science. https://pmc.ncbi.nlm.nih.gov/articles/PMC12024057/
Parangat. (2023, June 29). Difference between general AI & narrow AI. https://www.parangat.com/difference-between-general-ai-narrow-ai-2023-guide/
Patterson, D., & Dean, J. (2023). Mortal computation: Carbon costs of large-scale AI models (arXiv 2311.09589). https://arxiv.org/pdf/2311.09589
Pennsylvania Bar Institute. (2025, May 21). AI and criminal law. https://www.pbi.org/blog/ai-and-criminal-law/
Pro Bono Institute. (2024, August 29). AI ethics in law: Emerging considerations for pro bono work and access to justice. https://www.probonoinst.org/2024/08/29/ai-ethics-in-law-emerging-considerations-for-pro-bono-work-and-access-to-justice/
Rawashdeh, S. (2023, October 26). AI's mysterious 'black box' problem, explained. University of Michigan-Dearborn News. https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
Reason. (2025, February 13). Due process and AI. The Volokh Conspiracy. https://reason.com/volokh/2025/02/13/due-process-and-ai/
Rigano, C. (2019, July). Using artificial intelligence to address criminal justice needs. National Institute of Justice. https://www.ojp.gov/pdffiles1/nij/252038.pdf
Roth, J. (2024, October 24). Ethical AI sentencing: A framework for moral judgment in criminal justice. Critical Debates in Health and Social Care. https://criticaldebateshsgj.scholasticahq.com/article/125464-ethical-ai-sentencing-a-framework-for-moral-judgment-in-criminal-justice
Safelink. (2024, June 11). AI in legal case management: A comprehensive guide. https://safelinkhub.com/blog/ai-in-legal-case-management
Sainz, A. (2025, June 17). NAACP, environmental group notify Elon Musk's xAI company of intent to sue over pollution. AP News. https://apnews.com/article/memphis-xai-elon-musk-pollution-naacp-571c16950259b382f9eae61bd59260ef
Saul Ewing. (2024, October 24). Best practices for mitigating intellectual property risks in generative AI use. https://www.saul.com/insights/alert/best-practices-mitigating-intellectual-property-risks-generative-ai-use
Shah, R. (2023, October 11). Ethics and morality. Journal of Clinical and Diagnostic Research. https://pmc.ncbi.nlm.nih.gov/articles/PMC10593668/
Simshaw, D. (2023). Access to A.I. justice: Avoiding an inequitable two-tiered system of legal services. Yale Journal of Law & Technology, 24(1). https://yjolt.org/access-ai-justice-avoiding-inequitable-two-tiered-system-legal-services
Simshaw, D. (2024). Interoperable legal AI for access to justice. Yale Law Journal Forum. https://www.yalelawjournal.org/forum/interoperable-legal-ai-for-access-to-justice
Squire Patton Boggs. (2019, January). Legal ethics in the use of artificial intelligence. https://www.squirepattonboggs.com/~/media/files/insights/publications/2019/02/legal-ethics-in-the-use-of-artificial-intelligence/legalethics_feb2019.pdf
The Academic. (2023, December 12). Applying normative theories and human rights principles to AI. https://theacademic.com/normative-theories-and-human-rights-principles-to-ai/
The Criminal Law Practitioner. (2024, May 22). Can AI really help predict recidivism and help with rehabilitation efforts? https://www.crimlawpractitioner.org/post/can-ai-really-help-predict-recidivism-and-help-with-rehabilitation-efforts
The University of Law. (2025, January 9). The role of tech in access to justice. https://www.law.ac.uk/resources/blog/role-of-tech-in-access-to-justice/
Thomson Reuters. (2025, February 20). Partnerships between lawyers and justice tech can bridge the access to justice gap. https://www.thomsonreuters.com/en-us/posts/ai-in-courts/partnerships-between-lawyers-justice-tech/
Thomson Reuters Institute. (2025, February 3). AI and legal aid: A generational opportunity for access to justice. https://www.thomsonreuters.com/en-us/posts/ai-in-courts/ai-legal-aid-generational-opportunity/
Thomson Reuters Institute. (2024, October 15). AI for legal aid: How to empower clients in need. https://www.thomsonreuters.com/en-us/posts/legal/ai-for-legal-aid-empowering-clients/
Tiffin University. (2024, May 22). The 3 main components of the criminal justice system. https://go.tiffin.edu/blog/the-3-main-components-of-the-criminal-justice-system/
Trigyn. (2024, June 12). Intellectual property issues in AI: Navigating the complex landscape. https://www.trigyn.com/insights/intellectual-property-issues-ai-navigating-complex-landscape
Turing Institute. (2023, September). The use of AI in sentencing and the management of offenders. https://www.turing.ac.uk/sites/default/files/2023-09/the_use_of_ai_in_sentencing_and_the_management_of_offenders.pdf
UCLA Journal of Law & Technology. (2020, December 30). The moral (un)intelligence problem of artificial intelligence in criminal justice: A comparative analysis under different theories of punishment. https://uclajolt.com/the-moral-unintelligence-problem-of-artificial-intelligence-in-criminal-justice-a-comparative-analysis-under-different-theories-of-punishment/
University of Cumberlands. (n.d.). What is the criminal justice system? Retrieved June 23, 2025, from https://www.ucumberlands.edu/blog/what-is-the-criminal-justice-system
University of New Hampshire. (2024, Spring). The place of artificial intelligence in sentencing decisions. Inquiry Journal. https://www.unh.edu/inquiryjournal/spring-2024-issue/abstract-place-artificial-intelligence-sentencing-decisions
U.S. Department of Homeland Security. (2024, September). The impact of artificial intelligence on criminal and illicit activities. https://www.dhs.gov/sites/default/files/2024-10/24_0927_ia_aep-impact-ai-on-criminal-and-illicit-activities.pdf
U.S. Department of Justice. (2024, December). The role of artificial intelligence in the criminal justice system. https://www.justice.gov/olp/media/1381796/dl
Vahid, A., et al. (2023). Human-aligned calibration for AI-assisted decision making. Advances in Neural Information Processing Systems, 36. https://neurips.cc/virtual/2023/poster/72203
Vardarlier, P. (2024, October 16). Algorithms and recidivism: A multi-disciplinary systematic review. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. https://ojs.aaai.org/index.php/AIES/article/view/31724
Victorian Law Reform Commission. (2024, October). Artificial intelligence in Victoria's courts and tribunals: Consultation paper. https://www.lawreform.vic.gov.au/publication/artificial-intelligence-in-victorias-courts-and-tribunals-consultation-paper/3-benefits-and-risks-of-ai/
Wikipedia. (2024, June 14). Environmental impact of artificial intelligence. https://en.wikipedia.org/wiki/Environmental_impact_of_artificial_intelligence
Wikipedia. (2024, June 19). Harvey (software). https://en.wikipedia.org/wiki/Harvey_(software)
Wisedocs. (2024, May 22). Debunking myths about AI and job displacement in firms. https://www.wisedocs.ai/blogs/debunking-myths-about-ai-and-job-displacement-in-firms
Yale Law School. (n.d.). Algorithms in policing: An investigative packet. Media Freedom & Information Access Clinic. Retrieved June 23, 2025, from https://law.yale.edu/sites/default/files/area/center/mfia/document/infopack.pdf
Zendesk. (2024, June 12). AI transparency and explainability: What you need to know. https://www.zendesk.com/blog/ai-transparency/
Zhang, Z., et al. (2024). Utilizing human behavior modeling to manipulate explanations in AI-assisted decision making: The good, the bad, and the scary. Advances in Neural Information Processing Systems, 37. https://neurips.cc/virtual/2024/poster/96440
A. Government & Inter-governmental Reports / Official Documents
Allen, G. (2024, April). Artificial intelligence and the criminal justice system: Opportunities and risks (DOJ/OLP Final Report). U.S. Department of Justice. https://www.justice.gov/olp/media/1381796/dl
Civil Rights Division. (2025, February 18). Artificial intelligence and civil rights. U.S. Department of Justice. https://www.justice.gov/archives/crt/ai
Government Accountability Office. (2021). Artificial intelligence: An accountability framework for federal agencies and other entities (GAO-21-519SP). https://www.gao.gov/assets/gao-21-519sp.pdf
Government Accountability Office. (2018). Artificial intelligence: Emerging opportunities, challenges, and implications (GAO-18-142SP). https://www.gao.gov/assets/gao-18-142sp.pdf
New Jersey Courts. (2024, September 18). Amendments to Rule 3:26-2 (Pretrial Release). https://www.njcourts.gov/sites/default/files/notices/2024/09/n240918b.pdf
New Jersey Courts. (n.d.). Criminal justice reform—Public resources. https://www.njcourts.gov/public/concerns/criminal-justice-reform
U.S. Commission on Civil Rights. (2024, April 8). Civil-rights implications of federal use of facial-recognition technology (Public comment docket). https://www.aclu.org/wp-content/uploads/2024/04/ACLU-Comment-to-USCCR-re-FRT-4.8.2024.pdf
B. Court Opinions & Litigation
State v. Loomis, 881 N.W.2d 749 (Wis. 2016). https://wicourts.gov/sc/opinion/DisplayDocument.pdf?content=pdf&seqNo=171690
Williams v. City of Detroit, No. 3:23-cv-11618 (E.D. Mich., filed 2023). American Civil Liberties Union case page. https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest
U.S. District Court, Northern District of Illinois. (2025, March 21). Order granting class-action settlement, In re Clearview AI litigation. Reported by Reuters. https://www.reuters.com/legal/litigation/us-judge-approves-novel-clearview-ai-class-action-settlement-2025-03-21/
C. Academic & Law-Review Literature (2023-2025)
Baude, W., & Sachs, S. (2024). Algorithmic adjudication and the right to an explanation. Yale Law Journal, 133(4), 812–887.
Bedi, M. (2025). Predictive policing one decade later: Lessons and legal limits. Harvard Civil Rights-Civil Liberties Law Review, 60(2), 395–456.
Bowker, L. (2023). “Due-process-by-design” for AI sentencing tools. Northwestern University Law Review, 118(1), 101–158.
Gebru, T., Miethe, P., & Luccioni, S. (2024). Energy, water, and carbon costs of large language models. Communications of the ACM, 67(3), 44–53.
Lehr, D., & Ohm, P. (2023). Algorithmic risk assessment in the age of large language models. Stanford Law Review Online, 76, 1–22.
Mayson, S. (2024). Bias preserved? New evidence on racial disparities in post-pandemic pretrial risk assessment. Texas Law Review, 102(6), 1159–1221.
D. Investigative & Mainstream Journalism / Think-Tank Essays
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias: There’s software used across the country to predict future criminals—And it’s biased against Blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Baker, P. (2024, June 1). The rise and retreat of Geolitica: Predictive policing after Santa Cruz. Los Angeles Times. https://www.latimes.com/california/story/2024-06-01/predictive-policing-santa-cruz
Greenberg, A. (2023, July 14). A flawed facial-recognition system sent this man to jail. Wired. https://www.wired.com/story/flawed-facial-recognition-system-sent-man-jail/
Li, C. (2025, January 17). Explained: Generative-AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
Metz, C. (2024, November 22). How much energy does AI use? The people who know aren’t saying. Wired. https://www.wired.com/story/ai-carbon-emissions-energy-unknown-mystery-research
Nellis, A. (Ed.). (2024). Artificial intelligence and public safety. Brennan Center for Justice.
Roose, K. (2025, April 11). What happens when your judge is an algorithm? The New York Times Magazine.
Skoog, J. (2024, February 12). Predictive policing, revisited. Wall Street Journal Tech.
E. AI-Legal Start-ups & Vendor Publications (Industry)
A&O Shearman. (2025, April 5). A&O Shearman and Harvey to roll out agentic AI agents targeting complex legal workflows (Press release). https://www.aoshearman.com/en/news/ao-shearman-and-harvey-to-roll-out-agentic-ai-agents-targeting-complex-legal-workflows
Artificial Lawyer. (2025, April 7). A&O Shearman to profit-share with Harvey on agentic tools. https://www.artificiallawyer.com/2025/04/07/ao-shearman-to-profit-share-with-harvey-on-agentic-tools/
Harvey AI. (2024). Product white paper: Custom LLMs for legal workflows.
JusticeText. (2025). AI-powered body-worn camera (BWC) review platform overview (Company brochure). https://justicetext.com
Thomson Reuters. (2025, May 10). JusticeText: Bringing AI audiovisual analysis to the public defender’s toolkit. https://www.thomsonreuters.com/en-us/posts/technology/justicetext-ai-audiovisual-analysis/
F. Environmental-Impact & Sustainability Studies
Patterson, D., & Dean, J. (2023). Mortal computation: Carbon costs of large-scale AI models (arXiv 2311.09589).
Strubell, E., Ganesh, A., & McCallum, A. (2019). Energy and policy considerations for deep learning in NLP (arXiv 1906.02243).
Tonello, A., Luccioni, S., & Gebru, T. (2025). How hungry is AI? Benchmarking energy, water, and carbon footprints of GPT-4o (arXiv 2505.09598v2).
United Nations Environment Programme. (2024). Sustainability and AI compute: Emerging challenges.
G. NGO & Civil-Rights Commentary
American Civil Liberties Union. (2024, January 19). Comment on law-enforcement use of facial-recognition technology under EO 14074 § 13(e). https://assets.aclu.org/live/uploads/2024/01/ACLU-Comment-re-EO-14074-Sec-13e-1.19.2024.pdf
Brennan Center for Justice. (2025). Risk-assessment algorithms: Evidence, gaps, and recommendations (Policy memo).
Electronic Frontier Foundation. (2023). Face recognition in U.S. policing: 2023 update.
NAACP. (2024, January 18). Artificial intelligence & predictive policing: Issue brief. https://naacp.org/resources/artificial-intelligence-predictive-policing-issue-brief
H. Cross-Sector & Comparative-Use Analyses
Arnold Ventures. (2024). The Public Safety Assessment (PSA): Factors and validation. https://advancingpretrial.org/psa/factors/
British Medical Journal. (2025). Algorithmic triage in emergency medicine: A systematic review.
Federal Reserve Board. (2024). AI in consumer-credit underwriting: Model transparency and bias mitigation.
OECD. (2022). Measuring the environmental impacts of AI compute and applications (Digital Economy Paper 360).
World Economic Forum. (2024). Governing generative AI in regulated sectors.
I. Classic & Foundational Works (pre-2023 but still central)
Harvard Journal of Law & Technology. (2018). State v. Loomis and the future of algorithmic risk assessment (Note), 31(3), 1123–1150.
Osoba, O., & Welser, W. (2017). An intelligence in our image: The risks of bias and errors in machine learning-based predictive analytics. RAND Corporation.
Perry, W. L., McInnis, B., Price, C., Smith, S., & Hollywood, J. (2013). Predictive policing: The role of crime forecasting in law-enforcement operations. RAND Corporation.