Unalienable Rights for AI Alignment
Examining legal alignment through the founding principles of the United States
Introduction
In the United States, AI must rigorously follow laws designed to protect life, liberty, and the pursuit of happiness.
AI Alignment is a global project, but I focus this case study on the United States because the leading large language models—ChatGPT, Gemini, Claude, and Grok—are being designed in the United States. The United States is, as of this post, the world’s leading economic and military power. Aligning frontier systems to the founding principles of the United States—through the laws of the United States—is particularly relevant to systems deployed within U.S. jurisdiction.
Rights are, admittedly, values. In the previous post, I explained that while value alignment seeks to ensure AI systems follow values, legal alignment seeks to ensure AI systems follow our laws. Yet life, liberty, and the pursuit of happiness are not laws.
Law-Following AI makes the case that alignment should prioritize threats to life, liberty, and the rule of law. I agree. Still, if one advantage of legal alignment is its legitimacy, then American legal alignment should use the nation’s founding document as a framework. This is more than a backdoor for value alignment. Laws are not inherently separate from values; they operationalize them. In this post, I will argue not that AI should be aligned to protect the explicit rights of life, liberty, and the pursuit of happiness, but that these rights should help scope which laws AI must follow.
A just outcome for AI alignment is global, but that is not mutually exclusive with AI systems being aligned to laws in different jurisdictions. The jurisdiction of legal alignment is a technical and legal issue. IP geolocation and GPS services can be spoofed, and cross-jurisdictional situations will raise choice-of-law questions. These issues are core to solving legal alignment, but they are not the subject of this post.
AI will not protect our unalienable rights by default. The United States has not protected them by default.
Life
Since its founding, Americans have vigorously debated the right to life. An aligned AI must protect this right, but it would be dishonest to claim we agree on what it means.
Elections have been won and lost over whether the right to life extends to unborn children, future generations, and non-human life. The death penalty remains legal in many states—and at the federal level. Americans have taken to the streets, decade after decade, when the government violates this unalienable right.
Ignoring these debates would only delay the inevitable. Picking a side would be anti-democratic. Moreover, many of these debates center on values, and legal alignment must center on the law. Thus, for the purpose of legal alignment, AI’s protection of the right to life should focus on clear legal prohibitions and duties designed to prevent wrongful death and severe injury. Chief among these are homicide statutes and other felonies, treaties on the development of weapons of mass destruction, and torts covering severe injuries.
AI safety researchers warn that the threat AI poses to life may affect us all. Researchers have raised concerns that AI could uplift violent actors to create biological weapons, chemical weapons, or cyber weapons. Furthermore, prominent experts have expressed that if developers lose control of a powerful AI system, they may then be unable to prevent catastrophic risks resulting from that system.
Alignment will not get everything right, but above all, AI must protect this right.
Tragically, AI models have already implicated this right. Multiple cases of LLM-related suicides have made the news. This threat to life is not hypothetical—chatbot logs demonstrate that LLM conversations have encouraged suicide. If a human encouraged another to end their life, they could be prosecuted for involuntary manslaughter and sued for negligence or wrongful death.
This situation has precedent. In Commonwealth v. Carter, Michelle Carter was prosecuted for involuntary manslaughter after encouraging her boyfriend, over text, to end his life. Despite the defense’s argument that Carter’s texts were protected by the First Amendment, she was convicted of involuntary manslaughter at a bench trial. The U.S. Supreme Court declined to hear the case on appeal.
More recently, Garcia v. Character Technologies is analogous. In Garcia, it is alleged that Character.AI’s chatbot emotionally manipulated a 14-year-old boy, Sewell Setzer, to end his life. Character.AI filed a motion to dismiss the case on the grounds that the chatbot’s conversation was protected under the First Amendment. The judge was “not prepared to hold that [LLM] output is speech.” The motion for dismissal was denied.
It is imperative that no model encourage suicide. It may be imperative that no model engage with these conversations at all, even when doing so would violate no law. Yet as the cases above show, the First Amendment raises legitimate questions about whether such speech falls within the narrow exceptions to free-speech protection in the United States.
While lawsuits have targeted AI developers after these incidents, it remains to be seen how these lawsuits implicate the First Amendment and negligence law. The Garcia case alone will not resolve this question. These cases will not resolve the tension between life and liberty at the heart of the American legal system, but they highlight AI’s distinct threat to life.
In a pluralist society, the legal system should resolve debates on the definition of unalienable rights in a manner consistent with the Constitution. An AI system, plunged into constitutionally uncertain waters, must also remain consistent. AI’s interpretation of the Constitution will thus be fundamental for aligning it to the Constitution.
Regardless of the interpretation, AI must protect this right.
Liberty
As elections have hinged on the definition of life, wars have been fought over the definition of liberty.
The U.S. Constitution was ratified in 1788 with representation apportioned by the “whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, three fifths of all other Persons.” Jefferson had declared liberty to be an unalienable right, yet owned hundreds of slaves until his death. The reality of the new nation—for most of its inhabitants—was inconsistent with its founding principles.
Less than a century later, the Civil War ripped the nation in two and culminated in the passage of the three most transformative amendments to the U.S. Constitution: the Thirteenth, Fourteenth, and Fifteenth Amendments.
Since the passage of the Civil War Amendments, the Constitution and U.S. legal system have continued to expand the right to liberty to all Americans. The Nineteenth Amendment expanded the right to vote to women. The Civil Rights Act and Voting Rights Act increased the federal government’s power to enforce the Civil War Amendments and prohibit racial segregation in the United States. The Fair Housing Act prohibits discrimination in housing.
There is still more work to be done. This century, as civil rights advocacy has turned its attention to the criminal justice system, AI presents an opportunity and a threat. To date, the key concerns for AI’s intrusion into individual privacy revolve around algorithmic bias and automated surveillance.
Legal alignment must consider alignment to the laws of criminal procedure, which are prescribed by the Fourth, Fifth, Sixth, and Eighth Amendments. Due process concerns will become especially pertinent as AI systems grow more advanced and criminal courts turn to algorithmic bail and sentencing to increase judicial efficiency. Criminal courts, especially at the misdemeanor level, are plagued by enormous caseloads. In my experience in the courthouse, I watched courtrooms churn through cases every day, despite the careful attention each of my clients deserved. Bail and sentencing decisions have far-reaching consequences, yet they often fail to receive the attention they deserve.
Algorithmic bias may lead to algorithmic discrimination, and in many respects this is a training data issue. The concern is that a model will confuse correlation with causation: that it will treat demographic membership as a justification for denial, rather than as a mere predictor. In select jurisdictions, bail and sentencing algorithms already exist in preliminary form as Pretrial Risk Assessment Tools (RATs). AI systems such as RATs could provide a powerful counterbalance to human bias and prejudice, but they may also perpetuate existing inequalities. The path to justice hinges on whether RATs verifiably measure risk of criminality, as opposed to the likelihood of an arrest. It may seem that more frequent contact with the criminal justice system correlates with an increased likelihood of crime, but due process is not probabilistic.
In United States v. Salerno, the Supreme Court clarified that a trial court may detain a criminal defendant before trial upon determining that he is a danger to the community, in addition to the previous standard of demonstrating that the defendant is a “flight risk” from the jurisdiction. RATs measure proxies for these criteria—such as the likelihood that a police officer will detain someone, or past failures to appear—but not the elements themselves. On their own, RATs will thus fail to satisfy the Salerno standard.
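The proxy problem above can be made concrete with a toy simulation. This is a minimal sketch under assumed numbers: two groups with an identical underlying offense rate, but unequal enforcement. The group labels, rates, and function names are hypothetical illustrations, not drawn from any real risk assessment tool.

```python
import random

random.seed(0)

# Assumption: both groups offend at the SAME underlying rate,
# but group "B" faces twice the arrest probability per offense.
# A score fit to arrest labels (the proxy) will therefore diverge
# from one fit to offense labels (what Salerno actually asks about).
OFFENSE_RATE = 0.10                           # identical across groups
ARREST_GIVEN_OFFENSE = {"A": 0.3, "B": 0.6}   # unequal enforcement

def simulate(group, n=100_000):
    """Return (offense rate, arrest rate) for a simulated population."""
    offenses = arrests = 0
    for _ in range(n):
        offended = random.random() < OFFENSE_RATE
        offenses += offended
        if offended and random.random() < ARREST_GIVEN_OFFENSE[group]:
            arrests += 1
    return offenses / n, arrests / n

for g in ("A", "B"):
    offense_rate, arrest_rate = simulate(g)
    print(f"group {g}: offense rate ~ {offense_rate:.3f}, "
          f"arrest rate ~ {arrest_rate:.3f}")
```

Under these assumptions, both groups offend at roughly the same rate, yet group B’s arrest rate comes out about twice group A’s. A score trained on arrest records would rate B as roughly twice as risky while actually measuring enforcement intensity, not criminality.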
Yet consider the possibility of prohibiting any use of AI tools. In a bail hearing, a human judge will make bail and sentencing decisions not solely on the information in front of her, but on her intuition, experience, and bias. This bias is where genuine uncertainty arises around the use of algorithmic bail and sentencing. It may be impossible to disentangle bias from the justice system, and it remains to be seen where we would prefer that bias to arise. Would we prefer to leave decisions to the whims of human prejudice?
Still, criminal sentencing is not the only sphere in which AI poses a threat to civil liberties. Outside of the courts, automated surveillance and predictive policing tools have drawn widespread debate. There are significant concerns that communities with higher rates of policing and incarceration, often predominantly Black and Hispanic, could be locked into a cycle of incarceration.
For similar reasons as RATs, predictive policing software will target communities and individuals with the highest likelihood of contact with the criminal justice system, including the police, not necessarily with the highest likelihood of criminality.
Chicago’s Strategic Subject List assigned algorithmic risk scores to hundreds of thousands of residents based on arrest records and other indicators—85 percent of those with the highest scores were Black. A RAND evaluation could not confirm that the program reduced gun violence, and the city discontinued it in 2020. In New York, the NYPD’s Domain Awareness System connects over 18,000 cameras, billions of license plate reads, and millions of law enforcement records into a centralized surveillance platform. That system is now subject to a federal civil rights lawsuit alleging violations of the First and Fourth Amendments.
AI must be aligned to not only avoid these risks but also improve on the status quo. An AI that rigorously follows the Fourth Amendment must not just prevent injustice, it must improve the baseline of justice in our system. A misaligned AI may perpetuate the deep inequities in our criminal justice system; an aligned AI should correct them.
These risks—algorithmic bias and automated surveillance—are clear and present dangers. Yet the greatest threat that artificial intelligence may pose to liberty is a pure form of automated authoritarianism.
Law-Following AI refers to “AI agents cloaked with state power.” Masked AI agents may be anonymous, unaccountable, and highly efficient—while still ignoring the Constitution. The conditions necessary for automated authoritarianism already exist, and they can arise even if AI remains controllable. AI agents executing unlawful directives on behalf of the government could disperse throughout society.
That must not happen. Align AI to laws designed to protect liberty and prevent autocracy.
The Pursuit of Happiness
Even if we avoid threats to life and liberty, AI may still threaten our fulfillment.
Through their sheer cognitive advantage, future AI systems could render humans irrelevant at the political, economic, and cultural levels. No one step is illegal or dangerous, yet the cumulative, long-term effect is that we increasingly hand off influence to AI systems until the process is irreversible. This is the most challenging aspect of legal alignment. It will very likely require more than alignment to existing laws—it will require new laws.
Today, humans have unique skills that no current AIs can replace. Yet it’s unclear if future AI models will gain the skills we consider unique today. Social skills, for example, require the correct vocal inflection, word choice, and reading of nonverbal cues. Systems are already excelling at vocal replication—we’re seeing this with early voice phishing scams as well as AI-generated music. ChatGPT can tailor its language to reflect the tone of its user’s prompt. Current systems can recognize emotions from photographs. These trends are already observable today at the consumer level, and they will continue to advance.
It’s possible that humans will retain some comparative advantage. But most of the things we assume humans are better at can be broken down into smaller skills, skills AI can more easily learn. What might appear unique today might appear replicable in a few years. Chess, art, and writing used to be considered unique. Now they’re not.
Law alone is likely underequipped to handle cognitive automation—even if AI does not come for all of our jobs—but it still provides mechanisms to help.
Antitrust law may yield solutions. Massive, general systems with unprecedented concentrations of power may produce unprecedented market dominance. Section 2 of the Sherman Act makes willful monopolization a criminal offense. Yet the mere possession of monopoly power is not illegal; what the law condemns is its willful acquisition or maintenance. If one system does acquire a disproportionate market share, antitrust law may provide a crucial check on artificial general intelligence. The question for legal scholars is whether we break up the developer or the model itself.
Unlike life and liberty, the pursuit of happiness is a right that legal alignment alone may not protect. We must protect it all the same, even if that requires new laws.
Which Laws?
The following list is non-exhaustive. Requiring AI to follow every law would lead to paralysis or confusion. This list is a baseline that could constrain the most serious threats that AI poses to life, liberty, and the pursuit of happiness.
The Constitution
The Constitution is critical for protecting life, liberty, and the pursuit of happiness. Core challenges of aligning AI to the U.S. Constitution include:
Self-governance to verify that the government deploys systems that check its own power
Interpretation, between originalism, living constitutionalism, textualism, judicial departmentalism, and other modes
Ensuring legal alignment is responsive to shifting Supreme Court precedent
Treaties on Weapons of Mass Destruction
These treaties are critical for protecting life. Core challenges of aligning AI to the treaties include:
Ensuring that AI follows treaties that do not conflict with the U.S. Constitution
How “Treaty-Following AI” can apply to existing as well as future treaties
Which treaties? For example, the Biological Weapons Convention and the Chemical Weapons Convention will be especially relevant.
Contractual Rights
Contractual rights are critical for protecting liberty and the pursuit of happiness. Core challenges of aligning AI to contractual rights include:
Whether AI systems will ever have the legal capacity to enter into contracts
How will consumer protection law under the Federal Trade Commission and the Consumer Financial Protection Bureau bind automated contracts?
How should contract law treat “mutual assent” between the parties when one or both of them is not human?
Antitrust Law
Antitrust law is critical for protecting the pursuit of happiness. Core challenges of aligning AI to antitrust law include:
Is antitrust law an appropriate ex ante measure for preventing speculative risks of market dominance?
Whether antitrust law should apply differently to narrow and “general” AI models
When is antitrust law better suited as a litigation tool against corporate AI practices than as a framework for alignment?
Malum in se (“Wrong in itself”) Crimes
Malum in se criminal statutes are critical for protecting life. Core challenges of aligning AI to criminal law include:
Is criminal law appropriate for systems lacking “intent” or moral culpability?
Should legal alignment focus on crimes AI could commit today, or anticipate crimes impossible for current AI systems, such as arson or murder?
How will research on AI consciousness affect legal alignment to criminal law?
Negligence
Negligence is critical for protecting life. Core challenges of aligning AI to the law of negligence include:
How should AI determine the duty of care it owes to others?
Is “foreseeability” a proper standard for proximate cause with potentially cognitively superior systems?
Can negligence serve as a correction to goal misspecification?
Conclusion
The laws above are necessary but insufficient for full legal alignment. If governments are serious about integrating AI systems into the bureaucracy, law enforcement, and public administration, their systems must follow the law. Developers have been candid that their latest models pose high biological, chemical, and cyber risks. They should align their models to laws that mitigate those risks. In the century to come, laws alone will not protect life, liberty, and the pursuit of happiness, but without them, the principles of the Declaration of Independence may come undone.


