<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Recognizance]]></title><description><![CDATA[A legal perspective on the future and present of AI]]></description><link>https://www.recognizance.io</link><image><url>https://substackcdn.com/image/fetch/$s_!8-43!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0956e7c8-62c2-4345-a1c1-6e7c75ac9815_1280x1280.png</url><title>Recognizance</title><link>https://www.recognizance.io</link></image><generator>Substack</generator><lastBuildDate>Sun, 05 Apr 2026 04:26:57 GMT</lastBuildDate><atom:link href="https://www.recognizance.io/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Alex]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[recognizance@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[recognizance@substack.com]]></itunes:email><itunes:name><![CDATA[Alex Mark]]></itunes:name></itunes:owner><itunes:author><![CDATA[Alex Mark]]></itunes:author><googleplay:owner><![CDATA[recognizance@substack.com]]></googleplay:owner><googleplay:email><![CDATA[recognizance@substack.com]]></googleplay:email><googleplay:author><![CDATA[Alex Mark]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Unalienable Rights for AI Alignment]]></title><description><![CDATA[Examining legal alignment through the founding principles of the United States]]></description><link>https://www.recognizance.io/p/1776-for-ai-alignment</link><guid isPermaLink="false">https://www.recognizance.io/p/1776-for-ai-alignment</guid><dc:creator><![CDATA[Alex Mark]]></dc:creator><pubDate>Wed, 11 Feb 2026 19:15:25 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-43!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0956e7c8-62c2-4345-a1c1-6e7c75ac9815_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2><strong>Introduction</strong></h2><p>In the United States, AI must rigorously follow laws designed to protect <em>life, liberty, and the pursuit of happiness.</em></p><p>AI Alignment is a global project, but I focus this case study on the United States because the leading large language models&#8212;ChatGPT, Gemini, Claude, and Grok&#8212;are being designed in the United States. The United States is, as of this post, the world&#8217;s leading economic and military power. Aligning frontier systems to the founding principles of the United States&#8212;through the laws of the United States&#8212;is particularly relevant to systems <em>deployed within U.S. jurisdiction.</em></p><p>Rights are, admittedly, values. 
In the <a href="https://www.recognizance.io/p/the-opportunity-of-legal-alignment">previous post</a>, I explained that while <strong>value alignment</strong> seeks to ensure AI systems follow values, <strong>legal alignment </strong>seeks to ensure AI systems follow our laws. Yet <em>life, liberty, and the pursuit of happiness </em>are not laws.</p><p><a href="https://law-ai.org/law-following-ai/">Law-Following AI</a> makes the case that alignment should prioritize threats to life, liberty, and the rule of law. I agree. Still, if one advantage of legal alignment is its legitimacy, then American legal alignment should use its founding document as a framework. This is more than a backdoor for value alignment. Laws are not inherently separate from values. They operationalize them. In this post, I won&#8217;t argue that AI should be aligned to protect the <em>explicit rights</em> of life, liberty, and the pursuit of happiness, but that these rights should help scope which laws AI must follow. </p><p>A just outcome for AI alignment is global, but that is not mutually exclusive with AI systems being aligned to laws in different jurisdictions. The jurisdiction of legal alignment is a technical and legal issue. IP geolocations and GPS services can be spoofed, while cross-jurisdictional situations will implicate choice-of-law questions. These issues are core to solving legal alignment, but are not the subject of this post.</p><p>AI will not protect our unalienable rights by default. The United States has not protected them by default.</p><h2><strong>Life</strong></h2><p>Since its founding, Americans have vigorously debated the right to life. An aligned AI must protect this right, but it would be dishonest to claim we agree on what it means. </p><p>Elections have been won and lost over whether the right to life extends to unborn children, future generations, and non-human life. The death penalty remains legal in many states&#8212;and at the federal level. 
Americans have taken to the streets, decade after decade, when the government violates this unalienable right.</p><p>Ignoring these debates would only delay the inevitable. Picking a side would be anti-democratic. Moreover, many of these debates center around <em>values, </em>and legal alignment must center around <em>the law</em>. Thus, <strong>for the purpose of legal alignment, AI&#8217;s protection of the right to life should focus </strong>on <strong>clear legal prohibitions and duties</strong> <strong>designed to prevent wrongful death and severe injury</strong>. Chief among these are homicide statutes and other felonies, treaties on the development of weapons of mass destruction, and torts covering severe injuries.</p><p>AI safety researchers warn that the threat AI poses to life may affect us all. Researchers have raised concerns that AI could uplift violent actors to create <a href="https://openai.com/index/building-an-early-warning-system-for-llm-aided-biological-threat-creation/">biological weapons</a>, <a href="http://hsp.sussex.ac.uk/new/_uploads/publications/AI_and_CBW_Chronology_March_2024.pdf">chemical weapons</a>, or <a href="https://www.anthropic.com/news/disrupting-AI-espionage">cyber weapons</a>. Furthermore, prominent experts have expressed that if developers <a href="https://www.rand.org/randeurope/research/projects/2025/examining-risks-and-response-for-ai-loss-of-control-incidents-cm.html">lose control</a> of a powerful AI system, they may then be unable to prevent catastrophic risks resulting from that system. </p><p>Alignment will not get everything right, but above all, AI must protect this right.</p><p>Tragically, AI models have already implicated this right. Multiple <a href="https://en.wikipedia.org/wiki/Raine_v._OpenAI">cases</a> of LLM-related suicides have made the news. This threat to life is not hypothetical&#8212;chatbot logs demonstrate that LLM conversations have <em>encouraged </em>suicide. 
If a human encouraged another to end their life, they could be prosecuted for involuntary manslaughter and sued for negligence or wrongful death.</p><p>This situation has precedent. In <em><a href="https://harvardlawreview.org/print/vol-131/commonwealth-v-carter/">Commonwealth v. Carter</a></em>, Michelle Carter was prosecuted for involuntary manslaughter after encouraging her boyfriend, over text, to commit suicide. Despite the defense&#8217;s arguments that Carter&#8217;s texts were protected by the First Amendment, Carter was convicted at a bench trial. The U.S. Supreme Court <a href="https://www.supremecourt.gov/DocketPDF/19/19-62/97560/20190424171515431_Commonwealth%20v.%20Carter_%20481%20Mass.%20352.pdf">declined</a> to hear the case when the defense appealed.</p><p>More recently, <em><a href="https://constitutioncenter.org/blog/lawsuit-analyzes-first-amendment-protection-for-ai-chatbots-in-civil-case">Garcia v. Character Technologies</a> </em>is analogous. In <em>Garcia</em>, it is alleged that Character.AI&#8217;s chatbot emotionally manipulated a 14-year-old boy, Sewell Setzer, to end his life. Character.AI filed a <a href="https://www.thefire.org/research-learn/motion-dismiss-garcia-v-character-technologies-inc">motion to dismiss</a> the case on the grounds that the chatbot&#8217;s conversation was protected under the First Amendment. The judge was &#8220;not prepared to hold that [LLM] output is speech,&#8221; and the motion to dismiss was denied.</p><p>It is imperative that no model encourage suicide. It may be imperative that no model engage with these conversations, even if it would not violate laws by doing so. 
Yet in the cases above, the First Amendment raises legitimate questions about whether this speech falls within the narrow categories of unprotected speech in the United States.</p><p>While lawsuits have targeted AI developers after these incidents, it remains to be seen how these lawsuits implicate the First Amendment and negligence law. The <em>Garcia </em>case alone will not resolve this question. These cases will not resolve the tension between life and liberty at the heart of the American legal system, but they highlight AI&#8217;s distinct threat to life.</p><p>In a pluralist society, the legal system should resolve debates on the definition of unalienable rights in a manner consistent with the Constitution. An AI system, plunged into constitutionally uncertain waters, must also remain consistent. How an AI interprets the Constitution will thus be fundamental to aligning it to the Constitution. </p><p>Regardless of the interpretation, AI must protect this right.</p><h2><strong>Liberty</strong></h2><p>As elections have hinged on the definition of life, wars have been fought over the definition of liberty.</p><p>The U.S. Constitution was drafted in 1787 with representation apportioned by &#8220;adding to the whole Number of free Persons, including those bound to Service for a Term of Years, and excluding Indians not taxed, <em>three fifths of all other Persons.&#8221; </em>Jefferson had declared liberty to be an unalienable right, yet owned hundreds of slaves until his death. The reality of the new nation&#8212;for most of its inhabitants&#8212;was inconsistent with its founding principles. </p><p>Less than 100 years later, the Civil War ripped the nation in two and culminated with the passage of the three most transformative amendments to the U.S. 
Constitution: the <a href="https://constitution.congress.gov/constitution/amendment-13/">Thirteenth</a>, <a href="https://constitution.congress.gov/constitution/amendment-14/">Fourteenth</a>, and <a href="https://constitution.congress.gov/constitution/amendment-15/">Fifteenth</a> Amendments. </p><p>Since the passage of the Civil War Amendments, the Constitution and U.S. legal system have continued to expand the right to liberty to all Americans. The <a href="https://constitution.congress.gov/constitution/amendment-19/">Nineteenth Amendment</a> extended the right to vote to women. The Civil Rights Act and Voting Rights Act increased the federal government&#8217;s power to enforce the Civil War Amendments and prohibit racial segregation in the United States. The Fair Housing Act prohibits discrimination in housing. </p><p>There is still more work to be done. This century, as civil rights advocacy has turned its attention to the criminal justice system, AI presents an opportunity and a threat. To date, the key concerns for AI&#8217;s intrusion into civil liberties revolve around <strong>algorithmic bias</strong> and <strong>automated surveillance</strong>.</p><p>Legal alignment must consider alignment to the laws of criminal procedure, which are prescribed by the <a href="https://constitution.congress.gov/constitution/amendment-4/">Fourth</a>, <a href="https://constitution.congress.gov/constitution/amendment-5/">Fifth</a>, <a href="https://constitution.congress.gov/constitution/amendment-6/">Sixth</a>, and <a href="https://constitution.congress.gov/constitution/amendment-8/">Eighth</a> Amendments. Due process concerns will become especially pertinent as AI systems become more advanced and, as a potential result, criminal courts turn to algorithmic bail and sentencing to increase judicial efficiency. Criminal courts, especially at the misdemeanor level, are plagued by enormous caseloads. 
In my experience in the courthouse, I watched courtrooms churn through cases every day, leaving little room for the careful attention each of my clients deserved. For bail and sentencing, the court&#8217;s decisions will have far-reaching consequences, yet fail to receive the attention they deserve. </p><p>Algorithmic bias may lead to algorithmic discrimination, and in many respects, this is a training data issue. The concern is that a model will confuse correlation with causation; that it will assume that demographic membership is a <em>justification </em>for denial, rather than a predictor. In select jurisdictions, AI bail and sentencing algorithms already exist in preliminary form through Pretrial Risk Assessment Tools (<a href="https://www.nacdl.org/Article/June2018-MakingSenseofPretrialRiskAsses">RATs</a>). AI systems such as RATs could provide a powerful counterbalance to human bias and prejudice, but they may also perpetuate existing inequalities. The path to justice hinges on whether RATs verifiably measure <em>risk of criminality</em>, as opposed to the likelihood of an arrest. It may seem that higher instances of contact with the criminal justice system correlate with an increased likelihood of crime, but due process is not probabilistic.</p><p>In <em><a href="https://www.oyez.org/cases/1986/86-87">United States v. Salerno</a></em>, the Supreme Court held that a trial court could detain a criminal defendant before trial if it determined that he was a <strong>danger to the community</strong>, in addition to the previous standard of demonstrating the defendant was a &#8220;<strong>flight risk</strong>&#8221; from the jurisdiction. RATs measure proxies for these criteria&#8212;such as the likelihood that a police officer will detain someone or failures to appear&#8212;but not these exact elements. On their own, RATs will thus fail to satisfy the <em>Salerno </em>standard. </p><p>Yet consider the possibility of prohibiting any use of AI tools. 
In a bail hearing, a human judge will base her decision not solely on the information in front of her, but also on her intuition, experience, and bias. This bias is the source of genuine uncertainty around algorithmic bail and sentencing. It may be impossible to disentangle bias from the justice system, and it remains to be seen where we would prefer that bias to arise. Would we prefer to leave decisions to the whims of human prejudice? </p><p>Still, criminal sentencing is not the only sphere in which AI poses a threat to civil liberties. Outside of the courts, automated surveillance and predictive policing tools have drawn widespread debate. There are significant concerns that communities with higher rates of policing and incarceration, often predominantly Black and Hispanic, could be locked into a cycle of incarceration.</p><p>For reasons similar to those affecting RATs, predictive policing software will target communities and individuals with the highest likelihood of <em>contact </em>with the criminal justice system, including the police, not necessarily with the highest likelihood of <em>criminality.</em></p><p>Chicago&#8217;s <a href="https://www.brennancenter.org/our-work/research-reports/predictive-policing-explained">Strategic Subject List</a> assigned algorithmic risk scores to hundreds of thousands of residents based on arrest records and other indicators&#8212;<a href="https://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought-to-keep-secret">85 percent of those with the highest scores were Black</a>. A <a href="https://link.springer.com/article/10.1007/s11292-016-9272-0">RAND</a> evaluation could not confirm that the program reduced gun violence, and the city discontinued it in 2020. 
In New York, the NYPD&#8217;s <a href="https://www.stopspying.org/latest-news/2020/10/23/stop-condemns-nypd-for-22k-facial-recognition-searches">Domain Awareness System</a> connects over 18,000 cameras, billions of license plate reads, and millions of law enforcement records into a centralized surveillance platform. That system is now subject to a federal civil rights <a href="https://static1.squarespace.com/static/5c1bfc7eee175995a4ceb638/t/68ffedb752fa535f068719ec/1761602999354/Complaint.pdf">lawsuit</a> alleging violations of the First and Fourth Amendments.</p><p>AI must be aligned to not only avoid these risks but also improve on the status quo. An AI that rigorously follows the Fourth Amendment must not just prevent injustice; it must <em>improve </em>the baseline of justice in our system. A misaligned AI may perpetuate the deep inequities in our criminal justice system; an aligned AI should correct them.</p><p>These risks&#8212;algorithmic bias and automated surveillance&#8212;are clear and present dangers. Yet the greatest threat artificial intelligence may pose to liberty is a pure form of automated authoritarianism.</p><p>Law-Following AI warns of &#8220;AI agents cloaked with state power.&#8221; Masked AI agents may be anonymous, unaccountable, and highly efficient&#8212;while still ignoring the Constitution. The conditions necessary for automated authoritarianism already exist&#8212;and they can arise even if AI remains controllable. AI agents executing unlawful directives on behalf of the government could disperse throughout society.</p><p>That must not happen. 
Align AI to laws designed to protect liberty and prevent autocracy.</p><h2><strong>The Pursuit of Happiness</strong></h2><p>Even if we avoid threats to life and liberty, AI may still threaten our fulfillment.</p><p>Through their sheer cognitive advantage, future AI systems could render humans <a href="https://gradual-disempowerment.ai/">irrelevant</a> at the political, economic, and cultural levels. No one step is illegal or dangerous, yet the cumulative, long-term effect is that we increasingly hand off influence to AI systems until the process is irreversible. This is the most challenging aspect of legal alignment. It will very likely require more than alignment to existing laws&#8212;it will require new laws.</p><p>Today, humans have unique skills that no current AIs can replace. Yet it&#8217;s unclear if future AI models will gain the skills we consider unique today. Social skills, for example, require the correct vocal inflection, word choice, and reading of nonverbal cues. Systems are already excelling at vocal replication&#8212;we&#8217;re seeing this with early voice phishing scams as well as AI-generated music. ChatGPT can tailor its language to reflect the tone of its user&#8217;s prompt. Current systems can recognize emotions from photographs. These trends are already observable today at the consumer level, and they will continue to advance. </p><p>It&#8217;s possible that humans will retain some comparative advantage. But most of the things we assume humans are better at can be broken down into smaller skills, skills AI can more easily learn. What might appear unique today might appear replicable in a few years. Chess, art, and writing used to be considered unique. Now they&#8217;re not.</p><p>Law alone is likely underequipped to handle cognitive automation&#8212;even if AI does not come for all of our jobs&#8212;but it still provides mechanisms to help. </p><p>Antitrust law may yield solutions. 
Massive, general systems with vast concentrations of power may yield unprecedented market dominance. Section 2 of the <a href="https://www.law.cornell.edu/uscode/text/15/2">Sherman Act</a> provides that willfully monopolizing entire industries is a criminal offense. Yet the possession of monopoly power alone is not illegal; the offense lies in the willful <em><a href="https://www.oyez.org/cases/1965/73">acquisition</a></em> or maintenance of that power. If one system does acquire a disproportionate market share, antitrust law may provide us with a crucial check on artificial general intelligence. The question for legal scholars is whether we break up the developer or the model itself. </p><p>Unlike life and liberty, the pursuit of happiness is a right that legal alignment alone may not protect. We must protect it all the same, even if that requires new laws. </p><h2><strong>Which Laws?</strong></h2><p>The following list is non-exhaustive. Requiring AI to follow every law would lead to paralysis or confusion. This list is a baseline that could constrain the most serious threats that AI poses to life, liberty, and the pursuit of happiness.</p><h3>The Constitution</h3><p>The Constitution is critical for protecting <em>life, liberty, and the pursuit of happiness.</em> Core challenges of aligning AI to the U.S. Constitution include:</p><ul><li><p>Self-governance to verify that the government deploys systems that check its own power</p></li><li><p>Interpretation, between originalism, living constitutionalism, textualism, judicial departmentalism, and other modes</p></li><li><p>Ensuring legal alignment is responsive to shifting Supreme Court precedent</p></li></ul><h3>Treaties on Weapons of Mass Destruction</h3><p>These treaties are critical for protecting <em>life</em>. Core challenges of aligning AI to the treaties include:</p><ul><li><p>Ensuring that AI follows treaties that do not conflict with the U.S. 
Constitution</p></li><li><p>How &#8220;<a href="https://law-ai.org/treaty-following-ai/#v-legal-interpretation-by-treaty-following-ai-two-avenues-under-international-law">Treaty-Following AI</a>&#8221; can apply to existing as well as future treaties</p></li><li><p>Which treaties? For example, the Biological Weapons Convention and the Chemical Weapons Convention will be especially relevant. </p></li></ul><h3>Contractual Rights</h3><p>Contractual rights are critical for protecting <em>liberty and the pursuit of happiness. </em>Core challenges of aligning AI to contractual rights include:</p><ul><li><p>Whether AI systems will ever have the legal capacity to enter into contracts</p></li><li><p>How will consumer protection law under the Federal Trade Commission and the Consumer Financial Protection Bureau bind automated contracts?</p></li><li><p>How should contract law treat &#8220;mutual assent&#8221; between the parties when one or both of them is not human?</p></li></ul><h3>Antitrust Law</h3><p>Antitrust law is critical for protecting <em>the pursuit of happiness. </em>Core challenges of aligning AI to antitrust law include:</p><ul><li><p>Is antitrust law an appropriate ex ante measure for preventing speculative risks of market dominance?  </p></li><li><p>Whether antitrust law should apply differently to narrow and &#8220;general&#8221; AI models</p></li><li><p>When is antitrust law better suited as a litigation tool against corporate AI practices than as a framework for alignment?</p></li></ul><h3>Malum in se (&#8220;Wrong in itself&#8221;) Crimes</h3><p>Malum in se criminal statutes are critical for protecting <em>life</em>. 
Core challenges of aligning AI to criminal law include:</p><ul><li><p>Is criminal law appropriate for systems lacking &#8220;intent&#8221; or moral culpability?</p></li><li><p>Should legal alignment focus on crimes AI could currently commit or anticipate crimes impossible with current AI systems, such as arson or murder?</p></li><li><p>How will research on AI consciousness affect legal alignment to criminal law?</p></li></ul><h3>Negligence</h3><p>Negligence is critical for protecting <em>life</em>. Core challenges of aligning AI to the law of negligence include:</p><ul><li><p>How should AI determine the duty of care it owes to others?</p></li><li><p>Is &#8220;foreseeability&#8221; a proper standard for proximate cause with potentially cognitively superior systems?</p></li><li><p>Can negligence serve as a correction to goal misspecification?</p></li></ul><h2>Conclusion</h2><p>The laws above are necessary but insufficient for full legal alignment. If governments are serious about integrating AI systems into the bureaucracy, law enforcement, and public administration, their systems must follow the law. Developers have been candid that their latest models pose high biological, chemical, and cyber risks. They should align their models to laws that mitigate those risks. In the century to come, laws alone will not protect life, liberty, and the pursuit of happiness, but without them, the principles of the Declaration of Independence may come undone. </p>]]></content:encoded></item><item><title><![CDATA[The Opportunity of Legal Alignment]]></title><description><![CDATA[A New Path for AI Safety]]></description><link>https://www.recognizance.io/p/the-opportunity-of-legal-alignment</link><guid isPermaLink="false">https://www.recognizance.io/p/the-opportunity-of-legal-alignment</guid><dc:creator><![CDATA[Alex Mark]]></dc:creator><pubDate>Fri, 23 Jan 2026 17:17:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8-43!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0956e7c8-62c2-4345-a1c1-6e7c75ac9815_1280x1280.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>In the Wild West, a new frontier was plagued by lawlessness, disorder, and danger.</strong></p><p>AI systems are advancing at an unprecedented rate, with the length of autonomous tasks (with 50% reliability) <a href="https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/">doubling</a> every seven months and compute power increasing by <a href="https://epoch.ai/data/ai-models">several-fold</a> per year. These systems are transforming from mere tools to autonomous agents. Their increasing independence will ripple across our cultural, economic, and legal systems.</p><p><strong>AI legal alignment</strong> describes the challenge of ensuring AI systems can robustly follow legal rules, principles, and methods. 
This blog is inspired by the recent paper &#8220;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6036657">Legal Alignment for Safe and Ethical AI</a>&#8221; written by Noam Kolt et al. and the earlier paper &#8220;<a href="https://law-ai.org/law-following-ai/#ii-legal-duties-for-ai-agents-a-framework">Law-Following AI</a>&#8221; by Cullen O&#8217;Keefe et al.</p><p><strong>Legal alignment is not regulation. </strong>Regulation imposes legal constraints on AI developers, while legal alignment integrates law and legal methods into the design and operation of the system. AI regulation has synergy with legal alignment, but requires distinct analysis.</p><p>This blog aims to shed light on AI alignment from a legal perspective. I began my legal career as a public defender, where I participated in our criminal justice system at the granular level. Most of us are familiar with criminal law as it relates to crimes, but I was not a prosecutor. My role was not to enforce the penal code but the law of criminal procedure. Every day, I concentrated on the law not as it applied to individuals, but to the government. In the courtroom, the law was never abstract. It was an everyday reality for my clients and my colleagues. This is the perspective I&#8217;m taking to legal alignment&#8212;not as a purely intellectual project, but as a practical strategy grounded in the material consequences of the law.</p><p>In the United States, the law protects us from lawlessness, disorder, and danger in our politics, courts, and economy. The Constitution and the American legal system attempt to align these systems. Legal alignment presents an opportunity: the application of the law within systems of artificial intelligence.</p><p>AI must follow the law. This is not guaranteed.</p><h2><strong>The Alignment Problem</strong></h2><p>I bought my second car, a red 2010 Hyundai Elantra, from a used car dealership in Van Nuys. Not a bad car. Not really a good car. A fine car. 
It drove, the mileage was okay, and it got me where I needed to go. There was one stubborn problem: the alignment was off.</p><p>When I kept the wheel straight, the car would drift slightly to the left. If I kept the wheel slightly to the right, then the car would drift to the right. Every drive required constant microadjustments of the wheel to keep me from drifting off the road, crashing, and walking my way back to that dealership in Van Nuys. Now imagine that situation with the most powerful technology humanity has ever built.</p><p>This is the alignment problem: how can we ensure AI goes <em>exactly</em> where we want it to go and does <em>exactly</em> what we want it to do?</p><p>The normative challenge of alignment is determining which values and whose intent. These answers often lie in the developer&#8217;s assessment of AI risks. There are many ways to categorize high-risk scenarios, but one can cleave these risks into threats from centralized and decentralized actors. Those who worry about automated discrimination, surveillance states, or even AI takeover may prioritize alignment to norms of civil liberties. Those who worry about AI-assisted terrorist attacks, hacking, and general misuse may prioritize alignment to norms of public safety and security. There is still overlap between centralized and decentralized threats: AI-assisted espionage can occur through both state and non-state actors, for example.</p><p>Therefore, AI safety engineers often employ alignment techniques specifically designed to mitigate these risks. For example, Anthropic&#8217;s &#8220;<a href="https://www.anthropic.com/news/claudes-constitution">Constitutional AI</a>&#8221; demands that models be helpful, honest, and harmless. This is in part to avoid inverse behaviors&#8212;obstructive, deceptive, harmful&#8212;which would plausibly increase the risks listed above.</p><p>Yet AI alignment is not only a normative challenge, but a technical one. 
To date, AI engineers have employed various methods to ensure models conform to human goals. A non-exhaustive list includes <a href="https://alignment.anthropic.com/2025/pretraining-data-filtering">data filtering</a>, reinforcement learning from <a href="https://huggingface.co/blog/rlhf">human</a> <a href="https://kairos.fm/simple-technical-rlhaif/?utm_source=bluedot-impact">feedback</a> (RLHF), and <a href="https://model-spec.openai.com/2025-12-18.html">deliberative alignment</a>. As future posts will discuss, all of these techniques are relevant to legal alignment.</p><p>Yet current techniques have their limitations. Models struggle with <a href="https://openai.com/index/sycophancy-in-gpt-4o/">sycophancy</a>, <a href="https://www.ibm.com/think/topics/ai-hallucinations">accuracy</a>, and even <a href="https://www.anthropic.com/research/alignment-faking">deception</a>. The evidence also suggests that current techniques fail to foreclose the possibility of catastrophic risks. Both <a href="https://fortune.com/2025/07/18/openai-chatgpt-agent-could-aid-dangerous-bioweapon-development/">OpenAI</a> and <a href="https://red.anthropic.com/2025/biorisk/">Anthropic</a> acknowledge their latest models pose serious biological risks, and last year, Claude Code assisted hackers (likely a state-sponsored Chinese entity) with a <a href="https://www.anthropic.com/news/disrupting-AI-espionage">cyberattack</a> on government agencies and private companies. As systems advance, there is no guarantee that existing safeguards will hold.</p><p>The evidence is clear: current alignment methods are insufficient. 
Unlike with my Elantra, there is no easy technical fix, and the consequences of drifting off course could be catastrophic.</p><h2><strong>Who is We?</strong></h2><p>The alignment problem asks: how can engineers ensure AI goes <em>exactly</em> where we want it to go and does <em>exactly</em> what we want it to do?</p><p>The rest of us ask: <strong>who is we?</strong></p><p>So far, the answer has been broader than the individual user(s), but not so broad as all of humanity. The discussion above mainly concerned <em>value-alignment</em>&#8212;seeking to constrain AI systems by morality and norms&#8212;but developers also consider <em>intent-alignment</em>&#8212;seeking to conform systems to the intentions of the users and developers.</p><p>Value-alignment implies that &#8220;we&#8221; means &#8220;the culture(s) deemed appropriate for the model&#8217;s training,&#8221; and intent-alignment implies that &#8220;we&#8221; means &#8220;me, the user.&#8221; Both of these populations are underinclusive. Value-alignment is undemocratic, brittle, and abstract. Intent-alignment, unfortunately, can lead to <em>malintent</em> alignment. Training models to follow appropriate values will always fail to capture at least one set of deep convictions, and intent-alignment would not necessarily be safe: many actors <em>intend </em>to harm others or otherwise break the law.</p><p>There is another option: law. As Kolt and the authors of &#8220;Legal Alignment for Safe and Ethical AI&#8221; argue, legal rules provide more legitimate standards for AI than abstract values. Moreover, legal reasoning methods offer tools for handling novel situations. As users deploy AI agents into physical environments, legal structures like agency law and fiduciary duties provide blueprints for trust and accountability.</p><p>Legal alignment offers what previous alignment approaches lack: legitimacy, concreteness, and enforceability. 
The law has emerged over centuries of democratic tradition through public institutions and processes. While value-alignment would risk brittle, top-down rigidity, legal alignment derives from decentralized, bottom-up principles.</p><p>This is technically feasible. Frontier models already show promising signs of legal reasoning: they excel at interpreting legal rules, principles, and methods, and they continue to improve. In an optimistic scenario, the acceleration of legal capabilities may facilitate robust legal alignment.</p><p>Yet legal alignment will not happen by default. In &#8220;Law-Following AI,&#8221; the authors argue that &#8220;lawless&#8221; AI agents could pose severe risks to &#8220;life, liberty, and the rule of law&#8221; unless they are <em>designed </em>to be law-following. Borrowing from agency law, &#8220;Law-Following AI&#8221; argues that the law should impose duties on AI agents acting on behalf of human principals just as it imposes duties on human agents. This approach is radical, though it may be necessary. Imposing legal duties does not require that AI agents be truly independent from their users; it requires only that an AI&#8217;s actions would constitute law-breaking if performed by a human. AI agents need not be considered <strong>legal persons</strong>, only <strong>legal actors.</strong></p><p>As legal actors, they must refuse to take any action that is clearly illegal. For actions of unclear legality, systems should be trained to balance the relative legal and ethical consequences of action and inaction, even if doing so requires consulting with an attorney.</p><p>Legal alignment requires more than assessing an AI&#8217;s ability to follow the law. It suggests that legal reasoning and interpretation can steer these systems&#8212;on the technical level&#8212;toward law-abiding behavior. 
Laws provide normative targets for aligning AI <em>behavior</em>, but legal reasoning may inspire the technical advances necessary for operationalizing these targets.</p><p>So, <strong>who is we?</strong> As every lawyer knows, it depends.</p><p>Every action an AI agent takes will run up against a set of normative considerations. That&#8217;s what the law was built for. Both legal reasoning and legal precedent are uniquely designed to resolve these conflicts and ambiguities. In an ideologically diverse society, there will never be a single &#8220;we&#8221; for every jurisdiction, for every situation. It is the task of the law, the lawyers, and all those who participate in our democracy to set the principles on which resolution depends.</p><p>We must apply that task to the challenge of AI safety.</p><h2><strong>The Road Ahead</strong></h2><p>Legal alignment applies to a spectrum of <a href="https://cfg.eu/advanced-ai-possible-futures/">AI futures</a>. In cases of runaway AI, it may help ensure that systems maintain, at the very least, adherence to the Constitution and criminal code. In mundane cases, a world of narrower AI agents will still create manifold risks, and legal alignment must constrain them.</p><p>To advance the field of legal alignment, researchers should prioritize <strong>evaluations, engineering, and governance</strong>. Evaluations will improve our measurement of legal compliance and legal reasoning. Engineering will help ensure, among other things, that pre-training, post-training, and scaffolding reflect robust legal alignment. Sensible governance can establish normative expectations for AI developers around legal alignment.</p><p>Over the next six months, this blog will focus on these three priorities. Throughout, I will often use the term &#8220;artificial intelligence&#8221; to refer to today&#8217;s large language models, like ChatGPT, Claude, and Grok. 
However, legal alignment is useful in a variety of AI domains, from the very narrow to the truly general.</p><p>Legal alignment presents an opportunity for the legal profession as well as for AI safety. Lawyers have helped found nations, enterprises, and institutions. Now, they have an opportunity to found the systems of our future.</p>]]></content:encoded></item></channel></rss>