The World Needs an AI Constitution. Here Are the 25 Things It Must Include.
Why voluntary pledges and toothless summits won’t cut it — and what a binding global framework actually looks like.
Let me start with a thought experiment.
Imagine it is 1945. The atomic bomb exists. It has been used — twice. Now imagine that instead of the international community spending the next several decades building the Nuclear Non-Proliferation Treaty, the International Atomic Energy Agency, and an entire architecture of global arms control, the world’s response was... a series of dinner parties. Nice declarations. Voluntary pledges from the companies enriching uranium. A few strongly worded letters.
You would call that insane.
And yet — here we are with artificial intelligence.
We have a technology that Geoffrey Hinton, the Nobel Prize-winning pioneer who helped create modern AI, says carries a 10–20% chance of causing human extinction. We have a technology that Sam Altman himself — the CEO of OpenAI — says urgently needs international regulation modeled on nuclear safeguards. We have Dario Amodei, CEO of Anthropic, publicly stating he is deeply uncomfortable with a handful of companies — including his own — making these decisions for all of humanity.
And the sum total of binding international AI law? One treaty. The Council of Europe Framework Convention on AI, opened for signature in September 2024. It has not yet entered into force. It exempts national security uses. And most of the world has not signed it.
Adequate? No. Not even close.
Why a Convention, Not Just Guidelines
Here is the core problem, and I want to be direct about it: voluntary frameworks do not work for technologies that can reshape civilization.
We know this. We learned it with nuclear weapons, with chemical weapons, with climate change. The OECD AI Principles (adopted by 47 countries), UNESCO’s Recommendation on the Ethics of AI (endorsed by all 193 member states), the Bletchley Declaration, the Seoul commitments — they are all well-intentioned. But not one of them is enforceable. Not one carries penalties for non-compliance. Not one has teeth.
Meanwhile, the EU AI Act — the world’s first comprehensive AI regulation — applies only within Europe. The United States, under its current administration, has explicitly rejected international AI governance, refused to sign the Paris AI Action Summit declaration, and dismantled its own domestic safety infrastructure. China is building its own parallel regulatory system. India and the Global South are racing to develop AI without waiting for permission from anyone.
What we have, in short, is a patchwork. A regulatory quilt with gaping holes. And AI does not respect borders.
This is why we need something bigger. Not guidelines — a convention. Not recommendations — a constitution. A binding, enforceable, globally negotiated framework that establishes the non-negotiable rules for developing and deploying artificial intelligence. Think of it as the Geneva Conventions for the digital age — except instead of regulating how wars are fought, it regulates how the most powerful technology in human history is built, governed, and controlled.
Yoshua Bengio, the Turing Award winner who chaired the landmark International AI Safety Report, has said it plainly: international AI agreements are in every nation’s rational self-interest. You want the other side to follow rules — and they want the same from you. That is the basic logic behind every successful arms control treaty in history.
Stuart Russell, whose textbook literally defines the field of AI, has called for regulation modeled on ICAO — the body that governs international aviation — where nations agree to standards, write them into law, and national regulators ensure compliance. He co-convenes the International Dialogues on AI Safety, explicitly modeled on the Cold War Pugwash Conferences that helped prevent nuclear catastrophe.
The precedents exist. The expertise exists. The institutional will — at least from enough countries — exists. What does not exist yet is the document itself.
So here is my proposal for what it should contain.
The AI Constitution: 25 Elements, in Order of Priority
I will not claim to cover everything here. AI governance is a sprawling, multi-dimensional challenge that smarter people than me are grappling with daily at places like the Center for AI Safety, the Future of Life Institute, the AI Now Institute, the Centre for the Governance of AI, and the Institute for AI Policy and Strategy. But I believe these 25 elements represent the structural minimum — the load-bearing walls without which the entire house collapses.
I have organized them into three tiers: the top 3 (non-negotiable foundations), the next 7 (critical architecture), and the remaining 15 (essential infrastructure).
TIER 1: The Non-Negotiables (Top 3)
These are the elements without which no AI constitution is worth the server it is stored on.
1. Mandatory Human Oversight of High-Risk AI Decisions
No AI system — none, anywhere, under any circumstances — should make autonomous decisions about human life, liberty, criminal punishment, access to healthcare, or eligibility for fundamental services without meaningful human oversight.
Why is this number one? Because without it, everything else is cosmetic. Just imagine: an AI system denies your mortgage application, flags you as a terror suspect, or recommends your child be removed from your home — and no human being reviewed the decision. This is not science fiction. It is already happening. The Carnegie Endowment for International Peace found at least 75 countries actively deploying AI surveillance systems. Algorithmic decision-making in criminal justice, welfare, immigration, and credit scoring is expanding globally — often without any appeals process.
The constitution must establish a universal right to human review of consequential AI decisions.
2. A Global Ban on Lethal Autonomous Weapons Without Human Control
This is, as Austria’s Foreign Minister put it at the 2024 Vienna Conference, our generation’s Oppenheimer moment. The United Nations Office for Disarmament Affairs has been working on this since 2014. The International Committee of the Red Cross has called for binding restrictions. In December 2024, the UN General Assembly voted overwhelmingly — 166 in favor, 3 against — for a resolution on autonomous weapons. Stuart Russell and over 3,000 AI researchers signed an open letter warning that these systems could trigger a third revolution in warfare.
Just imagine a world where swarms of AI-powered drones can be deployed by any state — or non-state actor — with no human being deciding who lives and who dies. Where the cost of targeted killing drops to nearly zero. Where the decision to take a life is made in milliseconds by an algorithm trained on data you cannot audit.
A global convention must prohibit fully autonomous lethal weapons systems. Period.
3. Transparency and Explainability Requirements for All AI Systems Affecting the Public
If an AI system makes a decision about you — whether it is screening your job application, diagnosing your illness, determining your insurance premium, or moderating the content you see — you have a right to know that AI was involved and to understand, in meaningful terms, how the decision was made.
Kate Crawford, co-founder of the AI Now Institute, has argued that AI systems are fundamentally systems of power. Her point is critical: without transparency, there is no accountability. Without accountability, there is no governance. And without governance, you have a handful of tech companies exercising quasi-sovereign power over billions of people — which is exactly what Ian Bremmer and Mustafa Suleyman described as the emergence of a technopolar order.
TIER 2: The Critical Architecture (Next 7)
These elements build on the non-negotiable foundations. Without them, the top 3 remain aspirational.
4. An International AI Safety Body (The “IAEA for AI”)
Altman proposed it. Bengio endorsed the concept. The aitreaty.org coalition — led by Bengio, Max Tegmark, and Hinton — calls for a Compliance Commission modeled on the IAEA, with authority to monitor, inspect, and license the most powerful AI training runs. The constitution must establish this body with genuine enforcement power — not as another advisory panel.
5. Binding Anti-Discrimination and Bias Auditing Standards
Timnit Gebru and Joy Buolamwini’s landmark Gender Shades research showed facial recognition misidentifying Black women at dramatically higher rates. This is not a theoretical concern — it is a documented, systematic failure baked into systems already deployed in law enforcement, hiring, and healthcare. The constitution must require independent bias audits before deployment of high-risk AI and mandate that results be public.
6. Mandatory Pre-Deployment Safety Testing for Frontier Models
If you cannot put a new pharmaceutical on the market without clinical trials, why can you release an AI model capable of generating bioweapon instructions or facilitating cyberattacks without any mandatory safety evaluation? The UK’s AI Security Institute has tested over 30 frontier models — but participation is voluntary. The constitution must make pre-deployment testing mandatory above defined capability thresholds.
7. Data Sovereignty and Privacy Protections
AI systems are trained on data — often scraped from the internet without consent, often containing personal information, often reflecting the biases and blind spots of the cultures that produced it. The constitution must establish minimum global standards for data consent, data sovereignty (especially for indigenous communities and the Global South), and the right to opt out of having your data used for AI training.
8. Intellectual Property and Creative Rights Protections
Here is where things get personal for millions of artists, writers, musicians, and creators. Generative AI systems trained on copyrighted works — without permission, without compensation — are already displacing creative professionals. The constitution must establish clear frameworks for attribution, compensation, and consent when AI systems are trained on or generate content derived from human creative works.
9. Environmental Impact Standards
Training a single large AI model can consume as much electricity as a small city uses in a year. The water used to cool data centers is staggering. As AI scales, its environmental footprint scales with it. The constitution must require environmental impact disclosures for large-scale AI training and incentivize the development of energy-efficient AI architectures. We cannot solve climate change with a technology that accelerates it.
10. Protection Against AI-Powered Mass Manipulation
Yuval Noah Harari has warned that AI has mastered the operating system of human civilization: language. UNESCO has flagged deepfakes as creating a fundamental crisis of knowing — where citizens can no longer distinguish truth from fabrication. The constitution must ban the deployment of AI for covert political manipulation, require watermarking of AI-generated content, and establish criminal penalties for the creation and distribution of non-consensual deepfakes.
TIER 3: Essential Infrastructure (Next 15)
These are the mechanisms that turn principles into practice.
11. A Global AI Incident Reporting System. When an AI system causes harm — a wrongful arrest, a medical misdiagnosis, a financial crash — there must be a centralized, international mechanism for reporting, investigating, and learning from AI failures. Aviation has the NTSB and international equivalents. AI has nothing.
12. Liability Frameworks for AI-Caused Harm. Who is responsible when an autonomous vehicle kills a pedestrian? When an AI medical device misdiagnoses cancer? When an algorithmic trading system crashes a market? The constitution must establish clear chains of liability — from developers to deployers to operators — so that “the algorithm did it” is never an acceptable defense.
13. Right to Human Alternative. Every person should have the right to request a human alternative to any AI-mediated decision in essential services: healthcare, education, justice, immigration, banking, and housing. No one should be forced to interact with an AI system for services fundamental to their dignity and survival.
14. Protections for AI Researchers and Whistleblowers. Timnit Gebru was fired from Google after co-authoring a paper highlighting risks in large language models. She is not the only one. The constitution must protect researchers, employees, and whistleblowers who raise legitimate safety or ethical concerns about AI systems from retaliation.
15. Prohibition of AI-Enabled Social Scoring. China’s social credit system was the first large-scale experiment. It will not be the last. The constitution must prohibit governments and private entities from using AI to assign citizens behavioral scores that determine access to services, travel, or economic opportunity.
16. Children’s Protections. AI-powered recommendation algorithms are already shaping the cognitive development, mental health, and attention spans of an entire generation. The constitution must establish special protections for minors — including restrictions on AI-driven behavioral profiling and content targeting of children.
17. Equitable Access Provisions (Global South Inclusion). If the AI constitution is written solely by and for wealthy nations, it will fail. The Atlantic Council noted the Global South’s rising voice at the Paris AI Action Summit. The constitution must include mechanisms for technology transfer, capacity building, and ensuring that AI’s benefits reach developing nations — not just its risks.
18. Mandatory Disclosure of AI Interactions. When you are talking to an AI, you should know it. When an AI is making a decision about you, you should know it. Full stop. No dark patterns. No chatbots pretending to be human.
19. Compute Governance and Training Run Thresholds. The aitreaty.org proposal calls for global compute thresholds — essentially, any AI training run above a certain scale (measured in floating point operations) triggers mandatory safety protocols, registration, and international oversight. This is the equivalent of requiring permits for nuclear reactors.
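To make the threshold idea concrete, here is a minimal sketch of the kind of check a treaty registry might run. It uses the standard rule-of-thumb estimate that training compute is roughly 6 × parameters × tokens; the specific threshold value below is hypothetical and chosen only for illustration (real proposals, such as the EU AI Act's systemic-risk trigger, set their own figures).

```python
# Sketch of a treaty-style compute-threshold check.
# The 6*N*D heuristic (FLOPs ~ 6 x parameters x training tokens) is a
# widely used approximation; the threshold below is hypothetical.

TRAINING_FLOP_THRESHOLD = 1e25  # hypothetical treaty trigger, in FLOPs

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * n_params * n_tokens

def requires_registration(n_params: float, n_tokens: float) -> bool:
    """Would this training run trigger mandatory oversight?"""
    return estimated_training_flops(n_params, n_tokens) >= TRAINING_FLOP_THRESHOLD

# Example: a 70B-parameter model trained on 15 trillion tokens
# comes to roughly 6.3e24 FLOPs, under this particular threshold.
print(requires_registration(70e9, 15e12))
```

The point of the sketch is that the trigger is objective and auditable: unlike vague capability claims, a FLOP count can be estimated from hardware procurement and training duration, which is what makes it attractive as a verification mechanism.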
20. Anti-Concentration and Competition Provisions. Right now, the development of the most powerful AI systems is concentrated in a handful of companies — most of them American. Bremmer has argued that these firms exercise quasi-sovereign power. The constitution must include provisions preventing monopolistic control of AI infrastructure, data, and capability.
21. Cultural and Linguistic Diversity Requirements. AI systems trained predominantly on English-language data reproduce the biases, perspectives, and knowledge gaps of the Anglophone world. The constitution must require that high-impact AI systems be tested across diverse linguistic and cultural contexts — and that non-English-speaking populations are not treated as afterthoughts.
22. Periodic Review and Adaptation Mechanisms. AI evolves faster than law. The constitution must include mandatory review cycles (every 3–5 years), sunset clauses for specific provisions, and fast-track amendment processes that allow governance to keep pace with technological change. Failure to build in adaptability is failure by design.
23. International Cooperation on AI Safety Research. The aitreaty.org proposal calls for a “CERN for AI Safety” — a collaborative international laboratory where researchers from all nations work together on alignment, interpretability, and safety. AI safety should not be a competitive advantage — it should be a shared global public good, as Bengio has argued.
24. Restrictions on AI in Democratic Processes. AI-generated content in political campaigns, AI-powered voter profiling, AI-driven gerrymandering, AI-generated candidates (yes, this is coming) — all of these represent existential threats to democratic governance. The constitution must establish guardrails specifically protecting electoral integrity and democratic participation from AI manipulation.
25. An Enforcement Regime with Meaningful Penalties. This is last but arguably most important. A constitution without enforcement is a wish list. The EU AI Act carries penalties up to 35 million euros or 7% of global turnover. The international convention needs equivalent teeth — including sanctions, trade restrictions, and mechanisms for holding both states and corporations accountable.
The Clock Is Ticking
I want to end with a scenario — not to frighten, but to focus.
It is 2030. Artificial general intelligence — or something close enough to make the distinction academic — has arrived. It can write legislation, design drugs, run military operations, manage economies, and generate persuasive content indistinguishable from human thought. It is controlled, in practice, by three or four companies and two or three governments.
There is no binding international framework. There is no global enforcement body. There is no agreed set of red lines.
Now ask yourself: in that world, who is making the rules?
Mustafa Suleyman wrote that the containment problem is the defining challenge of this century. Nick Bostrom compared humanity to children playing with a bomb. Dario Amodei has put the probability of catastrophic outcomes as high as one in four.
These are not alarmists on the margins. These are the people building the technology.
The window for action is not closed. But it is closing. An AI constitution — a real one, binding, global, enforceable, and inclusive — is not a luxury. It is a necessity. And the time to write it is now, while we still can.
One can only hope that we are wise enough to act before hope is all we have left.
This article was published by the HAIA Foundation. To stay informed about AI governance and join the conversation, subscribe to our newsletter at substack.haia.foundation. Your voice matters in shaping the rules for a technology that will shape everything else.
References and further reading are hyperlinked throughout this article. Key frameworks cited include the OECD AI Principles, UNESCO Recommendation on the Ethics of AI, the EU AI Act, the Council of Europe Framework Convention on AI, and the International AI Safety Report. Organizations working on these issues include the Center for AI Safety, the Future of Life Institute, the AI Now Institute, the Centre for the Governance of AI, the International Dialogues on AI Safety, and the Partnership on AI.