When Washington Writes the Rules, the Wrong People Are in the Room
From OxyContin to AI, the same pattern keeps repeating — and the cost lands on you.
Here is a thought experiment. Imagine you are throwing a dinner party to decide how your household should spend its money for the next ten years. You invite the banker who sells you loans, the insurance broker who wrote your policy, and the contractor who wants to renovate your kitchen. You forget to invite your accountant, your spouse, and your kids.
How well do you think dinner is going to go?
That, in caricature, is how federal regulation is being written in the United States today — across pharmaceuticals, banking, and now artificial intelligence. The industry being regulated is at the table. The lobbyists are at the table. Often, former regulators now on industry payrolls are at the table. The people conspicuously missing? The independent experts who would tell you the truth without worrying about their next job, and the ordinary citizens who will live with the consequences.
This is not a new story. The economist George Stigler described it in 1971 as “regulatory capture” — the tendency of agencies, over time, to serve the interests of the industries they are supposed to oversee rather than the public. What is new is the velocity and the scale. And what is newer still is that we are about to do it all over again, in real time, on the most consequential technology of our lifetimes.
Let me show you the pattern. Three sectors, three acts, one playbook.
[[Image: A large empty round conference table in a wood-paneled hearing room, with most chairs occupied by silhouetted figures in suits and only two empty chairs clearly visible at the far end, dramatic lighting from above, editorial illustration style. Caption: “The rooms where regulations are written are not empty. They are full — of the wrong people.”]]
Act One: The pill that killed a quarter-million Americans
In 1995, an FDA medical officer named Curtis Wright signed off on a new opioid painkiller made by Purdue Pharma. The label he approved included a now-infamous sentence claiming that the drug’s delayed-release formulation was “believed to reduce the abuse liability.” No study supported it. Wright and Purdue’s executives had worked closely together in a hotel near FDA offices during the approval process.
About a year later, Wright left the FDA. He joined Purdue at roughly triple his government salary. The drug was OxyContin. The marketing campaign that sentence unlocked helped kill, by conservative counts, a quarter-million Americans between 1999 and 2019 from prescription opioid overdoses alone.
Fast forward to 2021. The FDA’s independent advisory committee votes 10-0 (with one “uncertain”) against approving Biogen’s Alzheimer’s drug aducanumab, marketed as Aduhelm. The evidence is inconclusive. The agency approves it anyway, using a surrogate endpoint the committee had been explicitly told was not on the table. Three committee members resign in protest, including Harvard’s Aaron Kesselheim, who called it “probably the worst drug approval decision in recent US history.” Biogen prices the drug at $56,000 a year. A subsequent congressional probe faulted the FDA for an “atypical” process that gave Biogen privileged access to agency staff.
Why does this keep happening? Follow the money — literally. Under the Prescription Drug User Fee Act, industry fees now fund roughly three-quarters of FDA’s drug review budget. The agency you count on to tell pharma “no” gets paid by pharma to say “yes” on a deadline. Every five years, FDA and industry sit down and re-negotiate the terms — a cycle that academic analyses have shown consistently favors industry through weaker standards and faster approvals.
Meanwhile, Scott Gottlieb, who ran the FDA from 2017 to 2019, joined Pfizer’s board of directors less than 90 days after resigning. Senator Elizabeth Warren called it “revolving door influence-peddling” that “smacks of corruption.” The law allowed it. It still does.
Act Two: The bank that no one saw coming (except the people who did)
Same movie, different theater. In March 2023, Silicon Valley Bank collapsed in the second-largest bank failure in US history. Depositors pulled $42 billion in a single day. The Federal Reserve’s own post-mortem was, for the Fed, unusually direct: “regulatory standards for SVB were too low,” and the 2019 weakening of rules under the Economic Growth, Regulatory Relief, and Consumer Protection Act — a bill the mid-sized bank lobby had pushed aggressively — “impeded effective supervision.”
Translation: the industry lobbied to loosen the rules. Congress loosened them. The rules failed. Taxpayers and uninsured depositors got stuck with the bill.
And who wrote the original rules that got loosened? The same sort of people. Robert Rubin moved from Goldman Sachs to Treasury Secretary, helped repeal the Glass-Steagall wall between commercial and investment banking, then left Treasury and joined Citigroup — the merger his repeal had legalized. He collected roughly $126 million in compensation over the following decade. Citigroup then required a $45 billion bailout in 2008. The Financial Crisis Inquiry Commission concluded Rubin “may have violated the laws of the United States.” He was never charged.
So far so good. (Well, obviously not — but you see the pattern.)
Act Three: The AI fork in the road — and we are standing on it right now
This is where things get interesting, and where you should pay attention. We are about to run this playbook again on artificial intelligence, except the stakes are considerably higher than anything pill-shaped or bank-shaped.
In October 2023, the Biden administration issued Executive Order 14110, the most comprehensive US AI governance action to date. It required the most powerful AI developers to share safety test results with the government. It created the US AI Safety Institute. You could argue about whether it went far enough. What you cannot argue is that it existed.
On January 20, 2025 — hours after his second inauguration — President Trump rescinded it. Three days later, Executive Order 14179 replaced it with a directive to pursue “global AI dominance” free of “ideological bias.” Venture capitalist David Sacks was appointed AI and Crypto Czar.
Meanwhile, in California, State Senator Scott Wiener introduced SB 1047 — a bill that would have required developers of the largest AI models to implement basic safety protocols and whistleblower protections. It passed the California Senate 32-1 and Assembly 48-16. A YouGov poll found 78% national voter support. More than 113 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta and xAI wrote in support. Governor Newsom vetoed it on September 29, 2024, after pressure from OpenAI, Meta, Google, and a politically coordinated opposition campaign.
Who was at the table? The largest AI labs, their lawyers, and well-funded academic voices aligned with their positions. Who was not? The independent safety researchers who had been warning about exactly this problem for years. The workers whose jobs are being reshaped by these systems. The parents watching their kids’ social lives get restructured by recommendation algorithms. The judges, teachers, loan officers, and doctors whose authority is being quietly transferred to models they cannot inspect.
In short: the people who will live downstream of every decision being made.
The “impartial people” — and why they are missing on purpose
Here is the part that deserves to be said plainly. When we say “impartial experts,” we do not just mean academics with no industry funding, though we mean that too. We mean:
The technical specialists who do not work at the lab being regulated. Most serious AI safety researchers are employed by the same handful of companies building the frontier models themselves. The few who are not (researchers at DAIR, the AI Now Institute, independent voices at Public Citizen) often struggle to get heard at the table where the rules are actually written.
Civil society groups representing affected communities. Patient advocates. Consumer-protection organizations. Labor unions. Disability-rights organizations. Parents. Teachers. People who have been harmed by previous regulatory failures and can tell you exactly where the weak points were.
Whistleblowers from inside the regulated companies. Frances Haugen — the Facebook product manager who in 2021 disclosed internal research showing that 13.5% of UK teenage girls said Instagram worsened suicidal thoughts — produced more accurate regulatory intelligence in six months than most agency reports produce in six years. Years later, no federal legislation has resulted.
Comparative-jurisdictional voices. The European Union’s AI Act was drafted with extensive input from academic researchers, civil society, and cross-sectoral expert groups. You do not have to agree with every provision to notice that the process was structurally different. The US preferred the “move fast and lobby” approach. Europe preferred the “move thoughtfully and consult” approach. You can see the results in the final texts.
Why are these voices missing? Well, because they do not fund campaigns, do not offer post-government board seats, and do not hire lobbyists by the platoon. The cryptocurrency industry alone spent over $133 million through the Fairshake super PAC in the 2024 election cycle. The Washington Post found that virtually none of the ads it ran even mentioned crypto. That is the sophistication of modern regulatory influence. It does not lobby — it elects.
Four futures to consider (if none of this changes)
Let me do a little imaginative work here — because abstract critique is easy, and making the stakes vivid is harder. Just imagine a 2030 in which the pattern has continued unchecked.
Just imagine a hiring system, deployed by a major bank, that silently deprioritizes applicants from certain zip codes. The model is proprietary. The regulators who would audit it have been defunded. The civil rights groups raising alarms cannot get the discovery they need. Three years later, a class action proves the discrimination. The fine is smaller than the quarterly profit.
Just imagine a new generation of AI-generated pharmaceuticals approved through an FDA pathway designed when “computational drug discovery” meant a chemist with a spreadsheet. The data is proprietary. The trial endpoints are chosen by the sponsor. The approval committee is advisory. One of the drugs turns out to cause a rare cardiac effect that does not appear until year four.
Just imagine an autonomous-vehicle fleet granted operational approval because the relevant agency accepted simulation data rather than on-road testing, after a quiet rulemaking comment period dominated by the manufacturers themselves. A bus stop full of schoolchildren, one Tuesday morning in 2030.
Just imagine a financial “stablecoin” backed by assets no one has independently audited, now holding a meaningful fraction of American retirement savings, because the regulator who would have asked hard questions was replaced by one who would not.
These are not predictions. They are extrapolations of trends already underway — trends that, in every case, proceed because the table is full of the regulated and empty of the rest of us.
What a sane system would look like
None of this is inevitable. A better system is neither utopian nor European — it is just not the one we have. A few concrete pieces, briefly:
Lengthen cooling-off periods for senior regulators to five years, not one — and make them cover board service and “consulting,” not just lobbying.
Publicly fund the review budgets of agencies that regulate industries. No more PDUFA-style user fees.
Mandate civil-society representation on advisory committees by statute, not by custom.
Protect whistleblowers with real teeth, including inside AI labs.
Require independent audit rights for any algorithmic system deployed in hiring, lending, healthcare, or criminal justice.
And finally — the hardest — rebuild the comment-and-rulemaking process so that a retired teacher has a realistic path to being heard, not just a trade association.
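To make “independent audit rights” concrete: the simplest version of an algorithmic hiring audit is a disparate-impact screen. Here is a minimal Python sketch using the EEOC’s four-fifths rule; the group names and numbers are invented for illustration, and a real audit would of course go much deeper than selection rates.

```python
# Hypothetical sketch of the kind of screen an independent auditor could run,
# assuming access to a hiring model's selection outcomes broken out by group.
# The "four-fifths rule" is the EEOC's standard disparate-impact screen:
# every group's selection rate should be at least 80% of the most-favored
# group's rate.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / total

def fails_four_fifths_rule(rates: dict[str, float]) -> bool:
    """True if any group's selection rate falls below 80% of the highest rate."""
    reference = max(rates.values())
    return any(rate / reference < 0.8 for rate in rates.values())

# Illustrative numbers only: selection rates by (hypothetical) zip-code cluster.
rates = {
    "cluster_a": selection_rate(90, 300),  # 0.30
    "cluster_b": selection_rate(50, 250),  # 0.20
}
print(fails_four_fifths_rule(rates))  # 0.20 / 0.30 ≈ 0.67 < 0.8, so True
```

The point of the sketch is not the arithmetic — it is that the check is trivial once an auditor has the data. The fight is over who gets access to the outcomes in the first place.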
The HAIA Foundation exists precisely because AI is the domain where we still have a chance to build the system right the first time, rather than learning the hard way in body counts and bailouts. That window is closing quickly. If the last forty years of American regulation has a single lesson, it is this: the room where the rules are written gets crowded fast, and it is always the wrong people who show up first.
My vote? Invite the rest of the table. Before dinner is over and the bill has already been signed.

