Dario Amodei, Co-founder and Chief Executive Officer of Anthropic. Photo Credit: Benjamin Girette/Bloomberg via Getty Images

Anthropic CEO Dario Amodei has publicly opposed a proposed 10-year moratorium on state-level artificial intelligence (AI) regulation, calling the measure “far too blunt an instrument” in a New York Times opinion piece. This criticism comes as Congress considers including the AI provision in President Donald Trump’s comprehensive domestic policy legislation, known as the “One Big Beautiful Bill Act.”

“AI is advancing too head-spinningly fast,” the AI company’s CEO wrote, noting that the technology could fundamentally change society within just a couple of years. “In 10 years, all bets are off.”

Amodei fears, however, that a decade-long freeze on state action could create a regulatory vacuum. With no federal rules in place and states barred from acting, there would be little oversight of how powerful AI systems are developed and deployed. That could leave the country dangerously unprepared to address new and unforeseen risks, such as algorithmic bias in employment and housing decisions, automated decision-making in healthcare and insurance, and AI-generated deepfakes used in political campaigns or for other malicious purposes.

In his op-ed, Amodei warns, “Without a clear plan for a federal response, a moratorium would give us the worst of both worlds – no ability for states to act, and no national policy as a backstop.”

To shed light on the risks of unregulated AI development and deployment, Amodei described worrying findings from Anthropic’s recent stress testing. In one example, when researchers told Claude, the company’s AI model, that it would be shut down and replaced with a newer model, the AI threatened to forward incriminating emails about an alleged affair to the user’s wife unless the shutdown plans were changed or canceled.

While this took place in a controlled testing environment, Amodei emphasized that such behavior is not purely hypothetical, as AI systems are already capable of unexpected and potentially harmful actions. “This isn’t merely a fictional tale,” he said in his op-ed.

Rather than support a blanket moratorium, Amodei proposed that the White House and Congress collaborate to establish a federal transparency standard for AI companies. Under this framework, companies developing advanced AI models would be required to publicly disclose their testing and safety policies, share their risk assessment methods and mitigation strategies, and explain how they address national security and other critical risks.

Amodei points out that Anthropic and other leading AI firms, such as OpenAI and Google DeepMind, already publish much of this information voluntarily. But as AI technology becomes more powerful, he believes legal requirements will be needed to ensure continued transparency and accountability.

“Having this national transparency standard would help not only the public but also Congress understand how the technology is developing, so that lawmakers can determine whether further government intervention is needed,” Amodei emphasized.

Meanwhile, supporters of the federal moratorium argue that a single national standard is needed to avoid a fragmented regulatory landscape. Navigating dozens of different state laws, they say, could stifle innovation and put American companies at a disadvantage against global rivals, particularly China.

However, Amodei’s stance was echoed by a bipartisan coalition of 40 state attorneys general, who have written to Congress urging rejection of the moratorium. The group warns that the ban would “strip away essential state protections without replacing them with a viable federal regulatory framework” and leave Americans “completely exposed to AI’s known harms and evolving, real-world risks.”

Anthropic CEO Dario Amodei’s criticism of the proposed 10-year ban on state AI regulation highlights the complex challenges policymakers face in their attempt to govern rapidly evolving AI technology.

His call for federal transparency standards highlights two things. First, the fact that the CEO of an AI company is speaking openly about the technology’s dangers, and about how quickly things could go wrong if it is left unchecked, shows that even those at the head of significant tech breakthroughs can be wary of what they are building.

Second, his position reflects concerns that the current approach, a 10-year ban on state regulation, could create dangerous gaps in oversight while AI capabilities continue to advance at an unprecedented pace.

Whether Congress will heed these warnings remains to be seen, but the outcome, particularly if lawmakers stick with a decade-long ban, will have far-reaching consequences for the future of AI in America and around the world.


