Attorney General Phil Weiser letter to Congress RE: AI Moratorium Comment

Dear Speaker Johnson, Majority Leader Thune, Minority Leader Jeffries, and Minority Leader Schumer:

As Attorney General for the State of Colorado, I write separately from my State Attorney General colleagues to emphasize certain points made in their letter opposing a congressionally imposed broad moratorium on state action in the artificial intelligence (“AI”) arena, as well as to share additional views informed by my experience in Colorado.

First, let me emphasize the closing point made in the bipartisan State Attorney General letter: “If Congress is serious about grappling with how AI’s emergence creates opportunities and challenges for our safety and well-being, then the states look forward to working with you on a substantive effort.”  Indeed, now is the time for Congress to undertake a serious and sustained effort to create a governance framework for AI.  We stand ready to work with you on that effort.

This work is urgent. Discrete and acute harms related to AI-based technology are already becoming apparent. I recently met with parents who lost children to suicide or overdose following engagement with social media and AI-enabled chatbot platforms. Their message to lawmakers was direct and powerful: don’t wait.[1] Reports of problematic chatbot behavior are widespread—from encouraging suicide and self-harm, to engaging children in graphic, inappropriate sexual conversations, to promoting violence or impersonating licensed therapists. These dangers demand meaningful federal oversight.

Ideally, Congress would enact an effective and sensible governance framework for AI that establishes clear, enforceable guardrails while empowering state attorneys general to uphold them. Because potential uses for AI vary widely, any federal laws should be targeted, flexible, and responsive to the varying risks AI poses across different sectors. Assuming such a framework were robust and protective, it would be appropriate—as I have explained in the context of data privacy—for the federal government to preempt state regulation while allowing for state attorney general enforcement.[2] As I have previously cautioned, “[t]his is a risky time for our nation, as state action [in the AI context] risks creating a patchwork quilt of obligations and unnecessary and burdensome costs, such as the Colorado law’s imposition of costs on those who use AI in certain contexts like hiring.”

The absence of thoughtful federal action leaves consumers exposed and creates the risk of the patchwork quilt approach outlined above.  At the same time, an ill-defined and broad moratorium on any state action—in the absence of a substantive federal framework—would only exacerbate known risks and leave communities across the country vulnerable to harms, such as those posed by AI chatbots.  In short, preventing oversight of AI in all contexts—through a lengthy moratorium not limited in scope—promises great mischief, including the risk of immunity for those companies using AI to cause harm.[3]

As we pursue American leadership in this new technological frontier, the central question around AI governance should be how to create a framework that encourages innovation, competition, and economic growth while protecting consumers and promoting trust.  Those goals require effective oversight, a strong system of enforcement, and collaboration across state and federal governments—and cooperation across party lines.

I was dismayed by the Trump administration’s threat, in a draft Executive Order, to withhold funds from Colorado because Colorado adopted a state AI anti-discrimination law.  To be sure, I believe Colorado’s law has much room for improvement, and I have called on Colorado’s General Assembly to revise it.[4]  But attempts by the federal government to coerce policy change through intimidation and the illegal withholding of funds are unlawful and unconstitutional.  If the administration proceeds to adopt this draft order, we will again turn to the courts to defend the rule of law and protect the people of Colorado.

Congress should take the work of crafting an AI governance system seriously and approach the challenge with a commitment to protecting consumers, encouraging competition, and enabling trust.  As noted above, the optimal framework to do so would, in most areas, preempt state authority and authorize state attorneys general to enforce uniform sensible and effective federal standards.  But until such a framework can be developed, it would be a mistake to impose a system of broad preemption along the lines now being contemplated.

I recognize the concern that some state laws may impose undue burdens, hinder innovation, or undermine the introduction of new products or services through unwise action. And I believe in and support the promise of emerging AI tools, which can have a positive impact in areas ranging from water management to wildfire risk mitigation to public safety.  But where states adopt problematic laws, the answer is to improve them, not to block all states from adopting any forms of consumer protection, particularly those aimed at safeguarding our children.

As elected leaders, we have a shared responsibility to address new and emerging challenges with careful and timely action and to forge a responsible and responsive framework for AI governance. I join my colleagues in urging you to reject calls for a blanket and broad moratorium and instead encourage you to focus your attention on developing the thoughtful, principled framework this moment demands.

Sincerely,

Phil Weiser
Attorney General

***

[1] https://www.denver7.com/news/local-news/grieving-parents-push-for-social-media-changes-after-veto-of-sweeping-colorado-bill.

[2] Here is what I said in 2019 on that topic: “A first best solution would be a comprehensive federal law that protected consumer privacy. Such a law, like the Dodd-Frank law, should authorize State AGs to protect consumers. When Congress starts working on such a law, I will be eager and willing to support such an effort. After all, differing laws and reporting requirements designed to protect privacy creates a range of challenges for companies and those working to comply with different—and not necessarily consistent—laws.”  https://coag.gov/press-releases/04-11-19/.

[3] In our litigation against social media companies related to mental health harms, the companies are attempting to invoke Section 230 immunity—adopted at a very different time and for very different purposes—as a defense against their liability for deceptive actions and for the knowing implementation of their product in ways that they know harm kids.  See, e.g., Brief for Meta Defendants-Appellants, People of the State of California, et al. v. Meta Platforms, Inc., et al., Nos. 24-7032, 24-7037, 24-7265, 24-7300, 24-7304, 24-7312 (9th Cir. March 31, 2025).

[4] https://coag.gov/blog-post/attorney-general-phil-weiser-prepared-remarks-artificial-intelligence-symposium-9-30-2025/.