Prepared remarks: Artificial Intelligence Symposium (Sept. 30, 2025)
Towards Effective AI Governance: Promoting Trust and Preventing Harms
It is a pleasure to join this Artificial Intelligence Symposium for an important conversation during a pivotal moment for technology policy. We are both bracing for and working to adapt to the societal impact of artificial intelligence, which promises to deliver significant benefits to, and pose significant challenges for, our society. Yet, in many ways, our legal and regulatory frameworks are still struggling to respond to the last great technological transformation—the Internet. In my talk, I will outline why the way forward on AI is a thoughtful, balanced regulatory response that supports both innovation and responsible governance.
I’ll begin my talk by situating AI in the broader historical pattern of emerging technologies and offer some foundational considerations for regulatory policy in the face of technological change. From there, I will offer a framework for approaching AI governance, drawing lessons from responses to prior emerging technologies and underscoring what’s at stake in ensuring the AI ecosystem is dynamic, competitive, and centered on the public interest. Finally, I will discuss the opportunities and challenges ahead in Colorado.
I. The Promise and Perils of AI
The arrival of emerging technologies—whether the Internet, cloud computing, blockchain technology, what have you—often follows a predictable journey known as the hype cycle.[1] The first phase is amazement at the seemingly limitless possibilities a new technology brings. Soaring expectations, however, quickly begin to outpace the technology’s capabilities, and we enter the next phase of the hype cycle—called (perhaps melodramatically) “the trough of disillusionment.” Consider the case of the Internet. In the late 1990s, the promise of the Internet’s transformative impact was so great that even a single website, Pets.com, quickly attracted massive investment. As the new technology—and individual companies along with it—fails to meet the lofty and unrealistic expectations set in the early days, there is a bust, as we saw with the Internet in the early 2000s. But, over a longer time horizon, an emerging technology like the Internet can truly transform our society, with lasting impacts on commerce, entertainment, communication, and more.
AI promises to be a transformative technology along the lines of the Internet. And it, too, has its own hype cycle. Investment in AI startups is surging, to take one leading indicator. Venture capital investments in AI rose to $65 billion in the first quarter of 2025, up more than 5 times from the quarter before ChatGPT launched in 2022.[2] Companies across industries are adding “AI” to their names and mentioning the technology in earnings calls—and as a result many of them have seen their stock prices rise.[3] Some claim, moreover, that “artificial general intelligence” is right around the corner, that AI will soon replace a significant portion of white-collar jobs,[4] and that we should start preparing for a world radically transformed by AI.
AI, like the Internet, constitutes a transformative technology that we need to take seriously. If implemented properly and responsibly, AI promises to deliver significant benefits to consumers and to society. Already, it is being used to help doctors detect disease earlier and more accurately,[5] to enable more precise climate predictions, to detect wildfires in their early phases, and to support companies in decarbonizing their operations.[6] Here in Colorado, for instance, startups like Amp Robotics are applying AI to increase recycling efficiency, and Magic School is using AI to support teachers, saving them valuable time and making them more effective.
Like the Internet, AI is likely to have its dark sides. For years, we failed to examine how social media impacted kids. Today, we know that adolescents are being harmed by sustained use of these technologies, and there is now significant litigation as well as calls for oversight to address these concerns.[7] Similarly, AI’s growing capacity to mimic human behavior raises concerns about, for example, the impact of AI chatbots on children.[8]
Another challenge raised by AI is the development of AI-generated “deepfakes,” which are becoming a dangerous and sophisticated tool. These hyper-realistic images are being used in disturbing ways, including to create explicit and pornographic content without individuals’ consent. Deepfakes of public figures and politicians are also being used to spread misinformation and could potentially undermine the integrity of democratic processes. In some instances, companies have facilitated these illegal deepfake tools by offering services like hosting, sign-on systems, and payment options.[9]
In Colorado, we can both unleash innovation and economic growth from AI and respond to emerging risks. To that end, we are looking to welcome and adopt AI in ways that address societal challenges. We have enacted legislation that targets the use of deepfakes of those seeking elected office[10] and established a cause of action for victims of nonconsensual intimate imagery (including imagery generated by AI).[11] Approaching AI with both its opportunities and challenges in mind is imperative.
II. AI Panic vs. AI Policy
A core challenge in regulating emerging technologies is to address genuine harms without stifling innovation, while ensuring that new tools can mature to deliver benefits to society. This requires identifying the specific risks we are concerned about and addressing them in a focused manner. Too often, however, AI is invoked as a concern without precision. While some anxieties about the technology are indeed justified, turning AI into a “catch-all” for abstract worries about our digital future would be counterproductive and misdirected.
The first step in regulating AI is to define what it is—and what it is not. It is worth recognizing that AI is not entirely new. While generative AI has recently drawn mainstream attention, earlier forms of the technology—such as machine learning and natural language processing—have been used for over a decade, including in industries like banking and healthcare.
At its core, artificial intelligence is a type of software. Much like the transition from desktops to mobile phones or from local programs to cloud computing, AI is the next step in how software is developed and deployed. But not all software is AI, and it is becoming increasingly difficult to disentangle the two as more and more software leverages the technology.
Importantly, not all algorithms are AI. Many rules-based programs make decisions or carry out tasks without AI’s adaptive capabilities. For example, the Consumer Financial Protection Bureau took action when Citigroup violated the Equal Credit Opportunity Act by using non-AI tools that discriminated against Armenian American credit applicants based on their surnames.[12] These distinctions matter, as sound policy begins with precision. If we don’t carefully define AI, we risk unduly burdening many software applications while overlooking both the persistent and novel challenges AI genuinely poses.
This introduces a second threshold step in developing the appropriate regulatory approach to new technology: accurately diagnosing the underlying concerns regulation should address. Consider, for example, an AI-powered tool that helps low-income renters navigate complex housing applications versus a non-AI, rules-based system that screens applicants for that housing using historically biased data. If we are not precise about our concerns, we could imagine a flawed regulatory response that subjects the AI assistant to strict compliance rules while allowing the more harmful—but technically not AI—system to escape regulatory guardrails. In short, our oversight must target the nature and the impact of the risk and not merely fixate on “AI” as a label.
As a society, we need to ask whether we are concerned about artificial intelligence posing inherent risks or about the risks it presents when used in automated decision-making without human oversight. Many AI systems, particularly large language models, operate as “black boxes,” making it difficult to trace how outputs are generated. This lack of transparency can be problematic when the technology is used in high-stakes domains like healthcare and financial services. Such a challenge is not unique to AI systems, however. Non-AI automation and algorithmic decision-making can raise similar concerns about accountability and transparency.
Another key issue is bias. Large language models can reflect and amplify biases in their training data. But, as illustrated in the Citigroup example, this is also a risk in traditional statistical modeling and in human-centric processes as well. Addressing bias requires a broader framework not limited to AI-specific tools, and in some instances, a law of general applicability may be more effective at targeting concrete concerns than one targeted solely at the use of AI.
III. Establishing Responsible and Responsive Oversight of AI
AI is the next progression in the software continuum, and our responses to earlier advances yield two important lessons as we develop strategies for AI governance. First, a regulatory response should be principles-based rather than rules-based. Second, a governance framework should be adaptable so that it can keep pace with technology’s development.
For a cautionary tale in technology policy, consider the evolution of Section 230. When enacted, Section 230 aimed to nurture the early Internet by shielding online platforms from liability for user-generated content.[13] The theory was that platforms should not be treated as publishers—and thus not held liable—for good-faith efforts to moderate content and keep their spaces safe. But Section 230’s statutory language is overly broad and largely static, leaving it unable to evolve as the platforms it governs have grown from upstarts to behemoths. Consequently, large tech companies now regularly invoke Section 230 as a shield against liability, even when their platforms take on a much more active role than the passive intermediary role that Section 230 originally contemplated. Much like traditional media publishers, social media platforms now prioritize and promote content via their algorithms. These algorithms, which seek to drive user engagement, can actively amplify content that causes harm.[14]
Tensions over Section 230 are now playing out in the federal courts. In Twitter v. Taamneh, the Supreme Court ultimately sidestepped a Section 230 question in a case about whether online platforms can be held liable for allegedly aiding and abetting terrorist activity by recommending user-generated content linked to terrorist organizations.[15] By contrast, in Anderson v. TikTok, the Third Circuit ruled that Section 230 did not fully immunize TikTok from liability for claims based on its algorithmic recommendations—specifically, the promotion of a dangerous viral challenge that killed a young user.[16]
When algorithms cause harm, they should be subject to enforcement. The way Section 230 operates today, however, threatens to sideline such enforcement efforts, straying far from the law’s original spirit of protecting emerging platforms that were merely passive conduits for the speech of others. To the extent that Section 230 does not operate in an adaptable way, it remains vulnerable to being used as a shield far removed from its original purpose.
The Colorado Privacy Act recognized that a new framework was needed to ensure consumers are protected, informed, and in control of their personal data in the digital world. The Act uses a principles-based approach to protect the personal data of Colorado residents. Rather than anchoring itself in prescriptive requirements based on current technology, the law establishes a broad set of rights and protections for consumers that will remain relevant as technology advances—grounded in the foundational principle that consumers have a right to access and control their data. This premise guided, for example, our response to the risk that sensitive data (a term the law clearly defines) can be shared without consumer knowledge and used in ways consumers may not approve of. These concerns called for a more protective opt-in consent regime appropriate for higher-risk data. We also adopted a “universal opt-out mechanism” that gives users a single system to communicate their preferences on how their data is collected and used. This allows consumers to easily maintain control over their data while giving companies a clear way to comply with the law as they continue to innovate.
The rulemaking process for the Colorado Privacy Act was informed by an inclusive and transparent stakeholder process. By actively and thoughtfully engaging with individual consumers, privacy advocates, and industry representatives, we were able to develop rules that carefully balanced the concerns of each stakeholder group. Further, our enforcement of the Colorado Privacy Act is harms-based. We are looking for those engaged in willful violation of the law—not those acting in good faith who make minor or unintentional errors—and we are focused on education and building a culture of compliance. Taken together, the Act’s framework enables a regulatory approach that offers both the predictability and flexibility requisite for technologists to invest in innovation while giving consumers confidence about their rights in an evolving digital world.
The Colorado Privacy Act offers a model for our oversight of AI—a model that supports the development of trustworthy tools without hampering innovation. By contrast, a prescriptive regulatory regime is not well-suited to the challenge. Notably, a principles-based approach that prioritizes transparency, reliable testing, and continuous evaluation will be far more adaptable. Regardless of the specific path we take, our oversight of AI should be harms-based and acknowledge that certain use cases for artificial intelligence pose less cause for concern than others. For example, an AI-assisted grammar or scheduling tool likely poses less risk than AI tools that make healthcare or credit decisions without human intervention.
There are a few other critical principles that an AI governance framework should follow. First, the burden of regulation should be proportional to the risk posed. This is why the principal focus should be on developers of high-risk systems, like “foundation models,” rather than on smaller companies that deploy those systems downstream. Second, as with the Colorado Privacy Act, it is important to protect the ability of early-stage companies—which have fewer resources than incumbents to manage compliance burdens—to innovate; that’s why it is important for obligations not to kick in until firms reach a certain size.[17] Third, as with the Colorado Privacy Act, it is important that we avoid a fragmented patchwork of state-specific laws that create inconsistent standards and higher compliance costs without delivering corresponding benefits, and that we instead work to develop state laws that are interoperable with one another.
Finally, as policymakers develop appropriate obligations, we should draw lessons from past successes. For example, as we saw with the Colorado Privacy Act, risk assessments do not need to be complex or overly costly to be impactful when they are rooted in substance over form. As obligations are developed or implemented, one promising model to consider is the use of accessible multi-stakeholder processes like LEED certification for green buildings, which seek to operate in a transparent, inclusive, and credible fashion that both promotes trust in the product or service and furthers innovation in the sector.[18]
IV. American Leadership of Trustworthy AI
U.S. leadership in AI will be essential to our economic competitiveness—supporting innovation, creating high-quality jobs, and ensuring American companies continue to set the pace in the global economy. It is also important to our national security: the U.S. stands to benefit from maintaining control over key technologies, infrastructure, and intelligence, as well as from advancing democratic values in how transformative technologies are used and deployed. But for America to lead on AI, we need responsible, adaptable policy—including for government use.
For starters, it bears emphasis that America will develop the best AI products and services in a competitive AI ecosystem. To that end, we need to continue to rely on antitrust law—both merger review and oversight of potential monopolization efforts—to prevent any individual company from emerging as a monopolist in the space. In our recent victory in Colorado v. Google, we demonstrated the need to restore competition in the search industry after years of unlawful monopolistic conduct that allowed Google to control almost 90% of the market—depriving consumers of the choice, innovation, and fair prices that an open and competitive market facilitates.[19]
The importance of antitrust enforcement in enabling competition from disruptive technological change is evident from a look at Google’s past. Google emerged as the dominant search engine on the desktop and then faced a significant technological threat from the rise of mobile search. As the Internet moved to mobile platforms, there was a chance that Apple’s iPhone would embrace a rival to Google as the default search engine—or even that Apple would create its own search engine—such that Google would face a major threat to its dominance. Google met that challenge with exclusionary distribution agreements—payments to Apple and others—to ensure that rivals did not emerge against it in the mobile search market.
In his remedies opinion, Judge Mehta recognized this history, understood the logic of preventing it from recurring, and articulated the proper legal requirement that a remedy must extinguish the illegal actions that enabled Google to maintain its monopoly and must restore competition. Nonetheless, he expressly permitted Google to pay for default placement,[20] even though we know, and he recognized, that past exclusionary conduct yields monopoly profits that Google uses to buy the defaults that make it harder for competitors to challenge Google.[21] These payments include billions of dollars to Apple to keep it from entering the market.
Judge Mehta’s failure to prevent Google from continuing its monopolistic conduct—and his toleration of the status quo—risks allowing Google to maintain its dominant position and thwart the future entry of generative AI platforms into Google’s monopoly markets. In particular, Judge Mehta’s ruling means that the most direct paths generative AI platforms could use to reach users are exactly the paths that he permits Google to control through the continued use of default placement. This ruling fails to recognize the full magnitude of the threat that Google’s exclusionary conduct and maintenance of its distribution channels pose to the success of AI platforms, just as such conduct ensured that mobile search failed to check Google’s monopoly power.[22] Judge Mehta did leave open the possibility of adding remedies later, but that possibility may well prove too little, too late to address Google’s illegal monopolistic conduct and to enable generative AI rivals to challenge its dominance.
In his opinion, Judge Mehta concluded that Google’s payments for distribution should be allowed to continue because ending them would have negative consequences for the companies that receive those payments. This conclusion is wrong because it would justify any monopolist—say, the old AT&T—continuing to act as a monopoly and using its monopolistic tactics because it was supporting worthy causes (such as funding basic research or affordable phone service) out of its monopoly profits. This conclusion turns antitrust enforcement on its head, allowing a monopoly to continue and potentially disabling competition from disruptive technologies that could displace the monopoly if it could not continue its exclusionary tactics.
With respect to oversight of AI, the United States has yet to take any comprehensive action at the national level. At present, we have a window to proactively develop rules for an AI-enabled future and to lead on one of the most important technological forces of our time. The goal of AI governance should be to foster a competitive and diverse AI industry—one that is grounded in safety and accountability, earns public trust, and bolsters innovation. In the White House’s AI Action Plan, however, there appears to be a heavy focus on the concern that some AI is “woke.”[23] Ideally, the focus of national policy would instead be on enabling innovation and competition while safeguarding against harms, including promoting the research that advances national competitiveness in this critical field. Sadly, cuts to the National Science Foundation—a core source of support for such research—have left the agency at its lowest level of support in decades.[24]
As for what a model federal framework might look like, Colorado led a bipartisan coalition of states in suggesting a series of key measures to promote trust and prevent harms.[25] In particular, we suggested that “commitments to robust transparency, reliable testing and assessment requirements, and after-the-fact enforcement” provide a promising approach.[26] We also urged that legislation should focus on high-risk AI applications, especially those influencing decisions with significant legal or personal consequences, like access to financial services, education, housing, or employment. These laws should include clear transparency requirements, asking for disclosure of what decisions are powered by AI, what data informs those decisions, and how users can challenge them—giving consumers the protection they need to engage with this new technology. Moreover, we called on the Commerce Department to foster “trust in AI systems through transparent and verifiable policies and practices driven by appropriate standards, including a code of ethics,” as well as impact assessments and third-party audits for high-risk AI applications.[27]
In the absence of federal legislative action, the question is how states will work to fill the void. In the case of data privacy and social media oversight, we are well down this road. Last fall, I offered some guidance on how to approach AI governance. That guidance and the principles discussed above emphasize the importance of focusing on actual harms, call for principles-based strategies (rather than prescriptive requirements), and involve an adaptable model of governance. Consistent with that guidance, I renewed my call to revise the AI legislation initially adopted in Colorado, which did not follow this approach.[28] Notably, the law adopted in Colorado imposes obligations on those (including smaller companies) using AI within software applications with minimal risk of harm, rather than emphasizing testing and transparency obligations on developers. Moreover, the law’s affirmative obligations do not distinguish effectively between truly high-risk and low-risk uses of the technology and may chill a range of healthy innovations as a result.
Without congressional leadership, as states like Colorado look at regulating AI, it will be important not to make the mistake of targeting regulation at the specific technology because of concerns about what it can enable—as opposed to focusing on the relevant harm and addressing that outcome. If, for example, one is concerned with automated decision-making in critical areas that can cause harm—like preventing access to credit—the use of AI rather than conventional software is a technology choice and should not be the subject of regulation per se. Similarly, as to the concern about “deepfakes,” the better practice is to focus regulation on the relevant harm, not the use of AI per se.
In Congress, as part of the recent omnibus H.R. 1 bill, there was an effort to impose a ban on state AI regulatory actions. Put most charitably, this effort was designed to require legislating on issues of general concern—like undermining access to credit unfairly or creating “revenge porn” using technology—rather than on AI in particular. The concern that I and many other state attorneys general had with this provision, however, was at least two-fold: (1) the moratorium would have lasted far too long (10 years); and (2) the generous interpretation is not the only possible one, as the language could potentially allow industry actors to claim that any use of AI is privileged and exempt from state regulation (even under a generally applicable law). Moreover, legislation on a topic of this importance should not be rushed and hidden in a major bill; it should instead be subject to more careful consideration and reflection.
Until Congress is able to develop a national framework, we will continue to see states develop measures that govern AI. This is a precarious moment for our nation, as state action here risks creating a patchwork quilt of obligations and unnecessary, burdensome costs, such as the Colorado law’s imposition of costs on those who use AI in certain contexts like hiring. Ideally, states will follow the principles discussed above, but that remains to be seen.
In Colorado, the General Assembly did not, during its 2025 legislative session, revise the artificial intelligence law to address the concerns I called out last fall.[29] That’s why, at the end of the session, I called on the General Assembly to delay the effective date of the law so that companies would not be asked to follow unnecessarily burdensome requirements and the legislature would have time to address its unworkable provisions.[30] Unfortunately, the General Assembly made no changes to the law during the regular session.
During a special session last summer, the General Assembly postponed the effective date of Colorado’s AI law until July 1, 2026, suggesting that further revisions would be made during the 2026 legislative session. Because a top regulatory priority of mine is to enable compliance and provide certainty for regulated entities, it would not be efficient, fair, or wise for us to begin a rulemaking process that may well become out of date just weeks or months later. Therefore, our department will hold off on any rulemaking to implement the law until after the 2026 legislative session concludes and the General Assembly has had an opportunity to make substantive changes to the AI law. This approach will allow us to craft the initial rules from a place of greater certainty and stability in the underlying law and will enable us to best use and conserve our limited law enforcement resources.
* * *
We are at a new frontier in the opportunities and challenges of using AI. I am excited about the opportunities ahead: how AI can improve the way government operates, how we educate our kids (using products made by Colorado-based companies like Magic School), and how we take on a range of issues from water management to wildfire risk mitigation to protecting public safety. I also recognize the need to address reasonable concerns about this emerging technology. As we do so, we must ensure that our approach to designing, implementing, and enforcing any guardrails does not give rise to unintended consequences such as creating barriers to growth, product development, and investment capital for innovation in our state. Currently, our AI law does just that and thus warrants revision as soon as possible.
[1] Gartner, Gartner Hype Cycle.
[2] Cade Metz, The A.I. Frenzy Is Escalating. Again., N.Y. Times (Jun. 27, 2025).
[3] Joy Wiltermuth, AI Talk is Surging During Company Earnings Calls — and So Are Those Companies’ Shares, MarketWatch (Mar. 16, 2024).
[4] Chip Cutter and Haley Zimmerman, CEOs Start Saying the Quiet Part Out Loud: AI Will Wipe Out Jobs, Wall Street Journal (Jul. 2, 2025).
[5] AI In Healthcare: The Future of Patient Care and Health Management, Mayo Clinic (Mar. 27, 2024).
[6] Victoria Masterson, 9 Ways AI is Helping Tackle Climate Change, World Economic Forum (Jan. 2024).
[7] See, e.g., Bipartisan Coalition of Attorneys General File Lawsuits Against Meta for Harming Youth Mental Health Through Its Social Media Platforms, Colorado Department of Law (Oct. 24, 2023).
[8] Social AI Chatbots, Colorado Department of Law (May 2025).
[9] Matt Burgess, AI ‘Nudify’ Websites Are Raking in Millions of Dollars, Wired (Jul. 14, 2025).
[10] Colo. Rev. Stat. § 1-46-103 (2025).
[11] Colo. Rev. Stat. §§ 13-21-1501 to -1507 (2025).
[12] CFPB Orders Citi to Pay $25.9 Million for Intentional, Illegal Discrimination Against Armenian Americans, Consumer Financial Protection Bureau (Nov. 8, 2023).
[13] 47 U.S.C. § 230 (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”).
[14] See, e.g., Smitha Milli, Micah Carroll, Yike Wang, Sashrika Pandey, Sebastian Zhao and Anca Dragan, Engagement, User Satisfaction, and the Amplification of Divisive Content on Social Media, 24-01 Knight First Amend. Inst. 14 (Jan. 3, 2024).
[15] Twitter, Inc. v. Taamneh, 598 U.S. 471, 499, 501 (2023).
[16] Anderson v. TikTok, Inc., 116 F.4th 180, 184 (3d Cir. 2024).
[17] As I stated in my testimony to the General Assembly’s Committee on Business, Labor, and Technology last spring:
It is important that startup companies—including those eligible for the Advanced Industries Tax Credit (namely, companies with no more than $10 million in capital raised, $5 million in revenue generated, and five or less years in existence) and those under a certain size—be exempt from such requirements. Such an approach, wisely taken in the Colorado Privacy Act, appreciates the importance of leaving companies room to develop the necessary scale and sophistication to handle such obligations without being a regulatory burden that hampers their ability to innovate and grow.
[18] See Philip J. Weiser, Entrepreneurial Administration, 97 B.U. L. Rev. 2011 (2017).
[19] Memorandum Opinion at 13, 276, State of Colorado et al v. Google LLC, No. 20-cv-3715 (D.D.C. 2024); Coalition of State Attorneys General, DOJ Submit Final Proposed Fix to End Google’s Search Monopoly, Weiser Announces, Colorado Department of Law (Mar. 7, 2025).
[20] Memorandum Opinion at 128, State of Colorado et al v. Google LLC, No. 20-cv-3715 (APM) (D.D.C. Sept. 2, 2025).
[21] Id. at 96 (“Google has used these monopoly profits to secure the next iteration of exclusive distribution deals, paying out billions of dollars in revenue share each year.”).
[22] Judge Mehta himself recognized the importance of including Gen AI companies among those eligible to participate in the remedies that he did order. Id. at 103.
[23] Exec. Order No. 14319, 90 FR 35389 (2025); Executive Office of the President of the United States, America’s AI Action Plan (Jul. 23, 2025) at 1, 4.
[24] See The Upshot, Trump Has Cut NSF Funding to Its Lowest Level in Decades, N.Y. Times (May 22, 2025); see also Scott Wallsten, Want AI Leadership?: Stop Attacking the Science That Creates It, Technology Policy Institute (July 30, 2025) (criticizing the Trump Administration for “attacking the science and global connections” that enable innovation in AI, thereby “reveal[ing] a fundamental misunderstanding of how breakthroughs occur and threaten[ing] the historically bipartisan foundations of the innovation it seeks to champion.”).
[25] Memorandum from State Attorneys General on Artificial Intelligence (“AI”) System Accountability Measures and Policies to Ms. Stephanie Weiner, Acting Chief Counsel, National Telecommunications and Information Administration (Jun. 12, 2023).
[26] Id.
[27] Id.
[28] Attorney General of Colorado Phil Weiser, Prepared Remarks at Silicon Flatirons Conference on Privacy at the State Level (Sept. 17, 2024).
[29] Attorney General of Colorado Phil Weiser, Prepared Remarks at Silicon Flatirons Conference on Privacy at the State Level (Sept. 17, 2024). This call to change the law followed up on and further developed thoughts included in a joint letter I wrote with Governor Polis and Majority Leader Rodriguez. Letter from Jared Polis, Governor of Colorado, Phil Weiser, Attorney General of Colorado & Robert Rodriguez, Majority Leader, Colorado Senate, to Innovators, Consumers, and All Those Interested in the AI Space (Jun. 13, 2024).
[30] Attorney General Phil Weiser Testimony on Senate Bill 25-318 (May 5, 2025) – Before the Committee on Business, Labor, and Technology, 2025 Leg., 75th Session (Colo. 2025) (statement of Attorney General Phil Weiser).