When the European Union adopted the EU Artificial Intelligence Act in 2024, it set a global benchmark for risk-based AI regulation. That law’s phased implementation, starting with bans on the riskiest, prohibited AI practices and moving to comprehensive governance obligations by 2026, has influenced legislative agendas worldwide. For businesses in the United States, the law was a sign that AI governance will soon become a compliance imperative, not just a best practice.
Colorado’s Landmark Law Will Have Ripple Effects
In May 2024, Colorado became the first state to enact a broad-based AI law, the Colorado Artificial Intelligence Act. Modeled partly on the EU framework, the Colorado law imposes obligations on developers and deployers of high-risk AI tools, such as those used to make employment, housing, healthcare, and lending decisions. These obligations include impact assessments, risk management programs, transparency, and human oversight.
Originally slated to take effect in February 2026, the Colorado law will not be enforced until June 2026 due to industry pressure and legislative amendments. Nonetheless, Colorado’s landmark law has inspired similar legislation in other states, such as California and Illinois, and may yet surface in New Hampshire’s current legislative session.
Momentum Is Building in State Legislatures
While Colorado led with a comprehensive AI law, other state legislatures are advancing both broad-based and targeted laws. California has adopted several such laws, including the Transparency in Frontier Artificial Intelligence Act, which mandates disclosures and safety protocols for developers of advanced AI models. California also has laws addressing chatbot safety and consumer protection, with enforcement beginning in 2026.
Illinois and New York City have focused on employment-related AI use. Their laws require employers to notify applicants or obtain their consent before using AI tools in the hiring process, and they prohibit or require audits of automated employment decisions. Broad-based privacy laws, including New Hampshire’s privacy law, also impose restrictions on automated decision-making that extend to employment decisions as well as other contexts.
New Hampshire has not yet adopted a broad-based AI law, instead opting for narrower measures addressing specific risks. For example, current New Hampshire law prohibits state agencies from using AI for real-time biometric surveillance and discriminatory profiling without a warrant, and it also prohibits specific uses of generative AI, such as deepfakes and communications with minors.
Federal Legislation and Executive Orders
At the federal level, comprehensive AI legislation remains elusive. Instead, the policy landscape is shaped by executive action. In early 2025, President Trump signed Executive Order 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which repealed prior safety-focused mandates and prioritized innovation. More recently, a draft Executive Order leaked in November 2025 signaled an intent to preempt state AI laws, citing concerns over a “patchwork” of regulations that could stifle competitiveness. The draft proposed creating a federal AI task force and conditioning federal funding on states’ compliance with national rules. While the order has not been finalized or issued, it underscores the tension between federal uniformity and states’ rights, a debate that will shape AI governance in 2026 and beyond.
Businesses Should Start Now To Prepare for Compliance
Regardless of whether state or federal regulations emerge in this legislative session or in the near term, businesses should start now to prepare for compliance. The following are three overarching steps to do so.
- Conduct an AI Use Assessment. Inventory all AI tools the business is already using, and identify AI technologies that would benefit the organization going forward.
- Establish an AI Governance Framework. Create a cross-functional AI governance team that includes leaders from across the business, as well as technology and legal advisors with AI expertise. Develop written policies that align with existing regulations and emerging standards, such as the EU AI Act and the AI Risk Management Framework promulgated by the National Institute of Standards and Technology.
- Integrate AI into Operations. Operationalize AI usage through testing, prototyping, and, ultimately, use of AI in production environments. Ensure appropriate due diligence is performed on vendors and that contracts address their AI use.
AI is not a distant concept. It is a present-day business reality. With hundreds of AI-related bills introduced across the United States and global frameworks like the EU AI Act and the NIST AI Risk Management Framework setting a high bar for compliance, businesses need to act now to retain their competitive edge. Don’t wait for a law to force you to comply. Lead the way.