An AI-generated illustration of a desk with writing materials and a laptop overlaid with the quote "The real race isn't about developing the fastest AI—it's about earning lasting trust".

AI Data Governance Best Practices: How California, Colorado & Utah Are Shaping the Future of AI

2 Jul 2025

Legal
Ethical AI
Global Affairs

The U.S. isn’t just following the EU’s lead on AI regulation—it’s writing its own playbook. From Silicon Valley to Salt Lake City, a new era of AI governance is emerging. And for businesses, it’s not just about compliance—it’s about competitive advantage. With insights from Defined.ai’s Director of Legal, we explain what your company needs to do to stay safe and innovate.

AI in the U.S.: The Shift from Inspiration to Leadership

For years, the European Union’s General Data Protection Regulation (GDPR) and the newly passed EU AI Act have set the tone for global data privacy and AI governance. U.S. companies initially scrambled to align with those rules, not because they wanted to but because they had to.

But things are changing—and fast.

The U.S., led by forward-thinking states like California, Colorado and Utah, is now stepping up with legislation that blends the spirit of EU oversight with a uniquely American focus on innovation, flexibility and economic growth. “The real race is between the market and surveillance-driven instincts and fundamental rights preservation”, says Defined.ai Director of Legal Melissa Carvalho. A specialist in ethical AI and compliance, Melissa draws on her expertise in data protection, IP and corporate law to keep Defined.ai’s internal processes, data and services on the straight and narrow.

If you’re building or deploying AI in the U.S., this isn’t just a legal update. It’s a strategic moment. So, how can your business not only stay compliant but get ahead?

In this post we’ll explore:

  • Which AI rules and regulations are currently in play
  • Why your company should comply with AI laws—and what happens if you don’t
  • What the benefits are for your business if you invest in AI governance
  • Actions you can take to reach and retain AI compliance

The EU AI Act: A Summary

The EU AI Act is the world’s first major AI-specific law. Designed as product safety legislation, it applies to all AI systems entering the EU market, whether developed in Europe or abroad. The law introduces a tiered, risk-based approach that:

  • Bans certain AI applications that pose unacceptable risk (like social scoring or real-time biometric surveillance)
  • Requires transparency and safety checks for high-risk systems (used in areas like hiring, education, and law enforcement)
  • Imposes obligations on providers, importers, and users—including impact assessments and human oversight mechanisms

"The EU must stand firm against the false choice between innovation and oversight. This is not about stifling progress—it’s about securing a future sustained by digital sovereignty", says Melissa. While promoting innovation remains a goal, the EU prioritizes the protection of fundamental rights, democratic values and public trust. Enforcement will be backed by significant penalties and coordinated across national supervisory authorities.

The message is clear: if you want to operate in the AI space in the EU, it must be safe, ethical and accountable.

Learn about Defined.ai's Ethical Manifesto


Colorado AI Act: Transparency and Fairness First

Colorado’s legislature has made its move to ensure AI works for everyone—not just the people building it. The Colorado AI Act creates a framework for responsible AI development and use, particularly in sensitive areas like housing, employment and credit. At a high level, it:

  • Requires developers and deployers of high-risk AI systems to complete impact assessments
  • Requires businesses to inform consumers when AI is involved in significant decisions
  • Gives people the right to opt out of, correct, and appeal AI-driven outcomes

Utah AI Law: Laying the Groundwork for Ethical AI

While California and Colorado are rolling out enforceable rules, Utah is playing the long game with a focus on ethical innovation and policy experimentation. The state’s approach blends oversight with innovation, making it easier for businesses to experiment while still laying the groundwork for future enforcement by:

  • Establishing the Office of AI Policy and an AI Learning Laboratory
  • Encouraging businesses and developers to test ethical frameworks and self-regulate responsibly
  • Aiming to set foundational standards that evolve as AI use matures

The California AI Bill: Innovation with a Kill Switch

With 18 new laws that came into effect on January 1, 2025, California isn’t just the birthplace of tech giants—it’s now at the forefront of AI risk regulation. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) didn’t pass, but it would have required developers of powerful AI models to undergo mandatory safety testing and to operate a “kill switch” mechanism for terminating dangerous AI behaviors, and it would have granted enforcement authority to the State Attorney General. This just goes to show how fluid this space is: a single piece of legislation had the power to rewrite the rules of the game.

California is confronting the fear that advanced AI—especially general-purpose models—could spiral beyond human control. It’s not anti-tech; it’s pro-accountability. And speaking of accountability, the Generative AI Copyright Disclosure Act pending in the U.S. Congress would require companies to disclose copyrighted data used to train generative AI, putting transparency front and center.

U.S. AI Governance Goes Global—Whether Companies Like It or Not

Think you can just create a separate compliance track for the EU and ignore the rest? Think again.

For Melissa, “The collateral beauty of the EU regulatory power lies in its extraterritorial scope. The trickle-down effect of that makes it extremely tricky for firms to sustain legal compliance frameworks without considering the wider picture”.

Under Colorado’s law, extraterritorial reach is real. Just like the EU AI Act, it applies if your model or application affects consumers in the state—no matter where your company is headquartered.

Plus, many other countries are looking at U.S. state models for their own regulatory blueprints. Responsible AI is going global, and the U.S. is becoming a trendsetter.

These Laws Aren’t Just Checkboxes—They’re a Business Imperative

Too many businesses still treat compliance as an afterthought. But today, AI governance needs to be part of your go-to-market strategy. Fines aren’t the only risk, but they can hit you where it hurts. Consider these cautionary tales:

  • British Airways: faced a proposed $230M GDPR fine for its 2018 data breach (later reduced to £20M)
  • Clearview AI: hit with €20M fine for scraping faces without consent
  • Didi Global: fined $1.2B by Chinese regulators for data violations

“The fear of sanctions or penalties doesn’t seem to be a sufficient incentive to embrace higher standards”, says Melissa. “Human-centric companies—the ones that have a strong ethical backbone—will always do more than just ticking regulatory checkboxes”. Reputation, customer trust and investor confidence are on the line—and getting it wrong in one jurisdiction could sink global operations.

What Happens If You Don’t Comply?

Fines aside, non-compliance could lead to:

  • Injunctive orders halting your product
  • Loss of public and investor trust
  • Litigation from competitors (thanks to unfair competition clauses)
  • Tarnished reputation that limits future expansion

“A serious breach can mean that businesses of any size might not live to fight another day”, warns Melissa. In short, don’t play compliance roulette.

“All AI models should have a ‘nutrition label’ for the legitimacy and quality of the data they were trained with. Most models and especially the open-source models, have been trained with ‘public data’, which for one, it doesn’t mean that it’s copyright free, nor that it’s free for use.” — Daniela Braga, Founder & CEO, Defined.ai
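
Daniela’s “nutrition label” maps naturally onto a machine-readable record. Here is a minimal Python sketch of that idea; the class and field names are our own illustration, not an existing standard:

  from dataclasses import dataclass, field

  @dataclass
  class DatasetIngredient:
      # One training-data source listed on the model's "label".
      name: str                # e.g. "licensed-news-corpus-v2" (invented)
      license: str             # e.g. "commercial", "CC-BY-4.0"
      consent_obtained: bool   # contributors agreed to AI-training use
      copyright_cleared: bool  # rights to the content were verified

  @dataclass
  class ModelNutritionLabel:
      model_name: str
      ingredients: list[DatasetIngredient] = field(default_factory=list)

      def fully_cleared(self) -> bool:
          # A "clean label" means every source has consent and clearance.
          return all(i.consent_obtained and i.copyright_cleared
                     for i in self.ingredients)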

The Competitive Advantage of Doing AI Right

Forward-thinking companies are embedding ethics, transparency and safety into their AI from day one—not just to avoid trouble, but to win trust.

  • Microsoft has launched its AI for Good initiative focused on safety and compliance
  • IBM, SAP, and PwC are aligning development pipelines with ethical frameworks inspired by EU and U.S. regulations

“The real race isn’t just about developing the fastest AI—it’s about earning lasting trust. Companies that embed accountability and fairness into their systems from the start won’t just keep up—they’ll lead the market”, advises Melissa. You don’t have to be a tech giant to do this. Even mid-sized enterprises building custom models for internal workflows can get ahead by doing it the right way from the start.

What Smart Companies Are Doing Right Now

Want to stay ahead of regulations and build trust with your users?

Here’s how to future-proof your AI efforts:

AI Impact Assessments

  • Evaluate how your models affect individuals and society—especially in high-risk areas like hiring, healthcare, or credit.
  • Document everything in the data lifecycle: intent, training data sources, and expected outcomes.

Transparent AI Systems

  • Use plain-language disclosures to explain when and how AI is involved in decision-making.
  • Keep records to show how the model was trained and what data powers it.

Consumer Controls

  • Let users opt out, request corrections, and appeal decisions made by your AI.
  • Make these processes easy, accessible, and fair (a minimal sketch of the record-keeping behind this follows).
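
A minimal Python sketch of the record-keeping behind such consumer controls; the request types and field names are hypothetical, not a prescribed schema:

  from dataclasses import dataclass
  from datetime import datetime, timezone
  from enum import Enum

  class RequestType(Enum):
      OPT_OUT = "opt_out"        # stop AI-driven processing for this user
      CORRECTION = "correction"  # fix inaccurate data behind a decision
      APPEAL = "appeal"          # ask for human review of an AI decision

  @dataclass
  class ConsumerRequest:
      user_id: str
      request_type: RequestType
      details: str
      received_at: datetime      # timestamp to prove timely handling

  def log_request(user_id: str, request_type: RequestType,
                  details: str) -> ConsumerRequest:
      # Record the request the moment it arrives, for audit trails.
      return ConsumerRequest(user_id, request_type, details,
                             datetime.now(timezone.utc))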

Responsible Training Data

  • Vet your data sources for consent, fairness, and copyright clearance.
  • Avoid “data laundering” practices that repurpose scraped content without permission.

Build Risk Management into DevOps

  • Integrate ethical risk assessments into your model development pipeline—not just at launch (see the sketch after this list).
  • Adopt ISO/IEC 42001 (AI management systems), ISO/IEC 27001 (information security management) and ISO/IEC 27701 (privacy information management) where possible.
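
At its simplest, building risk management into the pipeline means a deployment gate that fails unless the model’s risk assessment is complete. A minimal Python sketch; the required fields are illustrative, not drawn from any statute or standard:

  import sys

  # Fields a completed assessment must contain (illustrative only).
  REQUIRED_FIELDS = {
      "intended_use",          # what the system is for
      "risk_level",            # e.g. "high" under EU or state criteria
      "bias_evaluation",       # link to fairness test results
      "human_oversight_plan",  # how humans can intervene or override
  }

  def missing_fields(assessment: dict) -> list[str]:
      # An empty result means the gate passes.
      return sorted(REQUIRED_FIELDS - set(assessment))

  if __name__ == "__main__":
      assessment = {"intended_use": "resume screening", "risk_level": "high"}
      missing = missing_fields(assessment)
      if missing:
          print("Deployment blocked; missing:", ", ".join(missing))
          sys.exit(1)  # non-zero exit fails the CI job, halting the release

Run as a step in your CI pipeline, this check stops a release until the documentation catches up with the model.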

What is Data Lifecycle Management?

An infographic showing the different stages of the Data Lifecycle, such as "Privacy and data protection by default and by design".

Your AI Compliance Playbook: What to Do Now

Here’s your 5-step roadmap to AI governance success:

1. Map Your AI Footprint

  • Inventory all your models, their purposes, and where they're deployed (a minimal inventory sketch follows this roadmap).

2. Identify Risk Levels

  • Determine which systems are “high-risk” under U.S. or EU standards.

3. Assess & Document

  • Run internal assessments. Track consent, data sources, fairness, and outcomes.

4. Update Your Governance Model

  • Appoint a cross-functional compliance team and train developers.

5. Engage Legal Early

  • Don’t wait until product launch. Align legal, tech, and product teams from ideation through deployment.
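
To make steps 1–3 concrete: even a lightweight, structured inventory goes a long way. A minimal Python sketch, with invented system names and fields:

  from dataclasses import dataclass

  @dataclass
  class AISystemRecord:
      name: str               # e.g. "candidate-ranker-v3" (invented)
      purpose: str            # step 1: what it does
      deployment: str         # step 1: where it runs
      risk_level: str         # step 2: classification under EU/state criteria
      assessed: bool = False  # step 3: impact assessment completed?

  inventory = [
      AISystemRecord("candidate-ranker-v3", "shortlist job applicants",
                     "US + EU production", "high"),
      AISystemRecord("ticket-router", "triage support emails",
                     "internal only", "minimal"),
  ]

  # Surface the high-risk systems that still need an assessment.
  todo = [s.name for s in inventory
          if s.risk_level == "high" and not s.assessed]
  print("High-risk systems awaiting assessment:", todo)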

Final Thoughts: Responsible AI Is Just Good Business

What’s happening in California, Colorado and Utah is just the beginning. “We can’t imagine every single scenario of how the tech future will look. Regulation might not be adored or popular, but people-pleasing needs to take a backseat. It’s fundamentally a values-aligned exercise”, says Melissa. The U.S. is shaping the global AI regulation conversation, and businesses that move now will find themselves better positioned, more trusted and more resilient.

Whether you’re a scaleup deploying generative AI, or an enterprise automating decision-making at scale, the message is clear: responsible AI isn’t a burden—it’s a differentiator.

Want to Do It Right?

Defined.ai offers the world’s largest ethically-sourced, API-integrated AI marketplace—plus the tools and expertise to help you fine-tune, evaluate and govern your AI models with confidence.

Contact us today to future-proof your AI strategy.

An illustration showing speech bubbles and multi-directional arrows to suggest the possibilities of LLM fine-tuning.

Want to get to the top of the LLM Leaderboard?

The best AI models need relevant, high-quality ethical data and expert human evaluation—all at scale. Learn how Defined.ai can help you get the most out of your LLM.

