Colorado AI Act for Libraries: The Practical Guide
- Colorado AI Act (SB 24-205) effective June 2026 requires high-risk AI systems to have impact assessments, human review, and bias audits. Library AI is likely in scope.
- Vendor impact: compliance costs shift to vendors who build to Colorado standards and sell nationwide (since that's cheaper than building multiple compliance frameworks). Libraries get dragged into vendor legal strategy.
- 9+ other states have pending similar laws. Rather than adapting to a patchwork of state frameworks, expect vendors to standardize to the strictest state requirement and apply it universally.
- Library action: audit your AI vendor agreements now for compliance requirements, impact assessment clauses, and who bears liability if AI fails compliance audits.
Colorado's AI Act (SB 24-205) was supposed to take effect February 1, 2026. Then in August 2025, the effective date got pushed to June 30, 2026, giving businesses five more months to figure things out.
That's interesting for two reasons. First, it shows how unprepared even big companies are. Second, it means the real squeeze is happening right now. Vendors are in crisis mode. And libraries are getting caught in the middle.
Here's what you need to know: Colorado's law is basically the American version of the EU AI Act, with some key differences. It focuses heavily on "high-risk AI" and "consequential decisions" about people. And yes, library AI probably qualifies.
This guide is for every library, not just Colorado ones. Why? Because once one major state has an AI law, vendors build to that standard and sell it everywhere. Plus 9 other states have similar laws pending. You're either dealing with this now or dealing with it in 18 months across multiple state legal frameworks.
Better to get ahead of it.
What Colorado Actually Changed: The Practical Reality
Colorado's AI Act (CRS 6-1-1701 to 6-1-1707) isn't as detailed as the EU AI Act. It's shorter, scrappier, and in some ways more dangerous because it's more ambiguous.
The core rule: If you're using high-risk AI that makes "consequential decisions" about people, you need to have your act together. Documentation. Bias testing. Human oversight. Impact assessments. All of it.
And here's the kicker: The law defines "high-risk AI" and "consequential decisions" broadly enough to catch things you might not think apply.
Colorado defines a "high-risk" AI system as one that makes, or is a substantial factor in making, a "consequential decision" about a consumer. That hinges on what counts as "consequential" - and the definition is broad.
The law then lists the specific areas where AI is presumed high-risk because it makes "consequential decisions":
- Education and educational opportunities
- Employment
- Housing
- Credit and financial services
- Healthcare
- Legal services
- Insurance
- Essential government services
For libraries, the trigger is usually "education and educational opportunities." And here's where it gets real.
What's a "Consequential Decision" for Libraries?
This is the question vendors and libraries are arguing about right now.
Narrow interpretation: Only decisions that directly limit someone's access to education (like denying a student library card).
Broad interpretation: Any AI decision that affects someone's ability to pursue education, including resource recommendations, search rankings, collection suggestions, anything that shapes what someone can access.
Colorado's law leans toward the broad interpretation, because "consequential" doesn't mean "major." The statute defines a consequential decision as one that has "a material legal or similarly significant effect" on the provision or denial of things like education, employment, or housing - or on their cost and terms.
Question: Does an AI recommendation that affects what resources a student uses in their research have a "significant, material effect" on their education? Arguably yes. Your vendor is betting yes. So are we.
The Five Core Requirements: What You Actually Have to Do
If your library is deploying high-risk AI, Colorado requires five things. These aren't theoretical. These are compliance requirements enforceable by the Colorado Attorney General.
1. Conduct and Document an Impact Assessment
Before (or immediately upon) deploying high-risk AI, you need to complete a "high-risk AI impact assessment." This is a formal document that includes:
- What the AI system does: Plain description of purpose and functionality
- What data it uses: Where it comes from, what it represents, what biases it might contain
- Foreseeable risks: What could go wrong? Who gets harmed? How?
- Risk mitigation: How you're addressing those risks
- Bias assessment: Whether the AI has been tested for discriminatory outcomes
- Affected parties: Who this impacts and how they might be affected
- Compliance measures: How you'll meet legal requirements
This isn't a checklist. It's a multi-page document. And it needs to be updated annually or whenever the system significantly changes.
The key word here is "document." You have to be able to show regulators that you did this work. If regulators investigate and you don't have documentation, you're in trouble - even if you actually did the thinking.
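Because the assessment has to be updated annually and produced on demand, it helps to keep it as a structured record rather than a loose prose document. A minimal sketch in Python - the field names mirror the bullet list above, but none of them (or the example values) are prescribed by the statute:

```python
from dataclasses import dataclass, field, asdict
from datetime import date

@dataclass
class ImpactAssessment:
    """One record per high-risk AI system; revisit annually or on major change."""
    system_name: str
    purpose: str                    # plain description of what the AI does
    data_sources: list[str]         # where training/input data comes from
    known_biases: list[str]         # biases identified in that data
    foreseeable_risks: list[str]    # what could go wrong, and for whom
    mitigations: list[str]          # how each risk is being addressed
    affected_parties: list[str]     # who this impacts
    compliance_measures: list[str]  # how legal requirements are met
    last_reviewed: date = field(default_factory=date.today)

# Illustrative entry for a hypothetical discovery system.
assessment = ImpactAssessment(
    system_name="Discovery ranking",
    purpose="Ranks search results using machine learning",
    data_sources=["5 years of circulation and search logs"],
    known_biases=["Collection historically concentrated in certain subjects"],
    foreseeable_risks=["Downranking of underrepresented subjects"],
    mitigations=["Quarterly bias audit", "Librarian review of flagged results"],
    affected_parties=["Students", "Researchers", "General patrons"],
    compliance_measures=["Annual reassessment", "User-facing AI disclosure"],
)
record = asdict(assessment)  # serializable for regulator-facing exports
```

Exporting `asdict(assessment)` to whatever document format your board or counsel prefers is then a formatting exercise, not a research project.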
2. Maintain and Update a Risk Management Program
Colorado requires "appropriate risk management procedures" for high-risk AI. This means written policies covering:
- Pre-deployment testing: How you validate the AI before it goes live
- Ongoing monitoring: How you track performance after deployment
- Risk response: What you do if something goes wrong
- Regular audits: How often you check the AI for bias and problems
- Incident reporting: Who gets notified if the AI causes harm
These don't need to be perfect. But they need to exist. And they need to show reasonable diligence - that you're taking AI risks seriously, not just saying you are.
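Written policies are easier to apply consistently when the thresholds live in one place. A hypothetical sketch of a quarterly monitoring check against policy limits - every number and name here is illustrative, not anything the statute specifies:

```python
from datetime import date

# Hypothetical limits pulled from a written risk-management policy.
POLICY = {
    "max_bias_flags": 0,             # any flagged group triggers review
    "max_complaints_per_quarter": 5,
    "audit_interval_days": 90,
}

def needs_escalation(bias_flags: int, complaints: int, last_audit: date) -> list[str]:
    """Compare this quarter's monitoring numbers against policy limits
    and return the list of issues that require incident response."""
    issues = []
    if bias_flags > POLICY["max_bias_flags"]:
        issues.append("bias flags exceed policy limit")
    if complaints > POLICY["max_complaints_per_quarter"]:
        issues.append("complaint volume exceeds policy limit")
    if (date.today() - last_audit).days > POLICY["audit_interval_days"]:
        issues.append("audit overdue")
    return issues
```

The point of encoding it this way is that "ongoing monitoring" stops being a vibe and becomes a repeatable check someone runs each quarter.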
3. Implement Meaningful Human Oversight
High-risk AI can't operate autonomously. Someone human needs to review and approve AI decisions, especially when they affect people's rights.
The law says "meaningful" oversight. What does that actually mean?
It doesn't mean rubber-stamp approval. It means someone with actual authority and competence looking at AI recommendations and deciding whether to accept, modify, or reject them.
For library applications: If AI ranks search results for students, a librarian should review flagged results. If AI recommends collection items, someone should check for bias, missing context, or inappropriate suggestions (these may not be immediately obvious without domain expertise). If AI makes decisions about resource access, a human needs to be able to override it.
The point is: Don't let AI make decisions unilaterally. Even if your AI is really good, Colorado law requires a human to maintain control.
4. Provide Clear Disclosure and Transparency
When high-risk AI makes consequential decisions about someone, they need to know it was AI. And they need information about:
- That AI was used: Clear, not buried in legal documents
- What the AI did: Plain language explanation
- How they can appeal: What process exists to challenge AI decisions
- How to get human review: How to escalate if they disagree with AI recommendations
This doesn't mean you need to hire an AI lawyer to write disclosure language. But you need something. "AI helped rank these results" is better than nothing. "Your recommendations were generated by a machine learning system trained on X, Y, Z data" is more complete and better.
5. Use Quality Training Data
High-risk AI needs training data that's representative and tested for bias. Colorado requires you to:
- Document where your training data comes from
- Demonstrate it's representative of the populations affected
- Identify and document known biases
- Show how you're mitigating bias
For library systems, this gets tricky. Most AI recommendation engines train on historical library data (circulation, searches, checkouts). That data reflects your library's existing collection biases and patron demographic patterns.
If your library's collection historically overrepresented certain subjects or demographics, the AI learns that bias and amplifies it. Under Colorado law, you need to acknowledge this and do something about it.
Options: Reweight training data to reduce bias. Diversify training sources. Add manual review to catch problematic recommendations. Test regularly for disparate impact across different populations.
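One common screening heuristic for that last option is the "four-fifths rule" borrowed from employment law: compare each group's favorable-outcome rate against the best-off group, and flag any ratio below 0.8. A rough sketch - the rule is a screening heuristic, not a legal threshold under SB 24-205, and the counts are invented:

```python
def disparate_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group's (favorable, total) counts to its selection rate
    divided by the highest group's rate. Ratios under 0.8 are a common
    red flag worth a manual review."""
    rates = {group: fav / total for group, (fav, total) in outcomes.items()}
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical quarterly audit: how often each patron group's searches
# surfaced a relevant result in the top 10.
ratios = disparate_impact_ratios({
    "group_a": (450, 500),  # 90% favorable
    "group_b": (300, 500),  # 60% favorable
})
flagged = [group for group, ratio in ratios.items() if ratio < 0.8]
```

Here group_b's ratio is 0.6 / 0.9 ≈ 0.67, so it gets flagged for human review - which is exactly the kind of documented, repeatable test regulators will want to see.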
The Vendor Problem: What Vendors Are Scrambling With Right Now
Based on conversations with vendors and contract reviews over the past year, here's the range of responses I'm seeing:
- Some have AI compliance roadmaps. These companies are taking the law seriously, hiring compliance staff, rethinking how their AI works.
- Most are offering generic impact assessment templates. "Copy this template, fill in your context, call it done." Not really compliance, but it's what they've got.
- Some are hoping regulators won't notice. They'll monitor what the Colorado AG actually enforces and figure it out from there.
- A few are just pushing compliance work onto customers. "We provide the AI tool. You handle compliance." That allocation probably won't hold up legally, but they're trying.
Don't assume your vendor has figured this out. Ask them directly where they stand.
The Contract Negotiation Trap
When you renew your vendor contracts (or negotiate new ones), look for language that pushes compliance responsibility onto you.
Bad language: "Customer is responsible for all compliance with applicable AI regulations."
Better language: "Vendor will provide bias assessment documentation and impact assessment frameworks. Customer is responsible for conducting the assessment using vendor-provided materials and determining applicability in customer's context."
The difference: In the first version, you're responsible for everything, including things the vendor controls (like training data and algorithm design). In the second, the vendor does their part, you do yours.
Negotiate this. It matters when regulators come asking questions.
Real Example: How This Applies to a Specific Library Tool
Let's say you use an AI-powered discovery system that ranks search results based on machine learning. Here's how Colorado law applies:
Step 1: Is This High-Risk?
Question: Does this AI system make decisions that materially affect someone's ability to pursue education?
Answer: Probably yes. Students using your discovery system to find resources for assignments depend on the ranked results. If the ranking system is biased (e.g., consistently downranking certain subjects or viewpoints), it materially affects what educational resources they find.
Conclusion: This is likely high-risk under Colorado law.
Step 2: Impact Assessment
You (or your vendor) need to document:
- What the discovery system does (ranks search results using machine learning)
- What data it trains on (historical library searches and checkouts)
- What risks exist (bias against certain subjects or demographics; users not understanding why results are ranked differently; AI overconfidence in wrong rankings)
- What mitigation exists (human librarians review flagged results; users can toggle "AI ranking" on/off; system tested for bias)
- Who's affected (students, researchers, all users)
Step 3: Risk Management
You need a plan for:
- How you tested the system before deployment (bias testing? performance benchmarks?)
- How you monitor it ongoing (quarterly bias audit? user complaint tracking?)
- How you handle problems if they arise (incident response plan?)
- How you conduct regular audits (annual review of bias test results?)
Step 4: Human Oversight
Who's responsible for reviewing AI decisions? Probably your collection development team or reference librarians. They need to be in the loop - either by reviewing flagged results or by having a process for users to escalate AI recommendations they think are wrong.
Step 5: Transparency
You need to tell users the results are AI-ranked. "These results are ranked by an AI system trained on X data. You can turn off AI ranking here if you prefer chronological/relevance sorting."
Step 6: Training Data
Your vendor needs to document their training data and identify known biases. "We trained on 5 years of library circulation data. Because your library's collection has historically concentrated in X areas, the AI may over-recommend those areas. We mitigate this by weighting training data toward underrepresented subjects."
If your vendor can't explain this, they're not ready for Colorado compliance.
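The "weighting training data toward underrepresented subjects" in that vendor explanation can be as simple as inverse-frequency weighting. A toy sketch, assuming each training record carries a subject label - the subjects and counts are invented:

```python
from collections import Counter

def inverse_frequency_weights(subjects: list[str]) -> dict[str, float]:
    """Weight each subject inversely to its share of the training data,
    normalized so the most common subject has weight 1.0. A crude way to
    keep an overrepresented subject from dominating what the model learns."""
    counts = Counter(subjects)
    max_count = max(counts.values())
    return {subject: max_count / n for subject, n in counts.items()}

# Hypothetical circulation log heavily skewed toward one subject area.
log = ["stem"] * 800 + ["humanities"] * 150 + ["arts"] * 50
weights = inverse_frequency_weights(log)
# Each "arts" record now counts 16x as much per example as a "stem" record.
```

Real systems use more sophisticated reweighting, but if a vendor can't explain their approach at least at this level of concreteness, that's the red flag.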
The Enforcement Reality: What Actually Happens
Here's what Colorado's Attorney General is probably going to do:
Year 1-2 (2026-2027): Focus on egregious violations and companies they know are non-compliant. Probably not focusing on libraries initially.
Year 3+ (2028 onward): As compliance becomes normal, start investigating complaints. If a library patron claims they were denied access to resources because of biased AI, and the library can't show they did a proper impact assessment - that's a problem.
The threat isn't immediate. But it's real. And it'll get worse as enforcement ramps up.
More immediately: If your vendor gets caught violating the law, they get fined. Which means your contract might get terminated or renegotiated. Which is expensive and disruptive.
Your incentive: Make sure your vendors are compliant. Because their non-compliance can become your problem.
What You Actually Need to Do (Practical Checklist)
Colorado AI Act Library Audit Checklist
Complete by end of Q2 2026 (before June 30 deadline).
- ☐ List every system you use that uses AI (discovery, recommendation engine, chatbot, content ranking, anything that "learns")
- ☐ For each system: Ask - does this make decisions that affect patron access to educational resources?
- ☐ Categorize as high-risk or not high-risk
- ☐ For each high-risk system: Request impact assessment documentation from vendor
- ☐ For each high-risk system: Document your current risk management practices
- ☐ For each high-risk system: Identify who has oversight authority
- ☐ For each high-risk system: Review current user-facing transparency
- ☐ Identify gaps in compliance
Impact Assessment Template (For Each High-Risk AI)
- ☐ System name and purpose: [What is this AI?]
- ☐ Training data: [What data does it learn from? Where does it come from?]
- ☐ Known biases: [What biases exist in the training data?]
- ☐ Affected populations: [Who does this affect? Describe diversity of populations]
- ☐ Foreseeable harms: [What could go wrong? Who could be harmed?]
- ☐ Risk mitigation: [How are you addressing these harms?]
- ☐ Testing approach: [How do you test for bias and problems?]
- ☐ Human oversight: [Who reviews AI decisions? When?]
- ☐ User transparency: [What do users know about the AI?]
- ☐ Appeal process: [How can users challenge AI decisions?]
Vendor Negotiation Checklist
- ☐ Ask vendor: "Which of your systems are high-risk AI under Colorado law? Why?"
- ☐ Ask vendor: "Can you provide impact assessments for high-risk systems?"
- ☐ Ask vendor: "How have you tested for bias? What did you find?"
- ☐ Ask vendor: "Can we see training data documentation?"
- ☐ Ask vendor: "What's your timeline for Colorado AI Act compliance?"
- ☐ Request contract revision: Clear allocation of compliance responsibility
- ☐ Request: Right to audit AI systems for compliance
- ☐ Request: Ability to disable AI features if they're non-compliant
- ☐ Request: Pricing terms that don't increase because of compliance costs
User Transparency Checklist
- ☐ Review website/app for AI disclosure: Is it clear where AI is used?
- ☐ Check website for opt-out options: Can users disable AI features?
- ☐ Test user appeal process: How do users challenge AI recommendations?
- ☐ Train staff: Can staff explain to patrons when AI is involved?
- ☐ Document process: Write down your human oversight procedures
- ☐ Set monitoring schedule: How often do you audit for bias? When?
The Key Questions for Your Vendors (Copy-Paste These)
Don't ask all at once. Work through these in your next vendor meeting or contract negotiation.
- "Which parts of your system use AI? What does each AI component do?" Be specific. "Learns from usage patterns" isn't specific enough.
- "Under SB 24-205, which of these are high-risk AI systems? Walk me through your analysis." This forces them to think about it.
- "Can you provide your high-risk AI impact assessments?" Not a template. Your actual assessments.
- "What training data does each system use? Can we see documentation?" Red flag if they can't answer this.
- "Have you tested these systems for bias? What methodology did you use?" Bias testing should be documented.
- "How do you implement human oversight? Who reviews AI decisions?" What's the actual process?
- "If this AI makes a mistake or produces biased results, who's responsible for fixing it?" This reveals who bears the risk.
- "What transparency are you providing to users? Can they understand why AI made a decision?" Check if users even know AI is involved.
- "What\'s your timeline for Colorado AI Act compliance? Are you on track for June 30, 2026?" If they\'re vague, they're behind.
- "If compliance requirements change, who absorbs the cost? How will that be handled in the contract?" Protect yourself from surprise costs.
The State Law Domino Effect
Colorado isn't alone. Here's what happened after SB 24-205 passed:
- 2024-2025: Colorado passed the first comprehensive state AI act. California enacted SB 942 (AI transparency for generative AI). New York City enforces Local Law 144 on automated employment tools, and New York State enacted the RAISE Act covering frontier AI developers. Several states, including Connecticut and Washington, have bills in progress or narrower AI disclosure laws on the books.
- Pending: Massachusetts, Illinois, Maryland, Virginia, and others have bills in progress (see the NCSL AI legislation tracker for current status)
- Federal: Congress is debating AI regulation. Biden's AI executive order (EO 14110) was rescinded by President Trump on January 20, 2025, and the current federal posture leans toward preempting state AI laws rather than adding transparency requirements.
The point: Colorado's law isn't an outlier. It's the new normal. If you're not thinking about this, you should be.
Real Talk: Why Your Library Probably Isn't Ready
Let's be honest:
- You don't have AI expertise on staff
- Your vendors are still figuring out their compliance strategy
- Your board probably doesn't understand why this matters
- You don't have budget set aside for compliance work
- The law is ambiguous enough that reasonable people disagree on what's required
You're not alone. Almost no libraries are ready. But the law doesn't care about readiness.
So here's your path forward:
- Do the audit. Figure out what AI systems you're actually using.
- Ask vendors hard questions. Push them to explain their compliance strategy.
- Document your thinking. Write down what you did, what you decided, and why. (This protects you if regulators come asking.)
- Make reasonable decisions. Based on what you know, do what makes sense. You're not expected to be perfect, just reasonable.
- Monitor and adjust. As guidance becomes clearer, adjust your practices.
That's compliance. It's not sexy, but it works.
The Intersection with Other Laws
Colorado's AI Act doesn't exist in a vacuum. You're also dealing with:
- GDPR (EU patrons): If you serve EU patrons, EU regulations apply regardless of Colorado
- ADA: AI can't be used in ways that discriminate against people with disabilities
- Your state's privacy law: If you're in California or another state with a privacy law, that applies too
- FTC enforcement: The FTC is increasingly aggressive about AI companies violating consumer protection laws
The good news: Colorado borrowed heavily from the EU AI Act and Americanized it, so complying with Colorado gets you most of the way toward other AI-specific regulations. But it's not a blanket safe harbor - privacy laws like GDPR and disability law impose their own requirements, so check each regime separately.
Related Articles
- The EU AI Act Deep-Dive for Libraries - The original regulatory framework that influenced Colorado and U.S. state laws.
- Vendor Contract AI Clauses - How vendors are scrambling to comply with the new rules.
Need help assessing your specific AI systems? It's not always obvious whether something is "high-risk" or what compliance looks like in your context. If you want to talk through your specific tools and what you actually need to do, that's what I do. Get in touch.
Further Reading:
- Colorado SB 24-205 full text: Colorado General Assembly website
- My EU AI Act deep-dive (the original framework that influenced Colorado)
- State AI regulation tracker: National Conference of State Legislatures
- FTC AI guidance: FTC AI Section
- NIST AI Risk Management: Risk Management Framework