AI Governance & Policy Framework
Five governance domains for libraries using AI -- from risk assessment to patron rights.
- AI governance breaks down into five domains: risk assessment, compliance, governance structure, vendor management, and patron rights.
- Your library is already using AI -- discovery systems, chatbots, recommendation engines. Most libraries have zero policy covering any of it.
- Key provisions of the EU AI Act and the Colorado AI Act both take effect in 2026. If you're not building governance now, you're behind.
- This page covers the framework. For implementation, see the 9-step policy guide, board decision guide, and staff training guide.
The Governance Problem Nobody Told You About
Your vendor quietly added AI features to your discovery system last quarter. Nobody on your staff was told. That's the governance problem. Not hypothetical risk. Not future concern. Right now, AI is making decisions about what your patrons can find, what gets recommended, and what data gets collected. Most libraries have zero policy covering any of it. This framework helps you figure out what AI you're already using, what questions to ask before buying more, and how to build governance your board will actually approve. Not theory. Practical steps for libraries that don't have a dedicated AI team (which is most of you).
Time required: 2 hours for initial assessment, 4-6 weeks for full implementation
Why This Matters
Your library is using AI. Maybe you know it: a discovery system with ranking algorithms, a chatbot fielding reference questions, a recommendation engine suggesting titles to patrons. Maybe you don't: a vendor quietly integrated AI features into a system you already use, and nobody told you it was happening.
This is why governance matters. AI decisions affect patron access. When a discovery system ranks search results using AI, some patrons find what they need and others don't. When a recommendation engine suggests materials, it shapes what communities know is available in your library. These aren't neutral technical questions; they're governance questions requiring board attention, documented policy, staff training, and ongoing oversight.
Governance is about making intentional choices: why you use AI, how you'll use it safely, who's accountable when something goes wrong, and how you'll protect patrons, especially vulnerable populations who depend on library access. Regulations are coming fast. The EU AI Act's core obligations apply from August 2, 2026. Colorado's AI Act takes effect June 30, 2026. State privacy laws are spreading. This guide helps you get ahead.
Complete Guide
The Five Governance Domains
AI governance is actually five different problems that happen to touch each other. Miss one and the others fall apart.
1. Risk Assessment & Management
You can't govern what you don't understand. This domain asks: What AI systems are we using? What risks do they create? How are we managing those risks?
Start with an audit. Walk through every system your library uses and identify where AI is present, and ask your vendors directly where AI is involved. AI might be hiding in places you don't expect: discovery systems, chatbots, recommendation engines, even some circulation systems. For each system, ask:
- What decisions does this AI make? (Ranking results, prioritizing holds, generating recommendations, filtering content)
- What data does it use? (Search history, browse behavior, circulation data, demographics)
- Who is affected by those decisions? (All patrons? Specific groups?)
- What could go wrong? (Recommendations skew toward certain communities, search ranking favors certain types of materials, chatbot gives incorrect information)
Then classify risk. Does this AI make decisions that significantly affect patron access to education? If yes, it's high-risk under laws like the EU AI Act and the Colorado AI Act. Does it recommend content, provide information, or generate explanations? Then it's limited-risk. Does it only provide generic information with no personal data? Minimal-risk.
High-risk systems require formal impact assessments documenting foreseeable harms, affected populations, and specific mitigations. Limited-risk systems require transparency, meaning patrons must know AI is involved. Minimal-risk systems follow standard privacy and accessibility rules.
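If it helps to keep the audit in a form your committee can review and rerun, here is a minimal sketch of the inventory-and-classify step. The field names, the classification rule, and the obligation lists are illustrative assumptions drawn from the three tiers above; they are not legal definitions, and your own policy should name the categories it actually uses.

```python
# Hypothetical sketch: record each system from the audit, then apply the
# three-tier rule described above. Fields and obligations are illustrative.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    decisions: list[str]       # e.g. ["search ranking", "recommendations"]
    data_used: list[str]       # e.g. ["search history", "circulation data"]
    affects_access: bool       # significantly affects patron access to education?
    uses_personal_data: bool

def classify_risk(system: AISystem) -> str:
    """Return 'high', 'limited', or 'minimal' per the rule above."""
    if system.affects_access:
        return "high"       # formal impact assessment required
    if system.uses_personal_data or system.decisions:
        return "limited"    # transparency: patrons must know AI is involved
    return "minimal"        # standard privacy and accessibility rules apply

OBLIGATIONS = {
    "high": ["impact assessment", "bias testing", "human oversight", "audit trail"],
    "limited": ["patron-facing disclosure", "documented limitations"],
    "minimal": ["standard privacy review", "accessibility review"],
}

discovery = AISystem(
    name="Discovery layer",
    decisions=["search ranking", "recommendations"],
    data_used=["search history", "circulation data"],
    affects_access=True,
    uses_personal_data=True,
)
tier = classify_risk(discovery)
print(tier, OBLIGATIONS[tier])   # high ['impact assessment', ...]
```

Even if nobody on staff writes code, the same structure works as a spreadsheet: one row per system, one column per question, one column for the resulting tier.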
2. Compliance & Legal Framework
Regulators are watching. You need documented compliance or you're exposed to regulatory investigation, fines, and liability.
The regulatory landscape changes in 2026. The EU AI Act (core obligations effective August 2, 2026) applies to any library serving EU patrons. The Colorado AI Act (effective June 30, 2026) applies to high-risk AI making consequential decisions about people, which includes discovery systems and access decisions. State privacy laws give patrons rights to access, delete, and correct their data. GDPR requires data minimization and deletion rights.
For each AI system, identify what laws apply. Then build compliance. For high-risk systems, you need:
- Complete impact assessment (not a checklist, a real analysis of risks and mitigations)
- Documented bias testing showing the system doesn't cause disparate impact
- Training data documentation including sources and known biases
- Meaningful human oversight (actual authority to override, not rubber-stamp approval)
- Patron disclosure (they must know AI is involved and understand limitations)
- Audit trail (who approved what decision, when, why)
- Annual compliance reviews
Vendor contracts are where compliance actually happens. You can't comply alone; you need vendor cooperation. Your contract must require vendors to (see the tracking sketch after this list):
- Not use patron data for AI training without explicit consent
- Provide impact assessments and bias testing results
- Allow you to audit their compliance
- De-identify data within 48 hours
- Notify you within 24 hours of any data breach
- Not remove core AI features without 90-day notice
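As a rough illustration of how you might track these terms during contract review, here is a small sketch. The clause names and thresholds are assumptions that mirror the list above, not standard contract language; adapt them to whatever your counsel actually negotiates.

```python
# Illustrative only: required clauses and limits mirror the list above.
REQUIRED_CLAUSES = {
    "no_ai_training_on_patron_data_without_consent": True,
    "provides_impact_assessments_and_bias_results": True,
    "right_to_audit_compliance": True,
    "deidentification_hours_max": 48,    # de-identify data within 48 hours
    "breach_notification_hours_max": 24,
    "feature_removal_notice_days_min": 90,
}

def missing_clauses(contract: dict) -> list[str]:
    """Return the required clauses a draft contract fails to satisfy."""
    gaps = []
    for clause, required in REQUIRED_CLAUSES.items():
        actual = contract.get(clause)
        if isinstance(required, bool):
            ok = actual is True
        elif clause.endswith("_max"):
            ok = actual is not None and actual <= required
        else:  # "_min" clauses
            ok = actual is not None and actual >= required
        if not ok:
            gaps.append(clause)
    return gaps

# Example: vendor offers 72-hour de-identification and no audit right.
draft = dict(REQUIRED_CLAUSES, right_to_audit_compliance=False,
             deidentification_hours_max=72)
print(missing_clauses(draft))
# ['right_to_audit_compliance', 'deidentification_hours_max']
```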
3. Governance Structure & Decision-Making
Who decides whether your library uses AI? Who has authority to change systems? What happens if something goes wrong? These governance questions matter as much as the technical ones.
Your board needs to approve any new high-risk AI system. This isn't micromanagement; it's appropriate governance. Boards approve budgets, set policy, and provide oversight. AI that affects patron access to education is a board-level decision.
Form an AI governance committee with board representation, IT director, collection development, reference staff, and ideally a community advocate from vulnerable populations. This committee owns your policy, reviews vendor contracts, approves new systems, and monitors compliance.
Decision authority needs clarity (a simple lookup sketch follows this list):
- Board approves: New high-risk AI systems, major policy changes, budget for compliance
- Committee approves: Modifications to existing systems, vendor contracts, staff training
- IT director approves: Routine monitoring, bug fixes, performance optimization
- Staff escalate: Patron complaints about AI, suspected bias, security concerns
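One way to make that table operational is a shared lookup that staff, IT, the committee, and the board all agree on. The category names below are hypothetical, and the assumption that escalations route to the governance committee is illustrative; use whatever your adopted policy actually says.

```python
# Minimal sketch of the authority table above; category names are illustrative.
APPROVAL_AUTHORITY = {
    "new_high_risk_system": "board",
    "major_policy_change": "board",
    "compliance_budget": "board",
    "modify_existing_system": "committee",
    "vendor_contract": "committee",
    "staff_training": "committee",
    "routine_monitoring": "it_director",
    "bug_fix": "it_director",
    "performance_optimization": "it_director",
}

# Concerns staff raise rather than approve (routing target is an assumption).
ESCALATION_TRIGGERS = {"patron_complaint_about_ai", "suspected_bias", "security_concern"}

def who_approves(decision: str) -> str:
    """Route a decision to its approver; flagged or unknown items go to the committee."""
    if decision in ESCALATION_TRIGGERS:
        return "escalate_to_committee"
    return APPROVAL_AUTHORITY.get(decision, "committee")

print(who_approves("vendor_contract"))   # committee
print(who_approves("suspected_bias"))    # escalate_to_committee
```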
4. Vendor Management for AI Systems
Your vendors control most of your AI. You can't eliminate that dependence, since most modern systems have vendor AI built in, but you can protect yourself by being a smart customer.
In vendor selection, ask AI-specific questions:
- What AI is included in this system? (Be specific: "recommendations," "search ranking," etc.)
- Can we audit your compliance with AI regulations?
- What's your liability if your AI causes harm?
- Will you de-identify our patron data? How quickly?
- Do you train your AI on our patron data? Can we opt out?
- What happens if you remove AI features?
In vendor negotiation, demand:
- Indemnification for AI failures (don't accept "AI features are as-is")
- Right to audit vendor compliance
- Clear documentation of what AI can and can't do
- Data de-identification within 48 hours
- 90-day notice before removing AI features
- Breach notification within 24 hours
- Limitations on data use for AI training
Get these in writing. Handshakes and promises don't protect you when regulators investigate. Then keep monitoring: track vendor compliance, ask for bias testing results, keep documentation, and question deviations from the contract.
5. Patron Rights & Transparency
Patrons are the people AI decisions affect most directly. They need to know, they need to understand, and they need recourse.
Transparency first. Patrons must know where AI is used, not buried on page 47 of the privacy policy but stated where they'll actually see it. When a patron searches, a note says "Results ranked by AI. Learn more." When they see a recommendation, they know it's AI-generated. When they're using a chatbot, they're told it's AI.
Explain limitations. Patrons need a clear explanation of what AI can and can't do. "This AI recommends based on your search history" is transparent. "This AI found books you'll definitely like" is a promise the system can't keep.
Provide appeals. If a patron disagrees with AI recommendations or wants human judgment, there's a process. Vulnerable populations especially need this, since patrons researching sensitive topics might want search results that AI filtered out.
Protect vulnerable populations especially. People researching asylum law, domestic violence shelters, transgender healthcare, or undisclosed health conditions have serious privacy needs. Data breaches can enable ICE targeting, abuser tracking, outing, or discrimination. For these patrons, you can't rely on consent: many would lose library access entirely if using the library required opting in to AI. So you choose privacy for them by default. Minimize data collection. Delete aggressively. Restrict vendor access to their data. Encrypt everything.
Language accessibility matters. Privacy policies in English don't protect monolingual Spanish speakers. AI explanations must be available in community languages at appropriate literacy levels.
Go Deeper
This overview covers the five governance domains. The hard part is implementation. These three guides walk you through it.
- Building Your AI Policy -- The 9-step process from audit to ongoing monitoring. Where most libraries get stuck and how to keep moving.
- Board AI Decision Guide -- The six categories of questions your board needs to answer, with Go/No-Go decision criteria.
- Staff AI Training Guide -- Training curriculum, communication strategies, and how to handle the hard patron conversations about AI and privacy.
Related Reading
- The AI Clauses Your Vendors Are Sneaking Into Contracts -- The contract language that makes governance possible (or impossible).
- AI Readiness Toolkit -- Printable vendor questionnaire, impact assessment, and staff policy agreement.
- Five Vendor Risk Domains -- Evaluate vendors across stability, contracts, support, security, and equity.