Building Your Library AI Policy: A 9-Step Process
Six months, nine steps, one document that keeps you out of trouble.
- Start by auditing every system you already use. Most libraries discover AI they didn't know they had.
- Vendor accountability is the hardest step. Get contract language requiring bias testing, data protections, and audit rights before you lose your leverage.
- Your policy document needs ten sections, from systems inventory to review schedule. Write it in plain language your staff can actually follow.
- Governance is ongoing. Quarterly monitoring, annual reviews, and a clear protocol for when things go wrong.
Building Your AI Governance Policy
A governance policy is the document that tells your board, your staff, and your regulators that you thought about this before something went wrong. Here's how to build one that actually works.
The timeline below assumes six months. Some libraries do it in four. Most take eight. The steps matter more than the schedule.
Step 1: Audit Your AI (Month 1)
List every system your library uses. For each one, answer:
- Is there AI in this system? (Ask the vendor if you're not sure.)
- What specifically does the AI do? (Don't accept "machine learning" as an answer; ask what decisions it makes.)
- What data does it process? (Patron searches, behavior, demographics)
- What risk level is this? (High, limited, or minimal)
- Who is responsible for oversight? (IT director? A specific person?)
This becomes your AI Systems Inventory. You'll use it to identify which systems need immediate attention (high-risk), which need transparency work (limited-risk), and which are fine as-is (minimal-risk).
Most libraries are surprised by this step. You probably have AI in your discovery layer, your recommendation engine, your self-checkout fraud detection, and maybe your collection development tools. The point of the audit is to stop being surprised.
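If you want something more structured than a spreadsheet, here is a minimal sketch of the inventory as a data record. The systems, fields, and names are placeholders, not requirements; the point is that every system gets the same five answers, in a form you can sort and revisit.

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    HIGH = "high"        # affects patron access, privacy, or opportunities
    LIMITED = "limited"  # patron-facing, needs transparency work
    MINIMAL = "minimal"  # back-office, low stakes

@dataclass
class AISystemRecord:
    system: str           # product name as it appears in your contracts
    ai_function: str      # the specific decision the AI makes
    data_processed: str   # patron searches, circulation, demographics, etc.
    risk_level: RiskLevel
    owner: str            # the named person responsible for oversight

# Hypothetical entries -- replace with your own audit findings.
inventory = [
    AISystemRecord("Discovery layer", "ranks search results",
                   "search history, circulation data, item metadata",
                   RiskLevel.HIGH, "IT Director"),
    AISystemRecord("Self-checkout", "flags possible fraud",
                   "checkout patterns", RiskLevel.LIMITED,
                   "Circulation Manager"),
]
```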
Step 2: Assess High-Risk Systems (Months 1-2)
For each high-risk system, conduct an impact assessment. This is a formal document, not a checklist. It addresses:
- Purpose: What problem does this system solve?
- Data: What data goes into the system? Where does it come from? What biases might be present?
- Foreseeable risks: Who could be harmed? How? What's the real harm?
- Affected populations: Which groups of patrons does this affect most?
- Mitigations: What are you doing to prevent each identified harm?
Example: Your discovery system uses AI to rank search results. Purpose: Help patrons find materials faster. Data: Search history, circulation data, item metadata. Risks: If the AI was trained on historical circulation data, it amplifies existing collection development biases. Affected populations: Communities historically underrepresented in your collection. Mitigation: Bias testing methodology, manual review of rankings, diverse collection development feeding the system.
Document this formally. Keep it for regulatory inspection.
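A shared template keeps assessments consistent across systems and makes the annual review in Step 9 faster. Here is a minimal sketch using the discovery-system example above; every value is illustrative and the field names are an assumption, not a regulatory format.

```python
# One completed copy per high-risk system. Keep it with your compliance records.
impact_assessment = {
    "system": "Discovery layer",
    "purpose": "Help patrons find materials faster",
    "data": ["search history", "circulation data", "item metadata"],
    "foreseeable_risks": [
        "Rankings trained on historical circulation amplify existing collection biases",
    ],
    "affected_populations": [
        "Communities historically underrepresented in the collection",
    ],
    "mitigations": [
        "Documented bias testing methodology (see Step 4)",
        "Manual review of rankings",
        "Diverse collection development feeding the system",
    ],
    "assessed_by": "IT Director",  # a named person, not a department
    "next_review": "when the system changes, or at the annual review",
}
```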
By this point you're probably two weeks behind schedule. That's normal. Impact assessments take longer than anyone estimates because you keep finding questions you can't answer without calling a vendor.
Step 3: Hold Your Vendors Accountable (Months 2-3)
Contact vendors with AI systems. Ask for:
- Any impact assessments they've conducted
- Bias testing methodology and results
- Training data documentation
- Documentation of what oversight procedures they support
- Explanation of data retention practices
Then negotiate. Your contract should require vendors to:
- Provide impact assessments and bias testing documentation
- Indemnify you for AI failures
- Allow audits of compliance
- De-identify patron data within 48 hours
- Prohibit AI training on patron data without consent
- Notify you of breaches within 24 hours
- Document what decisions the AI makes
If a vendor won't accept basic accountability, it's a red flag. Not a "maybe revisit later" flag. A "why are we paying this company" flag.
Step 4: Design Bias Testing (Month 3)
High-risk AI needs ongoing bias testing. You need a methodology documented in your policy.
The basic approach: Test the system for disparate impact. Does the AI's output significantly favor certain groups over others? For a discovery system, do searches for "civil rights history" return books about white civil rights movements disproportionately? For a recommendation system, does it recommend books by authors of color to patrons of color but not to others?
You probably can't test this perfectly because you don't control the AI; the vendor does. But you can:
- Ask the vendor to test for bias
- Test outputs yourself, looking for patterns
- Track patron complaints about AI bias
- Review recommendations periodically for appropriateness
- Conduct disparate impact analysis on high-stakes systems
Document your methodology and results. Show that you're monitoring.
If you're feeling overwhelmed at the halfway mark, remember: you don't need a perfect methodology. You need a documented one. "We tested these five search terms monthly and tracked the results" is better than a theoretically perfect approach you never actually run.
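Here is a minimal sketch of what that documented methodology can look like in practice. It assumes your discovery system exposes some way to fetch ranked results programmatically; the search() stub and the test terms are placeholders to replace with whatever your vendor actually provides.

```python
import csv
from datetime import date

def search(term: str, limit: int = 10) -> list[dict]:
    """Placeholder: swap in the call your discovery system actually supports."""
    raise NotImplementedError("Replace with your vendor's search API")

# The handful of terms you test every month, chosen to surface known bias risks.
TEST_TERMS = [
    "civil rights history",
    "immigration",
    "gender identity",
    "indigenous peoples",
    "disability",
]

def run_monthly_bias_check(outfile: str = "bias_testing_log.csv") -> None:
    """Append this month's top results for each test term so trends stay visible."""
    with open(outfile, "a", newline="") as f:
        writer = csv.writer(f)
        for term in TEST_TERMS:
            for rank, result in enumerate(search(term), start=1):
                writer.writerow([date.today().isoformat(), term, rank,
                                 result.get("title"), result.get("author")])
```

Review the log at each quarterly check-in (Step 9) and ask one question: are certain perspectives consistently crowding out others?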
Step 5: Implement Human Oversight (Months 3-4)
For high-risk systems, people make final decisions, not AI.
Define when human review is required. For a discovery system, maybe it's: "Any result that would rank below position 10 must be confirmed by a library staff member as matching the search." For recommendations, maybe it's: "Any recommendation involving sensitive topics goes through human review." For approval decisions, maybe it's: "Every initial denial goes to a person before the rejection is final."
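However you define the triggers, write them down where reviewers can check them. Here is one way to record the rules; the systems, triggers, and names below are illustrative placeholders, not recommendations.

```python
# Hypothetical oversight rules -- one entry per high-risk system.
OVERSIGHT_RULES = {
    "discovery": {
        "trigger": "result would rank below position 10",
        "review": "staff member confirms the result matches the search",
        "escalate_to": "IT Director",
    },
    "recommendations": {
        "trigger": "recommendation involves a sensitive topic",
        "review": "readers' advisory staff review before display",
        "escalate_to": "Head of Public Services",
    },
    "approvals": {
        "trigger": "any initial denial",
        "review": "a person signs off before the rejection is final",
        "escalate_to": "Library Director",
    },
}
```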
Train the people doing review. They need to understand what they're reviewing, what criteria matter, and when to escalate. This is skilled work.
Create appeal mechanisms. Patrons should be able to request human review of AI recommendations or decisions.
Step 6: Draft Your Policy (Month 4)
This is where everything you've done so far becomes a single document. Your policy should include:
- AI Systems Inventory (which systems, what they do, risk classification)
- Risk Management Program (how you assess, monitor, and respond to risks)
- Impact Assessment Requirements (when required, what's covered, documentation)
- Bias Testing Plan (methodology, frequency, documentation)
- Human Oversight Procedures (when required, who decides, appeals process)
- Vendor Management (contract requirements, audit rights, data protections)
- Staff Roles (who's responsible for what)
- Patron Rights (transparency, appeals, data access/deletion)
- Compliance Monitoring (how you track ongoing compliance, reporting to board)
- Review Schedule (policy updates at least annually, sooner if AI systems change)
Use plain language. Avoid jargon. Make it something staff can actually read and understand. If your policy requires a glossary longer than the policy itself, start over.
Step 7: Get Board Approval (Months 4-5)
Present your policy to the board with:
- A summary of the AI systems you identified
- Risk assessments of high-risk systems
- Proposed mitigations
- A budget for implementation (ongoing compliance has costs)
- A timeline
- Success metrics
Don't frame this as "we need permission to use AI." Frame it as "we already use AI, here's how we're managing the risk." That's a different conversation, and it's a more honest one.
Expect questions you haven't prepared for. Board members will fixate on the thing you considered lowest priority. Bring backup documentation and be ready to say "I'll follow up on that" instead of improvising answers about liability.
Step 8: Train Staff (Months 5-6)
Staff are your frontline. They answer patron questions, identify problems, and implement the policy day-to-day.
Training should cover:
- Which AI systems the library uses, broken down by role
- What each system does, in specific terms
- How to explain AI to patrons, with plain-language scripts
- When to escalate concerns about bias, privacy, and errors
- Patron privacy protection and what information is sensitive
- Emergency procedures if systems fail
Make it interactive. Bring them scenarios: "A patron asks if their search is tracked. What do you say?" Role-play difficult conversations. Give them approved answers they can adapt.
The biggest training gap isn't knowledge. It's confidence. Staff need to practice saying "this system uses AI to rank your results" out loud before a patron asks them at the desk.
Step 9: Monitor and Report (Ongoing)
You're not done. You're never done. But the monitoring gets easier once the foundation is solid.
Quarterly reviews: Track patron complaints about AI systems and look for patterns. Monitor bias testing results to identify trends. Review vendor compliance status. Assess staff training effectiveness. Document system performance issues. None of this needs to be elaborate. A spreadsheet and a 30-minute meeting with your AI oversight person is enough.
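The spreadsheet-and-a-meeting approach can literally be a CSV and a short script. A minimal sketch, assuming a complaint log with date, system, category, and notes columns; your column names will differ.

```python
import csv
from collections import Counter

def summarize_complaints(logfile: str = "patron_complaints.csv") -> None:
    """Count patron complaints by system and category to spot quarterly patterns."""
    by_system, by_category = Counter(), Counter()
    with open(logfile, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: date, system, category, notes
            by_system[row["system"]] += 1
            by_category[row["category"]] += 1
    print("Complaints by system:", dict(by_system))
    print("Complaints by category:", dict(by_category))

# Bring the output to the quarterly meeting with your AI oversight person.
```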
Annual compliance review: Full audit of all AI systems against policy. Update impact assessments if systems changed significantly. Review bias testing results for concerning patterns. Review vendor contracts for renewals and updates. Report to board on governance status and problems identified. This is the one that takes real time. Block out two weeks.
When problems emerge: Investigate thoroughly. Document your response. Communicate to patrons if affected. Adjust policy if needed. Report to board if significant. The worst thing you can do is pretend nothing happened. The second worst thing is panic. Have a documented incident response process and follow it.
Your first annual review will feel like starting over. Your second will feel like maintenance. That's the difference between a policy that exists on paper and one that actually governs.
Related Reading
- AI Governance Overview -- The five governance domains your policy needs to address.
- Board AI Decision Guide -- How to present your policy to the board and get approval.
- Staff AI Training Guide -- Training your staff to implement the policy day-to-day.