Board AI Decision Guide for Libraries
The six categories of questions your board should ask before approving any AI system, and when to say no.
- Boards make the critical decisions about AI adoption. Not IT directors. Not vendors. This guide gives them the questions to ask.
- Six question categories cover strategic, legal, governance, financial, equity, and vendor dimensions of every AI decision.
- Go/Caution/No-Go criteria help boards make defensible decisions rather than rubber-stamping vendor demos.
- Case study from Little Schitt Public Library shows what governance looks like when it actually works.
Who Makes AI Decisions?
Boards make the critical decisions about AI. Not IT directors. Not vendors. Boards.
That doesn't mean boards need to understand how neural networks work. It means they need to ask the right questions, evaluate the answers, and make decisions they can defend to their community. The framework below gives them a structured way to do that.
Six Categories of Board Questions
1. Strategic Questions
Before anything else, the board needs to understand why. These five questions establish whether the AI proposal is solving a real problem or chasing a trend:
- What problem does this AI solve for patrons?
- What's the actual case for this? Does the patron benefit justify the cost and risk?
- Does this align with our mission to serve all patrons equitably?
- What happens if we don't do this? Is it a competitive disadvantage, or just a nice-to-have?
- What are the risks if this goes wrong?
If the director can't answer these clearly, the proposal isn't ready for a board vote. Send it back.
2. Legal and Compliance Questions
AI regulation is moving fast. The EU AI Act is in effect. Colorado's AI Act is coming. Your state may have its own requirements. The board needs to know:
- Is this system high-risk under the EU AI Act, the Colorado AI Act, or our state's law?
- If so, have we completed the required impact assessments?
- Have we documented bias testing?
- Do we have vendor compliance documentation?
- What's our liability exposure?
3. Governance Questions
Who's in charge when things go wrong? The board should know before they approve:
- Who has authority to change or terminate this system?
- What's the process if problems emerge?
- How do we handle patron complaints?
- Who's accountable if this fails?
If the answer to "who's accountable" is vague, that's a governance gap. Fix it before you vote.
4. Financial Questions
The sticker price is never the real price. AI systems come with ongoing compliance, monitoring, and management costs that vendors conveniently leave out of the demo. Ask for the full picture; a rough cost sketch follows the list:
- What's the total cost? Purchase plus compliance plus monitoring.
- What are ongoing costs for bias testing, audits, and vendor management?
- What's the cost to exit if the vendor fails or we need to change direction?
- What's our insurance coverage for AI-related claims?
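To make "total cost" concrete, here is a minimal arithmetic sketch over a contract term. Every figure below is a made-up placeholder, not a benchmark; substitute your vendor's actual quote and your own staff estimates.

```python
# Rough total-cost-of-ownership sketch for an AI system proposal.
# All figures are hypothetical placeholders, not benchmarks.

CONTRACT_YEARS = 3

one_time = {
    "license_setup": 12_000,         # vendor implementation fee
    "initial_impact_assessment": 4_000,
    "staff_training": 3_000,
}

annual = {
    "subscription": 18_000,
    "bias_testing_audits": 5_000,    # quarterly testing, annual audit
    "monitoring_staff_time": 6_000,  # fraction of an FTE
    "insurance_rider": 1_500,        # AI-related liability coverage
}

exit_cost = 8_000  # data migration if the vendor fails or is dropped

total = sum(one_time.values()) + CONTRACT_YEARS * sum(annual.values()) + exit_cost
print(f"Sticker price (setup + year one): ${one_time['license_setup'] + annual['subscription']:,}")
print(f"Three-year total cost of ownership: ${total:,}")
```

Run with these placeholder numbers, the three-year total comes to roughly four times the first-year sticker price. That gap is exactly what the demo never mentions, and it's the number the board should be voting on.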
5. Equity and Vulnerable Population Questions
This is where library boards earn the trust their communities place in them.
- Who could be harmed by this AI?
- Have we specifically assessed impact on vulnerable populations such as immigrants, LGBTQ+ youth, domestic violence survivors, people with health concerns, and activists?
- How do we protect privacy for patrons with sensitive information needs?
- Do vulnerable populations have real alternatives if they opt out?
That last question matters more than most boards realize. "Patrons can opt out" sounds great until you learn that opting out means they lose access to the catalog. That's not a choice. That's coercion with better PR.
6. Vendor Questions
Your vendor is your partner in governance whether they like it or not. The board needs to verify the vendor actually understands what that means:
- Does the vendor understand their compliance obligations?
- Do we have audit rights?
- What happens if they breach the contract?
- What happens if they remove the AI feature?
If the vendor's response to "audit rights" is blank stares or boilerplate refusal, you have your answer about their commitment to accountability.
Decision Criteria
Approve with Safeguards (Go)
The board can approve if all of the following are true:
- AI genuinely solves a patron need
- Vendor compliance documentation is strong
- Contracts include audit rights and liability sharing
- Impact assessment identifies mitigations for risks
- Board approves budget for ongoing compliance
- Staff training plan is solid
Proceed with Aggressive Safeguards (Caution)
Sometimes the answer isn't yes or no. It's "yes, but we need to protect ourselves harder than usual." This is appropriate when:
- Risk is moderate but manageable
- Strong contractual protections can be negotiated
- Vulnerable populations can be meaningfully protected
- Library has resources for ongoing monitoring
Don't Approve (No-Go)
Walk away if any of the following are true:
- Risk to vulnerable populations is severe and can't be mitigated
- Vendor won't accept reasonable compliance responsibility
- Cost of compliance exceeds benefit
- Board can't approve resources for oversight
- System poses unacceptable privacy risk
"No" is a defensible answer. "We didn't ask" is not.
Case Study: Little Schitt Public Library
Little Schitt Public Library -- two locations, one bookmobile, serving a population of 50,000 -- was looking at upgrading their discovery system. The vendor's new version had AI-powered search ranking and personalized recommendations built in. The director liked the demo. The board wanted to know if it was safe.
They started by auditing what they already had. Turns out the AI wasn't new. Their existing chatbot had been using it for two years and nobody had formally evaluated it. The discovery system upgrade just made it visible.
When they asked the vendor for an impact assessment, the vendor didn't have one. When they asked about bias testing, the vendor didn't understand the question. When they asked for audit rights, the standard contract said no.
Red flags everywhere. But the director didn't walk away. She negotiated.
Three months of back-and-forth later, they had a contract that required the vendor to conduct impact assessments, test for bias quarterly, allow audits, and de-identify patron data within 48 hours. It cost more. The vendor wasn't happy. But the board approved it because the director could show exactly what the risks were and exactly how they were being managed.
Six months in, they found the AI was ranking older materials higher for some topics. Not because the AI was biased in the way people usually mean that word, but because their collection was older in those subjects. The AI was faithfully reflecting a collection development gap. They adjusted collection development to feed more diverse and current materials into the system. That's governance working: you find the problem, you trace it to the upstream cause, you fix it, you document it.
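Here is a sketch of what that monitoring and tracing step can look like in practice, assuming you can export publication years for top-ranked results and for the collection as a whole. The data shapes, the export itself, and the ten-year window are illustrative assumptions, not part of any vendor's API.

```python
# Sketch of the monitoring that caught Little Schitt's ranking skew,
# plus the tracing step that located the upstream cause. The ten-year
# window and the sample data are illustrative assumptions.

from datetime import date
from statistics import median

def diagnose_topic(top_years: list[int], collection_years: list[int],
                   window: int = 10) -> str:
    """Classify age skew for one topic's search results."""
    this_year = date.today().year
    top_med = median(top_years)          # top-ranked results
    coll_med = median(collection_years)  # everything tagged with the topic
    if coll_med - top_med >= window:
        # Results are much older than the collection: look at the ranker.
        return "possible ranker bias"
    if this_year - top_med >= window:
        # Results are old, but so is the collection: fix acquisitions.
        return "collection development gap"
    return "ok"

# Hypothetical export of publication years for one subject heading.
print(diagnose_topic(top_years=[1998, 2001, 1995, 2003],
                     collection_years=[1994, 1999, 2002, 2005]))
# -> "collection development gap": the AI is faithfully reflecting
#    an aging collection, so the fix is upstream.
```

The two branches matter because they lead to different fixes: "ranker bias" is a vendor conversation, while "collection development gap" is an acquisitions decision, which is exactly the distinction Little Schitt needed to make.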
Was it perfect? No. It was three months of negotiation the director didn't have time for and ongoing monitoring that added work to an already stretched staff. But it was intentional. They knew what they were approving and why.
When to Call a Consultant
Most of this is work your director and IT staff can handle. But there are moments when $5K on a consultant saves you from a $50K mistake.
Call a consultant if:
- Your library is large or complex -- multiple systems, significant patron base, big budget at stake
- You're negotiating a major vendor contract and the vendor has sophisticated legal counsel on their side
- Your board is nervous and wants expert validation before approving AI
- You're being investigated by regulators and need documentation yesterday
- You have a specific equity or compliance concern that exceeds your team's expertise
- Your staff is resistant and you need external credibility to shift thinking
A good consultant will help you understand your specific risks, review vendor contracts for the traps you might miss, guide your board through decision-making, train your staff credibly, and validate your governance approach. Expect to pay $5,000-20,000 depending on scope and your library's size. That's usually worth it if it saves you from a regulatory fine or a bad vendor contract that costs you for years.
Next Steps
- AI Governance Overview -- The five governance domains that frame every board decision.
- Building Your AI Policy -- The 9-step process your staff follows to implement what the board approves.
- Staff AI Training Guide -- How to train staff on the policy the board just approved.