AI IN PUBLIC LIBRARIES: WHAT I LEARNED SO YOU DON'T HAVE TO
A Field Guide for Librarians Who Don't Have Time to Pretend This Is Theoretical
January 2026

================================================================================
START HERE: THE DECISION TREE
================================================================================

Before you read anything else, answer these questions. Takes 10 minutes. Tells you whether you're ready.

QUESTION 1: What specific problem are you trying to solve?
→ "Staff spend 6 hours/week cleaning MARC records" or "After-hours patrons have no reference help" or something equally concrete
  Go to Question 2.
→ "We should have AI" or "Our board says we need to keep up"
  STOP. You're not ready. Come back when you can name the actual problem.

QUESTION 2: Is this staff-only or patron-facing?
→ Staff only, with human review before anything reaches a patron
  Lower risk. Go to Question 3.
→ Patron-facing (chatbot, recommendations, search)
  Higher risk. Before proceeding: Have you called your liability insurance? Have you talked to legal counsel? If no, STOP and do those first.

QUESTION 3: Do you actually have staff capacity?
→ Yes, someone owns it and can spend real hours on it
  Go to Question 4.
→ No, we're already stretched
  STOP. You will break this. What would you stop doing to make room? If the answer is "nothing," you're not ready.

QUESTION 4: Do you know the real cost?
→ I've calculated licensing + training + maintenance + troubleshooting + someone's salary for oversight
  Go to Question 5.
→ I only know the sticker price or think it's free
  STOP. Nothing is free. Staff time costs money. Calculate the actual number and come back.

QUESTION 5: Can you leave if it doesn't work?
→ Short contract, clear data export, I understand what vendor lock-in means for us
  Go to Question 6.
→ Multi-year commitment, vendor owns the data, unclear how we'd migrate
  STOP. Renegotiate the contract or don't sign it. This is how you get trapped.
QUESTION 6: Do you actually know where patron data goes?
→ I've asked the vendor, they answered clearly, I've confirmed it with legal counsel
  Go to Question 7.
→ I don't know / vendor gave me a vague answer / vendor got defensive about the question
  STOP. If they won't tell you, assume they're doing something you wouldn't approve of. Because they probably are.

QUESTION 7: How will you measure success?
→ I have specific metrics defined before launch that aren't just "we feel good about it"
  PROCEED. Design a 90-day pilot, document everything, be honest about what worked and what didn't.
→ I'll know it when I see it / success means staff use it
  STOP. "Staff use it" doesn't mean it worked. Define what actual success looks like first.

================================================================================
YOUR SITUATION (Pick one. Be honest.)
================================================================================

SMALL LIBRARY (1-5 FTE)

You're probably the only professional. AI deployment means YOU maintain it. YOU support it. YOU troubleshoot it at midnight when a patron's download breaks.

Reality: No IT support. No training budget. Board is either terrified of tech or thinks it solves everything. State library may or may not help.

What makes sense:
- Using Claude or ChatGPT free tier for your own work (drafting, summarizing, grant writing). Saves you 5+ hours a week.
- NOT deploying anything patron-facing unless you want to support it forever.

What doesn't:
- Vendor contracts with implementation requirements you can't meet.
- Multi-step AI systems.
- Anything that requires 24/7 uptime.

Your move: Use AI for your own productivity. Watch what medium and large libraries do. Learn from their mistakes, not by repeating them. When you're ready to pilot something, partner with your consortium if possible. Solo pilots are a sinkhole.

MEDIUM LIBRARY (6-20 FTE)

You have one person who "does technology" and they're also doing something else.
AI is on the board's radar but you're not sure why.

Reality: Some budget flexibility. One shot at a pilot. Everyone's stretched. You have leverage with vendors but you don't know it yet.

What makes sense:
- One carefully chosen pilot with clear success criteria.
- A staff AI use policy (you have enough people to need boundaries).
- Designating someone to track AI developments (not build infrastructure, just know what's happening).

What doesn't:
- Multiple pilots simultaneously.
- Bleeding-edge tech.
- Contracts with auto-renewal.

Your move: Run the decision tree. If you get to PROCEED, design one pilot. 90 days. Specific metrics. Document what worked and what didn't. Share it with peer libraries. That's how the profession learns.

LARGE LIBRARY (20+ FTE)

You have technology staff. Formal planning. Vendors courting you. Your board thinks you should be leading on this. The profession is watching what you do.

Reality: You have negotiating leverage. You have resources. You also have a responsibility to smaller libraries.

What you should be doing:
- Formal AI governance (staff use policy, patron-facing policy, data handling, equity assessment).
- Legal counsel review before anything patron-facing.
- Insurance verification.
- Vendor contract negotiation where you actually negotiate, not just accept terms.
- Documenting and publishing outcomes (successes and failures).

Your responsibility: When you negotiate contract terms, share them. When you pilot something, publish the results. When you discover vendor practices that harm libraries, say so publicly. Smaller libraries will copy what you do. Make sure what they're copying actually works.

================================================================================
WHAT'S ACTUALLY HAPPENING RIGHT NOW (January 2026)
================================================================================

The market looks like this:

OVERDRIVE: 90% of the digital library market. Owned by KKR (private equity).
Pricing increases 7% annually, automatically. Libraries can't leave. That's not market leadership, that's monopoly.

BOUNDLESS: Tried to compete with better pricing and author-friendly terms. Shut down December 2025. Gone. Vendor lock-in works.

CLOUDLIBRARY: Exists but tiny. Single-digit market share.

PALACE PROJECT: Open-source alternative. Community-driven. Growing. Still marginal.

What this means: Libraries have almost no choice. OverDrive controls the market. If you don't like their terms, your options are: (1) pay anyway or (2) don't have digital ebooks. That's it.

Why does this matter? Because when you're evaluating an AI vendor, remember: the same consolidation happens. First they're helpful. Then they own the market. Then they extract. Ask OverDrive how helpful they are with 90% market share.

================================================================================
BOARD CONVERSATIONS (The Actual Scripts)
================================================================================

Your board is going to ask these questions. Here's what works:

BOARD MEMBER: "I read that libraries are using AI now. Why aren't we?"

NOT: "We're actively evaluating where AI could genuinely improve patron services and only 7% of libraries have actually implemented it..."

YES: "We're looking at three specific use cases. One looks promising: after-hours reference support. But before we deploy anything that touches patron data, I need legal to review the contract and I need insurance to confirm we're covered. I'll bring a concrete proposal to the next meeting with costs, risks, and success metrics. Not a maybe. An actual decision."

The difference: You're not explaining why you're cautious. You're explaining what you're doing about it.

---

CITY COUNCIL: "Other departments are using AI. Why is the library behind?"

NOT: "Libraries have unique privacy obligations..."
YES: "Other departments don't hold patron reading habits, research queries, and personal questions the way libraries do. If we deploy AI to process that data, we're choosing to send it to corporate servers. That's a governance decision. We're implementing it, but we're implementing it deliberately, not because everyone else is. I'm bringing a proposal that explains what we're collecting, where it's going, and what we're not collecting. You get to decide if that's acceptable to the community."

The difference: You're not defending caution. You're explaining choice.

---

BOARD MEMBER: "My nephew uses ChatGPT. Why can't we just use that?"

NOT: "Generic AI tools can be useful for staff productivity, and we're exploring that. But patron-facing deployment requires more care..."

YES: "ChatGPT free tier is great for drafting emails and summarizing reports. We're using it. But ChatGPT plus patron data equals your reading habits training OpenAI's models for other customers. Plus liability if it gives wrong information. Air Canada got sued for that. Before we put AI in front of patrons, we verify insurance covers it and we have legal confirm we won't be liable for its mistakes. We'll probably do it. But not because it's free. Because we've done the homework."

The difference: You're explaining actual stakes, not theoretical risks.

================================================================================
WHAT LIBRARIANS ACTUALLY GET WRONG ABOUT AI
================================================================================

Mistake 1: "If we don't adopt AI, we'll fall behind."
Reality: 7% of libraries have implemented AI. If you're not in that 7%, you're normal. If you're cautious, you're mainstream.

Mistake 2: "The vendor says it's secure/unbiased/accurate."
Reality: Ask them in writing. Ask for third-party testing. When they hedge, don't sign.

Mistake 3: "We'll add privacy safeguards later."
Reality: You won't. Privacy gets added before launch or it doesn't get added.
Later never comes.

Mistake 4: "The AI will handle this, so we can reduce staff."
Reality: You'll need staff to manage the AI, review its outputs, fix its mistakes, and handle escalations. You're not replacing staff. You're shifting work.

Mistake 5: "We can always switch vendors if this doesn't work out."
Reality: You probably can't. Switching costs are astronomical. Data portability is unclear. Migration takes a year. Assume you can't leave. Only adopt what you can live with forever.

Mistake 6: "We need to figure this out before the board pressures us."
Reality: They're going to pressure you anyway. Better to have figured it out on your terms than to rush because someone read an article.

================================================================================
THE EQUITY QUESTION: WHO GETS HURT?
================================================================================

When AI fails in a library, it doesn't fail equally.

Non-native English speakers: AI chatbots have higher error rates. Non-standard grammar breaks them. If your community is multilingual, AI tools will serve those populations worst.

People with disabilities: Accessibility is an afterthought in AI design. Screen readers fail. Cognitive load is high. The patrons who most need accessible services will find AI tools unusable.

People with low digital literacy: They're less likely to recognize an AI error, less likely to ask a human for help, and more likely to trust a confident-sounding wrong answer. The patrons who need the most help figuring out AI are the least equipped to identify when it's lying.

Kids and teens: They don't distinguish AI-generated from human-created content. They're vulnerable to misinformation. If you're putting AI-generated books in your youth collection, you're creating risk.

People researching sensitive topics: Someone looking for domestic violence resources, immigration help, mental health support, or legal guidance faces real consequences from AI hallucinations.
These are exactly the scenarios where AI confidently makes things up.

Before you deploy any AI touching patron needs:
1. Have you tested it with your most vulnerable populations, not just your easiest users?
2. Do you know what happens when it fails for someone who can't tell it's failing?
3. Is human help actually accessible when the AI breaks?
4. Have you tested for bias yourself? (Not "does the vendor say it's unbiased.")
5. Does it meet accessibility standards?

If you can't answer yes to all five, don't deploy it.

================================================================================
CONTRACT RED FLAGS (The Specific Language to Watch For)
================================================================================

These aren't theoretical. I watched these happen.

"Aggregated and anonymized data": This means the vendor is using your patron data to train AI models that benefit their other customers. You don't get compensated. Your library gets charged more annually.

"Product improvement": Same thing. They're using your data.

"Confidentiality clause preventing public discussion": You can't tell other libraries what this vendor did wrong. This is how bad practices stay hidden.

"Auto-renewal": Three years becomes five years becomes permanent unless you remember to opt out in a specific 60-day window you'll miss because staffing changed.

"Data stored in multiple jurisdictions": Your patron data is on servers you can't control, in locations you don't know, maybe in countries with different privacy laws.

"Indemnification capped at one year's fees": Liability is limited to what you paid in year one. If they break trust and it costs you $100K to fix, you're out $80K.

"No data portability clause": If you leave, your data stays with them.

Before you sign anything with "aggregated," "improvement," "anonymous," or "multiple locations," talk to legal counsel. These aren't edge cases. These are standard practice.
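The phrase list above can be turned into a crude first-pass screen of a contract PDF's extracted text. A minimal sketch in Python, using nothing beyond the standard library; the phrases and explanations here are illustrative examples, and this is triage for your own reading, not a substitute for legal counsel:

```python
# Crude first-pass screen for the contract language discussed above.
# Triage only: it tells you what to ask legal counsel about.
# The phrase list is illustrative, not exhaustive.

RED_FLAGS = {
    "aggregated": "Vendor may use patron data to benefit other customers.",
    "anonymized": "Ask how; much 'anonymous' data is re-identifiable.",
    "product improvement": "Euphemism for training on your data.",
    "auto-renew": "Find the opt-out window and calendar it today.",
    "multiple jurisdictions": "Where, exactly, is patron data stored?",
    "indemnification": "Check the liability cap against real breach costs.",
}

def scan_contract(text: str) -> list[tuple[str, str]]:
    """Return (phrase, why it matters) for each red flag present."""
    lowered = text.lower()
    return [(p, why) for p, why in RED_FLAGS.items() if p in lowered]

clause = ("Vendor may use aggregated and anonymized usage data "
          "for product improvement purposes.")
for phrase, why in scan_contract(clause):
    print(f"FLAG {phrase!r}: {why}")
```

That one hypothetical clause trips three flags, which is the point: the red-flag terms travel together, and a hit list like this gives you the agenda for the legal review, not a verdict.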
================================================================================
WHAT WORKS (The Patterns That Actually Succeed)
================================================================================

Pattern 1: Staff Tool + Human Review
Use: AI for first drafts of emails, grant applications, policy language. Staff reviews and edits before anything goes out.
Risk: Low. You control the output.
Reality: Saves 3-5 hours per week of staff time. No vendor lock-in. No patron data collected. Clear liability (staff owns what they send).
This works. Do this.

Pattern 2: Narrow Scope + Clear Limits + Human Backup
Use: After-hours reference help via chatbot. Limited to FAQs, hours, basic policy questions. Complex questions escalate to a human.
Requirements: Clear disclosure that the user is talking to AI. Easy path to human help. Regular review of chat logs for errors.
Reality: 88% user satisfaction when done right. Extends coverage without replacing staff. Errors get caught and corrected.
This works. This is the right way to do patron-facing AI.

Pattern 3: Specific Problem + Evaluation Metrics + 90-Day Pilot
Use: Anything you're uncertain about. Design a 90-day pilot, measure specific outcomes, decide after, not before.
Requirements: Clear success criteria before launch. Documentation throughout. Honest assessment after.
Reality: Prevents long-term mistakes. Catches problems early. Gives you an exit ramp if it doesn't work.
This works. Do this for anything new.

Pattern 4: Don't Deploy Anything You Can't Support Forever
Rule: If you're adopting a vendor tool, assume you'll be using it forever. Prices increase and features change, but you won't leave, because switching costs are too high.
Reality: Most vendor AI falls into this category. Only adopt what you can live with as permanent infrastructure.
This works. It's not exciting, but it works.
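Pattern 3's discipline, criteria fixed in writing before launch, verdict computed from measurements at day 90, fits in a few lines. Everything below (metric names, thresholds, measured values) is a hypothetical placeholder, not data from any real pilot:

```python
# Pattern 3 in miniature: success criteria are fixed BEFORE launch, and
# the day-90 decision comes from comparing measurements against them.
# All metric names and numbers are hypothetical examples.

def evaluate_pilot(criteria: dict[str, float],
                   measured: dict[str, float]) -> str:
    """Compare measured outcomes against pre-launch minimums."""
    missed = [name for name, minimum in criteria.items()
              if measured.get(name, 0.0) < minimum]
    if not missed:
        return "KEEP: all pre-launch criteria met; formalize it."
    return "KILL or FIX: missed " + ", ".join(missed)

# Written down before launch:
criteria = {
    "answer accuracy in weekly log review (%)": 90.0,
    "escalations reaching a human (%)": 95.0,
    "staff hours saved per week": 3.0,
}

# Measured over the 90 days:
measured = {
    "answer accuracy in weekly log review (%)": 92.5,
    "escalations reaching a human (%)": 88.0,
    "staff hours saved per week": 4.0,
}

print(evaluate_pilot(criteria, measured))
```

The design choice that matters is that `criteria` is frozen before launch; if you let yourself pick thresholds after seeing the numbers, "I'll know it when I see it" sneaks back in.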
================================================================================
WHAT DOESN'T WORK (The Patterns That Fail)
================================================================================

Pattern 1: "Autopilot" AI (No Human Review)
Mistake: Trusting the AI to make decisions without human oversight.
Reality: 26% accuracy on cataloging metadata. Bad metadata propagates across systems. Cleanup costs more than doing the work manually.
Don't do this.

Pattern 2: Multi-Year Contracts with No Exit
Mistake: Signing a three-year deal with auto-renewal and no data portability.
Reality: Year one feels fine. Year two you're trapped. Year three you're being extracted. You can't leave because switching costs are $100K+ and you've lost your data.
Don't do this.

Pattern 3: Deploying Without Testing for Bias
Mistake: Assuming the vendor tested for bias against your specific community.
Reality: Their testing covered generic English-language use cases. Your community's specific needs weren't tested. Recommendation algorithms reinforce stereotypes. Marginalized populations get worse service from AI than from human librarians.
Don't do this.

Pattern 4: Patron-Facing Without Legal Review
Mistake: Launching a chatbot, assuming it's fine, and dealing with liability if it breaks.
Reality: Moffatt v. Air Canada (2024) established that organizations are liable for AI misinformation. "The AI did it" is not a legal defense. Your insurance might not cover it. You need written confirmation from legal counsel and your insurance carrier before launch.
Don't do this.

================================================================================
IF YOU'RE GOING TO DO THIS, DO IT RIGHT
================================================================================

Step 1: Define the actual problem.
Not "we need AI." Not "we're falling behind." Not "the board says so." The actual, specific problem. "The reference desk closes at 9pm and students need help until 2am." That's specific. That works.
Step 2: Calculate the total cost.
Licensing + staff training + staff time for implementation + staff time for ongoing maintenance + staff time for monitoring + staff time for handling escalations + liability insurance + legal review + contingency. Be honest about overhead.

Step 3: Get legal and insurance on board.
Email them the proposal. Ask specifically: "Does our liability insurance cover this? Are there gaps? What documentation do you need to confirm coverage?" Get written answers. Document it.

Step 4: Run the decision tree.
If you can't answer yes to all the questions, don't proceed.

Step 5: Design a 90-day pilot.
Not forever. Not "let's see how it goes." Ninety days. Specific success metrics. Documentation. Plan for evaluation before launch.

Step 6: Launch with transparency.
Staff and patrons need to know they're interacting with AI. Not buried in fine print. Obvious. "This is an AI chatbot. It can make mistakes. Here's how to talk to a human." Easy escalation to actual help.

Step 7: Monitor, document, decide.
Review logs weekly. Catch errors early. After 90 days, evaluate against your metrics. Keep it, kill it, or formalize it. All three are valid outcomes. Killing it is success if it prevents long-term harm.

================================================================================
THE PRACTITIONER'S POSITION
================================================================================

I've been inside vendor companies. I've watched them go from solving library problems to extracting from libraries. I've seen the pattern repeat across four industries over twenty years.

The pattern: Start helpful. Gain market share. Optimize for extraction. Lock libraries in. They can't leave. Revenue per customer increases indefinitely. Mission drifts away.

This is what's happening with OverDrive. This is what happened to Baker & Taylor before they collapsed. This is what will happen to the next vendor.
The practitioner's position is: Know when a tool serves you and when it captures you. Engage when engagement makes sense. Refuse when refusal makes sense. Build something else when neither works.

That's it. That's the whole framework. You don't need permission. You don't need the profession to resolve its discourse about AI. You need to make a decision that works for your library and your community. That's what practitioners do.

================================================================================
THE BOTTOM LINE
================================================================================

67% of libraries are exploring AI. 7% have implemented it. 93% of libraries are still figuring out if it's worth the risk.

You're not behind. You're thinking clearly.

What works: Narrow scope. Clear limits. Human review. Specific problems. Measurable outcomes. 90-day pilots. Legal review. Insurance confirmation.

What doesn't: Autopilot. Vendor lock-in. Bias blindness. No evaluation plan. Patron-facing without legal review.

Start small. Test in the wild. Document everything. Share what you learn. The profession needs honest data about what works and what doesn't. Be part of the evidence base, not the hype.

================================================================================
END OF FIELD GUIDE
January 2026
================================================================================