**AI in U.S. Public Libraries**

A Practical Assessment for Library Directors

*January 2026*

# **Executive Summary**

**Current State**

67% of libraries are exploring AI; only 7% are actually implementing it. U.S. librarian optimism is remarkably low (7%) compared to Asia (27-31%). Budget has overtaken expertise as the top barrier (62%). Most libraries remain at the earliest evaluation stage.

**What Works**

The copilot model shows promise: AI augments human judgment rather than replacing it. OCLC's AI cataloging tools saved roughly 20 minutes per record when paired with human verification, though accuracy varies by material type. After-hours reference chatbots using retrieval-augmented generation (RAG) show 88% patron satisfaction in academic settings.

**Key Risks**

* **Legal liability is real:** Moffatt v. Air Canada (2024) established that organizations are liable for chatbot misinformation. "AI did it" is not a defense.
* **Vendor concentration:** Baker & Taylor's collapse left 5,000+ libraries scrambling. Ingram dominates print; OverDrive dominates digital; OCLC dominates cataloging. Single points of failure.
* **AI-generated content:** Already in library collections through Hoopla and OverDrive. Neither vendor has robust AI detection; both rely on publisher self-disclosure.
* **Training infrastructure is weak:** 82% of librarians believe AI literacy is critical; only 24% feel prepared to teach it. No public library-specific competencies exist yet.

**What We Don't Know**

Patron preferences (no comprehensive survey exists), small library feasibility (almost no research), failure cases (undocumented), and cost-benefit analysis (few rigorous studies). Directors are making decisions with incomplete evidence.

**Bottom Line**

Cautious experimentation with clear evaluation criteria is more appropriate than either wholesale adoption or blanket refusal. Plan for 2-3 year technology cycles, not 5-10 year horizons.
Prioritize staff training over technology purchases. Document outcomes - the profession needs honest data.

# **1. Current Adoption Landscape**

According to the Clarivate Pulse of the Library 2025 report, 67% of libraries are now exploring or implementing AI tools, up from 63% in 2024. However, only 7% are actually implementing; 35% remain at the earliest evaluation stage. The U.S. lags behind Asia and Europe in both adoption and optimism - only 7% of U.S. librarians express optimism about AI benefits, compared to 27-31% in Asia.

Budget has overtaken expertise as the top barrier (62%, up from 56% in 2024). For public libraries specifically, privacy and security remain the top concern at 65%. Collection librarians show the most skepticism, with 35% expressing pessimism about AI benefits.

Libraries are more likely to be in active implementation when AI literacy is part of formal training (28%), librarians have dedicated time and resources (23.3%), or managers actively encourage development (24.2%). Successful adoption requires institutional commitment beyond simply purchasing tools.

# **2. What's Working and What's Not**

## **Documented Successes**

The copilot model - where AI augments rather than replaces human judgment - has shown promise. OCLC's AI cataloging tools (integrated into WorldShare Record Manager and Connexion) paired librarians with AI for cataloging tasks. Accuracy varies by material type and requires human review, but human-AI collaboration saved approximately 20 minutes per record while maintaining quality through human verification.

San José State University's KingbotGPT, launched September 2024, provides after-hours reference assistance using retrieval-augmented generation (RAG). Post-use surveys showed 88% of students agreed the chatbot provided relevant information.
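For directors unfamiliar with the term, the RAG pattern behind chatbots like KingbotGPT can be sketched in a few lines: retrieve the most relevant library document for a query, then ground the answer in it. The mini-corpus, keyword-overlap scoring, and answer template below are illustrative placeholders only; production systems index the library's real FAQ and policy pages, use embedding-based search, and hand the retrieved passage to an LLM for generation.

```python
import re

# Illustrative mini-corpus; a real deployment would index the library's
# actual FAQ pages, guides, and policy documents.
CORPUS = {
    "hours": "Library hours are Monday through Saturday, 9 a.m. to 8 p.m.",
    "ill": "Interlibrary loan requests are processed within 5 business days.",
    "cards": "Library cards are free for county residents with proof of address.",
}

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str) -> tuple[str, str]:
    """Return the (doc_id, text) pair with the greatest word overlap."""
    q = tokens(query)
    return max(CORPUS.items(), key=lambda item: len(q & tokens(item[1])))

def answer(query: str) -> str:
    doc_id, text = retrieve(query)
    # A production RAG pipeline would pass `text` to an LLM as grounding
    # context; quoting it directly keeps this sketch dependency-free.
    return f"According to our records ({doc_id}): {text}"

print(answer("What are your hours on Monday?"))
```

The point of the pattern is that answers are tied to documents the library controls, which is why RAG chatbots hallucinate less than a bare LLM - though, as the liability section below notes, they still require human oversight.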
Other documented applications include AI-powered workshops for resume writing and career mapping, translation tools for multilingual patron services, and accessibility tools such as text-to-speech for patrons with disabilities.

## **Documented Failures**

AI chatbots struggle with queries requiring critical thinking or subject matter expertise. A 2025 study in Library Resources & Technical Services found that ChatGPT, Gemini, and Copilot performed poorly on cataloging tasks - particularly classification number assignment. Frequent errors included overly broad numbers and numbers for incorrect topics.

Initial trials of library chatbots demonstrated higher error rates for non-native English speakers. Data bias remains a concern: without proactive testing and human oversight, AI systems could provide discriminatory recommendations or reinforce stereotypes. The proprietary nature of commercial AI software precludes transparency about training data and response generation.

# **3. Legal and Liability Considerations**

## **The Moffatt Precedent**

In Moffatt v. Air Canada (2024), a British Columbia tribunal found Air Canada liable for incorrect information provided by its AI chatbot. The airline argued the chatbot was a "separate legal entity responsible for its own actions"; the tribunal rejected this, ruling that companies remain responsible for all information on their websites, whether from static pages or chatbots. This case established a clear precedent: "AI did it" is not a defense.

In New York City, a government chatbot deployed to help navigate municipal services provided advice that was both incorrect and unlawful - potentially exposing users to fines or legal consequences. The FTC settled with DoNotPay, Inc. over an AI chatbot marketed as a "robot lawyer" that generated legal documents without validation.
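Commercial precedents like these suggest concrete guardrails that can be built into any patron-facing bot. A minimal, hypothetical sketch - the FAQ table, keyword matching, and wording are placeholders, not a vendor's actual product - showing three common safeguards: disclose that the user is talking to AI, log every exchange for staff review, and escalate anything the bot cannot answer rather than letting it improvise.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("library-chatbot")

# Safeguard 1: every response carries an AI disclosure and an
# "informational only" disclaimer.
DISCLOSURE = ("You are chatting with an automated assistant. "
              "Answers are informational only; staff can confirm details.")

# Hypothetical FAQ table standing in for a real knowledge base.
FAQ = {
    "hours": "We are open 9 a.m. to 8 p.m., Monday through Saturday.",
    "renew": "You can renew items online or by phone.",
}

def reply(query: str) -> str:
    """Answer from the FAQ if possible; otherwise hand off to a human."""
    for keyword, canned_answer in FAQ.items():
        if keyword in query.lower():
            response = canned_answer
            break
    else:
        # Safeguard 2: escalate unknown topics instead of improvising.
        response = ("I'm not sure - I've forwarded your question "
                    "to library staff.")
    # Safeguard 3: audit trail so staff can monitor for bad outputs.
    log.info("query=%r response=%r", query, response)
    return f"{DISCLOSURE}\n{response}"

print(reply("When do your hours start?"))
print(reply("Can I dispute a fine from 2019?"))
```

None of this removes liability - Moffatt suggests nothing does - but disclosure, logging, and escalation are the mechanisms most of the 2025 state chatbot laws expect to see.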
## **Implications for Libraries**

If a library chatbot provides incorrect information - wrong hours, inaccurate policy guidance, or misleading research assistance - the library may bear liability. At least six states enacted AI chatbot laws in 2025, commonly requiring disclosure that users are interacting with AI, safeguards for sensitive use cases, and accountability for misinformation.

Common risk-management practices include: disclosing that users are interacting with AI; clarifying that responses are informational; escalating complex inquiries to human staff; monitoring logs for problematic outputs; and reviewing vendor contracts for indemnification, liability caps, and data handling. No library-specific case law exists yet, but commercial precedents are instructive. Directors should consult legal counsel before deploying patron-facing AI.

# **4. Vendor Risks and Alternatives**

## **The Baker & Taylor Collapse**

In November 2025, Baker & Taylor - one of the two dominant library book distributors for over a century - abruptly ceased operations. An August 2024 ransomware attack had already forced the company to operate with pen and paper for months, and the private equity owner (Francisco Partners) declined to invest in full recovery. More than 5,000 libraries relied on B&T for shelf-ready books. As of January 2026, many libraries report being unable to get the newest releases - some have nothing newer than September 2025 titles.

This was not an AI-driven disruption - it was private equity mismanagement, bad luck, and thin profit margins. But it reveals how dependent libraries are on a small number of vendors. Ingram is scaling up but notes that absorbing 2,000+ new library accounts "is not a flip of a switch." For digital content, OverDrive/Libby dominates. For cataloging, OCLC is essentially a monopoly.

## **AI Content in Collections**

AI-generated content has infiltrated library digital collections through platforms like Hoopla and OverDrive.
A February 2025 exposé confirmed what many librarians suspected: AI-generated nonfiction has included foraging guides with potentially lethal misinformation, health advice with dangerous errors, and legal guides unfit for use. OverDrive's policy: "We don't exclude titles created with AI tools from the catalog, we ask that publishers self-identify AI content." Hoopla pledged to remove poor-quality AI content but relies on "industry metadata standards" that depend on publishers labeling their work. Neither vendor has robust AI detection. Some libraries have updated collection development policies explicitly to exclude AI-generated content.

## **What Libraries Can Do**

* **State legislation:** Connecticut signed landmark ebook legislation into law in 2023 (SB 3, effective October 1, 2023) prohibiting contracts that restrict interlibrary loan, limit license purchases, or impose confidentiality clauses. Similar legislation is being considered in at least eight states.
* **Open source alternatives:** The Palace Project offers a vendor-agnostic ebook app integrating content from multiple platforms. ByWater Solutions added 196 library partners in 2024 and now supports more than 1,600 libraries on Koha.
* **Ownership models:** DPLA and the Independent Publishers Group announced a model giving libraries actual ownership rights to ebooks - including rights to interlibrary loan and format migration. Tens of thousands of titles are now available under ownership terms.
* **Collective action:** Individual libraries cannot solve vendor monopoly problems alone. Library Futures, the ReadersFirst coalition (300 libraries serving 200 million readers), and state library associations are advocating for systemic change.

# **5. Workforce Implications**

Only 24% of librarians feel fully capable of engaging with AI tools. A 2024 survey found 43% of U.S. academic libraries offered zero AI guidance to staff.
The Clarivate 2025 report found that 56% recognize AI will require significant upskilling, and formal training correlates with higher confidence.

## **What Gets Automated First**

Based on current AI capabilities and library workflow analysis, the following tasks are most vulnerable: basic reference queries, cataloging and metadata creation, circulation desk functions, administrative processing, first-pass collection development recommendations, and simple reader's advisory for popular genres.

## **What Remains Human (For Now)**

Certain functions appear more resistant to automation: complex research consultations requiring contextual judgment; community programming and relationship building (storytime, author events, partnerships); information literacy instruction, especially AI literacy education; services requiring empathy (reader's advisory for sensitive topics, social services navigation, crisis response); and culturally sensitive guidance reflecting community values.

## **The Pipeline Problem**

Library assistants and paraprofessionals - often the entry point into the profession - face the steepest displacement risk. Budget-pressured libraries will likely use AI to avoid filling these positions first. If entry-level positions disappear, the profession loses its pathway for developing future leadership. Goldman Sachs research shows unemployment among 20-30 year-olds in tech-exposed occupations has risen almost 3 percentage points since early 2025.

The Clarivate 2025 survey found 72% of patrons still prefer human assistance for complex research queries. This preference may persist - or may erode as AI improves.

# **6. Training Landscape**

Directors seeking AI training for themselves and staff face a fragmented landscape with significant gaps.

## **What Exists**

* **Professional standards:** ACRL AI Competencies for Academic Library Workers was approved in October 2025. Written for academic librarians but applicable to all.
* **National task forces:** PLA launched a Transformative Technology Task Force in December 2025 focused on AI. In a 2025 survey, AI was one of the top five priority areas requested for professional development.
* **Free conferences:** GAIL (Generative AI in Libraries) is a free virtual conference launched in 2024, entering its third year, with recordings available on YouTube.
* **Webinars:** WebJunction offers free webinars including "AI and Libraries" and collection development policy guidance.
* **State/regional programs:** The SLAAIT (State Libraries and AI Technologies) Working Group includes 18 state libraries. New Jersey's LibraryLinkNJ AI Ambassadors program trained 20 librarians who then offered numerous trainings reaching 750+ library staff.
* **Commercial options:** Galecia Group's PLAID platform provides cloud-based AI training with hands-on practice.

## **Critical Gaps**

* **No public library-specific competencies:** ACRL's framework is for academic librarians. PLA's task force just launched and hasn't produced standards yet.
* **State coverage is patchy:** Only 18 of 50 states participate in SLAAIT. If your state library isn't involved, you're largely on your own.
* **Small/rural library training is essentially nonexistent:** Every documented program (SF Public Library AI Labs, NYPL, Palo Alto, NJ Ambassadors) is urban/suburban. Small and rural libraries face acute challenges with limited resources.
* **No hands-on practice environments at scale:** Most training is passive webinars. Galecia's PLAID is the exception, but it costs money.
* **No train-the-trainer curriculum:** The NJ Ambassadors model worked but hasn't been replicated nationally.
* **No certification or credentialing:** Unlike other library skills, there's no AI badge, certificate, or continuing education pathway.
* **Patron programming vs. staff competency conflated:** Training librarians to run AI workshops for patrons is different from training them to evaluate AI tools for library operations. Most resources blur this distinction.
* **The readiness gap is massive:** 82% of librarians believe AI literacy is critical; only 24% feel prepared to teach it.

## **Practical Recommendations**

1. Check whether your state library participates in SLAAIT or offers AI training.
2. Watch GAIL conference recordings (free on YouTube) for practical examples.
3. Join peer networks: AIRUS Interest Group, AI Community of Practice Discord, ACRL AI Interest Group.
4. Consider the NJ Ambassadors model: train a small cohort who then train others.
5. Don't wait for perfect infrastructure - peer learning and informal experimentation may be more valuable than formal training that doesn't exist yet.

# **7. Research Gaps**

Directors making decisions now are operating with incomplete evidence. Key unknowns:

* **Patron preferences:** No comprehensive public library patron survey exists on AI services.
* **Small library feasibility:** Almost no research on one-librarian or small library implementation.
* **Failure cases:** Libraries that abandoned AI - and why - are undocumented.
* **Cost-benefit analysis:** Few studies calculate total cost versus measurable service improvements.
* **Training effectiveness:** Do current programs actually improve outcomes? Nobody is measuring.

This uncertainty is itself relevant. Cautious experimentation with clear evaluation criteria may be more appropriate than either wholesale adoption or blanket refusal.

# **8. Key Questions for Directors**

1. What specific problem am I trying to solve?
2. Do I have staff capacity to evaluate, implement, and maintain AI tools?
3. What are my patrons' actual preferences? Have I asked them?
4. What data will vendors collect, and does that align with patron privacy expectations?
5. What's the total cost - licensing, training, maintenance, troubleshooting?
6. What are the liability implications of patron-facing AI? Have I consulted legal counsel?
7. Is there a pilot path that doesn't commit to full adoption?
8. What happens if I wait 12-24 months?

# **9. Decision Framework**

Before adopting any AI tool, work through this evaluation rubric:

| Question | What to Look For |
| :---- | :---- |
| What problem does this solve? | Specific, measurable pain point - not "we should have AI." If you can't name the problem clearly, don't proceed. |
| What's the total cost? | Include licensing, staff training time, ongoing maintenance, troubleshooting. Hidden costs often exceed purchase price. |
| What happens when it fails? | All AI systems produce errors. What's your fallback? Who monitors? What's the patron experience when it breaks? |
| Can we exit if needed? | Data portability, contract terms, vendor lock-in. Avoid multi-year commitments for rapidly changing technology. |
| Do we have staff capacity? | Not just to implement, but to maintain, troubleshoot, and train others. Be honest about current workload. |
| What will we stop doing? | Every new initiative requires time. If nothing is being cut, something will be neglected. |
| What's the liability exposure? | Patron-facing AI creates organizational liability. Have you consulted legal counsel? |
| How will we measure success? | Define metrics before launch. Without evaluation criteria, you can't know if it worked. |

# **10. Planning Recommendations**

* **Plan for 2-3 year technology cycles, not 5-10 year horizons.** Current AI tools will likely be obsolete before traditional library planning cycles complete.
* **Prioritize vendor contracts with exit clauses and clearly defined data portability terms.** Avoid multi-year lock-ins.
* **Acknowledge vendor concentration risk you cannot solve alone.** This is a systemic problem requiring collective action through state library associations, ALA, and consortia - not individual library decisions.
* **Budget for ongoing training, not one-time implementation.** Staff skills, not technology purchases, will determine success.
* **Preserve entry-level positions even if AI could theoretically replace them.** The professional pipeline matters more than short-term efficiency.
* **Document everything.** The profession needs failure cases and success cases equally. Share what you learn.

# **11. A Note on Source Quality**

The evidence base for AI in libraries is uneven. Stronger sources include the Clarivate Pulse of the Library survey (2,000+ librarians across 109 countries), the OCLC AI cataloging study (time-savings metrics from controlled testing), and peer-reviewed studies in Library Resources & Technical Services. Weaker sources requiring caution include claims about patron preferences from unnamed surveys, percentage estimates without named institutions or methodology, and vendor-sponsored content. Where weaker sources are cited, this review notes the limitation. Directors should weight claims accordingly and seek local data where possible.

# **Key Statistics at a Glance**

| Metric | Value |
| :---- | :---- |
| Libraries exploring/implementing AI | 67% (Clarivate 2025) |
| Actually implementing | 7% |
| U.S. librarian optimism | 7% (vs. 27-31% in Asia) |
| Budget as top barrier | 62% |
| Privacy/security concern (public) | 65% |
| Librarians feeling AI-capable | 24% |
| Believe AI literacy critical | 82% |
| OCLC AI cataloging time saved | ~20 min/record (with human review) |
| States in SLAAIT working group | 18 of 50 |

*Last updated: January 2026*

*This document will require revision as AI capabilities, regulations, and library practices evolve.*