AI in 15 Minutes: What Library Staff Actually Need to Know
- "AI" in your library is not one thing. The chatbot on your website, the AI suggestions in your cataloging tool, and ChatGPT are three completely different technologies with different risks.
- 67% of libraries worldwide are exploring or implementing AI. 33% are actively implementing. This is happening whether you're ready or not.
- The real risks are patron data privacy, training data bias, vendor lock-in, and labor displacement. Not sentient robots.
- Neither blanket refusal nor uncritical adoption serves your community. Informed critical engagement does.
- You can get up to speed in an afternoon. This article is your starting point.
Let's skip the part where I explain AI using a metaphor about a brain or a filing cabinet.
You're library staff. You don't need a metaphor. You need to know what AI is actually doing in your building, what questions your patrons and board will ask, and what you should actually be worried about. That's what this is.
If someone sent you a 47-slide vendor deck about "AI-powered library transformation," this is the antidote.
Part 1: "AI" Is Not One Thing (Stop Saying It Like It Is)
The single biggest problem with AI conversations in libraries is that everyone uses the same word to mean completely different things. Your board chair, your vendor rep, and your tech-savvy patron are all saying "AI" and meaning three different technologies.
This is what actually exists in library land:
Rule-Based Systems (Not Really AI)
That Springshare LibAnswers chatbot on your website? It follows a decision tree that your staff programmed. A patron types "hours" and it responds with your hours. There's no intelligence. There's no learning. It's an elaborate if/then statement wearing a chat bubble costume.
Risk level: Low. It can only say what you told it to say. The main risk is it gives outdated info because nobody updated the script after holiday hours changed.
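The whole mechanism fits in a few lines. Here's a minimal sketch of such a scripted chatbot; every keyword and reply is made up for illustration, and a real product like LibAnswers is just a much bigger, staff-maintained version of this lookup table:

```python
# Minimal sketch of a rule-based "chatbot": a keyword-to-response table.
# Keywords and replies below are hypothetical.

RESPONSES = {
    "hours": "We are open Mon-Sat, 9am-8pm.",
    "renew": "Renew items online or call the circulation desk.",
    "wifi": "Ask at the front desk for the guest Wi-Fi password.",
}

def chatbot_reply(message: str) -> str:
    """Return the first scripted response whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RESPONSES.items():
        if keyword in text:
            return reply
    # No rule matched: no intelligence, no learning, just a canned fallback.
    return "Sorry, I don't understand. Please contact a librarian."

print(chatbot_reply("What are your hours today?"))  # prints the scripted hours reply
```

If nobody updates `RESPONSES` after holiday hours change, the bot confidently gives the old answer, which is exactly the risk described above.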
Machine Learning Classifiers (Actual AI, Narrow Scope)
These are trained on specific datasets to do specific tasks. OCLC's de-duplication system that matches bibliographic records across WorldCat? Machine learning. Ex Libris's discovery algorithms that rank search results? Machine learning. These tools learn from patterns in data, but they can only do the one thing they were trained to do.
Risk level: Moderate. They can embed biases from their training data (if the training data over-represents certain subjects or perspectives, so will the output). But they're not generating content or making decisions outside their narrow domain.
Generative AI / Large Language Models (The Noisy One)
ChatGPT, Claude, Gemini. These are the ones making headlines and causing board members to forward you articles from the Wall Street Journal. They generate new text, code, and images by predicting probable next tokens based on massive training datasets.
Some library tools are starting to use generative AI under the hood. OCLC's new cataloging features suggest Dewey numbers and subject headings. Ex Libris's Primo Research Assistant answers natural language queries using your library's resources. These are real products shipping right now.
Risk level: Higher. These systems can hallucinate (generate confident-sounding nonsense), they're trained on data that includes biases, they may retain information entered into them, and their decision-making is opaque.
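"Predicting probable next tokens" sounds abstract, but a toy version makes it concrete. This sketch builds a bigram model from a few words of made-up text; real LLMs do the same core move (pick a likely continuation given what came before) with billions of parameters instead of a tally:

```python
# Toy illustration of next-token prediction: a bigram model.
# The "corpus" is invented; the point is the mechanism, not the scale.

from collections import Counter, defaultdict

corpus = "the library is open the library is closed the catalog is open".split()

# Count which word follows which word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(word: str) -> str:
    """Return the most frequently observed word after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(next_token("library"))  # "is" -- the only word ever seen after "library" here
```

Notice the model has no concept of truth: it emits whatever is statistically likely given its training text. Scale that up and you get fluent output, including fluent hallucinations.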
The point: When someone says "We should use AI" or "We should ban AI," ask them which kind. The answer changes everything.
Part 2: What AI Is Actually Doing in Libraries Right Now
Not what vendors promise it could do. Not what the conference keynote imagined. What's actually deployed and operational right now.
Cataloging and Metadata
OCLC added AI features to WorldShare Record Manager and Connexion in December 2025 that auto-suggest Dewey Decimal numbers, LC Classification numbers, and LC Subject Headings as catalogers create records. Pilot testers reported saving up to 20 minutes per title.
Ex Libris is rolling out an AI Metadata Assistant for Alma (general availability expected early 2026) that enriches bibliographic records to improve quality and discoverability.
The catch: The Library of Congress's own experiments found that large language models scored 26% accuracy on predicting LC Subject Headings and 35% on subject classification. The Ohio Library Council Technical Services Division put it plainly: human review remains critical. These tools generate drafts, not finished records. If your catalogers are being told to trust the AI output without review, that's a problem.
Discovery and Research
Primo Research Assistant (Ex Libris) lets users ask natural language questions and get curated academic sources from the Central Discovery Index. It uses retrieval-augmented generation (RAG), meaning it pulls from your library's actual resources rather than making things up from its training data. Available to all Primo customers now.
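The RAG pattern is worth understanding because it's the main reason these tools hallucinate less than raw ChatGPT. A rough sketch, with a hypothetical three-record "index" and naive word-overlap scoring standing in for a real discovery index:

```python
# Sketch of retrieval-augmented generation (RAG): retrieve from the library's
# own collection first, then constrain the model to answer only from what was
# retrieved. The tiny index and scoring below are illustrative stand-ins.

INDEX = {
    "doc1": "Open access publishing models in academic libraries.",
    "doc2": "Metadata quality and discoverability in Alma.",
    "doc3": "Patron privacy policies for public computing.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        INDEX.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str) -> str:
    """The LLM is instructed to answer ONLY from retrieved sources."""
    sources = "\n".join(f"- {INDEX[d]}" for d in retrieve(query))
    return f"Answer using only these library sources:\n{sources}\nQuestion: {query}"
```

The grounding step is why a RAG answer can cite your actual holdings, and also why its quality depends entirely on what the retrieval step surfaces.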
Collection Analysis and Resource Sharing
AI algorithms are analyzing usage patterns to inform collection development decisions. OCLC uses AI to match interlibrary loan requests with optimal lending libraries. These are largely invisible to staff and patrons, running as backend optimizations.
What's NOT AI (Even If the Vendor Says It Is)
Vendors love slapping "AI-powered" on everything. Check the fine print. Many library "chatbots" are scripted decision trees. Many "smart" recommendations are basic collaborative filtering (people who checked out X also checked out Y) that's existed since Amazon in 1998. If the vendor can't explain what model it uses and what data it trains on, it's probably marketing, not AI.
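To see how unremarkable "people who checked out X also checked out Y" really is, here's the whole technique: co-occurrence counting over borrowing histories. The checkout data is invented for illustration:

```python
# "Also checked out" recommendations are co-occurrence counting, not machine
# learning. Each inner list is one (hypothetical) patron's borrowing history.

from collections import Counter

CHECKOUTS = [
    ["dune", "foundation", "hyperion"],
    ["dune", "foundation"],
    ["dune", "neuromancer"],
]

def also_checked_out(title: str) -> list[str]:
    """Rank other titles by how often they co-occur with `title`."""
    counts = Counter()
    for history in CHECKOUTS:
        if title in history:
            counts.update(t for t in history if t != title)
    return [t for t, _ in counts.most_common()]

print(also_checked_out("dune"))  # "foundation" ranks first: it co-occurs twice
```

If a vendor's "AI recommendations" reduce to something like this, that's fine as a feature, but it shouldn't command an AI-sized price premium.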
Part 3: The Real Risks (Not the Science Fiction Ones)
Nobody at your library needs to worry about artificial general intelligence, the singularity, or robots taking over. These are the risks that actually matter:
1. Patron Data Privacy
This is the big one. Public generative AI tools can't guarantee deletion or non-retention of submitted information. Anything entered may be used to refine the model. If a staff member types a patron's reference question into ChatGPT, that question may persist in training data indefinitely.
Even vendor-provided AI features may have inadequate privacy protections. Essential contract clauses to look for:
- No training on your data: The vendor must not use your library's data to train or improve their AI models.
- No commingling: Your data stays separate from other customers' data.
- No retention: Queries and results are not stored beyond the session.
- Audit rights: You can verify compliance.
- Breach notice: You're told if something goes wrong.
If your vendor contract doesn't address these, fix it before your next renewal. The Vendor Contract AI Clauses guide has the language you need.
2. Training Data Bias
AI models are trained on large datasets that reflect existing societal biases. When OCLC's AI suggests subject headings, those suggestions carry the biases of the training data, which itself reflects centuries of biased cataloging practices. LC Subject Headings have well-documented problems with how they represent marginalized communities. AI trained on those headings will reproduce and potentially amplify those problems.
This isn't some future risk. It's happening right now. Every AI-generated metadata suggestion should be reviewed through the same critical lens you'd apply to any cataloging decision.
3. Vendor Lock-in and Opacity
Ex Libris, OCLC, and other vendors are building AI features on proprietary platforms. You can't audit how these tools make decisions, what data they retain, or how they weight results. If your discovery system's AI starts deprioritizing certain types of content, you might not even know it's happening, let alone be able to fix it.
It's the same vendor dependency problem libraries have always faced, but with higher stakes because the decision-making is less transparent.
4. Labor Displacement
In July 2025, OCLC reduced its workforce by approximately 80 positions, explicitly citing AI as a factor. Collection librarians are the least optimistic about AI's impact, with 35% expressing pessimism.
The realistic threat isn't that AI replaces librarians wholesale. The Library of Congress's 26% accuracy rate on subject headings tells you the technology is nowhere near that. The threat is role compression: administrators using AI as justification to not replace departing staff, to consolidate positions, or to shift professional work to paraprofessional classifications. "The AI can handle the first pass" becomes "we don't need as many catalogers."
Fight this by documenting the expert judgment AI can't replicate and the error rates it produces without human oversight.
5. Environmental Cost
Training a single large generative AI model can consume up to 1,287 MWh of electricity (enough to power 120 US homes for a year) and generate over 550 tons of CO2. A typical large AI data center uses millions of gallons of fresh water annually for cooling.
That doesn't mean never use AI. It means factor environmental cost into your evaluation the same way you'd consider any other institutional resource consumption. If a vendor can't tell you about the environmental footprint of their AI features, add that to your list of questions.
Part 4: The Refusal Debate (And Why Both Sides Are Partly Right)
You may have encountered Kay Slater's "Against AI: Critical Refusal in the Library" (Library Trends, May 2025), which won the 2025 Library Juice Paper Contest. Slater argues that generative AI embeds racism and bias, violates user privacy, causes environmental harm, and that libraries should refuse it entirely as a demonstration of professional values.
Violet Fox expanded this into a 32-page zine, AI Refusal in Libraries: A Starter Guide, using the ALA Code of Ethics as a framework for why AI adoption is antithetical to library values.
Their diagnoses of the problems are largely correct. AI does embed biases. Vendor data practices are often inadequate. The environmental costs are real. The labor concerns are legitimate.
But I part ways on the solution: blanket refusal isn't a strategy. It's an exit.
Michael Ridley's response makes the strongest counter-argument: even Emily Bender and Alex Hanna, two of AI's most prominent critics, call for "strategic refusal" but devote most of their book The AI Con to resistance and reform. Those are fundamentally different from walking away.
Norah Mazel's research on generative AI and information privilege puts a finer point on it: AI tools are already reshaping who has access to information. Patrons who can afford personal AI subscriptions gain research advantages. When libraries refuse to engage, they widen the gap between information haves and have-nots. That's the exact inequity the profession exists to counter.
The productive position is uncomfortable but honest:
- Refuse the hype. Demand evidence.
- Refuse inadequate privacy protections. Demand contract language that protects patrons.
- Refuse uncritical adoption. Evaluate AI tools with the same rigor you'd apply to any resource.
- But don't refuse to understand. You can't effectively critique what you won't engage with.
- And don't refuse to serve. Your patrons are using AI whether you approve or not. They need your expertise to use it well.
Librarians are specifically positioned as trusted professionals who can counter vendor hype and push for responsible AI. Abdicating that role doesn't protect your community. It abandons them to the vendors and the boosters.
Part 5: What to Actually Do This Week
You don't need to become an AI expert. You need to become an informed professional who can make good decisions for your community. In order of priority:
Today (30 minutes)
- Find out what AI your library already uses. Ask your IT department or systems librarian: "What tools in our current stack use AI or machine learning?" You might be surprised. Your ILS, discovery layer, or link resolver may already include AI features nobody chose or evaluated.
- Try a generative AI tool yourself. Use ChatGPT or Claude to answer a reference question you know the answer to. Evaluate the output critically. Note what it gets right, what it gets wrong, and what it presents confidently but incorrectly. This 10-minute experiment is worth more than any conference presentation.
This Week (2 hours)
- Read one foundational article. The ACRL Tips and Trends on AI Developments is written for librarians, not technologists. It covers the landscape without the vendor pitch.
- Review your vendor contracts for AI clauses. Look for data retention, training on patron data, and opt-out provisions. If the contract pre-dates 2024, it almost certainly doesn't address AI adequately. The Vendor Contract AI Clauses guide on this site has the specific language to look for.
This Month (Half a Day)
- Develop a personal AI policy. Decide what you will and won't enter into AI tools. Rule one: never enter patron PII. Rule two: never enter confidential institutional data. Write it down. Share it with colleagues.
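If you want to make rule one mechanical rather than aspirational, even a crude pre-submission check helps. This sketch flags text containing obvious identifier patterns before it gets pasted into an AI tool; the patterns are illustrative and deliberately incomplete (no regex catches all PII, and the card-number length here is an assumption), so the real rule stays "when in doubt, don't paste it":

```python
# Rough pre-submission check for a personal AI policy: refuse to paste text
# containing obvious patron identifiers. Patterns are illustrative only.

import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "library card": re.compile(r"\b\d{10,14}\b"),  # assumed card-number length
}

def safe_to_paste(text: str) -> bool:
    """Return False if any obvious identifier pattern appears in the text."""
    return not any(p.search(text) for p in PII_PATTERNS.values())

print(safe_to_paste("Summarize this article about open access."))  # True
print(safe_to_paste("Patron jdoe@example.com asked about fines"))  # False
```

A checker like this catches the careless case, not the determined one, which is why the written policy still matters more than the script.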
- Attend a library-specific AI training. The Clarivate Pulse of the Library 2025 data shows that libraries with formal AI literacy training have significantly higher staff confidence and better implementation outcomes. Check your state library association, WebJunction, or ACRL for options.
This Quarter (Ongoing)
- Advocate for an institutional AI policy. Your library needs a written position on AI use in services, collection management, and staff workflows. If one doesn't exist, draft one. The AI Legislation Tracker on this site can help you understand what your state may require.
- Evaluate AI tools like you evaluate any library resource. Selection criteria, trial periods, bias assessment, privacy review, accessibility testing. AI isn't exempt from collection development principles just because the vendor called it "transformative."
Part 6: Numbers Worth Knowing
From the Clarivate Pulse of the Library 2025 report (2,000+ librarians, 109 countries):
- 67% of libraries are exploring or implementing AI (up from 63% in 2024).
- 33% are actively implementing (triple the 2024 figure).
- 35% are still at evaluation stage only.
- Only 20% are optimistic about AI benefits over the next five years (down from 26% in 2024).
- Libraries that include AI literacy in formal training are significantly more likely to be actively implementing.
From other research:
- Less than half of librarians think AI can improve library operations.
- Only 40% believe AI could replace aspects of their jobs.
- Most library staff have either never used AI tools professionally or use them less than once a month.
- The Library of Congress found LLMs score 26% on predicting LC Subject Headings and 35% on subject classification. This is the state of the art for library-specific tasks.
These numbers tell you two things: AI adoption is accelerating, and the technology is nowhere near replacing professional judgment. Both of these are true simultaneously. Get comfortable holding both ideas at once.
So Now What?
AI isn't coming to libraries. It's already here. It's in your cataloging tools, your discovery layer, your ILL system, and the phones your patrons carry into the building. The question was never whether to engage. It's whether you engage on your terms or the vendor's.
The profession has navigated this before. The internet was going to replace libraries. Ebooks were going to end print. Google was going to make librarians obsolete. None of it happened because librarians adapted, advocated, and pushed back on the parts that didn't serve their communities.
AI is the same fight with higher stakes and less transparency. The vendors are bigger, the contracts are more complex, and the technology is harder to audit. But the playbook's the same: understand the technology, protect your patrons, demand accountability from vendors, and refuse to be impressed by a sales pitch.
You don't need to be an AI expert. You need to be the same critical, community-focused professional you've always been, with a few new questions in your toolkit.
Start by knowing what's in your building. Go from there.
Want updates (or backup)?
Get new posts by email, or book a free 30-minute call if you're facing a contract, AI policy, or vendor decision.