The Practitioner
For librarians who don't have time to read all of this: The AI discourse gives you two options: engage or refuse. Neither one helps you Monday morning. The practitioner's position is a third option. Build enough understanding to know when a tool serves you and when it captures you. Build your own small tools when you can. Refuse when refusal makes sense. Engage when engagement makes sense. The goal is agency, not ideology. And here's the twist nobody's talking about: the AI everyone's telling you to be afraid of might be the thing that finally gets you free, if you use it on your terms. The rest of this essay is the long version.
What Eighteen Years Taught Me About Library Technology
I started my library career as employee #70 at OverDrive in 2008; first female on the Support Team at an unknown startup in Cleveland, Ohio. The boys were ripping CDs in the back for audiobooks; I was one of the folks on the frontlines teaching librarians how to reset their hidden Windows DRM so they could download a WMA file to transfer to their Windows Media Player.
Imagine this workflow: sign into the library site to retrieve your ebook hold, get redirected to OverDrive, sign into Adobe Digital Editions, then transfer to a third-party Kobo or Nook that had its own software. Every barrier added, every step engineered.
I remember that first Nook transfer after a firmware update. We were so excited when we could grab and drop DRM files. We (OverDrive) had pushed Borders and Barnes & Noble to make hardware compatible with the ebooks. We were young and naive; the tech wasn't evil.
Those experiences changed me. The desperation of librarians trying to help patrons who just wanted to read a book. Why are there so many goddamn barriers? I was the vendor and it was frustrating me. I went to library school because I wanted to be the change I wanted to see in the world.
Monday through Friday I was at OverDrive as a project manager, helping libraries set up their first ebook collections. Saturdays I was at Sandusky Library for my practicum. Same week, both sides of the problem. I helped build the scarcity and then sat with the patrons frustrated by it.
Within three months, the director asked me to help a regular patron download ebooks to his Kobo. For two hours I painstakingly searched for titles that existed in ebook format but weren't in the library's collection. I explained that publishers had different licenses and availability; even though it was all digital, it was the complete opposite of instantly available and cheap.
Turns out he was chair of a local foundation. The library got $100,000. Pure luck. But I was in the chair.
Shit got real when I was in the room where the 26-checkout model was announced. In the same meeting, someone reminded us not to take notes. They didn't want a paper trail. That's exactly when I knew it was time to leave. I couldn't take advantage of libraries like that.
One spring Saturday, I bumped into the director. She took a chance on me. I started within two weeks and by that fall, we were planning a new tech series that walked both staff and patrons through social media platforms, explaining the benefits and the harms of these new technologies.
I won a Library Journal Mover & Shaker award that year. We purchased 24 Nooks and loaded them with NYT bestsellers we couldn't get through OverDrive. We purchased Roku boxes, loaded them with free content, and lent them out with a Wi-Fi connection. If people had access, they could choose to sign up for trial accounts and watch shows during their lending period.
I realized I couldn't "bridge the digital divide" myself, but I could find ways to provide everyone the opportunity. It was up to them. The Library was giving patrons their agency back.
Then retirements happened, leadership philosophies changed, and they didn't include the technology, or me. Different leadership took me to a vendor, but this time it was sales, not support.
I watched a sales director buy Johnnie Walker Blue Label at fifty bucks a pour, just to keep one account happy. That contract's recurring revenue increased 7% annually, automatically. The dinner was a rounding error.
Every time. Find something libraries need. Wrap it in a subscription. Build in switching costs. Extract until the host dies or someone builds an alternative.
Trellis charging for what Google does free. OverDrive building artificial scarcity into digital books. OCLC suing over MARC records like metadata is a profit center. Baker & Taylor ignoring practitioners until the whole thing collapsed.
Four industries over eighteen years. Same pattern every time. I've seen it enough to recognize it before it finishes playing out. That's not wisdom. It's scar tissue. And it's fucking exhausting.
The Pattern Has a Name
When I first read Cory Doctorow's work on enshittification, it hit like recognition. The same pattern I'd watched play out at OverDrive, at Baker & Taylor, in every vendor relationship.
Platforms start helpful, solving an actual problem. Then they squeeze users to serve their private equity board, extract maximum value before the whole thing collapses and you're left holding the bag.
I was on Facebook when it was thefacebook.com and you needed an .edu email. I watched it turn into what it is now. I'm so disgusted I get anxiety thinking about it. I've cut myself off from my own family because I refuse to use it.
That's not ideology. That's scar tissue.
I've watched this arc so many times I can predict the beats. Platform launches, platform helps, platform captures, platform extracts, platform collapses. Facebook. OverDrive. Baker & Taylor. Trellis. The names change. The pattern doesn't.
I left the startup world over a year and a half ago, as a Director of Customer Success in legal tech: burnt out, chewed up, enshittified. I had no agency left to give.
Where I've Been
Since then I've been building AI tools for small businesses: analyzing contracts, auditing subscriptions, building tools that solve problems instead of creating dependencies. Same practitioner work, different sector. Same vendor capture dynamics. But this time it was on my terms.
On the side I built a book banning tracker, an app to monitor where challenges were being filed and surface patterns across jurisdictions. AI and LLMs aren't going anywhere, so I might as well put them to work. Legal tech for people fighting censorship, without the cost.
Meanwhile, Baker & Taylor went out of business. Let that sink in. Nearly two hundred years. A foundational piece of library infrastructure since before the Civil War. Gone.
Years ago I sat in a hotel room after a sales conference and pitched them an idea: if you're going to build software for MARC records, build the whole ILS. Don't half-ass it with a glorified Excel spreadsheet.
They said no. They went the easy route, tried to morph macros into an app with stolen data. Got caught and imploded. Friends lost careers they'd had for decades.
So while they filed bankruptcy, I built my own ILS.
Before AI/LLMs, this was never possible. Without hitting the lotto I'd never have had the resources to pull something like this off. Working around the clock with code and sheer stubbornness, I built it. A complete system. Everything I always wanted, done. No extras, no bloat. Cataloging, circulation, patron management, and a platform for ebook authors to submit directly instead of librarians weeding through vendor catalogs. It's up and running.
Why This Fired Me Up
Over the summer I happened on Kaitlin Slater's "Against AI" and my first reaction was: who the fuck is this, and where are their battle wounds? I'd spent eighteen years fighting for digital equity from inside the machine, and here was someone telling practitioners to refuse the first tool in a decade that could shift power back in our direction.
Then there's the whiplash from the other side: scholars saying embrace it, bring library values to the table. They weren't talking about stolen training data, exploited annotation labor, bias problems, environmental costs. Wait, what the fuck?
Both camps had clean hands. The refusers never had to make anything work. The engagers never had to answer for harms.
Practitioners get neither luxury.
The academics say engage with AI. The critics say refuse it. Neither position helps the practitioner standing in front of a board asking why the ILS costs 40% more than three years ago and does less.
What's missing is the practitioner's voice. The person who actually needs a seat at the table.
I'm Not Against AI. I'm Against Enshittification.
Give people what they need and get out of the way. That's it. That's the whole philosophy.
Yes, the LLMs scraped everything. Anna's Archive. The open web. They ripped the whole internet without asking, while we're over here making sure Wikipedia and the Internet Archive have enough money to survive another year. Big Tech is exploiting writers and artists and coders. I'm not pretending that's clean.
But you know who else has been hoarding and monetizing information that should be shared? OCLC. Going around suing libraries over MARC records. Treating metadata like a profit center. And the LLMs just... did it anyway. Scraped it all. Got too big to sue.
The old gatekeepers are screaming about theft while standing on fifty years of gatekeeping.
The tool that exploited us is the same tool that can free us from the people who've been exploiting us longer. That's not a comfortable position. It's the practitioner's position.
We don't get clean hands. We get choices.
I can't un-scrape the data. That harm is done. But I can take the tool that resulted and use it to build things that free practitioners from systems that have been extracting for decades. OCLC spent fifty years monetizing metadata and suing anyone who tried to share it. The LLMs just took it. Now I can build an ILS that doesn't depend on OCLC's blessing. OverDrive built artificial scarcity into digital lending. Now I can build a platform where authors submit directly. Baker & Taylor told me my ideas weren't good enough. Now they're bankrupt and my ILS is running.
Practitioners don't get clean hands. We get leverage.
The Practitioner
A practitioner is someone who recognizes the vendor extraction pattern in real time and knows how to respond. Not someone who'll move blindly from tool to tool like all the directors before you. Not someone who pushes buttons they don't understand. Someone who's seen the playbook enough times that they know what move comes next and can actually make choices that counter it.
Three components:
Domain knowledge. You have to actually understand the work. This is how you spot when a vendor is selling you something that doesn't actually solve your problem.
Tool fluency. You have to know what the tool can and can't do well enough to recognize which pieces of it will capture you and which pieces give you options.
Recognition. Once you've seen the pattern (how a tool helps, then locks you in, then extracts), you'll see it everywhere. That pattern recognition is what separates practitioners from people just buying whatever works today.
A patron comes to the desk. They want books about grief for their kid whose grandmother just died. They don't say that directly. They say "my daughter needs books about death, she's seven, something happened." A librarian with domain knowledge knows this isn't a subject search. This is a readers' advisory interview wrapped in a reference question wrapped in a human being in pain. They know the difference between didactic "explaining death" books and narrative fiction where a character experiences loss.
Now imagine that librarian asks an AI for help. Tool fluency means knowing what to ask for. It means knowing "books about death for children" will get you a generic list, half not in your catalog. It means asking for "middle grade fiction featuring grandmother loss with gentle treatment" and cross-referencing actual holdings.
Pattern recognition is when the AI suggests "Bridge to Terabithia" and you recognize: that's a great book about loss, but the death is sudden and traumatic. This kid's grandmother probably had a slow decline. That's a different kind of grief. You've seen the pattern before: AI pulling whatever matches the search, regardless of context. You make the call.
That's domain knowledge plus tool fluency plus the ability to recognize when a tool is doing what it's designed to do but not what your situation actually needs. That's what it means to be a practitioner who reads the pattern correctly instead of getting played by it.
The Practitioner Loop
1. Name the problem. Be specific. "We need AI" is not a problem. "Staff are spending six hours a week cleaning MARC records" is. If you can't describe it in one sentence, you're not ready to choose a tool. You're just shopping.
2. Assess leverage. Ask three questions: Can we turn this off without breaking core services? Can we export our data in a usable format? Do we understand what happens if the vendor disappears tomorrow? If the answer to all three is no, the tool will capture you. Not might. Will.
3. Choose a posture. Engage when the tool clearly serves your goals and exit is possible. Refuse when risks outweigh benefits. Build when neither engagement nor refusal solves the problem.
4. Constrain scope. Limit everything. Minimum data. Minimum users. Minimum time. Scope creep is how experiments become liabilities. Start smaller than you think you need to.
5. Test in the wild. Use it with real people doing real work. If it only works in demos, it doesn't work. The feedback from actual use is worth more than six months of planning.
6. Decide. Keep it. Kill it. Formalize it. Killing a tool is a success when it prevents long-term harm or frees up resources. Not every experiment needs to become permanent infrastructure.
What Practitioners Build
A pop-up tool is purpose-built, disposable, and owned. It does one job. It doesn't require an account. It doesn't phone home. It doesn't lock you in. You build it. You use it. You delete it. This is the opposite of the platform model.
Need a script to clean MARC records? Build it, run it, delete it. Need an app to track program attendance? Build it, use it, move on. Need to prototype something before committing to a vendor? Build it yourself and test your assumptions before signing anything.
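That first example is less hypothetical than it sounds. Here's a minimal sketch of a disposable cleanup script, assuming records exported as simple dicts. The field names and cleanup rules are illustrative, and real MARC work would lean on a library like pymarc, but the shape of the job is the same:

```python
# Pop-up tool sketch: clean a batch of exported records.
# Field names ("title", "isbn") are assumptions for illustration;
# a real script would parse actual MARC with a library like pymarc.

def clean_isbn(raw):
    """Strip hyphens and spaces; keep only plausible 10- or 13-character ISBNs."""
    normalized = raw.replace("-", "").replace(" ", "").upper()
    return normalized if len(normalized) in (10, 13) else None

def clean_title(raw):
    """Drop the trailing ISBD punctuation ('/', ':', ';') that 245 fields carry."""
    return raw.rstrip(" /:;")

def clean_record(record):
    """Return a cleaned copy of one record; drop ISBNs that can't be salvaged."""
    cleaned = dict(record)
    cleaned["title"] = clean_title(record.get("title", ""))
    isbn = clean_isbn(record.get("isbn", ""))
    if isbn:
        cleaned["isbn"] = isbn
    else:
        cleaned.pop("isbn", None)
    return cleaned

if __name__ == "__main__":
    records = [
        {"title": "The Midnight Library /", "isbn": "978-0-525-55947-4"},
        {"title": "Braiding Sweetgrass :", "isbn": "bad-isbn"},
    ]
    for rec in records:
        print(clean_record(rec))
```

Run it over the export, load the results back in, delete the script. That's the whole lifecycle.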
The same AI that Slater wants you to refuse, the same AI that's consolidating power and extracting value? You can use it to build things. Your own things. Things that don't belong to anyone else.
There's something liberating about you, an LLM, and a blank text file. Open a chat window. Describe what you want to exist. Talk about the problem. What would help. What it looks like in your head. Then watch it start to take shape.
The LLM doesn't care that you don't know terminology. It doesn't care you've never written code. It just starts building with you. The first version is always wrong. Always. The point is to get something that exists so you can figure out what right looks like.
You want a tool that helps patrons book meeting rooms? Do it. A form to log program registrations? Thirty minutes. A dashboard for monthly stats? You can build that now. Today. For free.
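The monthly-stats dashboard really is that small. A sketch, assuming a CSV export with `date`, `program`, and `attendees` columns (made-up names; adjust to whatever your system actually exports):

```python
# Pop-up dashboard sketch: monthly program-attendance totals from a CSV export.
# Column names ("date", "program", "attendees") are assumptions about your
# export format; swap them for whatever your system actually produces.
import csv
import io
from collections import defaultdict

def monthly_totals(csv_text):
    """Sum attendees per YYYY-MM month, given rows with ISO-format dates."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(csv_text)):
        month = row["date"][:7]          # "2024-03-14" -> "2024-03"
        totals[month] += int(row["attendees"])
    return dict(totals)

if __name__ == "__main__":
    sample = """date,program,attendees
2024-03-14,Storytime,22
2024-03-28,Teen Coding,9
2024-04-04,Storytime,18
"""
    for month, total in sorted(monthly_totals(sample).items()):
        print(f"{month}: {total}")
```

An afternoon, not a procurement cycle. When the board stops asking for the numbers, you delete it.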
The no-code tools, the AI assistants, the LLMs everyone's hand-wringing about? They're power tools. Circular saws for people who were only given safety scissors. Yes, you can hurt yourself. But you can also build a house.
The Ethics of Building With Compromised Tools
Large language models are trained on extractive data practices. Writers, artists, coders whose work got scraped without consent. All of that is true.
But there's another version. Open-weight models like Llama or Mistral that you can run locally. Systems you deploy on your own hardware where nothing phones home, where you control what data goes in and out. Still imperfect. Still trained on ethically complicated data. But governable in ways cloud platforms are not.
The practitioner's position is not "use AI uncritically." It's "use AI on terms you control, with eyes open about trade-offs."
If your library saves $15,000 by canceling a vendor contract and building with AI, that savings came from somewhere. It came from automation relying on collective creative output. You can't un-scrape the data. But you can acknowledge the debt.
Redirect some of those savings. Sponsor open-source projects. Buy books from local authors. Hire local artists instead of generating images. Become a supporter of the Creative Commons ecosystem you're drawing from. Not because it absolves you (it doesn't), but because it's the difference between extraction and reciprocity.
The Bias Problem and How to Work Around It
These models aren't neutral. They're trained predominantly on English-language internet text. White. Western. Male-skewing. Academic and technical writing overrepresented. Marginalized voices underrepresented.
The solution isn't to ask the model to be less biased. The solution is to stop treating it as the source of truth.
This is where RAG comes in. Retrieval-Augmented Generation. Instead of asking the AI "recommend diverse books," you feed it your own curated list of diverse, local, marginalized authors. Your collection. Your community's knowledge. The AI's job becomes matching patron requests to items you provided, not generating from biased training data.
The librarian controls the source of truth. The AI becomes a reasoning engine, not an oracle. Hallucination risk drops because the AI is restricted to what you provided. Bias risk drops because you curated the input.
This is what practitioners should build toward. Not "AI tells us what to recommend." But "AI helps us surface what we already know, faster than we could manually."
The same principle applies to any system touching patron knowledge: Human in, machine middle, human out.
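That human-in, machine-middle, human-out shape can be sketched in a few lines. The titles and tags below are illustrative, and plain keyword overlap stands in for the LLM call; the point is the architecture, where the machine only ever sees the list you curated:

```python
# RAG-shaped sketch: retrieval over a librarian-curated list.
# The "model" here is simple tag overlap standing in for an LLM;
# the architecture is the point: the AI only sees what you curated.

CURATED = [  # human in: your collection, your community's knowledge
    {"title": "The Memory String", "tags": {"grief", "grandmother", "gentle"}},
    {"title": "Missing May", "tags": {"grief", "middle-grade", "gentle"}},
    {"title": "Bridge to Terabithia", "tags": {"grief", "middle-grade", "sudden-loss"}},
]

def retrieve(request_tags, curated, k=2):
    """Machine middle: rank curated items by tag overlap with the request."""
    scored = sorted(
        curated,
        key=lambda item: len(item["tags"] & request_tags),
        reverse=True,
    )
    return [item["title"] for item in scored[:k]]

if __name__ == "__main__":
    # A real system would hand the top matches to an LLM to phrase the
    # recommendation. Human out: the librarian still makes the final call.
    print(retrieve({"grief", "grandmother", "gentle"}, CURATED))
```

Swap the tag overlap for embeddings or an LLM ranking step later if you want; the governance property stays the same, because you control the source of truth.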
When Not to Build
Pop-up tools are powerful, but not universal. Part of practitioner judgment is knowing when building is wrong.
Do not build when: The system handles regulated data you cannot secure. Failure would endanger patrons. 24/7 uptime is mission-critical and you can't guarantee it. No one is accountable for maintenance.
Knowing when not to build is practitioner judgment too. Building irresponsibly isn't agency. It's negligence with extra steps.
The Point Is Agency
Local agency means communities and institutions controlling their own tools and data. Not renting access from corporations. Not depending on vendors whose interests don't align with yours.
We lost local agency gradually. Convenience won. Platforms abstracted complexity. Customization disappeared. Ownership became subscription. Now most libraries depend on systems they don't control, can't modify, can't leave.
This isn't an accident. Librarianship is 80% women. It's been systematically devalued for a century. And tool dependency is how that devaluation works. You don't give nurses control over medical records. You don't give teachers control over curriculum platforms. You don't give librarians control over the ILS. You make them tenants. You make them dependent. You make them ask permission to access their own work.
A system supports agency if you can disable it without catastrophe, if you can migrate away without permission, if staff can explain its behavior in plain language, if failure modes are visible and bounded.
The practitioner's position is how you build the capacity to see the pattern coming. You understand how extraction works because you've watched it happen the same way three times. Fluency. Recognition. The ability to name what's happening and decide before it finishes happening to you.
Why This Matters for Librarianship
What happens to librarians in a world where AI actually works? Not robots replacing everyone. The mundane version. Where patrons go to ChatGPT for reference questions because it's faster. Where directors look at staffing costs and do math that doesn't work for humans.
I've watched it before. Travel agents. Retail workers. People doing work that got automated or outsourced. The pattern is always the same. Technology gets good enough. Economics shift. Humans get squeezed.
But the floor doesn't have to drop out.
The librarian who can build a tool is harder to replace. The librarian who's watched extraction happen at three different vendors knows what to look for and when to exit before it finishes playing out. The librarian who can say "I've seen this exact pattern before, here's what happens next" is valuable in ways that show up on spreadsheets.
The future of librarianship isn't librarians versus AI. It's librarians who recognize the capture pattern in real time versus those who don't. The ones who've seen it before will have agency. They'll know which tools will capture them and which won't. They'll build alternatives because they know exactly what they're trying to escape from.
The others will follow the same timeline I watched play out at OverDrive, Baker & Taylor, and Follett. They'll adopt what gets sold to them because it solves today's problem. They won't see the extraction architecture until they're locked in. Then they'll be tenants in systems they don't understand, paying rent that keeps going up, unable to leave because their data is hostage and the exit costs are written into the contract.
The profession is at a fork. One path leads to practitioners using AI as a tool among many, maintaining judgment, building agency, serving communities in ways that can't be automated because they require human wisdom in human contexts. The other leads to a hollowed-out profession where AI handles the "information" part and the humans left over just enforce policies and troubleshoot printers.
Both paths are possible. The choice isn't inevitable. But it is urgent.
Start Small. Start Now.
If you do nothing else: Reread one vendor contract with exit in mind. Look for data export clauses. Look for termination penalties. Look for what happens to your workflows if you leave.
Identify one workflow that could be solved with a disposable tool. Something annoying. Repetitive. Something you complain about.
Build something small enough to delete without regret. A form. A script. A dashboard. An afternoon, not a quarter.
And if it works? If you save time or money? Put some of that back into the ecosystem. Sponsor open-source projects. Buy books from local authors. Hire local designers instead of generating designs.
You don't need permission to develop judgment. You develop judgment by making things, breaking them, and choosing when to stop.
That's the practitioner's position.
Go make something.