
The Problem with Library Tech Content

Most library tech writing is vendor marketing dressed up as advice. You can spot it: claims without sources, case studies that conveniently skip the hard parts, "best practices" that happen to be what the vendor sells.

Peer research exists but is scattered. Academic journals. Government reports. Practitioner blogs. Nonprofit research. Nobody has to say "here's how I verify claims" or "here's what I don't know."

This page is my attempt to fix that. Not perfect - I'm still wrong sometimes. But transparent enough that you can audit the work and decide whether to trust it.

How I Source Information

Primary Sources

When possible, I read the actual documents:

  • Legislation: Full text of laws (SB 24-205, EU AI Act, state privacy laws)
  • Court documents: Actual lawsuit filings, not news summaries
  • Vendor contracts: Real contract language and SLAs
  • Official reports: Government audits, incident reports, regulatory guidance

Academic Research

  • Library science journals: Library Journal, College & Research Libraries, Journal of Librarianship and Information Science
  • Computer science research: Peer-reviewed papers on security, AI, data privacy
  • Policy analysis: Think tank reports from organizations focused on technology policy

Practitioner Reports

  • Other librarians' writing: Published blog posts, conference presentations, case studies
  • Library networks: Conversations with library colleagues doing the actual work
  • User groups: Technical communities around specific systems (Koha, ILS implementations)

Official Reports

  • Government: IMLS data, BLS employment statistics, FTC guidance
  • Industry analysts: Gartner, Forrester (for analysis, not hype)
  • Nonprofit research: ALA reports, Library Journal studies, OCLC research

What I Don't Use

  • Vendor marketing materials (unless I'm critiquing them)
  • Unverified claims on social media or forums
  • AI-generated content used as a substitute for primary sources (I treat AI output with deep skepticism)
  • Secondary reporting without checking primary sources

Geographic Limitations

I focus on US laws and EU regulations (primarily the AI Act) because they're the most comprehensive and affect library vendors globally. International coverage is weaker:

  • Strong: US federal law, major states (CA, NY, TX, IL, CO)
  • Covered: EU AI Act, UK GDPR, Canadian privacy law (partial)
  • Weak: Australia, ASEAN countries, India, other global regions

How I Verify Claims

Data Claims

Example: "67% of libraries are adopting AI"

When I encounter a statistic, I:

  • Track it to the original source (not a citation of a citation)
  • Check the methodology: Who was surveyed? Sample size? Margin of error?
  • Note the uncertainty: Is it 67% ± 5% or ± 15%?
  • Compare to other data points: Do other surveys agree or conflict?
  • Check the recency: Is this 2024 data or 2019 that's being recycled?
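The uncertainty check above can be made concrete. For a simple random sample, the approximate 95% margin of error on a proportion is 1.96 × √(p(1−p)/n), so the same "67%" headline means very different things depending on sample size. A minimal sketch (the sample sizes are made up for illustration, not from any real survey):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a survey proportion
    (simple random sample, normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes: a "67% of libraries" claim shifts
# from a rough hint to a usable estimate as n grows.
for n in (50, 400, 2000):
    moe = margin_of_error(0.67, n)
    print(f"n={n:>4}: 67% ± {moe * 100:.1f} points")
```

With 50 respondents the claim is really "somewhere in the mid-50s to about 80%," which is why the methodology check matters as much as the headline number.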

Legal Claims

Example: "Colorado law requires AI impact assessments"

For legal claims, I:

  • Read the actual statute (SB 24-205, not a summary)
  • Check regulatory guidance and official interpretations
  • Verify with librarians who've actually implemented the requirement
  • Note ambiguities and unsettled questions (because laws are often vague)
  • Link to the full text so you can read it yourself

Vendor Claims

Example: "System guarantees 99.9% uptime"

For vendor claims, I:

  • Check the actual SLA in the contract (vendors are vague in marketing)
  • Research incident reports and outages (public status pages, library complaints)
  • Interview customers discreetly (have you actually hit that uptime rate?)
  • Compare historical performance to claims
  • Note what they're not claiming (like what gets excluded from uptime)
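"99.9% uptime" sounds close to perfect until you translate it into a downtime budget. A quick sketch of that arithmetic (the figures are generic, not tied to any specific vendor or SLA):

```python
def allowed_downtime_minutes(uptime_pct: float, period_hours: float = 730) -> float:
    """Downtime budget implied by an uptime percentage over a period
    (default period: one ~730-hour month)."""
    return period_hours * 60 * (1 - uptime_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} min/month")
```

99.9% still allows roughly 44 minutes of downtime per month, and SLAs often exclude "scheduled maintenance" from the calculation entirely, which is exactly the kind of fine print the contract check above is for.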

Technology Advice

Example: "Use Koha for small libraries"

For recommendations, I:

  • Test it myself, or get hands-on reports from practitioners with direct experience
  • Research implementation case studies (what worked, what broke)
  • Understand the trade-offs: what's easy, what's hard, and what it costs
  • Acknowledge when I don't have direct experience
  • Distinguish between "should work in theory" and "I know it works in practice"

Sources of Potential Bias

I try to be transparent about ways my background shapes my perspective:

Conflict of Interest: Consulting

I'm a consultant, so recommending consulting could benefit me. In practice, most of my content recommends free tools and vendor-independent approaches, not consulting engagements.

Vendor History: Former Insider

I worked at OverDrive (#70 employee), Baker & Taylor, CollectionHQ, and Trellis Law. This shapes my understanding of vendor incentives and how they make decisions. I try to be fair, but I see vendor problems clearly because I've lived them.

Library Type: Public & Consortia

Most of my consulting has been with public libraries and consortia. Academic library coverage is weaker because it draws less on lived experience and more on research. I try to call this out.

Geography: US & EU

I track US law and the EU AI Act closely; I follow other countries' laws much less. This is partly practical (those are the laws affecting US library vendors) but also reflects where I have expertise.

Scale: Large > Small

It's easier to see problems at large scale than at small. Large library systems have clear metrics on failures; small libraries may have solutions I don't know about because they're not documented.

Recency: New > Evergreen

I write about what's new or urgent (legislation, breaches, AI hype), not evergreen best practices. This means coverage of timeless library management topics is weaker.

Update Cadence

When do I refresh content?

  • Breaking news: Published as I learn about it (major legislation, breaches, significant vendor announcements)
  • Research updates: Quarterly review of major articles for new data and studies
  • Legal updates: Annual review of AI Act, state privacy laws (usually January)
  • Vendor news: As major changes happen (usually announced in press releases or public statements)
  • Corrections: Immediately when an error is discovered or reported

How I Handle Errors

If I Get Something Wrong

Published correction: Error posted at top of article

Updated date noted: So you know the article has changed

Original claim visible: I don't hide the mistake; it stays visible so you can see what was wrong

Explanation provided: Why it was wrong and what the correct information is

Getting Feedback

Readers can submit corrections via the contact form. I respond within one week. I don't make stealth edits - corrections are visible so the record is clear.

What I Don't Know (Or Know Poorly)

These are areas where my coverage is weaker or I lack direct expertise:

Academic Libraries

My coverage is stronger on public libraries and consortia. Academic library-specific challenges (course reserves, research data, faculty relationships) get less coverage grounded in lived experience.

International Law

I cover the EU AI Act mainly because it affects US vendors. Canada, Australia, UK privacy laws, and other international regulations are not covered in depth.

Implementation Details

Some vendor platforms I haven't personally implemented or worked with extensively. I research them, but I don't have hands-on experience with every system discussed.

Long-Term Sustainability

Some open-source projects are relatively new. Their long-term sustainability and community viability are unclear, and I note this uncertainty.

Comprehensive State Law Coverage

I track major states (CA, NY, TX, IL, CO) closely but not all 50. Other states' laws receive less attention.

How I Use AI (And How I Don't)

I use AI tools - specifically privacy-focused large language models - to help with drafting, structuring, and editing. Think of it as having a very fast, occasionally unhinged research assistant. It can suggest structure and phrasing; I decide what's true, what's fair, and what actually gets published.

I do not use AI to invent sources, fabricate quotes, or decide what conclusions you should draw. Every claim still goes through the same process: I pull primary sources, check numbers, and cross-verify against multiple references. When the model and the evidence disagree, the evidence wins.

I pay for tools that are explicit about how they handle data and what they were trained on, as far as that can be known. But model training is never fully transparent, so I treat every AI output as a draft, not a source - and I don't paste patron data, contracts, or anything confidential into AI systems, ever.

How to Audit This Work

You don't have to take my word for it. You can verify:

  • Check my sources: Every claim should have a citation. Click the link and read the source material.
  • Verify dates: Is the source from 2024? 2023? Or outdated? Fresh data matters.
  • Compare to other sources: Does another reputable source agree? Disagree? How do you explain the difference?
  • Check for conflicts: If I benefit from advice (consulting, speaking), note that bias. Does it affect credibility?
  • Look for caveats: Do I acknowledge uncertainty or present things as certain? Am I hedging appropriately?
  • Contact me: Found an error? Different interpretation? Different experience? Let me know.

Research Questions I'm Working On

These are areas I'm actively researching but don't have full answers to yet:

  • Why do small rural libraries have such different tech adoption patterns? What enables some to thrive while others struggle?
  • What's the actual cost of data extraction when leaving vendor lock-in? (Not theoretical - real implementation numbers.)
  • How are libraries using AI in patron services and what's the measurable impact? Hype vs. reality on the ground.
  • Can open-source ILS actually compete with proprietary systems on user experience and implementation support?
  • What's the real cost of ransomware recovery for small libraries? Who pays? How long does recovery take?
  • What's driving the library workforce crisis? Is it pay, working conditions, or something else - or all three?

Looking for someone to apply this research to your situation? My consulting services help libraries use research and analysis to make better decisions about vendors, contracts, and technology strategy.

This Page & Its Bias

Even this page is written by someone with specific experiences and limitations. It reflects my values (transparency, evidence-based thinking) but also my blind spots. I can't see every angle.

I try to be transparent about limitations. I'm still wrong sometimes. I'm still shaped by my background in ways I don't fully recognize.

You should think critically about everything here, not just accept it. If something doesn't align with your experience or if you've found better evidence, I want to know.

Found an Error? Different Perspective?

Contact me - I take corrections seriously and will update immediately. Disagreements about interpretation are also welcome. Transparency works both ways.
