
Part 4: Vendor Management for AI Systems

Your vendors control most of your AI. Not all AI vendors manage it responsibly or transparently. Here's how to protect yourself.

Part of an Evaluation Series

This post is part of our framework for evaluating vendors. Related posts:

The Question

Your vendors control most of your AI. Whether you're using a discovery system that ranks results with machine learning, a cataloging tool that uses natural language processing, or an analytics platform that predicts user behavior, you're relying on vendors to manage the responsible use of AI systems embedded in their products.

But here's the problem: most libraries don't know what AI is included in the products they buy. And many vendors don't manage that AI responsibly or transparently.

This means vendor due diligence isn't just a procurement question. It's a governance question. It's an accountability question. It's an equity question.

Why This Matters

Libraries trust vendors. This is reasonable: vendors are experts in their domains. But when AI is involved, that trust needs to be auditable. Here's why:

Vendor due diligence for AI is the gap between what you hope vendors are doing and what you can actually verify they're doing.

What to Evaluate

1. AI Inventory in Vendor Selection

Start here: You cannot manage what you don't know exists.

Create a vendor questionnaire that asks specifically about AI. Don't assume vendors will volunteer this information; many don't, because they know it raises questions.

Specific questions to ask:

  • What AI or machine learning features are included in the product, and what do they do?
  • What data was used to train these systems, and how is our users' data collected and used?
  • Can we audit how the AI works and test it for bias?
  • Can users opt out of AI-driven features?
  • Who is liable when the AI makes errors or causes harm?

Document the answers. You may not have the expertise to evaluate them immediately, but you'll need them later for risk assessment and governance decisions.
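For teams tracking many products, the questionnaire answers can be captured in a small structure so that unanswered questions are easy to surface later. A minimal sketch in Python; the class names, fields, and example questions are illustrative assumptions, not a standard instrument:

```python
from dataclasses import dataclass, field

@dataclass
class QuestionnaireItem:
    question: str
    answer: str = ""          # vendor's verbatim response
    answered: bool = False    # did the vendor respond at all?
    follow_up: str = ""       # notes for later risk assessment

@dataclass
class VendorAIQuestionnaire:
    vendor: str
    product: str
    items: list = field(default_factory=list)

    def record(self, question: str, answer: str, follow_up: str = ""):
        # An empty or whitespace-only response counts as unanswered.
        self.items.append(
            QuestionnaireItem(question, answer, bool(answer.strip()), follow_up)
        )

    def unanswered(self):
        """Questions the vendor declined or failed to answer."""
        return [i.question for i in self.items if not i.answered]

q = VendorAIQuestionnaire("ExampleVendor", "Discovery Platform")
q.record("What AI/ML features are included?", "ML-based relevance ranking")
q.record("What data trains these systems?", "")  # vendor declined to answer
print(q.unanswered())
```

Keeping the vendor's verbatim response alongside a follow-up note preserves exactly the record you'll need later for risk assessment and governance decisions.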

2. Vendor Negotiation Demands

Once you know what AI is in the product, negotiate for the controls you need. Here are non-negotiable contract terms for AI systems:

  • Audit rights: the right to test the AI for bias and verify the vendor's fairness claims.
  • Liability: the vendor retains responsibility for errors and harms caused by its AI.
  • Data transparency: disclosure of training data sources and of how your users' data is used.
  • Opt-out: users can decline AI-driven features without losing core functionality.
  • Notification: prompt disclosure when problems with the AI emerge.

These aren't theoretical; they're standard in regulated industries like finance and healthcare. Libraries should demand the same rigor.

3. Compliance and Risk Management

After purchase, you need ongoing processes to manage the AI embedded in the products you license:

  • Periodic audits and bias testing of AI features, with results documented.
  • Governance review of contract renewals and of any new AI features vendors introduce.
  • Records of questionnaires, test data, and contract terms, kept for stakeholders and regulators.

This work is hard and requires expertise you may not have in-house. Consider partnering with organizations that specialize in AI auditing, or funding research partners to help.

Red Flags That Matter

Some vendor responses should trigger immediate concern:

  • Opacity: "We can't tell you how the AI works because it's proprietary." This is sometimes legitimate for competitive reasons, but it means you cannot audit fairness or detect problems. Push back.
  • No liability: "The vendor disclaims all liability for AI system errors or harms." This transfers all risk to you. Do not accept this.
  • No data transparency: "We can't tell you what data we use to train our AI." This prevents you from identifying bias sources or conflicts of interest. Unacceptable.
  • No audit rights: "You cannot audit how our AI works or what data it uses." This prevents you from detecting problems or verifying fairness. Do not proceed.
  • No opt-out: "Users cannot opt out of the AI system; it's core to the product." This means you're forcing AI on your users. Reconsider your use of this vendor.
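The red flags above can be turned into a rough first-pass screen over written vendor responses. A minimal sketch in Python; the trigger phrases and category names are illustrative assumptions, and a match is a prompt for human review, not a verdict:

```python
# Illustrative red-flag phrases drawn from the categories above.
RED_FLAGS = {
    "opacity": ["proprietary", "trade secret", "cannot disclose"],
    "no_liability": ["disclaims all liability", "as-is"],
    "no_data_transparency": ["cannot tell you what data"],
    "no_audit_rights": ["cannot audit", "no audit"],
    "no_opt_out": ["cannot opt out", "core to the product"],
}

def scan_response(text: str) -> list:
    """Return the red-flag categories triggered by a vendor response."""
    lowered = text.lower()
    return [flag for flag, phrases in RED_FLAGS.items()
            if any(phrase in lowered for phrase in phrases)]

response = "The model is proprietary and users cannot opt out."
print(scan_response(response))
```

A simple keyword screen like this will miss euphemistic phrasing and flag some benign answers, so treat hits as items to escalate, not as an automated decision.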

Mission Lens: Equity and Vendor Due Diligence

Why This Is Equity Work

AI bias harms vulnerable populations first. If a discovery system ranks books differently based on author race or language, marginalized readers are most affected. If an analytics system predicts which users will "engage" and excludes others from promotions, low-income and immigrant communities lose access. If a vendor won't audit for bias, you're choosing not to see the harm.

Vendor due diligence isn't procurement compliance. It's equity work. You're asking: Does this vendor share our commitment to equitable access? Will they work with us to ensure their AI doesn't reproduce or amplify existing inequities?

Vendors who won't answer your questions, won't audit for bias, won't provide documentation, and won't allow you to verify fairness are telling you something: their AI is not designed for your values or your users.

That's actionable information. Use it.

In Practice

Create a vendor evaluation matrix. For each product you're considering, document:

  • What AI features the product includes and what they do.
  • The vendor's answers to your questionnaire, and any questions they refused to answer.
  • Red flags raised: opacity, liability disclaimers, missing audit rights, no opt-out.
  • The contract terms you secured: audit rights, liability, data transparency, opt-out.
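One way to keep such a matrix comparable across vendors is a weighted scorecard. A minimal sketch in Python; the criteria, weights, and 0-2 scores are illustrative assumptions to be adapted to local policy:

```python
# Illustrative criteria and weights; higher weight = higher local priority.
CRITERIA = {
    "audit_rights": 3,
    "liability_terms": 3,
    "data_transparency": 2,
    "user_opt_out": 2,
    "bias_testing_support": 2,
}

def score(vendor_scores: dict) -> int:
    """Weighted total; criteria the vendor fails to address score zero."""
    return sum(CRITERIA[c] * vendor_scores.get(c, 0) for c in CRITERIA)

# Hypothetical vendors, scored 0 (refused) to 2 (fully met) per criterion.
matrix = {
    "VendorA": {"audit_rights": 2, "liability_terms": 1, "data_transparency": 2,
                "user_opt_out": 1, "bias_testing_support": 2},
    "VendorB": {"audit_rights": 0, "liability_terms": 0, "data_transparency": 1,
                "user_opt_out": 2, "bias_testing_support": 0},
}

for vendor, scores in sorted(matrix.items(), key=lambda kv: -score(kv[1])):
    print(vendor, score(scores))
```

The numbers matter less than the discipline: every vendor gets scored against the same criteria, and a refusal to answer shows up as a zero rather than a blank.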

Build vendor management into governance. This shouldn't be just a procurement decision. Your governance structure (the committee, the person, the process) should review all vendor contracts that include AI. They should be involved in audit and testing decisions. They should be notified when problems emerge.

Document everything. Keep records of vendor questionnaires, audit results, bias testing data, and contract terms. You'll need this if a problem emerges and you need to explain your due diligence to stakeholders or regulators.

Next Steps

You now have the questions to ask, the contract terms to negotiate, and the processes to implement. What remains is the hard part: actually doing it. That's Part 5 of this framework.
