Part 4: Vendor Management for AI Systems
Your vendors control most of your AI. Not all AI vendors manage it responsibly or transparently. Here's how to protect yourself.
Part of an Evaluation Series
This post is part of our framework for evaluating vendors. Related posts:
- Vendor Management: What are we asking of vendors?
- Accountability Framework: Governance structures and accountability
- Stability & Support: Long-term vendor viability
- Support & Implementation: What support means
- Contract Terms: Legal protections
The Question
Your vendors control most of your AI. Whether you're using a discovery system that ranks results with machine learning, a cataloging tool that uses natural language processing, or an analytics platform that predicts user behavior, you're relying on vendors to manage the responsible use of AI systems embedded in their products.
But here's the problem: most libraries don't know what AI is included in the products they buy. And many vendors don't manage that AI responsibly or transparently.
This means vendor due diligence isn't just a procurement question. It's a governance question. It's an accountability question. It's an equity question.
Why This Matters
Libraries trust vendors. This is reasonable: vendors are experts in their domains. But when AI is involved, that trust needs to be auditable. Here's why:
- Complexity hides problems. AI systems can fail in subtle ways: ranking certain books lower because of biased training data, excluding users from resources because of algorithmic errors, or changing behavior in response to system updates without warning. You can't detect these problems if you don't know AI is there.
- Vendors aren't librarians. Your vendor's priorities aren't always your values. They optimize for different things (profit, scale, speed), which can conflict with equity, access, and intellectual freedom.
- You bear the legal risk. Your institution implements the tool. You serve the users harmed by it. You're liable for discrimination, data breaches, and misuse. The vendor's liability may be limited or excluded.
- Vendors can change their products unilaterally. An AI system that was responsible at purchase can become irresponsible after an update. Without contractual protections, you have limited recourse.
- AI is opaque by default. Many vendors can't, or won't, explain how their AI makes specific decisions. This makes it impossible for you to audit fairness, explain decisions to users, or detect problems.
Vendor due diligence for AI is the gap between what you hope vendors are doing and what you can actually verify they're doing.
What to Evaluate
1. AI Inventory in Vendor Selection
Start here: You cannot manage what you don't know exists.
Create a vendor questionnaire that asks specifically about AI. Don't assume vendors will volunteer this information; many don't, because they know it raises questions.
Specific questions to ask:
- What AI systems does this product use? (machine learning, natural language processing, computer vision, predictive analytics, etc.)
- What tasks does the AI perform? (ranking, filtering, recommendation, prediction, classification, etc.)
- What data does the AI use? (user data, content data, interaction data, third-party data, etc.)
- How was the AI trained? (proprietary data, public data, third-party data, federated data, etc.)
- When was the AI last updated? Will there be updates after purchase?
- Does the AI use profiling, decision-making, or predictive analysis on users? If so, how?
- Can the AI be disabled or adjusted by the customer?
- What documentation does the vendor provide about the AI system?
Document the answers. You may not have the expertise to evaluate them immediately, but you'll need them later for risk assessment and governance decisions.
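One way to keep those answers comparable across vendors is to record them as structured data rather than prose. The sketch below uses a Python dataclass; the field names mirror the questions above and are illustrative assumptions, not a standard schema, and "ExampleCo" is a hypothetical vendor.

```python
# Minimal sketch: questionnaire answers as a structured, comparable record.
# Field names are illustrative assumptions mirroring the questions above.
from dataclasses import dataclass, field, asdict

@dataclass
class AIInventoryRecord:
    vendor: str
    product: str
    ai_techniques: list = field(default_factory=list)  # e.g. "machine learning", "nlp"
    tasks: list = field(default_factory=list)          # ranking, filtering, prediction, ...
    data_used: list = field(default_factory=list)      # user, content, third-party, ...
    training_data: str = "undisclosed"                 # default until the vendor answers
    last_updated: str = "unknown"
    profiles_users: bool = False
    can_disable: bool = False
    documentation_provided: bool = False

# Hypothetical example entry for one product.
record = AIInventoryRecord(
    vendor="ExampleCo",
    product="Discovery Search",
    ai_techniques=["machine learning"],
    tasks=["ranking"],
    profiles_users=True,
)
print(asdict(record)["training_data"])  # "undisclosed" — an unanswered question is visible
```

Defaults like `"undisclosed"` and `"unknown"` make non-answers explicit, which is useful later when you compare vendors or escalate during negotiation.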
2. Vendor Negotiation Demands
Once you know what AI is in the product, negotiate for the controls you need. Here are non-negotiable contract terms for AI systems:
- Indemnification: The vendor indemnifies you if their AI causes harm (discrimination claims, data breaches, IP infringement, etc.). This shifts the risk to the vendor, where it belongs.
- Audit rights: You have the right to audit how the AI works and what data it uses. This can be managed through legal controls (NDA, limited scope) if the vendor claims trade secrecy.
- Documentation: The vendor provides documentation of the AI system, training data sources, performance metrics, bias testing results, and known limitations. This should be updated when the AI changes.
- Data de-identification: The vendor commits to not using your institution's data to train or improve their AI systems without explicit written consent. If they do use your data, it must be de-identified and aggregated.
- Feature change notice: The vendor commits to notifying you before making significant changes to the AI system. "Significant" should be defined; generally, any change that affects ranking, filtering, or predictions qualifies.
- Consent and opt-out: Users have the right to opt out of profiling, targeting, or decision-making by the AI. If opt-out isn't possible, the vendor must obtain explicit consent.
These aren't theoretical; they're standard in regulated industries like finance and healthcare. Libraries should demand the same rigor.
3. Compliance and Risk Management
After purchase, you need ongoing processes to manage the AI systems embedded in your products:
- Impact assessments: Conduct a Data Protection Impact Assessment (DPIA) or AI Impact Assessment for each AI-driven product. Document: What data does it use? Who could be harmed? What are the risks to equity, privacy, security? What controls are in place?
- Bias testing: For AI systems that rank, recommend, or filter, conduct regular testing for bias. Does the system perform equally well for different user groups? Are certain books/resources ranked lower due to protected characteristics (race, gender, language)? Are certain users excluded or disadvantaged?
- Training data documentation: Require the vendor to document where training data came from, what it represents, known biases or limitations, and what populations it covers well. This is essential for understanding what the AI might do wrong.
- Compliance reviews: At least annually, review the vendor's compliance with contract terms. Are they using your data? Have they changed the AI? Are there new bias issues? Document findings and follow up.
This work is hard and requires expertise you may not have in-house. Consider partnering with organizations that specialize in AI auditing, or funding research partners to help.
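To make the bias-testing item concrete: one simple, first-pass check compares average ranking positions across user or content groups. The sketch below is a minimal illustration under stated assumptions; the data, the group labels, and the disparity threshold are all hypothetical, and a real audit would use larger samples and proper statistical tests.

```python
# Minimal sketch of a ranking-disparity check. Lower rank = better placement.
# The sample data, group labels, and threshold are illustrative assumptions.
from collections import defaultdict

def mean_rank_by_group(results):
    """results: list of (rank, group) tuples."""
    ranks = defaultdict(list)
    for rank, group in results:
        ranks[group].append(rank)
    return {g: sum(r) / len(r) for g, r in ranks.items()}

def rank_disparity(results, threshold=5.0):
    """Flag groups whose mean rank trails the best group by > threshold positions."""
    means = mean_rank_by_group(results)
    best = min(means.values())
    return {g: m - best for g, m in means.items() if m - best > threshold}

# Toy data: English-language titles consistently outrank Spanish-language ones.
sample = [(1, "en"), (2, "en"), (3, "en"), (12, "es"), (14, "es"), (16, "es")]
print(rank_disparity(sample))  # {'es': 12.0}
```

Even a crude check like this, run regularly against real search logs, can surface the kind of drift that vendor updates introduce silently.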
Red Flags That Matter
Some vendor responses should trigger immediate concern:
- Opacity: "We can't tell you how the AI works because it's proprietary." This is sometimes legitimate for competitive reasons, but it means you cannot audit fairness or detect problems. Push back.
- No liability: "The vendor disclaims all liability for AI system errors or harms." This transfers all risk to you. Do not accept this.
- No data transparency: "We can't tell you what data we use to train our AI." This prevents you from identifying bias sources or conflicts of interest. Unacceptable.
- No audit rights: "You cannot audit how our AI works or what data it uses." This prevents you from detecting problems or verifying fairness. Do not proceed.
- No opt-out: "Users cannot opt out of the AI system; it's core to the product." This means you're forcing AI on your users. Reconsider your use of this vendor.
Mission Lens: Equity and Vendor Due Diligence
Why This Is Equity Work
AI bias harms vulnerable populations first. If a discovery system ranks books differently based on author race or language, marginalized readers are most affected. If an analytics system predicts which users will "engage" and excludes others from promotions, low-income and immigrant communities lose access. If a vendor won't audit for bias, you're choosing not to see the harm.
Vendor due diligence isn't procurement compliance. It's equity work. You're asking: Does this vendor share our commitment to equitable access? Will they work with us to ensure their AI doesn't reproduce or amplify existing inequities?
Vendors who won't answer your questions, won't audit for bias, won't provide documentation, and won't allow you to verify fairness are telling you something: their AI is not designed for your values or your users.
That's actionable information. Use it.
In Practice
Create a vendor evaluation matrix. For each product you're considering, document:
- What AI is in it (if any)?
- What are the primary risks to your users?
- What controls does the vendor have in place?
- What contract terms have you negotiated?
- What ongoing monitoring will you do?
- What's your plan if problems emerge?
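The matrix above can be kept as rows of structured data, which makes gaps easy to surface. The sketch below is one possible shape; the column names simply restate the six questions and are illustrative, and the "Analytics Platform" row is hypothetical.

```python
# Minimal sketch of the evaluation matrix as structured rows, with a helper
# that surfaces unanswered items. Column names are illustrative assumptions.
matrix_columns = [
    "ai_present", "primary_risks", "vendor_controls",
    "contract_terms", "monitoring_plan", "incident_plan",
]

def open_items(row):
    """Return the matrix columns still unanswered (empty or missing) for a product."""
    return [c for c in matrix_columns if not row.get(c)]

# Hypothetical row for one product under evaluation.
row = {
    "product": "Analytics Platform",
    "ai_present": "predictive analytics",
    "primary_risks": "user profiling; exclusion from outreach",
    "vendor_controls": "",  # vendor has not answered yet
    "contract_terms": "indemnification, audit rights",
    "monitoring_plan": "",
    "incident_plan": "",
}
print(open_items(row))  # ['vendor_controls', 'monitoring_plan', 'incident_plan']
```

A row with open items is a row you are not ready to sign: the helper turns the matrix from documentation into a gate.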
Build vendor management into governance. This shouldn't be just a procurement decision. Your governance structure (the committee, the person, the process) should review all vendor contracts that include AI. They should be involved in audit and testing decisions. They should be notified when problems emerge.
Document everything. Keep records of vendor questionnaires, audit results, bias testing data, and contract terms. You'll need this if a problem emerges and you need to explain your due diligence to stakeholders or regulators.
Next Steps
You now have the questions to ask, the contract terms to negotiate, and the processes to implement. What remains is the hard part: actually doing it. That's Part 5 of this framework.