Responsible AI: Governance and Oversight
What is this about?
Your library is using (or will use) AI tools, from search to content moderation. This guide helps you understand what decisions your board needs to make to use them responsibly and safely.
Why should our board care?
- Legal protection: Without clear oversight, your library could be liable if an AI tool makes harmful recommendations or violates privacy rules.
- Trust: Patrons and community partners expect you to use AI thoughtfully. One bad incident erodes that trust.
- Strategic advantage: Libraries that govern AI well can use it to serve more patrons faster and more fairly, without the drama.
What decisions do we need to make today?
- Approve a policy framework that covers acceptable AI use, data privacy, and how we test/monitor AI before launch.
- Set a budget for staff training and the tools we need to validate AI (fairness checks, security testing, etc.).
- Require that any new AI tool from a vendor undergo review before we sign a contract.
Why This Matters (Now)
Your board has fiduciary responsibility for how your library uses AI. Here's what's driving the urgency:
- Regulatory momentum: ISO 42001 (an international AI governance standard) is becoming expected of organizations, especially those handling patron data. Public-sector agencies are moving toward responsible AI requirements.
- Safety and operational risk: AI tools can make mistakes: recommending inappropriate content, misidentifying patron needs, or leaking patron borrowing history. Without oversight, you won't catch these until they affect patrons.
- Reputation and trust: Partners (schools, community orgs) and patrons expect your library to use AI safely. One incident can damage years of trust.
Our Framework: NIST AI RMF and ISO 42001
We're using two industry standards that work together:
NIST AI Risk Management Framework (NIST AI RMF)
What it is: A U.S. government framework (from NIST, the National Institute of Standards and Technology) that defines how organizations should think about AI risks step-by-step.
The core functions: GOVERN → MAP → MEASURE → MANAGE
- Govern: Set policy and assign who's responsible.
- Map: Document what your AI systems do and what risks they carry.
- Measure: Test for safety, fairness, and security before launch.
- Manage: Reduce risks through safeguards and controls, and monitor systems in production to catch problems early.
ISO/IEC 42001
What it is: An international standard (from ISO, the International Organization for Standardization) for AI management systems. Think of it like ISO 27001 (the information security standard), but for AI.
Why it matters to libraries: Partners and vendors increasingly ask for it. If your library pursues certification, it shows patrons and stakeholders you take AI safety seriously.
How we use it: Our NIST AI RMF work keeps us "42001-ready," meaning if you decide to pursue formal certification later, we've already done most of the work.
Who Uses These Standards?
- Public sector and regulated organizations: Government agencies, hospitals, and financial institutions use the NIST AI RMF to demonstrate responsible AI to the public.
- Tech vendors and cloud providers: Companies like Microsoft and AWS use these frameworks to reassure customers that their AI tools are safe.
- Your board's auditors: Internal and external auditors use the NIST AI RMF and ISO 42001 to assess whether you're managing AI risk properly.
When do we apply these?
- Before we build or buy AI: Intake and risk assessment process.
- Before launch: Testing and validation (robustness, fairness, security).
- During use: Ongoing monitoring and periodic reviews.
- When evaluating vendors: Third-party AI review checklist.
- Audit moments: When internal audit or external partners ask "how do you govern AI?"
Policy Stack (Board Approval)
Your board will approve five foundational policies. These policies define the rules for how your library uses AI:
- Acceptable AI Use Policy: When and how staff can use or build AI tools. Example: staff must document what the AI is used for and get approval before launch.
- Data Governance Policy: How we protect patron data when using AI. Example: patron borrowing history cannot be used to train AI models without explicit consent.
- Model Risk Management Policy: How we test AI before it serves patrons. Example: fairness testing to ensure recommendations don't bias certain demographics.
- AI Incident Response Policy: What to do if an AI system goes wrong. Example: who to notify, how quickly to respond, when to communicate with patrons.
- Third-Party AI Review Policy: How we evaluate new AI tools from vendors. Example: mandatory security audit and fairness testing before contract signature.
How We Control AI: Step-by-Step
Every AI tool follows this process from the moment someone suggests it until we retire it:
1. Intake & Risk Assessment (Before we build or buy)
Staff fill out a form: What is this AI for? Who uses it? What data does it need? We assess whether it's low, medium, or high risk. High-risk tools (like ones that affect patron services) get extra scrutiny.
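As a rough illustration of how intake answers could map to a risk tier, here is a minimal sketch. The field names, questions, and tiering rule are assumptions for illustration, not part of any formal standard or your actual intake form:

```python
# Hypothetical sketch of intake risk tiering; fields and thresholds
# are illustrative assumptions, not prescribed by NIST AI RMF or ISO 42001.
from dataclasses import dataclass

@dataclass
class IntakeForm:
    purpose: str            # what the AI is for
    affects_patrons: bool   # does its output reach patron-facing services?
    uses_patron_data: bool  # does it touch borrowing history, PII, etc.?

def risk_tier(form: IntakeForm) -> str:
    """Assign a low/medium/high tier from the intake answers."""
    if form.affects_patrons and form.uses_patron_data:
        return "high"    # extra scrutiny: full validation, board visibility
    if form.affects_patrons or form.uses_patron_data:
        return "medium"
    return "low"

print(risk_tier(IntakeForm("reading recommendations", True, True)))  # high
```

The point of encoding the rule is consistency: two staff members filling out the same form should land on the same tier.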
2. Testing & Validation (Before launch)
We test the AI on three dimensions:
- Robustness: Does it work reliably, even with unusual inputs? Does it fail gracefully?
- Fairness: Does it treat all patrons fairly, or does it have hidden biases? (Example: does a reading recommendation engine recommend diverse authors equally to all demographics?)
- Security: Can someone hack it or steal patron data through it?
Sign-off gates: The responsible team must approve before moving forward.
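One way a fairness test like the reading-recommendation example above might be checked is a simple parity ratio across patron groups. The group names, rates, and the 0.8 pass threshold (the common "four-fifths rule") are illustrative assumptions your own fairness policy would set:

```python
# Illustrative demographic-parity check for a recommendation tool.
# Group labels, rates, and the 0.8 threshold are example assumptions.
def parity_ratio(rates: dict[str, float]) -> float:
    """Ratio of lowest to highest group rate; 1.0 means perfect parity."""
    return min(rates.values()) / max(rates.values())

# Share of recommendations featuring diverse authors, by patron group
rec_rates = {"group_a": 0.42, "group_b": 0.38, "group_c": 0.35}
ratio = parity_ratio(rec_rates)
print(f"parity ratio: {ratio:.2f}, pass: {ratio >= 0.8}")  # parity ratio: 0.83, pass: True
```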
3. Deployment Decision (Go/No-Go)
Once tests pass, leadership decides: launch with caution, launch fully, or don't launch. If launching, we document all controls in place (e.g., "this tool has a human review step before outputs reach patrons").
4. Monitoring & Ongoing Review (After launch)
- Drift checks: Does the AI still perform as well as it did at launch, or is it degrading?
- Incident tracking: When something goes wrong, we log it and fix it.
- Periodic reviews: Every quarter (or more often for high-risk systems), we revisit whether the controls still work.
- Decommissioning path: When we retire an AI tool, we have a plan to protect patron data and inform users.
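A drift check like the one above can be as simple as comparing current performance to the launch baseline and alerting past a tolerance. The 5-point tolerance here is an example value, not a prescribed one:

```python
# Minimal drift check: flag when accuracy drops past a tolerance
# relative to the launch baseline. The 0.05 tolerance is an assumption.
def drift_alert(baseline_acc: float, current_acc: float,
                tolerance: float = 0.05) -> bool:
    """True when performance has degraded beyond the allowed tolerance."""
    return (baseline_acc - current_acc) > tolerance

print(drift_alert(0.95, 0.85))  # True: a 10-point drop triggers review
print(drift_alert(0.95, 0.93))  # False: within tolerance
```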
Safety Guardrails: Catching Bad Outputs
Some AI tools can produce harmful, misleading, or inaccurate outputs. Here's how we handle that:
Detection
We build playbooks to catch problems:
- Misleading health or legal advice that could harm patrons.
- Misinformation or deepfakes in user-generated content (UGC: reviews, comments, uploads).
- Recommendations that feel biased or offensive.
Escalation
When we spot a problem, we have clear steps: who to notify, how quickly, and what to do (e.g., take the content down, flag for human review, alert patrons).
User-Facing Guardrails
For tools patrons use directly, we add disclaimers where appropriate. Example: "This AI reading suggestion tool is not infallible. We recommend you review titles before checkout. If a recommendation seems off, tell us."
Training & AI Literacy
Everyone at your library needs to understand AI basics, at a level appropriate to their role. Here's what that looks like:
For Leadership (Director, Board, Senior Team)
- 30-minute briefing on AI risks, your board's decision-making role, and oversight responsibilities.
- Quarterly AI risk reports (what systems are running, any incidents, mitigation steps).
For Practitioners (Staff who will use, build, or oversee AI)
- 2-hour workshop on the intake process, testing standards, documentation requirements, and when to escalate.
- Ongoing office hours for questions.
For All Staff
- Quarterly micro-learnings (10-15 minutes) on AI ethics, bias awareness, and how to report problems.
- Annual refresher.
Our curriculum is based on: Microsoft AI Literacy Starting Guide 2025; Yale AI Literacy Framework 2025; AI Literacy Framework (Paradox Learning 2024); AI Literacy Framework (Digital Promise 2024).
Who's Responsible for AI Governance?
These are the roles and what each does:
Your Board
- Sets "risk appetite," essentially, "how much risk are we willing to take on AI?" (e.g., "we will not launch AI tools that make recommendations affecting minors without human review").
- Receives a quarterly AI risk report: what systems are live, any incidents, remediation steps, and compliance status.
- Approves budget for training and AI governance tools.
Risk/Compliance Team
- Owns the day-to-day controls (intake forms, testing checklists, monitoring procedures).
- Conducts risk assessments.
- Produces the quarterly board report.
Product, IT, and Data Teams
- Execute the gates: fill out intake forms, run tests, document findings, implement monitoring.
- Maintain the systems and catch problems in production.
Internal Audit (if you have one)
- Independent oversight: audits check whether the controls are actually working and whether we're following the NIST AI RMF and ISO 42001 frameworks.
- Provides assurance to the board that governance is real, not just on paper.
Metrics Snapshot
| Metric | Target (example) |
|---|---|
| % AI systems risk-assessed | 100% before launch |
| Validation coverage (robustness/fairness/security) | 100% of in-scope systems |
| Training completion | ≥ 95% of required audiences |
| Third-party AI reviews completed | All new vendors/models pre-contract |
| Incidents/exceptions | Tracked with remediation cycle time |
| Use cases in monitoring | Top 5 live use cases monitored |
Roadmap (0–3–6 Months)
- 0–30 days: approve policies; pilot intake form; launch training v1.
- 30–90 days: assess top 3 AI uses; apply validation checklist; start monitoring reports.
- 90–180 days: third-party AI review playbook; ISO 42001 readiness check.
Real-World Examples: AI Tools Your Library Might Use
Here are five high-impact AI applications, and what governance looks like for each:
Content Safety & Moderation
What it does: Screens patron reviews, comments, and uploads to catch harmful content or misinformation before it appears on your catalog.
Why it matters: A bad review (or a deepfake) can damage your library's reputation and patron trust.
Governance challenges: The AI needs to balance catching bad content with not over-censoring legitimate opinions. We test for false positives (censoring good reviews) and false negatives (missing actual harm).
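The two error rates above can be tracked from a labeled sample of reviews. This sketch uses made-up example counts to show the calculation:

```python
# Sketch of the two moderation error rates: over-censoring vs. missed harm.
# All counts are fabricated example data for illustration.
def error_rates(fp: int, fn: int, good_total: int, harmful_total: int):
    """False-positive rate = good reviews wrongly censored;
    false-negative rate = harmful reviews missed."""
    return fp / good_total, fn / harmful_total

fpr, fnr = error_rates(fp=3, fn=2, good_total=200, harmful_total=40)
print(f"over-censoring: {fpr:.1%}, missed harm: {fnr:.1%}")  # over-censoring: 1.5%, missed harm: 5.0%
```

Tracking both rates matters because tuning the model to lower one usually raises the other; your policy decides the acceptable trade-off.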
Personalized Reading Pathways
What it does: AI suggests multi-book or multi-media reading plans tailored to each patron.
Why it matters: Helps patrons discover new content and drives circulation.
Governance challenges: We must test for fairness across demographics: does the AI recommend diverse authors and perspectives equally to all patrons, or does it have hidden biases? We also ensure recommendations are explainable ("you liked X, so we recommend Y").
Multilingual Reference Co-Pilot
What it does: An AI tool that assists librarians in reference services, especially for multiple languages, while enforcing rules like "cite your sources" and "don't make up answers."
Why it matters: Helps patrons get better reference service, especially in underserved languages.
Governance challenges: AI language models can "hallucinate," confidently stating false information. We test for this and build in safeguards like mandatory human librarian review of AI suggestions before they reach patrons.
Accessibility Enhancer
What it does: Automatically converts library content to audio, large-print, or machine-readable summaries for patrons with disabilities.
Why it matters: Expands access for patrons who need different formats.
Governance challenges: We validate that AI-generated audio/summaries are accurate and don't introduce errors. We also protect patron data (PII, or personally identifiable information like names and email addresses) when processing.
Sensitive-Topic Shield
What it does: AI flags questions or content about sensitive topics (health advice, legal questions, children's safety) for mandatory human librarian review before answering.
Why it matters: Prevents AI from giving bad medical or legal advice that could harm patrons.
Governance challenges: We track escalation times: if a patron asks a health question, how long until a librarian responds? We set SLAs (Service Level Agreements, or targets for response speed) and monitor whether we meet them.
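A minimal sketch of the escalation-time check described above, assuming an example 2-hour SLA (the actual target is set by your own policy):

```python
# Illustrative SLA check for escalated sensitive-topic questions:
# time from AI flag to librarian response vs. an assumed 2-hour target.
from datetime import datetime, timedelta

SLA = timedelta(hours=2)  # example target, set by policy

def sla_met(flagged_at: datetime, responded_at: datetime) -> bool:
    """True when the librarian responded within the SLA window."""
    return (responded_at - flagged_at) <= SLA

flagged = datetime(2025, 1, 6, 9, 0)
print(sla_met(flagged, datetime(2025, 1, 6, 10, 30)))  # True: 90 minutes
print(sla_met(flagged, datetime(2025, 1, 6, 12, 15)))  # False: 3 h 15 min
```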
All Use Cases Follow the Same Governance Path
No matter the tool, every one goes through: intake assessment → testing → deployment approval → ongoing monitoring.
Board Decisions & Next Steps
Your board is being asked to make three concrete decisions today:
- Approve the Policy Stack: Authorize the five policies (Acceptable AI Use, Data Governance, Model Risk Management, AI Incident Response, Third-Party AI Review). Also approve the oversight cadence: quarterly board reports and regular monitoring cycles.
- Approve Budget for Training and Tools: Allocate funds for staff training and validation tools (software that tests AI for fairness, security, and performance).
- Endorse Third-Party AI Review Requirement: Commit to reviewing any new AI tool from a vendor before contracting, using a standardized security and fairness checklist.
Timeline: What Happens Next (0–6 Months)
- Weeks 1–4: Finalize and publish policies. Launch pilot intake form. Begin leadership and staff training.
- Weeks 5–12: Assess your top 3 AI use cases. Run validation tests. Begin quarterly monitoring reports to the board.
- Weeks 13–26: Complete third-party AI review playbook. Conduct ISO 42001 readiness check to see if certification is within reach.
Acronym & Term Reference
Use this section to look up terms used in this brief.
Standards & Frameworks
- ISO 42001 / ISO/IEC 42001: International standard for AI management systems. Think of it like ISO 27001 (the information security standard), but for AI. Shows stakeholders your library takes AI safety seriously.
- NIST AI RMF / NIST AI Risk Management Framework: U.S. government framework (from NIST, the National Institute of Standards and Technology) that defines how to manage AI risks. The core functions: GOVERN → MAP → MEASURE → MANAGE, with ongoing monitoring handled under Manage.
- SLA / Service Level Agreement: A target for how fast you will respond to something. Example: "We commit to human review of health questions within 2 hours."
AI & Data Terms
- Bias / Fairness: Does an AI system treat all patrons equally, or does it have hidden preferences? Example: does a reading recommendation AI recommend diverse authors equally to all demographics, or does it recommend classics mostly to older patrons and contemporary fiction mostly to younger ones?
- Drift: When an AI system's performance degrades over time. Example: a content moderation AI that was 95% accurate at launch is now only 85% accurate because the types of content it sees have changed.
- Hallucination: When an AI makes up information and states it confidently as fact. Example: an AI library reference tool invents a book title or gives false medical advice.
- Model: The AI algorithm itself (the "brain"). When we say "deploy a new model," we mean launch a new version of the AI.
- PII / Personally Identifiable Information: Data that identifies a patron: name, email, phone, address, borrowing history, etc. We protect this carefully.
- Robustness: Does an AI system work reliably even when things go wrong? Does it handle unusual inputs gracefully, or does it crash?
- UGC / User-Generated Content: Content that patrons create: reviews, comments, uploads. Content moderation AI screens UGC for harm or misinformation.
Governance & Risk Terms
- Gap Assessment: An audit that identifies where your current practices fall short of a standard (like NIST RMF or ISO 42001). Used to plan what to fix first.
- Governance / AI Governance: The policies, roles, and processes for how your library makes decisions about AI and manages risks. Your board sets the "governance framework."
- Intake Process: The form and workflow staff fill out before proposing a new AI tool. Captures what the tool does, what data it needs, who uses it, and whether it's high/medium/low risk.
- Lifecycle / Lifecycle Controls: The full journey of an AI system from conception to retirement. Lifecycle controls are safeguards at each stage: intake, testing, deployment, monitoring, decommissioning.
- Risk Assessment: Evaluating an AI tool to determine how much risk it poses. High-risk tools (e.g., ones affecting patron safety) get extra testing and oversight.
- Risk Appetite: How much risk your board is willing to take on. Example: "We will not launch AI tools that make recommendations affecting minors without human review."
- Third-Party AI Review: A security and fairness audit of an AI tool from an external vendor before your library contracts with them.
- Validation / Testing: Rigorous checks to ensure an AI system works as intended. Includes testing for robustness, fairness, and security.
Library-Specific Terms
- Patron Data / Patron Privacy: Information about patrons: who they are, what they borrow, what they search for. Libraries treat this as highly confidential.
- Mis/Disinformation: False or misleading information. Your library wants to avoid amplifying it through AI.