
Responsible AI: Governance and Oversight

Board Member Quick Guide

What is this about?

Your library is using (or will use) AI tools, from search to content moderation. This guide helps you understand what decisions your board needs to make to use them responsibly and safely.

Why should our board care?

  • Legal protection: Without clear oversight, your library could be liable if an AI tool makes harmful recommendations or violates privacy rules.
  • Trust: Patrons and community partners expect you to use AI thoughtfully. One bad incident erodes that trust.
  • Strategic advantage: Libraries that govern AI well can use it to serve more patrons faster and more fairly, without the drama.

What decisions do we need to make today?

  • Approve a policy framework that covers acceptable AI use, data privacy, and how we test/monitor AI before launch.
  • Set a budget for staff training and the tools we need to validate AI (fairness checks, security testing, etc.).
  • Require that any new AI tool from a vendor undergoes review before we contract with them.

Why This Matters (Now)

Your board has fiduciary responsibility for how your library uses AI. Here's what's driving the urgency:

Our Framework: NIST AI RMF and ISO 42001

We're using two industry standards that work together:

NIST AI Risk Management Framework (NIST AI RMF)

What it is: A U.S. government framework (from NIST, the National Institute of Standards and Technology) that defines how organizations should think about AI risks step-by-step.

The cycle: GOVERN → MAP → MEASURE → MANAGE, with GOVERN running continuously across the other three.

ISO/IEC 42001

What it is: An international standard (from ISO, the International Organization for Standardization) for AI management systems. Think of it as the AI counterpart to ISO 27001, the widely adopted standard for information security management.

Why it matters to libraries: Partners and vendors increasingly ask for it. If your library pursues certification, it shows patrons and stakeholders you take AI safety seriously.

How we use it: Our NIST AI RMF work keeps us "42001-ready," meaning if you decide to pursue formal certification later, we've already done most of the work.

Who Uses These Standards?

When do we apply these?

Policy Stack (Board Approval)

Your board will approve five foundational policies. These policies define the rules for how your library uses AI:

How We Control AI: Step-by-Step

Every AI tool follows this process from the moment someone suggests it until we retire it:

1. Intake & Risk Assessment (Before we build or buy)

Staff fill out a form: What is this AI for? Who uses it? What data does it need? We assess whether it's low, medium, or high risk. High-risk tools (like ones that affect patron services) get extra scrutiny.
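The intake-and-tiering step above can be sketched in a few lines. Everything here is illustrative: the form fields (`affects_patron_services`, `handles_patron_pii`, `fully_automated`) and the scoring thresholds are assumptions for this sketch, not the actual intake form.

```python
# Hypothetical sketch of the intake risk-tiering step; field names and
# scoring thresholds are illustrative assumptions, not adopted policy.

def assess_risk_tier(affects_patron_services: bool,
                     handles_patron_pii: bool,
                     fully_automated: bool) -> str:
    """Map intake-form answers to a low/medium/high risk tier."""
    score = sum([
        2 if affects_patron_services else 0,  # patron-facing tools get extra scrutiny
        2 if handles_patron_pii else 0,       # privacy-sensitive data raises the tier
        1 if fully_automated else 0,          # no human in the loop adds risk
    ])
    if score >= 3:
        return "high"
    if score >= 1:
        return "medium"
    return "low"
```

Under these example weights, a patron-facing recommender that stores checkout history would land in the "high" tier and trigger the extra scrutiny described above.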

2. Testing & Validation (Before launch)

We test the AI on three dimensions:

Sign-off gates: The responsible team must approve before moving forward.
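A sign-off gate like this is easy to make mechanical. The sketch below assumes the three validation dimensions named in our metrics (robustness, fairness, security); the data structure and function names are illustrative assumptions.

```python
# Illustrative sign-off gate: a tool moves forward only when every
# required validation dimension has an explicit approval on record.

REQUIRED_SIGNOFFS = ("robustness", "fairness", "security")

def can_proceed(signoffs: dict) -> bool:
    """Return True only if all required checks are marked approved.
    A missing or pending dimension blocks the launch by default."""
    return all(signoffs.get(dim) == "approved" for dim in REQUIRED_SIGNOFFS)
```

The design choice worth noting: the gate fails closed, so an unrecorded sign-off blocks the tool rather than letting it through.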

3. Deployment Decision (Go/No-Go)

Once tests pass, leadership decides: launch with caution, launch fully, or don't launch. If launching, we document all controls in place (e.g., "this tool has a human review step before outputs reach patrons").

4. Monitoring & Ongoing Review (After launch)

Safety Guardrails: Catching Bad Outputs

Some AI tools can produce harmful, misleading, or inaccurate outputs. Here's how we handle that:

Detection

We build playbooks to catch problems:

Escalation

When we spot a problem, we have clear steps: who to notify, how quickly, and what to do (e.g., take the content down, flag for human review, alert patrons).
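The "who, how quickly, what" of escalation can live in a simple lookup table. Roles, deadlines, and actions below are placeholder assumptions to show the shape of a playbook, not our actual escalation chart.

```python
# Hypothetical escalation playbook; the roles, time windows, and actions
# are placeholder assumptions illustrating "who, how quickly, what".

ESCALATION_PLAYBOOK = {
    "high":   {"notify": "library director", "within_hours": 1,
               "actions": ["take content down", "alert patrons"]},
    "medium": {"notify": "department head",  "within_hours": 24,
               "actions": ["flag for human review"]},
    "low":    {"notify": "tool owner",       "within_hours": 72,
               "actions": ["log for weekly review"]},
}

def escalation_steps(severity: str) -> dict:
    """Look up the playbook entry; unknown severities escalate as high,
    so an unclassified incident is never quietly ignored."""
    return ESCALATION_PLAYBOOK.get(severity, ESCALATION_PLAYBOOK["high"])
```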

User-Facing Guardrails

For tools patrons use directly, we add disclaimers where appropriate. Example: "This AI reading suggestion tool can make mistakes. We recommend you review titles before checkout. If a recommendation seems off, tell us."

Training & AI Literacy

Everyone at your library needs to understand AI basics, at a level appropriate to their role. Here's what that looks like:

For Leadership (Director, Board, Senior Team)

For Practitioners (Staff who will use, build, or oversee AI)

For All Staff

Our curriculum is based on: Microsoft AI Literacy Starting Guide 2025; Yale AI Literacy Framework 2025; AI Literacy Framework (Paradox Learning 2024); AI Literacy Framework (Digital Promise 2024).

Who's Responsible for AI Governance?

These are the roles and what each does:

Your Board

Risk/Compliance Team

Product, IT, and Data Teams

Internal Audit (if you have one)

Metrics Snapshot

Metric                                               Target (example)
% AI systems risk-assessed                           100% before launch
Validation coverage (robustness/fairness/security)   100% of in-scope systems
Training completion                                  ≥ 95% of required audiences
Third-party AI reviews completed                     All new vendors/models pre-contract
Incidents/exceptions                                 Tracked with remediation cycle time
Use cases in monitoring                              Top 5 live use cases monitored

Roadmap (0–3–6 Months)

Real-World Examples: AI Tools Your Library Might Use

Here are five high-impact AI applications, and what governance looks like for each:

Content Safety & Moderation

What it does: Screens patron reviews, comments, and uploads to catch harmful content or misinformation before it appears on your catalog.

Why it matters: A bad review (or a deepfake) can damage your library's reputation and patron trust.

Governance challenges: The AI needs to balance catching bad content with not over-censoring legitimate opinions. We test for false positives (censoring good reviews) and false negatives (missing actual harm).
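The false-positive/false-negative test above amounts to comparing human labels against the AI's decisions on a labeled test set. A minimal sketch, assuming two labels ("ok" and "harmful"); the label names and function are illustrative:

```python
# Minimal sketch: measure false positives (good reviews censored) and
# false negatives (harmful content missed) against human labels.
# The "ok"/"harmful" label names are assumptions for this example.

def moderation_error_rates(labels, predictions):
    """Compare human labels with AI decisions, item by item.
    Rates are relative to each class, the usual convention."""
    ok_total = sum(1 for l in labels if l == "ok")
    harm_total = sum(1 for l in labels if l == "harmful")
    false_pos = sum(1 for l, p in zip(labels, predictions)
                    if l == "ok" and p == "harmful")      # over-censoring
    false_neg = sum(1 for l, p in zip(labels, predictions)
                    if l == "harmful" and p == "ok")      # missed harm
    return {
        "false_positive_rate": false_pos / ok_total if ok_total else 0.0,
        "false_negative_rate": false_neg / harm_total if harm_total else 0.0,
    }
```

In practice the board-level question is which error is costlier: over-censoring patron reviews or letting harmful content through. This test makes that trade-off visible as two numbers.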

Personalized Reading Pathways

What it does: AI suggests multi-book or multi-media reading plans tailored to each patron.

Why it matters: Helps patrons discover new content and drives circulation.

Governance challenges: We must test for fairness across demographics: does the AI recommend diverse authors and perspectives equally to all patrons, or does it have hidden biases? We also ensure recommendations are explainable ("you liked X, so we recommend Y").
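One simple way to operationalize the fairness test above is to compare, per patron group, the share of recommendations featuring diverse authors, and flag the tool when the gap between the best- and worst-served groups is too wide. The group labels and the 10-point tolerance below are assumptions for this sketch, not a stated policy threshold.

```python
# Illustrative fairness check: per-group share of recommendations
# featuring diverse authors. Group names and the default tolerance
# are assumptions for this sketch.

def diversity_gap(recs_by_group: dict) -> float:
    """recs_by_group maps a group label to a list of booleans
    (True = the recommendation features a diverse author).
    Returns the gap between the best- and worst-served groups."""
    shares = [sum(recs) / len(recs) for recs in recs_by_group.values()]
    return max(shares) - min(shares)

def passes_fairness_check(recs_by_group: dict, tolerance: float = 0.10) -> bool:
    """Flag the tool when one group sees markedly less diversity."""
    return diversity_gap(recs_by_group) <= tolerance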

Multilingual Reference Co-Pilot

What it does: An AI tool that assists librarians in reference services, especially for multiple languages, while enforcing rules like "cite your sources" and "don't make up answers."

Why it matters: Helps patrons get better reference service, especially in underserved languages.

Governance challenges: AI language models can "hallucinate," confidently stating false information. We test for this and build in safeguards like mandatory human librarian review of AI suggestions before they reach patrons.

Accessibility Enhancer

What it does: Automatically converts library content to audio, large-print, or machine-readable summaries for patrons with disabilities.

Why it matters: Expands access for patrons who need different formats.

Governance challenges: We validate that AI-generated audio/summaries are accurate and don't introduce errors. We also protect patron data (PII, or personally identifiable information like names and email addresses) when processing.

Sensitive-Topic Shield

What it does: AI flags questions or content about sensitive topics (health advice, legal questions, children's safety) for mandatory human librarian review before answering.

Why it matters: Prevents AI from giving bad medical or legal advice that could harm patrons.

Governance challenges: We track escalation times: if a patron asks a health question, how long until a librarian responds? We set SLAs (Service Level Agreements, or targets for response speed) and monitor whether we meet them.
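Monitoring that SLA reduces to one question per reporting period: what share of escalations got a librarian response within the target window? The 2-hour target and 95% threshold below are example numbers, not adopted service levels.

```python
# Sketch of SLA monitoring for sensitive-topic escalations; the 2-hour
# target and 95% compliance threshold are example numbers only.

def sla_compliance(response_hours, target_hours=2.0):
    """Share of escalations answered within the target window."""
    if not response_hours:
        return 1.0  # no escalations this period means nothing was late
    met = sum(1 for h in response_hours if h <= target_hours)
    return met / len(response_hours)

def sla_met(response_hours, target_hours=2.0, threshold=0.95) -> bool:
    """True when the period's compliance rate meets the threshold."""
    return sla_compliance(response_hours, target_hours) >= threshold
```

Reported monthly, this gives the board a single trend line per sensitive topic rather than a pile of individual incidents.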

All Use Cases Follow the Same Governance Path

No matter the tool, every one goes through: intake assessment → testing → deployment approval → ongoing monitoring.

Board Decisions & Next Steps

Your board is being asked to make three concrete decisions today:

Timeline: What Happens Next (0–6 Months)


Acronym & Term Reference

Use this section to look up terms used in this brief.

Standards & Frameworks

AI & Data Terms

Governance & Risk Terms

Library-Specific Terms
