AI Governance & Policy Framework
Build Board-Ready AI Governance
Transform your library's approach to AI adoption with a comprehensive governance framework. This system helps you develop policies, assess organizational readiness, implement oversight structures, and communicate AI strategies to your board and staff. Move beyond vendor risk assessment to create a complete governance ecosystem that protects your patrons, staff, and institution.
Time required: 2 hours for policy builder, 4-6 weeks for full implementation
Why This Matters
Your library is using AI. Maybe you know it: a discovery system with ranking algorithms, a chatbot fielding reference questions, a recommendation engine suggesting titles to patrons. Maybe you don't: a vendor quietly integrated AI features into a system you already use, and nobody told you it was happening.
This is why governance matters. AI decisions affect patron access. When a discovery system ranks search results using AI, some patrons find what they need and others don't. When a recommendation engine suggests materials, it shapes what communities know is available in your library. These aren't neutral technical questions; they're governance questions requiring board attention, documented policy, staff training, and ongoing oversight.
Governance isn't about blocking technology. It's about making intentional choices: why you use AI, how you'll use it safely, who's accountable when something goes wrong, and how you'll protect patrons, especially vulnerable populations who depend on library access. Regulations are coming fast. The EU AI Act takes effect August 2, 2026. Colorado's AI Act takes effect June 30, 2026. State privacy laws are spreading. This guide helps you get ahead.
Build Your AI Governance Policy
Answer 12 questions about your library's context and generate a customized policy outline, board talking points, and implementation roadmap.
Note: Results in 2 hours, implementation in 4-6 weeks
Complete Guide
The Five Governance Domains
AI governance isn't one thing. It's five interlocking domains that together create a complete protection ecosystem. These aren't independent; they reinforce each other.
1. Risk Assessment & Management
You can't govern what you don't understand. This domain asks: What AI systems are we using? What risks do they create? How are we managing those risks?
Start with an audit. Walk through every system your library uses and identify where AI is present. Ask your vendors specifically. AI might be hiding in places you don't expect: discovery systems, chatbots, recommendation engines, even some circulation systems. For each system, ask:
- What decisions does this AI make? (Ranking results, prioritizing holds, generating recommendations, filtering content)
- What data does it use? (Search history, browse behavior, circulation data, demographics)
- Who is affected by those decisions? (All patrons? Specific groups?)
- What could go wrong? (Recommendations skew toward certain communities, search ranking favors certain types of materials, chatbot gives incorrect information)
Then classify risk. Is this AI making decisions that significantly affect patron access to education? If yes, it's high-risk under the law. Is it recommending content, providing information, or generating explanations? It's limited-risk. Does it just provide generic information with no personal data? Minimal-risk.
High-risk systems require formal impact assessments documenting foreseeable harms, affected populations, and specific mitigations. Limited-risk systems require transparency, meaning patrons must know AI is involved. Minimal-risk systems follow standard privacy and accessibility rules.
What you document: AI systems inventory with risk classifications, impact assessments for high-risk systems, bias testing methodology and results, risk mitigation strategies by system.
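The three-tier classification above can be sketched as a simple decision rule. This is an illustrative sketch only, not legal advice; the function name, parameters, and tier labels are drawn from this guide's wording, not statutory terms.

```python
# Hypothetical risk-tier classifier mirroring this guide's three questions.
# Tier labels follow the guide, not the text of any specific regulation.

def classify_risk(affects_access: bool, recommends_or_informs: bool) -> str:
    """Map a system's audit answers to a risk tier."""
    if affects_access:
        # Significantly affects patron access to education: formal impact assessment
        return "high"
    if recommends_or_informs:
        # Recommends content or provides information: transparency required
        return "limited"
    # Generic information, no personal data: standard privacy/accessibility rules
    return "minimal"

# An AI-ranked discovery system shapes what patrons can find:
assert classify_risk(affects_access=True, recommends_or_informs=True) == "high"
# A chatbot that answers reference questions:
assert classify_risk(affects_access=False, recommends_or_informs=True) == "limited"
```

Real classifications involve judgment calls and legal review; a rule like this is just a consistent starting point for the inventory.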
2. Compliance & Legal Framework
Regulators are watching. You need documented compliance or you're exposed to regulatory investigation, fines, and liability.
The regulatory landscape is changing fast in 2026. The EU AI Act (effective August 2, 2026) applies to any library serving EU patrons. The Colorado AI Act (effective June 30, 2026) applies to high-risk AI making consequential decisions about people, which includes discovery systems and access decisions. State privacy laws give patrons rights to access, delete, and correct their data. GDPR requires data minimization and deletion rights.
For each AI system, identify what laws apply. Then build compliance. For high-risk systems, you need:
- Complete impact assessment (not a checklist, a real analysis of risks and mitigations)
- Documented bias testing showing the system doesn't cause disparate impact
- Training data documentation including sources and known biases
- Meaningful human oversight (actual authority to override, not rubber-stamp approval)
- Patron disclosure (they must know AI is involved and understand limitations)
- Audit trail (who approved what decision, when, why)
- Annual compliance reviews
Vendor contracts are where compliance actually happens. You can't comply alone; you need vendor cooperation. Your contract must require vendors to:
- Not use patron data for AI training without explicit consent
- Provide impact assessments and bias testing results
- Allow you to audit their compliance
- De-identify data within 48 hours
- Notify you within 24 hours of any data breach
- Not remove core AI features without 90-day notice
What you document: Policy framework, vendor contract AI clauses, impact assessments, bias testing results, board decisions approving AI systems, staff training records, patron complaints and responses.
3. Governance Structure & Decision-Making
Who decides whether your library uses AI? Who has authority to change systems? What happens if something goes wrong? These governance questions matter as much as the technical ones.
Your board needs to approve any new high-risk AI system. This isn't micromanagement; it's appropriate governance. Boards approve budgets, set policy, and provide oversight. AI that affects patron access to education is a board-level decision.
Form an AI governance committee with board representation, IT director, collection development, reference staff, and ideally a community advocate from vulnerable populations. This committee owns your policy, reviews vendor contracts, approves new systems, and monitors compliance.
Decision authority needs clarity:
- Board approves: New high-risk AI systems, major policy changes, budget for compliance
- Committee approves: Modifications to existing systems, vendor contracts, staff training
- IT director approves: Routine monitoring, bug fixes, performance optimization
- Staff escalate: Patron complaints about AI, suspected bias, security concerns
What you document: Policy specifying approval processes, governance committee charter, decision logs, board resolutions.
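The decision logs mentioned above can be as simple as structured records capturing who approved what, when, and why (matching the audit-trail requirement in the compliance domain). A minimal sketch; the field names and example entry are hypothetical, not a required schema.

```python
# Hypothetical shape for one decision-log entry. Field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DecisionLogEntry:
    system: str       # which AI system the decision concerns
    decision: str     # what was decided
    approver: str     # board / committee / IT director, per the authority table
    approved_on: date # when
    rationale: str    # why

entry = DecisionLogEntry(
    system="Discovery system upgrade",
    decision="Approved, contingent on vendor bias-testing clause",
    approver="Board",
    approved_on=date(2026, 3, 14),
    rationale="High-risk system; mitigations documented in impact assessment",
)
assert entry.approver == "Board"
```

A spreadsheet with the same columns works just as well; what matters is that every approval is recorded with its rationale.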
4. Vendor Management for AI Systems
Your vendors control most of your AI. You can't eliminate that dependence, since most modern systems have vendor AI built in. But you can protect yourself by being a smart customer.
In vendor selection, ask AI-specific questions:
- What AI is included in this system? (Be specific: "recommendations," "search ranking," etc.)
- Can we audit your compliance with AI regulations?
- What's your liability if your AI causes harm?
- Will you de-identify our patron data? How quickly?
- Do you train your AI on our patron data? Can we opt out?
- What happens if you remove AI features?
In vendor negotiation, demand:
- Indemnification for AI failures (don't accept "AI features are as-is")
- Right to audit vendor compliance
- Clear documentation of what AI can and can't do
- Data de-identification within 48 hours
- 90-day notice before removing AI features
- Breach notification within 24 hours
- Limitations on data use for AI training
Get these in writing. Handshakes and promises don't protect you when regulators investigate. Monitor vendors on an ongoing basis. Track their compliance: ask for bias testing results, keep documentation, and question deviations from the contract.
What you document: Vendor contracts with AI clauses, vendor risk assessments, compliance documentation from vendors, audit reports, performance monitoring data.
5. Patron Rights & Transparency
Patrons are the people AI decisions affect most directly. They need to know, they need to understand, and they need recourse.
Transparency first. Patrons must know where AI is used, not buried in page 47 of the privacy policy, but visible and clear. When a patron searches, a note says "Results ranked by AI. Learn more." When they see a recommendation, they know it's AI-generated. When they're using a chatbot, they're told it's AI.
Understand limitations. Patrons need a clear explanation of what AI can and can't do. "This AI recommends based on your search history" is transparent. "This AI found books you'll definitely like" is a promise the system can't keep.
Provide appeals. If a patron disagrees with AI recommendations or wants human judgment, there's a process. Vulnerable populations especially need this, since patrons researching sensitive topics might want search results that AI filtered out.
Protect vulnerable populations especially. People researching asylum law, domestic violence shelters, transgender healthcare, or undisclosed health conditions have serious privacy needs. Data breaches can enable ICE targeting, abuser tracking, outing, or discrimination. For these patrons, you can't rely on consent, because many would lose library access if they had to opt in to AI. So you choose privacy for them. Minimize data collection. Delete aggressively. Restrict vendor access to their data. Encrypt everything.
Language accessibility matters. Privacy policies in English don't protect monolingual Spanish speakers. AI explanations must be available in community languages at appropriate literacy levels.
What you document: Public-facing disclosure about AI use, privacy policy updates, patron FAQ, appeals process, data deletion schedules, community language translations, vulnerable population protections.
Building Your AI Governance Policy
A governance policy isn't a regulation. It's your intentional framework for how you'll use AI responsibly. Here's how to build one over six months with a nine-step process.
Step 1: Audit Your AI (Month 1)
List every system your library uses. For each one, answer:
- Is there AI in this system? (Ask the vendor if you're not sure.)
- What specifically does the AI do? (Don't accept "machine learning"; what decisions does it make?)
- What data does it process? (Patron searches, behavior, demographics)
- What risk level is this? (High, limited, or minimal)
- Who is responsible for oversight? (IT director? A specific person?)
This becomes your AI Systems Inventory. You'll use it to identify which systems need immediate attention (high-risk), which need transparency work (limited-risk), and which are fine as-is (minimal-risk).
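As a rough illustration, the inventory can live in a spreadsheet or a small structured file. The sketch below models it as a list of records and pulls out the high-risk systems that need immediate attention; the system names, fields, and owners are invented examples, not a required schema.

```python
# Illustrative AI Systems Inventory from Step 1. All entries are made up.
inventory = [
    {"system": "Discovery search", "ai_function": "ranks search results",
     "data": ["search history", "circulation data"], "risk": "high",
     "owner": "IT director"},
    {"system": "Reference chatbot", "ai_function": "answers common questions",
     "data": ["chat transcripts"], "risk": "limited", "owner": "Reference lead"},
    {"system": "Room-booking form", "ai_function": "none identified",
     "data": [], "risk": "minimal", "owner": "IT director"},
]

# Which systems need immediate attention (impact assessments)?
high_risk = [s["system"] for s in inventory if s["risk"] == "high"]
assert high_risk == ["Discovery search"]
```

The same filter answers the other Step 1 questions: which systems need transparency work (limited-risk) and which are fine as-is (minimal-risk).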
Step 2: Assess High-Risk Systems (Months 1-2)
For each high-risk system, conduct an impact assessment. This is a formal document, not a checklist. It addresses:
- Purpose: What problem does this system solve?
- Data: What data goes into the system? Where does it come from? What biases might be present?
- Foreseeable risks: Who could be harmed? How? What's the real harm?
- Affected populations: Which groups of patrons does this affect most?
- Mitigations: What are you doing to prevent each identified harm?
Example: Your discovery system uses AI to rank search results. Purpose: Help patrons find materials faster. Data: Search history, circulation data, item metadata. Risks: If the AI was trained on historical circulation data, it amplifies existing collection development biases. Affected populations: Communities historically underrepresented in your collection. Mitigation: Bias testing methodology, manual review of rankings, diverse collection development feeding the system.
Document this formally. Keep it for regulatory inspection.
Step 3: Establish Vendor Accountability (Months 2-3)
Contact vendors with AI systems. Ask for:
- Impact assessment they've conducted
- Bias testing methodology and results
- Training data documentation
- Documentation of what oversight procedures they support
- Explanation of data retention practices
Then negotiate. Your contract should require vendors to provide impact assessments and bias testing documentation, indemnify you for AI failures, allow audits of compliance, de-identify patron data within 48 hours, prohibit AI training on patron data without consent, maintain breach notification within 24 hours, and document what decisions the AI makes.
If a vendor won't accept basic accountability, it's a red flag.
Step 4: Design Bias Testing (Month 3)
High-risk AI needs ongoing bias testing. You need a methodology documented in your policy.
The basic approach: Test the system for disparate impact. Does the AI's output significantly favor certain groups over others? For a discovery system, do searches for "civil rights history" return books about white civil rights movements disproportionately? For a recommendation system, does it recommend books by authors of color to patrons of color but not to others?
You probably can't test this perfectly because you don't control the AI; the vendor does. But you can ask the vendor to test for bias, test outputs yourself looking for patterns, track patron complaints about AI bias, review recommendations periodically for appropriateness, and conduct disparate impact analysis on high-stakes systems.
Document your methodology and results. Show that you're monitoring.
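One way to make "test outputs yourself looking for patterns" concrete is a disparate-impact screen. The sketch below applies the four-fifths rule, a common heuristic borrowed from employment-discrimination analysis that this guide does not mandate; the group labels and rates are made-up illustration data.

```python
# Four-fifths rule sketch: flag any group whose favorable-outcome rate falls
# below 80% of the best-performing group's rate. Illustrative only.

def disparate_impact_flags(rates: dict, threshold: float = 0.8) -> list:
    """Return group names whose rate is below threshold x the best rate."""
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Share of searches where a relevant item appeared in the top 10 results,
# broken out by subject community (invented numbers):
rates = {"subject_A": 0.72, "subject_B": 0.70, "subject_C": 0.51}
assert disparate_impact_flags(rates) == ["subject_C"]
```

A flag doesn't prove bias; it tells you where to look closer, ask the vendor questions, and document your follow-up.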
Step 5: Implement Human Oversight (Months 3-4)
For high-risk systems, people make final decisions, not AI.
Define when human review is required. For a discovery system, maybe it's: "Any result that would rank below position 10 must pass a library staff member who confirms it matches the search." For recommendations, maybe it's: "Any recommendation involving sensitive topics goes through human review." For approval decisions, maybe it's: "Every initial denial goes to a person before rejection."
Train the people doing review. They need to understand what they're reviewing, what criteria matter, and when to escalate. This is skilled work. Create appeal mechanisms. Patrons should be able to request human review of AI recommendations or decisions.
Step 6: Draft Your Policy (Month 4)
Your policy should include:
- AI Systems Inventory (which systems, what they do, risk classification)
- Risk Management Program (how you assess, monitor, and respond to risks)
- Impact Assessment Requirements (when required, what's covered, documentation)
- Bias Testing Plan (methodology, frequency, documentation)
- Human Oversight Procedures (when required, who decides, appeals process)
- Vendor Management (contract requirements, audit rights, data protections)
- Staff Roles (who's responsible for what)
- Patron Rights (transparency, appeals, data access/deletion)
- Compliance Monitoring (how you track ongoing compliance, reporting to board)
- Review Schedule (policy updates at least annually, sooner if AI systems change)
Use plain language. Avoid jargon. Make it something staff can actually read and understand.
Step 7: Get Board Approval (Month 4-5)
Present your policy to the board with summary of AI systems identified, risk assessment of high-risk systems, proposed mitigations, budget for implementation (ongoing compliance has costs), timeline, and success metrics.
Ask the board to approve the policy framework. Emphasize that this isn't blocking innovation; it's managing risk responsibly.
Step 8: Train Staff (Month 5-6)
Staff are your frontline. They answer patron questions, identify problems, and implement the policy day-to-day.
Training should cover what AI systems the library uses by role, what each system does with specific functionality, how to explain AI to patrons using plain language scripts, when to escalate concerns about bias, privacy, and errors, patron privacy protection and what information is sensitive, and emergency procedures if systems fail.
Make it interactive. Bring them scenarios: "A patron asks if their search is tracked. What do you say?" Role-play difficult conversations. Give them approved answers they can adapt.
Step 9: Monitor and Report (Ongoing)
Governance isn't done once the policy is approved. Monitoring is ongoing.
- Quarterly: Track patron complaints about AI, monitor bias testing results, review vendor compliance status, document any AI problems
- Annually: Full compliance review, update policy if needed, report to board on governance status
- When problems emerge: Investigate thoroughly, document your response, communicate to patrons if affected, adjust policy if needed, report to board if significant
This shows regulators that you take governance seriously. You identified problems, responded appropriately, and learned.
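The quarterly complaint review can start as a simple tally. A toy sketch, assuming complaints are logged as records with a system field; the names, threshold, and data are invented.

```python
# Tally patron complaints about AI by system and flag anything trending.
from collections import Counter

complaints = [
    {"system": "discovery", "issue": "biased ranking"},
    {"system": "chatbot", "issue": "wrong answer"},
    {"system": "discovery", "issue": "biased ranking"},
    {"system": "discovery", "issue": "missing results"},
]

by_system = Counter(c["system"] for c in complaints)
flagged = [s for s, n in by_system.items() if n >= 3]  # hypothetical review threshold
assert flagged == ["discovery"]
```

The flagged systems go into the quarterly report to the governance committee, along with what you did about them.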
Board Decision Framework
Boards make the critical decisions about AI. Not IT directors. Not vendors. Boards.
Six Categories of Board Questions
1. Strategic Questions
- What problem does this AI solve for patrons?
- What's the business case? (Does the benefit justify the cost and risk?)
- Does this align with our mission to serve all patrons equitably?
- What happens if we don't do this? (Is it competitive disadvantage, or just nice-to-have?)
- What are the risks if this goes wrong?
2. Legal and Compliance Questions
- Is this high-risk under EU AI Act, Colorado AI Act, or state law?
- If yes, have we completed required impact assessments?
- Have we documented bias testing?
- Do we have vendor compliance documentation?
- What's our liability exposure?
3. Governance Questions
- Who has authority to change or terminate this system?
- What's the process if problems emerge?
- How do we handle patron complaints?
- Who's accountable if this fails?
4. Financial Questions
- What's the total cost? (Purchase + compliance + monitoring)
- What are ongoing costs for bias testing, audits, vendor management?
- What's the cost to exit if vendor fails or we need to change direction?
- What's our insurance coverage for AI-related claims?
5. Equity and Vulnerable Population Questions
- Who could be harmed by this AI?
- Have we assessed impact on vulnerable populations specifically? (Immigrants, LGBTQ+ youth, domestic violence survivors, people with health concerns, activists)
- How do we protect privacy for patrons with sensitive information needs?
- Do vulnerable populations have real alternatives if they opt out?
6. Vendor Questions
- Does the vendor understand their compliance obligations?
- Do we have audit rights?
- What happens if they breach?
- What happens if they remove the AI feature?
Decision Criteria
Approve with safeguards (Go) if:
- AI genuinely solves a patron need
- Vendor compliance documentation is strong
- Contracts include audit rights and liability sharing
- Impact assessment identifies mitigations for risks
- Board approves budget for ongoing compliance
- Staff training plan is solid
Proceed with aggressive safeguards if:
- Risk is moderate but manageable
- Strong contractual protections can be negotiated
- Vulnerable populations can be meaningfully protected
- Library has resources for ongoing monitoring
Don't approve (No-Go) if:
- Risk to vulnerable populations is severe and can't be mitigated
- Vendor won't accept reasonable compliance responsibility
- Cost of compliance exceeds benefit
- Board can't approve resources for oversight
- System poses unacceptable privacy risk
Staff Training & Implementation
Your policy is worthless if staff don't understand it or believe in it. Implementation happens through training, communication, and ongoing support.
Training Curriculum
Budget about 5 hours for comprehensive staff training. Deliver it in modules so staff can attend relevant sessions.
- Module 1: What AI Systems Does Our Library Use? (1 hour) - Walk through each system staff will interact with, explain what each system does in plain terms, show what data each system collects, explain which systems staff need to explain to patrons
- Module 2: How to Explain AI to Patrons (1 hour) - Standard language for common questions, how to explain rankings and recommendations, how to discuss data collection and privacy, honest answers when you don't know details
- Module 3: Red Flags and When to Escalate (1 hour) - When to report AI system problems, how to document patron complaints, suspected bias, security concerns
- Module 4: Protecting Patron Privacy (1 hour) - What information should never go in AI systems, how to protect patrons asking about sensitive topics, when to escalate privacy concerns, patron rights
- Module 5: Emergency Procedures (30 minutes) - What to do if systems go down, manual workarounds, communication with patrons, contact procedures
- Module 6: Q&A and Scenarios (30 minutes) - Role-play difficult conversations, practice explaining policy, address staff concerns
Provide scripts. Example: A patron asks if we track their searches. You say: "We can see what you searched so we can improve our system, but that information is protected. You can ask me about our privacy policy if you want details."
How to Communicate Beyond Email
Email announcements don't work. Staff don't read them, don't retain them, and don't feel engaged.
Instead: Kick-off meeting where leadership explains why this matters. Interactive training via video or in-person with discussion. Printed guides as reference cards at the desk. Ongoing support with regular check-ins and feedback mechanisms. Make it safe for staff to voice concerns without penalty.
Listen to staff. They'll identify problems you miss. They'll tell you which parts of the policy don't work in practice. Use their feedback to refine it.
Common Misconceptions to Address
"AI is unbiased" - No. AI trained on biased data amplifies those biases. Staff should understand that AI is only as good as its training data.
"Privacy policy protects us from vendors" - No. Vendor contracts may allow different data practices. Staff should know that specific vendor practices are documented in IT, not assumed from the general privacy policy.
"Patrons can always opt out" - No. Some AI features can\'t be opted out without losing functionality. Staff should be honest about what\'s optional and what's not.
Monitoring & Continuous Improvement
Governance isn't a one-time effort. You monitor continuously and adjust as needed.
Quarterly reviews: Track patron complaints about AI systems and look for patterns. Monitor bias testing results to identify trends. Review vendor compliance status. Assess staff training effectiveness. Document system performance issues.
Annual compliance review: Full audit of all AI systems against policy. Update impact assessments if systems changed significantly. Review bias testing results for concerning patterns. Review vendor contracts for renewals and updates. Report to board on governance status and problems identified.
Stakeholder Communication
Your board approved governance. Your staff are trained. Now you need patrons and community to understand and trust.
Patron Transparency
Put it on your website, not page 47 of the privacy policy. When patrons use an AI system, they should see clear notices about what's happening.
In your privacy policy, add specific sections on AI. What systems use AI? What data goes into each? How do you protect privacy?
Vulnerable Population Outreach
Don't assume vulnerable populations will find generic notices. Reach out.
- For immigrants: Provide notices in community languages. Explicitly state what information is protected and how.
- For LGBTQ+ youth: Use affirming language. Clearly state that searches are private and not reported to family/guardians.
- For domestic violence survivors: Confidential notice about privacy protections for safety planning searches.
- For people with health concerns: Explain what data is deleted and when.
Work with community organizations serving these populations. Include them in AI decisions. Invite feedback.
Responding to Community Concerns
Privacy concern: "You're tracking everything I do." Response: Explain what's collected, how it's protected, who has access, and how long it's kept.
Equity concern: "This doesn't work for my community." Response: Acknowledge the bias risk. Explain how you're testing for it and making changes. Invite feedback.
Functionality concern: "Search results aren't helpful." Response: Offer non-AI search options. Suggest advanced search. Offer reference help.
Trust concern: "Vendors are using my data." Response: Be honest. Explain what vendors can do with data, what you've contractually protected, audit rights you have.
Case Study: Governance in Practice
A mid-sized library system (three locations, 150,000 patrons) implemented AI governance. This is how they worked through a real decision.
The Question: Discovery System Upgrade
Should they upgrade to an AI-powered discovery system? Current system is from 2015. Vendor offers new system with AI ranking and personalized recommendations.
Month 1: Assessment
They audited their current systems. Inventory included discovery system, a chatbot, a recommendation engine in their current system, and circulation holds management. Most had some AI component.
They categorized: Discovery system = high-risk (affects patron access to materials). Chatbot = limited-risk (needs transparency). Recommendations = high-risk (affects what patrons know is available).
Month 2: Vendor Evaluation
They asked the vendor detailed questions:
- What AI specifically is in the new discovery system? (Search ranking, personalization, relevance calculation)
- What data does it use? (Searches, browse behavior, circulation history)
- Have you completed impact assessment? (Vendor had none)
- What bias testing have you done? (Vendor didn't understand the question)
- Can we audit your compliance? (Vendor's standard contract said no)
Red flags everywhere. The vendor had built for minimum features, not compliance.
Month 3: Negotiation
They required the vendor to conduct impact assessment, establish bias testing methodology and provide baseline results, allow quarterly audits, de-identify patron data within 48 hours, prohibit AI training on patron data, maintain insurance covering AI failures, and provide 90-day notice before removing features.
Cost went up. Vendor wasn't happy. But after three months of negotiation, they got a contract with accountability built in.
Month 4: Board Presentation
"The new discovery system has AI that helps patrons find materials faster. Risk assessment: The AI could amplify collection development biases. Mitigation: We\'re requiring vendor to test for bias, we\'ll review quarterly, we're training staff to understand limitations, and patrons can use non-AI search if they want.
High cost for the work of governance. But protecting patrons from bias is part of our mission. We're recommending approval with these vendor protections in place."
Board approved.
Month 5: Implementation
They required the vendor to train staff on the new system, provide data showing baseline performance, document what the AI does and its limitations, and explain how patrons can access non-AI search.
Library staff got 3 hours of training on what the AI does (ranking results by relevance and personalization), what it doesn't do (understand patron intent perfectly, find everything), what to tell patrons, and red flags to report.
Month 6+: Ongoing
Quarterly they reviewed sample search results for bias, tracked vendor metrics, collected patron feedback, and reviewed staff reports. After 6 months, they found the AI ranked some topics with older materials, not because the AI was biased, but because their collection was older in those subjects. They adjusted collection development to feed diverse materials into the system going forward.
This is governance in practice. Not perfect, but intentional.
When to Call a Consultant
You can probably handle most of this yourself. But there are moments where professional help is worth the cost.
Call a consultant if:
- Your library is large or complex (multiple systems, lots of patrons, significant budget)
- You're negotiating a major vendor contract and the vendor has sophisticated legal
- Your board is nervous and wants expert validation before approving AI
- You're being investigated by regulators and need documentation
- You have a specific equity or compliance concern and need expert guidance
- Your staff is resistant and you need external credibility to shift thinking
A good consultant will help you understand your specific risks, review vendor contracts, guide your board through decision-making, train your staff credibly, and validate your governance approach. Expect to pay $5,000-20,000 depending on scope and your size. That's usually worth it if it saves you from a regulatory fine or a bad vendor contract.
Download Templates
Get started immediately with our four essential templates. Customize each one for your library's specific needs and context.
AI Governance Policy Template
A comprehensive policy framework covering AI adoption, usage guidelines, oversight mechanisms, and accountability structures for your organization.
Get Template (Google Doc)
Board Decision Memo
Present your AI governance strategy to the board with executive summary, recommendations, risk assessment, and decision-making framework.
Get Template (Google Doc)
Staff Training Outline
Guide your team through AI policy implementation with learning objectives, discussion points, and practical examples tailored to library operations.
Get Template (Google Doc)
Risk Assessment Matrix
Systematically evaluate AI risks across your organization, identify mitigation strategies, and track governance progress with this interactive tracker.
Get Template (Google Sheet)
Implementation Roadmap
Transform your governance framework from planning to operation with this phased approach. Each phase builds on the previous one, ensuring stakeholder alignment and sustainable implementation.
Phase 1: Assess & Decide
Weeks 1-2
- Run the policy builder to assess your organization
- Identify key governance priorities
- Form AI governance committee
- Review existing policies and technology landscape
Phase 2: Draft & Approve
Weeks 3-4
- Customize policy templates for your library
- Draft board decision memo and talking points
- Build consensus with stakeholders
- Obtain board approval of governance framework
Phase 3: Communicate & Train
Weeks 5-6
- Conduct staff training sessions
- Distribute policy documentation
- Establish governance oversight structures
- Create feedback channels for implementation concerns
Phase 4: Monitor & Update
Ongoing
- Track AI usage against policy guidelines
- Conduct quarterly governance reviews
- Update policies based on new developments
- Report governance metrics to board
Next Steps
Ready to build your AI governance framework? Start here with concrete actions you can take this week.
- Use the policy builder to customize for your library
- Download templates and customize for your context
- Present to board for approval and discussion
- Implement staff training using provided outline
- Monitor and review governance framework quarterly
- Explore related decision framework
- Review vendor assessment strategies