AI Ethics & Governance for Leaders, Boards & Trustees
By Freddie Seba
© 2026 Freddie Seba. All rights reserved.
One ask: If you oversee AI (or will soon), subscribe—this is a weekly board-ready oversight packet (plain language, no hype) + a companion podcast with AI practitioners building and deploying AI (including #AgenticAI).
The signal I can’t unsee this week
We’ve crossed a quiet threshold:
AI is no longer “a vendor choice.” It’s a portfolio reality.
And the governance problem is no longer “which model is best,” but:
- How many model families are in play (officially and unofficially)
- Who is accountable when outputs conflict
- What happens when trust falls faster than adoption rises
In other words, optionality is increasing—confidence is not.
Executive signal
Three stories converge into one board-level truth:
- Multi-model is now normal. @a16z reports that 81% of enterprises use three or more model families in testing or production (up from 68% less than a year ago).
- In healthcare, adoption is outpacing comfort and trust. Recent patient-trust findings summarized by @Daniel Yang and Lucy Orr-Ewing highlight a sharp gap: 75% of patients are using AI, yet only 13% feel very comfortable; 51% say AI makes them trust healthcare less; 93% report at least one concern; and more than 80% say trust would increase with clear accountability measures.
- (Referenced ecosystem: #CaliforniaHealthCareFoundation + @NORC at @UniversityofChicago)
- Consumer platforms are moving AI “value” behind paywalls. @Meta is testing premium subscriptions across @Instagram, @Facebook, and @WhatsApp tied to expanded AI capabilities.
Translation for leaders: AI is becoming more distributed, more layered, and more commercialized—which means governance must become more explicit, evidence-based, and portfolio-aware.
Companion signal: this newsletter has a podcast for the “how it actually works” layer!
This newsletter is paired with my Podcast—AI Governance with Dr. Freddie Seba—built for plain-language governance with practitioners actively working with AI in real workflows (including #AgenticAI). The goal is simple: help boards, trustees, and executive leaders ask the right questions before adoption becomes irreversible.
Episode #4 is live!
AI Governance with Dr. Freddie Seba — Episode #4
“AI Agents Explained in Plain Language—What Leaders, Boards, and Trustees Must Ask Before Adoption to Protect Human Oversight and Mission Alignment”
Guest: Omar Nasser (Growth & GTM Engineering at @Inkeep, former @500Global)
Why it belongs in Issue #55: multi-model portfolios + agentic workflows can produce automation drift (decision authority shifts quietly over time), blur accountability, and expand privacy/provenance risk.
Context: @Inkeep recently announced a $13M seed round (key investors: @Khosla Ventures, @GreatPoint Ventures, @Y Combinator). The CEO notes he studied computer science at @MIT.
What you’ll learn (board-ready):
- The simplest way to explain agent vs. chatbot vs. copilot
- The governance questions that map to the 12 Ps (including purpose, process, protections, privacy, provenance, preparedness, and product ownership)
- Guardrails that actually work: escalation, monitoring, auditability, safe fallbacks
- What “mission alignment” looks like in day-to-day operations—not just strategy slides
Listen on: @YouTube • @Spotify • @Apple Podcasts • @Substack
What this means for boards, trustees, presidents, provosts, deans, and exec teams
1) The “single-model strategy” is dying
A multi-model stack can be healthy (resilience, leverage, fit-for-purpose), but it creates predictable failure modes:
- Accountability fog: “Which model did this? Who signed off?”
- Control gaps: different vendors, different telemetry, different retention/training defaults
- Policy drift: one policy, five systems, ten workarounds
- Incident ambiguity: inconsistent outputs become “no one’s problem.”
Board-level framing: Portfolio governance is now a fiduciary expectation, not a technical preference.
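To make "accountability fog" concrete: one practical antidote is a per-output provenance record that ties every AI-assisted decision to a workflow, a model version, and an accountable human. The sketch below is purely illustrative, not any vendor's API; all field names, the `claims-triage` workflow, and the storage paths are hypothetical.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class OutputProvenanceRecord:
    """One illustrative audit entry per AI-assisted decision (hypothetical schema)."""
    workflow: str       # business workflow, not vendor name
    model_family: str   # which model family produced the output
    model_version: str  # pinned version, so incidents are traceable
    prompt_ref: str     # pointer to the stored prompt, not the prompt itself
    output_ref: str     # pointer to the stored output
    approved_by: str    # the accountable human, documented by workflow
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(registry: list, record: OutputProvenanceRecord) -> dict:
    """Append a record and return it as a plain dict for export or audit."""
    entry = asdict(record)
    registry.append(entry)
    return entry

# Answering "Which model did this? Who signed off?" becomes a lookup,
# not an investigation.
audit_log: list = []
log_decision(audit_log, OutputProvenanceRecord(
    workflow="claims-triage",            # hypothetical workflow
    model_family="example-family",       # hypothetical model label
    model_version="2026-01",
    prompt_ref="s3://logs/prompt/123",   # hypothetical storage path
    output_ref="s3://logs/output/123",
    approved_by="jane.doe@org.example",
))
```

The design point for boards: the record is keyed by workflow, not vendor, so accountability survives model swaps and multi-model portfolios.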
2) Trust isn’t lagging because people are “anti-AI”
People are using AI while feeling uneasy—because they don’t see clear accountability. That’s a governance gap, not a communications gap.
3) When AI becomes a paid feature, “equity risk” expands
Premium tiers tied to AI can quietly produce:
- Capability inequality (who gets better tools, faster workflows)
- Visibility inequality (who gets amplified, who gets deprioritized)
- Compliance inequality (who can afford auditability, controls, “safer” defaults)
The Seba Framework — The 12 Ps of Responsible AI Oversight (Issue #55 lens)
Use this as a 30-minute portfolio + agentic governance checklist:
- Purpose — What are we enabling (and refusing to automate)?
- Problems — Which decisions are in-scope vs. off-limits?
- Profits — Who benefits, who absorbs downside risk?
- People — Where does AI shift burden, labor, or blame?
- Planet — What scale costs are we accepting (compute/infrastructure)?
- Process — Do we have lifecycle monitoring + change control?
- Policy — Is policy portfolio-aware (multiple tools, contexts, vendors)?
- Protections — What happens when AI is wrong, biased, or overconfident?
- Privacy — What goes into prompts, what is retained, who has access?
- Provenance — Can we trace outputs to model/version/toolchain when it matters?
- Preparedness — Do we have training + escalation for “AI tripwire moments”?
- Product Ownership — Who owns outcomes end-to-end (not just the contract)?
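Teams that want to operationalize the checklist can track it as data rather than slides. Below is a toy sketch, assuming nothing beyond the framework above: the P names come from the list, while the review function, the example answers, and the tooling around them are illustrative.

```python
# The 12 Ps from the Seba Framework, as listed above.
TWELVE_PS = [
    "Purpose", "Problems", "Profits", "People", "Planet", "Process",
    "Policy", "Protections", "Privacy", "Provenance", "Preparedness",
    "Product Ownership",
]

def review_system(answers: dict) -> list:
    """Return the Ps that still lack a documented answer for one AI system."""
    return [p for p in TWELVE_PS if not answers.get(p)]

# A partially completed review surfaces the open items automatically
# (example answers are hypothetical).
partial = {
    "Purpose": "Drafting support only; no autonomous decisions",
    "Privacy": "No patient or student data in prompts",
}
open_items = review_system(partial)
print(len(open_items))  # 10 Ps still need documented answers
```

Run across the whole portfolio, the same function turns the 30-minute checklist into a dashboard of which systems have gaps on which Ps.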
Board packet — 5 oversight questions to ask this week
- How many AI systems are actually in use (approved + shadow), and where?
- Where is accountability documented—by workflow, not vendor?
- What evidence do we require before scaling (accuracy, safety, monitoring plan)?
- What is our incident protocol (privacy leak, automation harm, misleading outputs)?
- What is our portfolio posture—single, multi, hybrid—and why?
Forward this (to one person)
If you found this useful, forward Issue #55 to one trustee, board member, general counsel, CIO, provost, dean, or clinical leader who will be asked this quarter:
“Are we using AI agents yet—and who’s accountable when they act?”
Subscribe — Newsletter + Podcast.
- Newsletter: Follow/subscribe to AI Ethics & Governance for Leaders (weekly board packet) @LinkedIn, @Substack, freddiesebs.com
- Podcast: Subscribe to AI Governance with Dr. Freddie Seba on @YouTube • @Spotify • @Apple Podcasts • @Substack
- YouTube: https://www.youtube.com/@AIEthicsAndGovernance
- Speaking & briefings: https://freddieseba.com
About the Author
Freddie Seba is a researcher and practitioner focused on AI ethics and governance for leaders across higher education, healthcare, and financial services.
He holds an MBA (@Yale University), an MA (@Stanford University), and an EdD in Organization and Leadership (@University of San Francisco), with a dissertation on AI ethics and governance defended in Fall 2025.
He writes AI Ethics & Governance for Leaders, Boards & Trustees and hosts the companion podcast AI Governance with Dr. Freddie Seba, translating practitioner signals into board-ready oversight: decision rights, risk tiering, vendor accountability, monitoring, and incident preparedness.
Corporate Events + Executive Audiences
I keynote on AI governance, risk, trust infrastructure, and institutional legitimacy.
As an AI keynote speaker, I bring strategic framing and practical takeaways to boards and senior leadership: accountability, transparency, safety, responsible adoption in regulated environments, judgment under uncertainty, escalation design, and governance maturity. Formats span business and educational engagements, executive briefings, and board workshops: inventory → tiering → controls → dashboards → incident drills.
To book an AI speaker keynote, AI corporate event talk, AI executive briefing, or AI board workshop: connect via freddieseba.com.
And please subscribe to the newsletter and follow the podcast.
Grounding communities and institutions:
@University of San Francisco • @American Medical Informatics Association • @Stanford Institute for Human-Centered Artificial Intelligence • @Coalition for Health AI • @University of Illinois Chicago (Applied Health) • @American Association of Colleges and Universities
Transparency + property rights
Drafted and refined with generative tools for synthesis and clarity. Responsibility for research selection, interpretation, frameworks, and conclusions remains with the author.
Educational content only. This newsletter does not constitute legal, medical, clinical, insurance, or professional advice.
All original frameworks, analyses, and written content are the intellectual property of Freddie Seba unless otherwise noted. External research remains the property of its respective authors and publishers.
Links to the articles/studies are in the first comment.
LinkedIn tags
#AIGovernance #AIEthics #BoardOversight #Trustees #ExecutiveLeadership #CriticalThinking #AgenticAI #ResponsibleAI #AIinEducation #HealthcareAI #DigitalHealth #TechPolicy #AIAccountability #AITransparency
References and Useful Information
- @a16z — Enterprise AI portfolio adoption (81% using 3+ model families): https://a16z.com/leaders-gainers-and-unexpected-winners-in-the-enterprise-ai-arms-race/
  - Why it matters: Multi-model is now the default—governance must shift from “vendor choice” to portfolio oversight.
- Patient trust in health AI (summary thread) — Daniel Yang / Lucy Orr-Ewing: https://www.linkedin.com/posts/danielayang_important-research-exploring-patient-trust-activity-7423435050546966528-6Sa9
  - Why it matters: Adoption is rising while comfort is low—trust hinges on clear accountability and human protection.
- @TechCrunch — Meta premium subscription testing across @Instagram @Facebook @WhatsApp: https://techcrunch.com/2026/01/26/meta-to-test-premium-subscriptions-on-instagram-facebook-and-whatsapp/
  - Why it matters: AI paywalls can create capability and compliance inequality—a governance and equity issue.
- @TheVerge — Meta premium subscription context: https://www.theverge.com/news/868439/meta-premium-subscription-ai-facebook-instagram-whatsapp
  - Why it matters: Confirms platform direction: “AI value” is being monetized—expect downstream impacts in education, health, and work.
- @Inkeep — $13M seed announcement + platform framing: https://inkeep.com/blog/inkeep-funding-announcement
  - Why it matters: Agents are moving into real operations—reliability, traceability, and auditability become board-level requirements.
