

Dr. Freddie Seba

Issue #55: Multi-Model Everywhere, Trust Nowhere: Governing the AI Portfolio Era

AI Ethics & Governance for Leaders, Boards & Trustees

By Freddie Seba

© 2026 Freddie Seba. All rights reserved.

One ask: If you oversee AI (or will soon), subscribe—this is a weekly board-ready oversight packet (plain language, no hype) + a companion podcast with AI practitioners building and deploying AI (including #AgenticAI).

The signal I can’t unsee this week

We’ve crossed a quiet threshold:

AI is no longer “a vendor choice.” It’s a portfolio reality.

And the governance problem is no longer “which model is best,” but:

  • How many model families are in play (officially and unofficially)
  • Who is accountable when outputs conflict
  • What happens when trust falls faster than adoption rises

In other words, optionality is increasing—confidence is not.

Executive signal

Three stories converge into one board-level truth:

  1. Multi-model is now normal. @a16z reports that 81% of enterprises use three or more model families in testing or production (up from 68% less than a year ago).
  2. In healthcare, adoption is outpacing comfort and trust. Recent patient-trust findings summarized by @Daniel Yang and Lucy Orr-Ewing highlight a sharp gap: 75% are using AI, yet only 13% feel very comfortable; 51% say AI makes them trust healthcare less; 93% report at least one concern; 80%+ say trust would increase with clear accountability measures. (Referenced ecosystem: #CaliforniaHealthCareFoundation + @NORC at @UniversityofChicago)
  3. Consumer platforms are moving AI “value” behind paywalls. @Meta is testing premium subscriptions across @Instagram, @Facebook, and @WhatsApp tied to expanded AI capabilities.

Translation for leaders: AI is becoming more distributed, more layered, and more commercialized—which means governance must become more explicit, evidence-based, and portfolio-aware.

Companion signal: this newsletter has a podcast for the “how it actually works” layer!

This newsletter is paired with my podcast—AI Governance with Dr. Freddie Seba—built for plain-language governance conversations with practitioners actively working with AI in real workflows (including #AgenticAI). The goal is simple: help boards, trustees, and executive leaders ask the right questions before adoption becomes irreversible.

Episode #4 is live!

AI Governance with Dr. Freddie Seba — Episode #4

“AI Agents Explained in Plain Language—What Leaders, Boards, and Trustees Must Ask Before Adoption to Protect Human Oversight and Mission Alignment”

Guest: Omar Nasser (Growth & GTM Engineering at @Inkeep, former @500Global)

Why it belongs in Issue #55: multi-model portfolios + agentic workflows can produce automation drift (decision authority shifts quietly over time), blur accountability, and expand privacy/provenance risk.

Context: @Inkeep recently announced a $13M seed round (key investors: @Khosla Ventures, @GreatPoint Ventures, @Y Combinator). The CEO notes he studied Computer Science at @MIT.

What you’ll learn (board-ready):

  • The simplest way to explain agent vs. chatbot vs. copilot
  • The governance questions that map to the 12 Ps (including purpose, process, protections, privacy, provenance, preparedness, and product ownership)
  • Guardrails that actually work: escalation, monitoring, auditability, safe fallbacks
  • What “mission alignment” looks like in day-to-day operations—not just strategy slides

Listen on: @YouTube @Spotify @Apple Podcasts @Substack

What this means for boards, trustees, presidents, provosts, deans, and exec teams

1) The “single-model strategy” is dying

A multi-model stack can be healthy (resilience, leverage, fit-for-purpose), but it creates predictable failure modes:

  • Accountability fog: “Which model did this? Who signed off?”
  • Control gaps: different vendors, different telemetry, different retention/training defaults
  • Policy drift: one policy, five systems, ten workarounds
  • Incident ambiguity: inconsistent outputs become “no one’s problem.”

Board-level framing: Portfolio governance is now a fiduciary expectation, not a technical preference.

2) Trust isn’t lagging because people are “anti-AI”

People are using AI while feeling uneasy—because they don’t see clear accountability. That’s a governance gap, not a communications gap.

3) When AI becomes a paid feature, “equity risk” expands

Premium tiers tied to AI can quietly produce:

  • Capability inequality (who gets better tools, faster workflows)
  • Visibility inequality (who gets amplified, who gets deprioritized)
  • Compliance inequality (who can afford auditability, controls, “safer” defaults)

The Seba Framework — The 12 Ps of Responsible AI Oversight (Issue #55 lens)

Use this as a 30-minute portfolio + agentic governance checklist:

  1. Purpose — What are we enabling (and refusing to automate)?
  2. Problems — Which decisions are in-scope vs. off-limits?
  3. Profits — Who benefits, who absorbs downside risk?
  4. People — Where does AI shift burden, labor, or blame?
  5. Planet — What scale costs are we accepting (compute/infrastructure)?
  6. Process — Do we have lifecycle monitoring + change control?
  7. Policy — Is policy portfolio-aware (multiple tools, contexts, vendors)?
  8. Protections — What happens when AI is wrong, biased, or overconfident?
  9. Privacy — What goes into prompts, what is retained, who has access?
  10. Provenance — Can we trace outputs to model/version/toolchain when it matters?
  11. Preparedness — Do we have training + escalation for “AI tripwire moments”?
  12. Product Ownership — Who owns outcomes end-to-end (not just the contract)?

Board packet — 5 oversight questions to ask this week

  1. How many AI systems are actually in use (approved + shadow), and where?
  2. Where is accountability documented—by workflow, not vendor?
  3. What evidence do we require before scaling (accuracy, safety, monitoring plan)?
  4. What is our incident protocol (privacy leak, automation harm, misleading outputs)?
  5. What is our portfolio posture—single, multi, hybrid—and why?

Forward this (to one person)

If you found this useful, forward Issue #55 to one trustee, board member, general counsel, CIO, provost, dean, or clinical leader who will be asked this quarter:

“Are we using AI agents yet—and who’s accountable when they act?”

Subscribe — Newsletter + Podcast.

About the Author

Freddie Seba is a researcher and practitioner focused on AI ethics and governance for leaders across higher education, healthcare, and financial services.

He holds an MBA (@Yale University), an MA (@Stanford University), and an EdD in Organization and Leadership (@University of San Francisco), with a dissertation on AI ethics and governance defended in Fall 2025.

He writes AI Ethics & Governance for Leaders, Boards & Trustees and hosts the companion podcast AI Governance with Dr. Freddie Seba, translating practitioner signals into board-ready oversight: decision rights, risk tiering, vendor accountability, monitoring, and incident preparedness.

Corporate Events + Executive Audiences

I keynote on AI governance, risk, trust infrastructure, and institutional legitimacy.

As an AI thought leadership speaker, I bring strategic framing and practical takeaways to boards and senior leadership—accountability, transparency, safety, responsible adoption in regulated environments, judgment under uncertainty, escalation design, and governance maturity—across business and educational engagements, executive briefings, and board workshops: inventory → tiering → controls → dashboards → incident drills.

To book an AI speaker keynote, AI corporate event talk, AI executive briefing, or AI board workshop: connect via freddieseba.com.

And please subscribe to the newsletter and follow the podcast.

Grounding communities and institutions:

@University of San Francisco • @American Medical Informatics Association • @Stanford Institute for Human-Centered Artificial Intelligence • @Coalition for Health AI • @University of Illinois Chicago (Applied Health) • @American Association of Colleges and Universities

Transparency + property rights

Drafted and refined with generative tools for synthesis and clarity. Responsibility for research selection, interpretation, frameworks, and conclusions remains with the author.

Educational content only. This newsletter does not constitute legal, medical, clinical, insurance, or professional advice.

All original frameworks, analyses, and written content are the intellectual property of Freddie Seba unless otherwise noted. External research remains the property of its respective authors and publishers.

Links to the articles/studies are in the first comment.

LinkedIn tags

#AIGovernance #AIEthics #BoardOversight #Trustees #ExecutiveLeadership #CriticalThinking #AgenticAI #ResponsibleAI #AIinEducation #HealthcareAI #DigitalHealth #TechPolicy #AIAccountability #AITransparency

References and Useful Information
