AI Ethics & Governance: Leaders, Board & Trustee Guide to Responsible AI Oversight. By Freddie Seba
Copyright © 2026 Freddie Seba. All rights reserved.
Issue #52 translates 2026 AI signals into a board-ready oversight packet: pilots-to-performance, patient-facing AI risk classes, vendor concentration, incident reporting laws (NY RAISE), insurance/liability, synthetic evidence limits, and the 12 Ps of Responsible AI Oversight.
New this week: AI Governance with Freddie Seba 🎙️
Opening context
If you read Issue #51—thank you. This week continues the same governance line: AI oversight is expanding beyond GenAI “outputs” to operational systems that shape decisions, execute workflows, and now touch clinical care, insurance liability, and regulated transparency.
New year, broader oversight mandate: from GenAI to AI systems that act
In 2026, boards, trustees, and leaders are overseeing AI systems that:
- influence high-stakes decisions (clinical, employment, education, financial)
- execute workflows (agentic tooling and automation)
- interact emotionally with users (health + companion patterns)
- increase platform dependence (vendor + cloud + model concentration)
The downside is no longer “bad text.” It is operational exposure.
This week’s signal
Health AI is becoming the default infrastructure, while safety failure modes remain non-trivial.
The governance posture is shifting from “tools” to patient-facing systems, where evaluation must include monitoring + escalation, not just accuracy.
The 30-Minute Board Oversight Packet (Issue #52)
For leaders, boards, and trustees:
- Pilot-to-performance test: Are we funding operating foundations (workforce, adoption, monitoring), or only pilots? (McKinsey & Company)
- Clinical AI safety standard: require bundled evaluation covering calibration + clinical utility + monitoring + escalation, not accuracy alone (see the sketch after this list). (arXiv)
- Patient reality policy: What happens when patients bring AI outputs to visits—or rely on AI outside clinic hours?
- Regulatory readiness: Which state and sector rules affect our vendors and contracts (safety frameworks, incident reporting, disclosures)? (Governor Kathy Hochul)
- Insurance + liability lens: What coverage applies to AI incidents, and what controls reduce exclusions?
- Third-party & concentration risk: Model/cloud/vendor dependencies + exit plans (and governance authority to enforce them).
- Evidence integrity rule: Synthetic evaluation can inform iteration—not certify safety in high-stakes settings. (hai.stanford.edu)
- Data governance redlines: Enforceable limits on collection, reuse, resale, training use, and scale/resource intensity.
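To make “not accuracy alone” concrete, here is a minimal sketch in Python of what a bundled score can reveal that an accuracy-only scorecard hides. The predictions, labels, and expected-calibration-error (ECE) routine are all hypothetical illustrations, not drawn from the NOHARM paper or any vendor tool.

```python
# Minimal sketch: the same hypothetical predictions scored on accuracy AND
# calibration. All numbers are invented for illustration; this is not a
# clinical evaluation protocol.

def accuracy(probs, labels):
    # Fraction of cases where the thresholded prediction matches the label.
    return sum((p >= 0.5) == bool(y) for p, y in zip(probs, labels)) / len(labels)

def expected_calibration_error(probs, labels, n_bins=5):
    # Bin cases by model confidence; ECE is the weighted average gap between
    # stated confidence and observed accuracy within each bin.
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        conf = p if p >= 0.5 else 1 - p            # confidence in predicted label
        correct = (p >= 0.5) == bool(y)
        idx = min(int((conf - 0.5) * 2 * n_bins), n_bins - 1)
        bins[idx].append((conf, correct))
    return sum(
        (len(b) / len(labels)) * abs(
            sum(c for c, _ in b) / len(b) - sum(k for _, k in b) / len(b)
        )
        for b in bins if b
    )

# Hypothetical eval slice: often right, but wildly overconfident.
probs  = [0.99, 0.98, 0.97, 0.99, 0.96, 0.98, 0.99, 0.97, 0.98, 0.99]
labels = [1, 1, 1, 1, 0, 1, 1, 0, 1, 0]

print(f"accuracy: {accuracy(probs, labels):.2f}")                    # 0.70
print(f"ECE:      {expected_calibration_error(probs, labels):.2f}")  # 0.28
```

An accuracy-only scorecard reports 70% and stops; the bundled view also shows the model claiming roughly 98% confidence while being right 70% of the time, which is exactly the kind of gap that monitoring and escalation rules exist to catch.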
1) From pilots to performance: what boards and trustees should demand
AI value becomes durable only when institutions invest in the “unsexy” enablers: operating model, workforce transition, adoption systems, and monitoring.
McKinsey’s “pilots to performance” framing carries the same governance lesson across sectors: teams overbuild demos and underbuild operating capacity. (McKinsey & Company)
Oversight (board/trustee): value thesis, metrics, controls, adoption, scenario planning.
Avoid: micromanaging implementation (“noses in, fingers out”).
2) Patients are already using LLMs—now care systems must govern them
Patients are already using LLMs to interpret symptoms, labs, and care plans—often outside clinic hours and in settings with limited access to clinicians.
Board/trustee move: treat patient-facing AI as a distinct risk class with:
- escalation rules
- monitoring + incident learning
- clinician workflow integration
- transparency + patient education
3) Anthropic in healthcare: features for doctors + patients
Anthropic is advancing Claude in healthcare/life sciences and making it easier for clinicians and patients to use its tools for medical information access and workflow integration.
Board implication: when vendors move from “general AI” to domain-tuned workflows, contracting must harden: data boundaries, auditability, incident response, disclosure, lifecycle monitoring (not just pilots).
4) State of Utah + Doctronic: AI-enabled prescription renewals in a regulatory sandbox
State-level pilots matter because they normalize the shift from “advice” to authorized action—even when surrounded by safeguards. Utah’s official release frames this as a first-in-state partnership to evaluate autonomous AI for prescription renewals.
Governance implication: when AI participates in clinical workflow execution, boards and trustees must ensure that scope boundaries, clinical review points, monitoring, and accountability are explicit. (Related reporting underscores both the novelty and the open concerns.)
5) Clinical safety: NOHARM + “humility design” are fiduciary-relevant
Even strong models can fail in clinically meaningful ways, and benchmarks don’t reliably predict real-world safety.
The NOHARM benchmark argues clinical safety is a distinct performance dimension requiring explicit harm measurement—because severe harm can occur at nontrivial rates and is only moderately correlated with typical benchmarks.
Board translation: require lifecycle safety (monitoring + escalation) as part of the definition of done.
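One way to picture “lifecycle safety as part of the definition of done” is below: a minimal sketch in which reviewer-flagged severe-harm cases are tallied per monitoring window, and a crossed threshold triggers escalation instead of sitting in a dashboard. The threshold, record fields, and window logic are hypothetical assumptions, not drawn from NOHARM or any specific deployment.

```python
# Minimal sketch of "monitoring + escalation" as part of the definition of
# done. Threshold, record fields, and window logic are hypothetical.

from dataclasses import dataclass

@dataclass
class CaseReview:
    case_id: str
    severe_harm_flag: bool  # set by a clinical reviewer, not by the model

SEVERE_HARM_ESCALATION_RATE = 0.01  # illustrative board-approved threshold

def review_window(cases: list[CaseReview]) -> bool:
    # Tally reviewer-flagged severe-harm cases for one monitoring window.
    harm_rate = sum(c.severe_harm_flag for c in cases) / len(cases)
    print(f"severe-harm rate this window: {harm_rate:.2%}")
    if harm_rate >= SEVERE_HARM_ESCALATION_RATE:
        # In a real deployment this opens an incident and pages the safety
        # owner, per the escalation rules the board has signed off on.
        print("ESCALATE: threshold crossed; pause or restrict the workflow")
        return True
    return False

review_window([CaseReview("a", True)] +
              [CaseReview(str(i), False) for i in range(99)])
# -> severe-harm rate this window: 1.00%
# -> ESCALATE: threshold crossed; pause or restrict the workflow
```

The design point: escalation is a precommitted rule with an owner, not a discretionary reading of a chart after the fact.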
6) Ethical data sharing in critical care: governance lives in the “pipes”
A scoping review on ethical data sharing in critical care reinforces a durable point: “data sharing” is never just technical—it’s permissions, provenance, accountability, and oversight (including Trusted Research Environments).
Board implication: design the governance model before scale.
7) New York State’s RAISE Act: safety + transparency obligations are arriving
New York’s signed RAISE Act requires covered frontier-model developers to publish safety protocol information and report specific incidents to the state within a defined window.
Board implication: oversight must extend to vendor risk, incident response, and disclosure readiness—not just internal model use.
8) AI liability + insurance: governance maturity becomes underwriting reality
Insurance is becoming a governance forcing function: controls, documentation, training, and vendor accountability move from “good practice” to “coverage reality.”
Board move: treat insurability as an output of governance discipline, built on auditability, incident loops, and risk-class policies.
9) Evidence integrity: Stanford HAI’s generative agents can simulate 1,052 individuals (practical, powerful, and risky)
Stanford HAI’s work on generative agents simulating 1,052 individuals is a significant capability signal.
Board rule: synthetic populations can support scenario testing—but must not become a substitute for real-world validation in high-stakes deployments.
Board prompt: “Where are we using simulated users—and what guardrails prevent them from becoming ‘proof’?”
10) Robotics + physical AI: governance is leaving the screen
At CES 2026, the AI story wasn’t only models—it was physical AI, robotics, and infrastructure (including major platform announcements).
Board implication: when AI is embedded in devices and physical environments, governance shifts toward safety cases, incident response, supplier concentration risk, and resilience—not just policy docs.
11) Higher education: build AI competence, don’t just police behavior
Purdue trustees approved an AI working competency graduation requirement—an example of governance moving from reactive integrity policing to capability-building.
Trustee reframes:
- From “How do we catch cheating?” → “What does integrity mean in AI co-production?”
- From “Detect and punish” → “Assess differently—so learning stays measurable.”
12) Growth environments: VC pressure accelerates the “ship-first” temptation
Signals from Fast Company highlight how capital, speed, and competition compress governance timelines.
Board-level move: define explicit gates (risk-class policies, launch criteria, monitoring plans, incident playbooks) so safety isn’t treated as optional overhead.
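A minimal sketch of what “explicit gates” can look like when written down rather than assumed: the gate names and risk classes below are illustrative assumptions, not a standard; the point is that a launch either satisfies the full gate list for its risk class or it is blocked.

```python
# Minimal sketch: launch gates as checkable criteria per risk class.
# Gate names and risk classes are illustrative, not a standard.

GATES_BY_RISK_CLASS = {
    "patient_facing": ["risk_class_policy", "launch_criteria",
                       "monitoring_plan", "incident_playbook", "escalation_rules"],
    "internal_tooling": ["risk_class_policy", "monitoring_plan"],
}

def ready_to_ship(risk_class: str, completed: set[str]) -> bool:
    # A launch proceeds only if every gate for its risk class is satisfied.
    missing = [g for g in GATES_BY_RISK_CLASS[risk_class] if g not in completed]
    if missing:
        print(f"blocked ({risk_class}): missing {missing}")
        return False
    return True

ready_to_ship("patient_facing", {"risk_class_policy", "monitoring_plan"})
# -> blocked (patient_facing): missing ['launch_criteria',
#    'incident_playbook', 'escalation_rules']
```

Written this way, “ship-first” pressure becomes visible: skipping a gate requires an explicit exception, with a name attached, rather than a quiet omission.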
13) Kids, safety, and public policy: OpenAI + Common Sense Media in California
KQED reports OpenAI and Common Sense Media are partnering on a California ballot measure aimed at youth AI safety protections.
Board implication: youth-facing AI is product risk + policy risk + reputation risk. If you touch minors, your risk-class rules must be stricter than the default.
14) Biomed signal: NVIDIA + Eli Lilly + $1B AI drug lab
Reuters reports NVIDIA and Eli Lilly plan to invest $1B over five years in an AI-focused research lab, underscoring AI’s shift into “lab infrastructure.”
Board implication: as AI becomes lab infrastructure, governance must cover data provenance, model risk, validation, and accountability—not just IT controls.
Takeaway for Boards, Trustees, and Leaders
If AI meaningfully affects people, decisions, or trust, it is a governance matter—regardless of whether it is labeled GenAI, automation, analytics, agents, robotics, or “infrastructure.”
The Seba Framework: The 12 Ps of Responsible AI Oversight (Issue #52)
- Purpose — mission alignment vs. cost extraction
- Problems — decision-relevant framing (not metric-chasing)
- Profits — who benefits vs. who carries risk
- People — workforce/students/patients; lived impacts
- Planet — compute, infrastructure, scale costs
- Process — lifecycle evaluation, monitoring, and incident learning
- Policy — risk-class-specific rules (health, youth, employment)
- Protections — vulnerable populations and escalation paths
- Privacy — enforceable limits on data collection, reuse, resale, and training use
- Provenance — traceability of data/models/vendors; exit readiness
- Preparedness — board/trustee competence + cadence
- Product Ownership — institutions own outcomes once AI acts
Gratitude (institutions that ground this work)
@University of San Francisco (USF), @AMIA Informatics, @Stanford HAI, @Coalition for Health AI, @University of Illinois Chicago Applied Health, @American Association of Colleges and Universities
About the Author
Freddie Seba is a researcher and practitioner focused on AI ethics and governance for leaders across higher education, healthcare, and financial services.
He holds an MBA (@Yale University), an MA (@Stanford University), and an EdD in Organization and Leadership (@University of San Francisco), with a dissertation on AI ethics and governance defended in Fall 2025.
He writes AI Ethics & Governance for Leaders, Boards & Trustees and hosts the companion podcast AI Governance with Dr. Freddie Seba, translating practitioner signals into board-ready oversight: decision rights, risk tiering, vendor accountability, monitoring, and incident preparedness.
Corporate Events + Executive Audiences
I keynote on AI governance, risk, trust infrastructure, and institutional legitimacy.
As an AI speaker and thought leader, I bring strategic framing and practical takeaways to boards and senior leadership (accountability, transparency, safety, responsible adoption in regulated environments, judgment under uncertainty, escalation design, and governance maturity) across business and educational engagements, executive briefings, and board workshops: inventory → tiering → controls → dashboards → incident drills.
To book an AI speaker keynote, AI corporate event talk, AI executive briefing, or AI board workshop: connect via freddieseba.com.
And please subscribe to the newsletter and follow the podcast.
Speaking & briefings: connect on LinkedIn or visit freddieseba.com
Transparency
Drafted and refined with generative tools for synthesis and clarity; responsibility for research selection, interpretation, frameworks, and conclusions remains the author’s.
Educational content only. This newsletter does not constitute legal, medical, clinical, insurance, or professional advice. Institutions should consult qualified counsel and domain experts for decisions involving patient care, regulatory compliance, contracts, risk transfer, and insurance coverage.
References
1) TechCrunch — “CES 2026: Everything revealed, from Nvidia’s debuts to AMD’s new chips to Razer’s AI oddities.”
2) Reuters — “AMD shows off new higher performing AI chip at CES event” (Jan 6, 2026)
https://www.reuters.com/business/amd-unveils-new-chips-ces-event-las-vegas-2026-01-06
3) The Verge — “Nvidia launches Vera Rubin AI computing platform at CES 2026.”
https://www.theverge.com/tech/855412/nvidia-launches-vera-rubin-ai-computing-platform-at-ces-2026
4) Boston Dynamics — “Boston Dynamics & Google DeepMind Form New AI Partnership…”
https://bostondynamics.com/blog/boston-dynamics-google-deepmind-form-new-ai-partnership
5) WIRED — “Google Gemini Is Taking Control of Humanoid Robots on Auto Factory Floors.”
https://www.wired.com/story/google-boston-dynamics-gemini-powered-robot-atlas
6) McKinsey & Company — “From pilots to performance: How COOs can scale AI in manufacturing.”
7) Utah Department of Commerce (News Release) — “Utah and Doctronic Announce… AI Prescription Medication Renewals” (Jan 6, 2026)
https://commerce.utah.gov/2026/01/06/news-release-utah-and-doctronic-announce-groundbreaking-partnership-for-ai-prescription-medication-renewals/
8) Stanford HAI — “AI Agents Simulate 1,052 Individuals’ Personalities with Impressive Accuracy.”
9) New York State (Governor’s Office) — RAISE Act signing/frontier model safety + incident reporting (Dec 19, 2025)
10) KQED — “OpenAI and Common Sense Media Partner on New Kids AI Safety Ballot Measure.” (Jan 9, 2026)
11) Fast Company — “In 2026, venture capital’s hunger for AI will be insatiable.”
https://www.fastcompany.com/91465347/2026-venture-capital-artificial-intelligence-openai-anduril
12) Bloomberg — “Anthropic Adds Features for Doctors, Patients in Health Care Push.” (Jan 11, 2026)
13) Anthropic — “Advancing Claude in healthcare and the life sciences.” (Jan 11, 2026)
https://www.anthropic.com/news/healthcare-life-sciences
14) Reuters — “Nvidia, Eli Lilly to spend $1 billion over five years on joint research lab.” (Jan 12, 2026)
15) Purdue University (Newsroom) — “Trustees approve ‘AI working competency’ graduation requirement.” https://www.purdue.edu/newsroom/2025/Q4/purdue-unveils-comprehensive-ai-strategy-trustees-approve-ai-working-competency-graduation-requirement/
16) Journal of Translational Critical Care Medicine (LWW) — “A scoping review of ethical data sharing in critical care… Trusted Research Environments”
17) arXiv — “First, do NOHARM: towards clinically safe large language models.”
https://arxiv.org/abs/2512.01241
Hashtags
#AIGovernance #AIEthics #ResponsibleAI #AIOversight #BoardGovernance #Trustees #RiskManagement #EnterpriseAI #HealthcareAI #DataGovernance #Privacy #ModelRiskManagement #AICompliance #HigherEdLeadership #Robotics #AgenticAI
