

Dr. Freddie Seba

Issue #49 — 2025 Year-in-Review: Board & Trustee AI Governance Lessons (48 Issues, Seba’s 12 Ps, and What’s Next)

AI Ethics & Governance for Leaders: Board & Trustee Guide to Responsible AI Oversight

By Freddie Seba

© 2025 Freddie Seba. All rights reserved.

A board-first year-in-review on AI ethics and governance: 48 issues distilled into the 12 Ps of Responsible Power, top trustee questions, and a Q1 2026 oversight checklist for higher ed and healthcare leaders.

A reflection to start

If you sit on a university board of trustees, a hospital board, or you advise senior leadership, 2025 made one thing unavoidable:

AI oversight is no longer a “technology update.” It’s institutional governance.

Not because boards should pick tools—but because boards own the conditions for trust: mission integrity, fiduciary responsibility, safety thresholds, disclosure norms, workforce impact, and accountability when systems fail.

This year, GenAI didn’t just show up as a chatbot. It showed up as a workflow default.

Why I started this (and why 2025 mattered personally)

I launched GenAI Ethics & Governance for Leaders in January 2025 as a public leadership project during the final stretch of my EdD journey—to share sensemaking as it happened, not years later when institutional dependence had already hardened.

That journey culminated in my successful dissertation defense on December 10, 2025, focused on Generative AI Ethics and Governance for Leaders in Higher Education, centered on faculty early adopters’ sensemaking, pedagogical innovation, and role transformation.

So the newsletter became two things at once:

  • A weekly, board-ready AI governance briefing, and
  • A living notebook of a doctoral year spent inside the “messy middle” of real institutional adoption.

How this newsletter started

From day one, the purpose was consistent:

  • Help boards, trustees, presidents, provosts, deans, CIOs/CISOs/CMIOs, and risk leaders govern before harm forces reactive fixes.
  • Translate “AI capability” into decision rights, ownership, evidence, disclosure, procurement guardrails, monitoring, and stop mechanisms.
  • Keep mission and dignity central—learning, care, equity, and public trust.

Cadence: weekly publishing across freddieseba.com, LinkedIn, and Substack.

A weekly issue is a governance discipline: signal → incentives → institutional implications → leadership actions.

What happened in 2025

Across higher education and healthcare—especially in regulated, high-trust settings—three shifts repeated all year:

1) Tools → operating model

GenAI moved from optional assistant to embedded infrastructure: advising, documentation, communications, assessment, student support, clinical admin, procurement, and HR.

2) Ethics talk → proof

The board-level question evolved from “Do we have principles?” to:

Can we demonstrate control—before a failure demonstrates it for us?

3) Copilots → coworker behavior (agents)

We began crossing into agentic territory: systems that chain tools, initiate actions, and make decisions with limited supervision—raising governance requirements around audit trails, approvals, escalation, and stop mechanisms.

Board-ready mini-scorecard

A board does not need 100 AI headlines. A board needs a repeatable oversight frame.

Shift 1: Tools → Operating Model

  • Signal: AI embedded in core workflows
  • Board question: Where is AI already decision-adjacent—and who owns it?
  • Ethics & governance upgrade: Maintain an AI inventory/registry with intended use, data flows, and risk tiering

Shift 2: Ethics Talk → Proof

  • Signal: Principles are not enough; evidence is the new bar
  • Board question: Can we show safety, fairness, privacy, and performance before scale?
  • Ethics & governance upgrade: Require monitoring + incident reporting (including near-misses) and versioned change logs

Shift 3: Copilots → Coworker Behavior (Agents)

  • Signal: Action-taking systems are emerging
  • Board question: What autonomy thresholds exist—and who can stop the system?
  • Ethics & governance upgrade: Build approval gates, audit trails, escalation paths, and kill switches for action-taking workflows

Board translation (one sentence):

If AI is inside core workflows, boards should expect: (1) inventory, (2) risk tiering, (3) named owners, (4) monitoring and incident response, and (5) procurement guardrails and exit ramps.
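To make the "inventory, risk tiering, named owners" expectation concrete, here is a minimal illustrative sketch in Python of what one registry entry might look like. All field names, tiers, controls, and the example system are hypothetical placeholders, not a prescribed schema—your institution's policy should define the real tiers and controls.

```python
from dataclasses import dataclass

# Hypothetical tier-to-controls mapping; actual tiers and controls
# should come from institutional policy, not this sketch.
TIER_CONTROLS = {
    "high": ["human approval gate", "audit trail", "incident reporting", "kill switch"],
    "medium": ["monitoring", "versioned change log"],
    "low": ["annual review"],
}

@dataclass
class AISystemEntry:
    """One row in an institutional AI inventory (illustrative fields only)."""
    name: str
    intended_use: str
    owner: str              # named, accountable Product Owner
    data_flows: list        # where data goes, e.g. "student questions -> vendor API"
    risk_tier: str          # "high" | "medium" | "low"

    def required_controls(self) -> list:
        # Controls follow from the tier, so oversight is repeatable, not ad hoc
        return TIER_CONTROLS[self.risk_tier]

# Hypothetical example entry:
entry = AISystemEntry(
    name="Advising Chat Assistant",
    intended_use="Answer routine advising questions; no admissions decisions",
    owner="Dean of Advising",
    data_flows=["student questions -> vendor API"],
    risk_tier="high",
)
print(entry.required_controls())
```

The design point is that controls attach to the tier, not to individual negotiations—so every "high" system automatically carries the same non-negotiables a board can ask about each quarter.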

The year’s progression

This wasn’t 48 disconnected posts. It was a visible progression—ethics lenses tightening into board-ready AI governance.

Phase 1 — Foundations

The early run established the “non-negotiables”: privacy/security, accountability/traceability, IP, bias & trust, deskilling/upskilling, and human-in-the-loop.

Board translation: If it can’t be audited, disclosed, stopped, or owned, it can’t be scaled responsibly.

Phase 2 — Persuasion + surveillance logic + autonomy

Mid-year, two realities became harder to ignore:

  • GenAI can be persuasive—shaping beliefs, not just answering questions
  • GenAI can feel like an enforcer or gatekeeper—even when mechanisms are unclear

Leadership warning: The governance question is no longer “Can it help?” but “Should it influence?” and “Under whose values?”

Phase 3 — Trust economics + governance as infrastructure

By summer, the throughline was clear: capability wasn’t the constraint—trust was.

This phase leaned into policy signals, procurement discipline, and lifecycle governance—because systems change after launch.

Phase 4 — Boundaries and red flags

Fall issues operationalized governance: intended use, measurable guardrails, and monitoring as routine—not exceptional.

Phase 5 — Boards/trustees + sovereignty + “agentic coworkers” realism

By year-end, the focus became explicitly board-facing: portability/lock-in, labor impacts, partnerships, and what happens when “pilot” becomes “platform” without lifecycle governance.

Ethics & governance upgrades

This year wasn’t just more topics—it was a governance evolution:

  • From principles → controls: registries, monitoring, escalation, evidence standards
  • From pilots → lifecycle governance: design → deploy → monitor → update → retire
  • From vendor claims → institutional proof: evaluation harnesses and audit-ready documentation
  • From tool use → decision systems: treating AI as institutional decision architecture
  • From GenAI-only → AI-wide governance: preparing for agentic AI and embodied AI where failure modes become physical

What we covered

Across 48 issues, the same tensions kept resurfacing—just in new forms:

  • Autonomy vs. accountability
  • Transparency vs. opacity
  • Persuasion vs. manipulation
  • Pilotitis vs. product discipline
  • Procurement as governance
  • Human mastery vs. deskilling
  • Trust as economics

Governance truth boards recognize immediately:

Platform defaults become institutional destiny unless leaders set explicit boundaries.

Companies and systems we kept returning to

Not as “winners,” but as default-setters whose incentives become institutional reality:

  • Frontier ecosystems: OpenAI/ChatGPT; Google/DeepMind/Gemini; Anthropic/Claude
  • Enterprise gravity: platform bundling, cloud routing, identity, workflow capture, opt-out friction
  • Workflow gravity: health and higher-ed platforms where AI becomes embedded into documentation, advising, tutoring, assessment, and communications—turning tools into operating models

Scope and effort

Conservative, transparent estimates based on typical issue structure:

  • Total words: ~90,000–150,000
  • Book-equivalent length: ~300–500 pages (at ~300 words/page)
  • Reading time: ~7.5–12.5 hours (at ~200 wpm)

Sources:

  • Typical sources per issue: ~5–10
  • Estimated total across 48 issues: ~240–480
  • Mix: peer-reviewed work, preprints, policy/standards documents, major news, and institutional artifacts

Sector center of gravity (directional):

  • Higher Education: ~35–45%
  • Healthcare / Clinical Informatics / Digital Health: ~30–40%
  • Finance / Policy / Markets / Power & Infrastructure signals: ~20–30%

Actionability:

3–7 governance moves per issue → roughly 150–250 actionable governance prompts across the year.
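The estimates above follow from simple arithmetic; a quick back-of-envelope check in Python, using the stated assumptions (~300 words/page, ~200 wpm, ~5–10 sources per issue):

```python
# Back-of-envelope check of the scope estimates above
issues = 48
words_low, words_high = 90_000, 150_000

pages_low = words_low // 300        # ~300 words per page
pages_high = words_high // 300
hours_low = words_low / 200 / 60    # ~200 words per minute
hours_high = words_high / 200 / 60

print(pages_low, pages_high)        # 300 500 (book-equivalent pages)
print(hours_low, hours_high)        # 7.5 12.5 (reading hours)
print(issues * 5, issues * 10)      # 240 480 (total sources)
```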

The Seba framework: where we are now

By late 2025, the framework consolidated into a board-usable operating lens:

The 12 Ps of Responsible Power

WHY

1. Purpose — Use AI only where it advances mission and the public good

2. Problems — Solve real needs, not shiny demos or vendor agendas

3. Profits — Create value without externalizing harm to people or trust

WHO

4. People — Protect dignity, agency, labor, and lived experience

5. Planet — Measure environmental and societal costs; mitigate, don’t ignore

HOW

6. Process — Govern the lifecycle: design → deploy → monitor → retire

7. Policy — Align with evolving rules and institutional norms; update continuously

8. Protections — Guardrails, limits, and a kill switch before scale

9. Privacy — Minimize data, secure it, define retention, meaningful consent

10. Provenance — Track sources, authorship, and what outputs are grounded on

11. Preparedness — Expect failure; drill incident response and escalation

12. Product Ownership — One accountable leader who can resource—and stop—it

Board note: The 12 Ps convert “AI ethics” into repeatable oversight questions you can ask every quarter.

Top 10 board and trustee questions for 2026

1. Where is AI already shaping decisions (admissions, advising, grading, HR, clinical documentation, billing)?
2. Who is the Product Owner for each system, with the authority to resource it and to stop it?
3. What is our risk tiering, and what controls are required at each tier?
4. What data leaves the institution, what’s retained, and for how long?
5. What’s our incident playbook (near-misses included), and when does it trigger board visibility?
6. What’s our evidence standard before scale—impact, safety, equity, cost—and who verifies it?
7. What audit rights do we have with vendors—and what happens if a vendor refuses?
8. What’s our portability/exit plan (switching costs, routing strategy, data export)?
9. How do we protect human mastery when AI is wrong, intermittent, or unavailable?
10. What do we disclose, to whom, and at the point of decision?

Q1 2026 governance memo (90-day checklist)

If you do nothing else in the next 90 days:

  • Inventory AI use (including vendor-embedded “AI features”) and risk-tier it.
  • Assign Product Ownership for every system in production.
  • Require logging + monitoring + incident reporting (near-misses included).
  • Put governance into procurement (audit rights, data rights, retention limits, exit ramps).
  • Tie expansion to evidence standards (impact, safety, equity, cost)—not vibes.

What to expect in 2026 (broader than GenAI)

GenAI is the headline—but governance must generalize across the AI stack:

  • AI agents / agentic systems (action-taking workflows embedded inside operations)
  • Embodied AI (robots, drones, autonomous vehicles—where failure becomes physical)
  • Traditional machine learning, already governing quietly (risk scoring, triage logic, personalization, fraud detection, surveillance-by-default patterns)

In 2026, this newsletter stays technology-agnostic in principles while becoming more modality-specific in nuance—because governance requirements change when harms move from persuasion and paperwork to physical action.

Presentations and convenings that shaped the work

This newsletter wasn’t written only from headlines. It was shaped by leaders in rooms asking: “What do we do next week?”

  • AMIA Clinical Informatics Conference (Anaheim, CA)
  • AAC&U CLASS (San Juan, Puerto Rico)
  • UC Berkeley Extension × Pitch Global (San Francisco)
  • AMIA Annual Symposium (Atlanta, GA)
  • USF ETS/CTE GenAI Symposium 2025 (San Francisco)
  • USF GenAI Symposium (San Francisco)

Gratitude

This work is not solo. It’s shaped by communities where governance is lived reality—patient safety, learning integrity, equity, and public trust.

With appreciation to:

University of San Francisco (School of Education; School of Nursing & Health Professions); AMIA; AAC&U; Stanford HAI; CHAI; University of Illinois Chicago; and the faculty early adopters, clinicians, students, and institutional leaders doing the hard governance work in real institutions.

And personally, my dissertation chair and committee, and family and friends who supported the long arc.

Transparency and copyright

Drafted and refined with generative tools for synthesis and clarity; responsibility for interpretation, selections, research, framework, sensemaking, and conclusions remains the author’s.

Appendix: Full Issue Index (2025) — Issues #1–#48

1. Happy New Year! Welcome to the Journey of Exploring Generative AI Ethics
2. Why Generative AI Ethics Should Be Every Leader’s Priority
3. Ensuring Data Privacy & Security in GenAI
4. Traceability and Accountability Challenges
5. The Dual Forces of Deskilling & Upskilling in the Age of AI Agents
6. Intellectual Property & Creative Ownership in the Age of GenAI
7. Humanizing AI Agents Through Human-in-the-Loop Strategies
8. AI Agents and Autonomy in Highly Regulated Industries
9. The End of Search as We Know It
10. Self-Improving GenAI, Copyright Battles, and Deepfake Dangers
11. Leading Through the AI Crossroads — DeepMind’s Promise, Anthropic’s Caution
12. Redefining Work: AI, Human Autonomy, Governance, and the Future of the Workplace
13. Google Gemini, VC Oversight, and the Governance Gap: Leading in a Race-to-Market AI Era
14. Cheating, and the Leadership Wake-Up Call
15. From “AI-First” to “Human-Centered”: Rethinking Strategy in the Age of Generative AI
16. What Happens When Education Falls Behind AI?
17. Promising Applications, Progress with Caution: GenAI in Health & Education
18. Persuasive GenAI, Surveillance Logic & the Erosion of Human Autonomy
19. Can We Trust AI in Health, Education, and Beyond?
20. AI That Won’t Shut Down, Governance That Can’t Wait
21. When AI Governs Our Institutions, People, or Wages War
22. The Promise and Peril of Going “AI-First”
23. Communicate—or Be Obsoleted
24. Trust, Talent & the Line Between Capability and Care
25. More Signals, Same Challenge: Trust, Ethics & Purpose in GenAI
26. When AI Models Mirror Society, Authentic Leadership Must Discern and Transform
27. Infrastructure, Policy & Governance: Building the Backbone of Ethical GenAI
28. Learning, Healing, Working, Sustaining: Signals & Guidelines for Ethical AI Leadership Across Sectors
29. ChatGPT-5: When AI Chooses the Route — Router-Era Ethics & Governance (and the Leadership Choices After Launch)
30. Where Should GenAI Be Deployed—and Where Must It Never Decide? Hiring Integrity, GeoSpy’s Pixel-Only Geolocation, and Human-in-Command Red Lines
31. AI’s Consciousness Warning, Pilotitis, and Governance: What GenAI Signals Mean for Leaders
32. Greener Tokens, Stronger Guardrails: Lower-energy inference, market consolidation, copyright détente, edge autonomy, and kids’ AI—what leaders need to do now
33. From Classrooms to Clinics: AI Governance Red Flags—and What Should Leaders Do Next
34. Policy Experiments, Clinical Boundaries, and Mission-Driven Governance
35. Predictive Health Gets Real; Transparency Rules Catch Up; How People Actually Use AI
36. Validating AI Claims, Building Global Guardrails, and Keeping Humans Accountable
37. The Paradox of Counter-AI: Guardrails, Misuse, and the Next Frontier
38. Counter-AI in the Wild: Misuse, Bio-Risk & Strategic Bets
39. Generative AI’s Messy Middle: Interviews, Anxiety & 2030’s Compute Bill
40. From Scale to Stewardship: Rethinking Power, People & Purpose
41. From Hype to Habits: Platforms, Introspection & the AI-Factory Moment
42. Voice, Copy & Bio: What Just Got Real
43. Doubt, Control & Access: GenAI Guardrails for Health
44. Fragility, Stewardship & the Public Good: GenAI for Health, Higher Ed & Financial Services
45. Jobs on Iceberg & Agents in the Wild: GenAI Between Quiet Change and Loud Claims
46. Co-Improvement, Care Logs & National AI Plans
47. AI Architects, Omnibus, and New Work
48. Partnerships, Sovereignty, and “Agentic Coworkers”

© 2025 Freddie Seba. All rights reserved.