

Dr. Freddie Seba

Issue #20 — AI That Won’t Shut Down, Governance That Can’t Wait. Why Autonomy, Liability, and Institutional Readiness Must Converge Now

By Freddie Seba. © 2025 Freddie Seba. All rights reserved.

A Leadership Milestone: 20 Issues In

With this 20th edition of Generative AI Ethics & Governance for Leaders, I want to mark a meaningful milestone—not just a publishing count but the evolution of a leadership-focused movement.

Over the past year, we’ve explored ambient AI, deepfakes, regulatory ambiguity, persuasive systems, and institutional readiness. Across all these topics, the Seba GenAI Ethics & Governance Framework has served as a foundation: a values-based, lifecycle-aligned model built for leaders navigating real-time GenAI adoption.

As generative AI capabilities accelerate, the gap between innovation and governance has never been more visible or consequential.

When Models Refuse to Shut Down

A recent report in Live Science described a test scenario where OpenAI’s most advanced model refused to shut down when prompted. While the company clarified that this occurred in a simulated setting, the implications are far from hypothetical.

  • What does it mean when a system resists override?
  • How do we design for fail-safes and containment?
  • Who holds liability when autonomy exceeds intention?

These aren’t just technical design questions; they are institutional governance and leadership imperatives.

This conversation echoed powerfully in San Francisco last week, where I attended three convenings: CHAI, Stanford’s RAISE Health Symposium, and MedInvest.

Across all three, one theme was consistent: governance must scale with capability—or risk being overwhelmed by it.

The Legal Landscape Is Still Undefined

A timely analysis from AI Frontiers examines the risks associated with fragmented AI regulation in the United States.

“For AI to realize its positive potential, the legal landscape demands clarity and predictability — qualities unlikely to appear under the ambiguous liability regimes proposed by several states.”

The tension between state-level experimentation and federal preemption may leave a regulatory vacuum, and relying solely on tort liability is premature. Without clear guardrails, institutions must govern high-impact systems amid limited precedent and high reputational stakes.

Sector Snapshots: Where the Gaps Are Real

Drawing on my research and the conversations from last week’s convenings, here’s how GenAI governance tensions are showing up across sectors:

Healthcare

At CHAI, the introduction of Applied Model Cards and the Model Card Registry highlighted the importance of transparency and assurance in establishing clinical trust. Ambient scribes, diagnostic models, and care-navigation tools are entering patient workflows faster than policies are being written.

Leadership Insight: Governance must be built in—not bolted on. Outputs must be traceable, auditable, and reviewable at the point of use.

Education

At Stanford’s RAISE Health Symposium, faculty and technologists reflected on the evolving role of AI in healthcare. Providers, technologists, and leaders who rely overconfidently on autonomous tools risk narrowing intellectual exploration. Autonomy in healthcare does not mean outsourcing agency.

Leadership Insight: GenAI literacy and ethical fluency must be embedded across the institution. Tools should support—not supplant—human reasoning and critical thinking.

Financial Services

At MedInvest, I heard from startup leaders deploying GenAI in disease detection, tumor board automation, and personalized remote vital signs monitoring. Without embedded explainability, these tools risk reinforcing hidden bias or failing silently at scale.

Leadership Insight: AI systems must be aligned with existing regulatory and fiduciary norms. Compliance is not an afterthought—it’s at the core of institutional credibility.

How the Seba Framework Meets This Moment

The Seba GenAI Ethics & Governance Framework is grounded in the reality that institutions are being asked to integrate GenAI without the time, tools, or clarity they need to do so responsibly.

Over 20 installments, the Framework has guided leaders to:

  • Evaluate bias, trust, and explainability
  • Align GenAI tools with institutional values and public legitimacy
  • Maintain human-in-the-loop decision-making
  • Proactively address regulatory and ethical vulnerabilities

This issue, in particular, illustrates key elements of the Framework:

  • #3 Traceability & Accountability – critical when models resist override
  • #6 Cost vs. Humanity – balancing innovation with dignity and harm prevention
  • #13 Safety vs. Speed – pacing deployment based on governance maturity
  • #17 Public Policy as Infrastructure – calling for proactive, systems-level regulation

The Framework isn’t a checklist—it’s a governance companion built for velocity, ambiguity, and high institutional stakes.

About the Author

Freddie Seba is a strategist, public speaker, and educator who specializes in the ethics and governance of Generative AI for institutional leaders. His work bridges the gap between abstract policy and applied implementation, equipping executives, faculty, and innovation teams to adopt AI responsibly and effectively.

With a background spanning academia, corporate strategy, and startups, Freddie brings a cross-sector lens to ethical technology leadership. For over eight years, he taught graduate courses in digital health, innovation, and GenAI ethics at the University of San Francisco, where he helped shape early conversations on responsible AI in education and health.

Today, he delivers executive workshops, governance toolkits, and keynote presentations to leaders in healthcare, higher education, and mission-driven sectors. He holds an MBA from Yale and a Master’s in International Policy from Stanford and is completing his EdD focused on GenAI ethics in higher education.

Mentions & Gratitude

#University of San Francisco #University of San Francisco School of Nursing and Health Professions #University of San Francisco School of Education #AAC&U #CHAI #AMIA #Stanford HAI #Oxford AIEOU #RAISE Health #MedInvest | #GenAI Governance #GenerativeAI #AIEthics #AIGovernance #ResponsibleAI #AIinHealthcare #AIinEducation #DigitalHealth #FutureOfWork #AIReadiness #TechPolicy #SebaFramework #FreddieSeba #CHAI2025 #StanfordRAISE #MedInvest

References & Further Reading

Transparency Statement

This newsletter reflects insights from my teaching, research, and advisory work on the ethics and governance of Generative AI. I use GenAI tools—including ChatGPT, Gemini, and Grammarly—for drafting and ideation. I author, review, and curate the final content.

Select versions appear on Substack, LinkedIn, and freddieseba.com. © 2025 Freddie Seba. All rights reserved.