
AI Keynote Speaker | Ethics Speaker | Executive Workshops

Dr. Freddie Seba

Issue #21 — When AI Governs Our Institutions, People, or Wages War

A plot twist leaders didn’t script—and how to step back into the role.

By Freddie Seba. © 2025 Freddie Seba. All rights reserved.

A Movie We’re Suddenly Inside

If this were a film, we’d be ten minutes in—snacking on popcorn—before realizing that the AI assistant quietly supporting operations has started making decisions on its own. There’s no evil mastermind. Just a set of quietly automated processes put in place to reduce costs or increase efficiency.

However, over time, those unseen systems—intended or not—have begun to shift control of key institutional and human decisions. What was a helpful tool has become a governing force. No one scripted it this way. Yet here we are.

Does the story sound familiar?

With Generative AI advancing at breakneck speed, we’re no longer talking about a speculative future. We’re living inside the movie. And most of us weren’t cast as the lead character.

Last week’s headlines brought this plot into sharper focus: AI is no longer just enhancing people and processes—it has begun to govern. It’s shaping outcomes, influencing strategy, and producing recommendations that human leaders approve, defer to, or fail to interrogate thoroughly.

If you’re a university president, provost, CIO, CMO, or dean—this isn’t just a technical challenge. It’s a leadership accountability issue.

And the core question is no longer:

“Can we use AI?” or “Should we?”

It’s:

“Who decides—and who is accountable—when AI decides?”

When AI Becomes the Default Decision-Maker: 3 Signals Leaders Shouldn’t Ignore

1. What If Organizations Ran Themselves?

AI Frontiers explores AI-powered firms that optimize hiring, product design, culture, and growth—without direct human leadership. In theory, it is efficient. In practice? Untethered decisions and downstream misalignment.

Seba Framework Alignment:

  • #7 Human-in-the-Loop – Delegation must have limits.
  • #13 Safety vs. Speed – Optimization without oversight invites risk.
  • #17 Public Policy as Infrastructure – If AI governs, it must be governed.

2. AI Simulates War in Real Time

Phys.org reports that AI now simulates global conflict, offering military planners casualty estimates, strategic moves, and political risks. These outputs are starting to shape actual decisions.

Seba Framework Alignment:

  • #3 Traceability & Accountability – Who owns the recommendation?
  • #6 Cost vs. Humanity – A simulation can turn into policy.
  • #10 Deepfakes & Manipulation – Modeled outcomes distort moral decision-making.

3. Anthropic CEO Calls for Mandatory Transparency

The New York Times features Anthropic CEO Dario Amodei urging transparency mandates. Why? Because even developers don’t fully understand how these models make decisions.

Seba Framework Alignment:

  • #1 Bias & Trust – If the builders can’t explain it, users shouldn’t trust it unquestioningly.
  • #13 Safety vs. Speed – Release timelines now outpace readiness.
  • #17 Public Policy as Infrastructure – The era of self-regulation has run its course.

Sector Snapshots: When AI Outpaces Governance

Higher Education

GenAI writes syllabi, grades papers, tutors students, and even evaluates faculty. But most governance structures haven’t caught up.

Leadership Insight:

Build cross-functional GenAI governance councils before institutional decisions are automated. Make ethics proactive—not post-hoc.

Healthcare

At CHAI, the introduction of Applied Model Cards and a Model Card Registry underscored the importance of transparency and assurance in building clinical trust. Meanwhile, AI scribes and decision tools are entering workflows faster than institutions can respond.

Leadership Insight:

Treat GenAI like a clinical device. Require oversight, auditability, and real-time human review—especially when safety and liability are involved.

Financial Services

GenAI drives fraud detection, risk scoring, personalization, and trade automation. Yet internal governance beyond model documentation is often lacking.

Leadership Insight:

Move from algorithm compliance to institutional ethics and governance. Align AI decisions with fiduciary responsibility, equity, and public trust.


Gratitude

With thanks to the institutions, collaborators, and communities who inform and strengthen this work:

University of San Francisco, University of San Francisco School of Nursing and Health Professions, University of San Francisco School of Education

AMIA (American Medical Informatics Association)

American Association of Colleges and Universities (AAC&U), Coalition for Health AI (CHAI), Stanford Institute for Human-Centered Artificial Intelligence (HAI), University of Illinois Chicago, and AI in Education at Oxford University (AIEOU)

About the Author

Freddie Seba is a keynote speaker, strategist, and educator specializing in the ethics and governance of Generative AI for institutional leaders.

With experience spanning academia, startups, and organizational strategy, he helps institutions make sense of GenAI—before the consultants arrive or tools are procured. For over eight years, he taught graduate courses on GenAI ethics, innovation, and digital health at the University of San Francisco.

Today, Freddie collaborates with universities, health systems, foundations, and public-serving organizations to develop governance approaches grounded in clarity, mission, and accountability.

He holds an MBA from Yale and a Master’s in International Policy from Stanford and is currently completing a doctorate focused on GenAI ethics and leadership.

How I Work with Leaders

(Because colleagues and readers often ask… here’s the briefest plug.)

Before institutions commit to strategies, software, or multi-year GenAI investments, many leadership teams benefit from something more foundational: strategic clarity.

I deliver presentations and workshops—primarily for executive teams, boards, and senior leadership—to help institutions pause, assess, and act with alignment. These are not one-on-one consulting sessions or tech implementation services. They’re designed for the people who set institutional direction, not those tasked with execution.

Think of these sessions as a compass before the roadmap—a way to orient your team before decisions are locked in.

Recent presentations and workshops include:

  • The American Medical Informatics Association (AMIA) Clinical Informatics Conference
  • The AAC&U Conference on Learning and Student Success (CLASS)
  • The ETS/CTE Symposium on GenAI in Higher Education
  • A GenAI ethics workshop with the University Assessment Committee (UAC) at the University of San Francisco focused on AI detection and academic integrity.

For inquiries or collaboration opportunities:

freddieseba.com/contact

linkedin.com/in/freddieseba

This newsletter appears on Substack, LinkedIn, and freddieseba.com.

Subscribe, explore past issues, or reach out to collaborate.

© 2025 Freddie Seba. All rights reserved.

#GenerativeAI #AIethics #AIgovernance #LeadershipInAI #SebaFramework

#AIinEducation #AIinHealthcare #AIinFinance #HumanCenteredAI #CHAI

#DigitalHealth #ResponsibleAI #StrategicClarity #FutureOfCare

#StanfordHAI #AACU #AMIA2025 #FreddieSeba


Transparency Statement

This newsletter integrates generative AI tools for drafting and clarity refinement. All insights, interpretations, and frameworks are authored by Freddie Seba and grounded in original governance research.