
AI Keynote Speaker & Ethics Speaker & Executive Workshops

Dr. Freddie Seba

Issue #27 | Infrastructure, Policy & Governance: Building the Backbone of Ethical GenAI

Generative AI Ethics & Governance for Leaders

By Freddie Seba

Also published on Substack and LinkedIn

Framing the Conversation

To the discerning leaders who follow this newsletter: this week's installment features several articles and news pieces centered on the ethics, governance, policy, and organizational infrastructure of accelerating Generative AI. I intend to continue the strategy of increasing breadth: more spotlights, with news reflections kept concise, plain, and jargon-free, plus links to sources for those who want to dig deeper.

This week’s theme: purposeful GenAI progress is relentless, propelled not just by innovation, but by leadership decisions—what infrastructure to build, which values to prioritize, and which communities to serve.

From national AI investments and grants to early glimpses of superintelligence economics and regulatory paradigms, ethics and governance are evolving from a compliance checklist into a strategic differentiator and civic responsibility.

This issue explores what it means to build not just AI systems, but trustworthy, future-ready institutions.

Signals That Test the Seba GenAI Ethics & Governance Framework

U.S. Education Dept: Funding AI With Guardrails

The Department of Education will fund AI in tutoring, advising, and curriculum, with specific guardrails in place to protect data privacy, account for educator impact, and safeguard student equity.

🔗 EdWeek

Seba Take: Grants that foreground values signal a move from “AI for education” to “education-led AI.”

Federal Grant Influence Without Guardrails

A new study finds that many U.S. government AI research grants shape outcomes but lack formal ethics or governance frameworks.

🔗 arXiv: Federal Grants & AI Governance

Seba Take: When values aren’t in the RFP, governance becomes reactive rather than foundational.

The Economics of Superintelligence

The Economist warns that so-called superintelligence could further concentrate power and systemic risk, requiring governance frameworks that are transparent, pluralistic, and values-driven.

🔗 The Economist

Seba Take: Leadership today must anticipate not just capabilities but consequences.

Nvidia’s Sovereign AI Pitch to Governments

Nvidia is promoting “sovereign AI” infrastructure to governments, positioning public funding as a shaper—not just consumer—of AI ecosystems.

🔗 The Economist: Sovereign AI

China’s Data Center Surplus Becomes Global Asset

China aims to offer its excess AI computing capacity abroad, raising concerns about transparency, access, and the geopolitics of data governance.

🔗 Reuters

U.S. Policy Shifts: AI Action Plan on the Horizon

A new federal action plan may establish clearer regulatory paths, oversight models, and incentives for readiness.

🔗 The Verge

Sector Reflections: Ethics-Led Governance Across Industries

Higher Education

Universities are urged to lead in GenAI, but while students often adopt new technologies quickly, institutional governance typically lags. Leaders must clarify the “how” and “why” behind AI use, ensuring alignment with ethical values, access, and faculty agency.

Leadership Insight: Federal Funding Is Coming. Will your institution lead with policy or play catch-up?

Healthcare

AI infrastructure—from ambient sensors to decision support—is scaling faster than governance. Sovereign AI models may help, but only if national capacity centers equity, clinical accountability, and safety.

Leadership Insight: Don’t let tools outpace trust. Governance is now clinical infrastructure.

Financial Services

As AI infrastructure is increasingly shaped by government funding and vendor ecosystems, finance leaders must build ethics into every layer—from model transparency to procurement logic.

Leadership Insight: Governance is no longer about risk disclosure—it’s a precondition for trust.

Applying the Seba Framework: This Week’s Governance Lessons

Each of this week’s stories underscores the need for bold, principled, and literate leadership. The Seba GenAI Ethics & Governance Framework enables institutional leaders to ground their AI strategy in their mission, values, and commitment to human dignity.

Clear Communication Alignment

Define and articulate how AI serves institutional mission, growth, and equity goals, not just technical gains.

Leadership AI Literacy

Understand infrastructure dependencies and downstream risks, from outsourcing to geopolitical exposure. Technical knowledge is no longer optional at the top.

Ethics and Governance Frameworks

Embed governance across the lifecycle—from exploration to deployment. Policy can’t be an afterthought.

Accountability Structures

Design systems to remain accountable to patients, students, and customers, not just metrics.

Transparency and Traceability

Demand traceability across infrastructure, vendors, and models. If it’s invisible, it’s ungovernable.

What Authentic Leaders and Teams Should Consider

Leaders

  • Communicate a dynamic GenAI strategy rooted in ethics, purpose, and long-term viability.
  • Elevate AI literacy at the executive and board levels.
  • Align partnerships and grants with transparent, mission-driven innovation.

Teams

  • Track the ethical implications of procurement and deployment, not just technical outcomes.
  • Share learnings from pilot to production, especially across infrastructure-dependent tools.
  • Always ask: Who benefits? Who bears the risk? Make your governance strategy people-centered.

With Gratitude

Thank you to the communities advancing leadership in GenAI through rigorous collaboration and public-interest innovation. Your work informs and strengthens this practice.

@University of San Francisco, @USF School of Education, @USF School of Nursing and Health Professions, @UC Berkeley Extension, @University of Illinois Chicago, @AMIA, @AAC&U, @Stanford Human-Centered AI, @CHAI, @OECD, @AAAI

About the Author

Freddie Seba is a doctoral candidate (Ed.D.) in Organizational Leadership at the University of San Francisco, researching Generative AI ethics and governance. A faculty member for over eight years, a chair/program director for over four, a global executive for over ten, and a serial entrepreneur, he holds an MBA from Yale and a Master's in International Policy from Stanford. He advises universities, healthcare systems, and financial institutions on ethical strategies for GenAI.

Copyright & Transparency

© 2025 Freddie Seba. All rights reserved.

This issue was synthesized using generative AI tools and manual curation. All interpretations and frameworks are original to the author. Connect via LinkedIn or freddieseba.com for reprints, collaboration, or speaking.