By Freddie Seba | GenAI Ethics & Governance for Leaders
A note to readers
I’m sharing reflections and working notes from my doctoral research and real-world practice in Generative AI ethics and governance—human-centered guidance and discernment for leaders across higher education, healthcare, financial services, and beyond. If you’re deciding how AI touches people, policy, and performance, this is for you.
This week’s signals → actionable moves
1) Interview “fraud” or broken process?
The Atlantic — “People Are Using AI to Cheat in Job Interviews”
What it is: Candidates are using live AI copilots during interviews, exposing how brittle and performative the standard interview process has become.
Leader move: Swap at least one interview for a scored work sample. Publish what’s allowed (copilots, notes) to separate enhancement from misrepresentation.
2) AI anxiety goes global.
Pew Research Center — “How People Around the World View AI”
What it is: In 25 countries, concern outweighs excitement; trust in government oversight is low.
Leader move: Add a plain-language AI notice to one workflow (purpose, human-in-loop, data use, recourse) and a visible feedback channel.
3) The “useful AI” phone arrives.
Evolution AI Hub — HONOR smartphone discount assistant
What it is: Auto-finds discounts—a minor, concrete feature that saves money on day one.
Leader move: Launch one utility feature with opt-in, an on/off toggle, and a simple success metric (minutes saved, dollars saved).
4) Claude Skills = operational AI done right.
Simon Willison — overview of “Claude Skills”
What it is: Structured, reusable workflows that turn ad-hoc prompting into verifiable playbooks.
Leader move: Convert three repetitive tasks into Skills/Playbooks (clear inputs/outputs + audit log).
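As a concrete illustration of what a Skill looks like on disk (based on Simon Willison's description of the format; the skill name, steps, and file names below are hypothetical), a Skill is a folder whose SKILL.md pairs brief YAML metadata with step-by-step instructions the model loads on demand:

```markdown
---
name: quarterly-variance-memo
description: Drafts the quarterly budget-variance memo from a CSV export. Use when asked to summarize budget variances.
---

# Quarterly variance memo

1. Read the attached CSV of actuals vs. budget.
2. Flag any line item with a variance above 10%.
3. Draft a one-page memo using the template in memo-template.md.
4. Append a log entry (inputs, outputs, timestamp) for the audit trail.
```

The point for leaders: the prompt stops living in someone's chat history and becomes a reviewable, versionable artifact with clear inputs and outputs.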
5) Health AI meets the Senate.
Stanford HAI testimony + JAMA Network perspective
What it is: Call for clinical evaluation, drift monitoring, and patient recourse—beyond pilots.
Leader move: Establish a Model Oversight Board; track real-world performance, overrides, incidents, subgroup deltas; publish an AI Use Registry.
6) Google + Yale: AI in cancer discovery.
TechSpot — on Google DeepMind & Yale study
What it is: AI uncovered previously unknown cancer-related mutations/protein behaviors—high potential, verification required.
Leader move: Require external validation and provenance/data cards before public claims or deployment.
7) The 2030 compute crunch.
Epoch AI — AI 2030 report & summary
What it is: Gigawatt-scale energy demand and massive annual AI capex by 2030; compute/energy now shape moats.
Leader move: Add compute & power lines to strategy: reserved capacity, energy mix, efficiency targets, and AI total cost of ownership in leadership reviews.
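For the leadership review, even a back-of-the-envelope TCO model is enough to start the conversation. The sketch below is a minimal example; every input figure is an illustrative placeholder, not a benchmark:

```python
# Back-of-the-envelope annual AI total cost of ownership (TCO).
# All inputs are illustrative placeholders; substitute your own figures.

def annual_ai_tco(
    gpu_hours: float,          # reserved compute consumed per year
    rate_per_gpu_hour: float,  # blended cloud/colo rate in dollars
    power_mwh: float,          # data-center energy attributable to AI
    rate_per_mwh: float,       # energy price in dollars per MWh
    staffing: float,           # ML/ops headcount cost in dollars
    licenses: float,           # model/API and tooling licenses in dollars
) -> float:
    """Sum the major annual cost lines into a single TCO figure."""
    compute = gpu_hours * rate_per_gpu_hour
    energy = power_mwh * rate_per_mwh
    return compute + energy + staffing + licenses

# Example with placeholder numbers:
total = annual_ai_tco(
    gpu_hours=50_000, rate_per_gpu_hour=2.50,
    power_mwh=120, rate_per_mwh=90.0,
    staffing=600_000, licenses=80_000,
)
print(f"${total:,.0f}")
```

Even this crude sum makes the strategic point: compute and energy are now recurring budget lines, not one-off project costs.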
Industry focus (quick takeaways)
Higher Education
- Shift assessment to portfolios, oral defenses, and collaborative reasoning.
- Update honor codes to distinguish enhancement vs. misrepresentation.
- Compensate faculty for AI curriculum redesign (treat like service/research load) and fund exemplar assignments.
Healthcare
- Institutionalize lifecycle assurance: evaluation → drift monitoring → patient recourse.
- Maintain a model registry (intended use, subgroup performance, update log, decommission plan).
- Integrate safety review with IRB; treat oversight like pharmacovigilance.
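A minimal registry entry might look like the following sketch; the field names and values are illustrative, not a standard schema:

```yaml
model: sepsis-risk-v3              # illustrative model name
intended_use: early-warning flag for adult inpatients; advisory only
out_of_scope: pediatrics, emergency triage
subgroup_performance:              # report AUROC or similar per subgroup
  overall: 0.81
  age_over_65: 0.78
updates:
  - 2025-06-01: retrained on 2024 data; drift check passed
recourse: clinician overrides logged; patients may request human review
decommission_plan: retire if AUROC stays below 0.75 for two quarters
```

Keeping entries this short is deliberate: a registry nobody updates is worse than none.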
Financial Services
- Treat compute cost, bias, and opacity as matters of brand trust and partnership with the communities you serve, not just compliance.
- Add AI TCO, incident metrics, red-team drills, and vendor chain reviews to regular risk discussions.
The Seba GenAI Ethics & Governance Framework — 12 Ps of Responsible Power © 2025 Freddie Seba
WHY
1) Purpose – Only where it advances mission & societal benefit.
2) Problems – Real needs, not shiny curiosities.
3) Profits – Value without externalizing harm.
WHO
4) People – Protect users, workers, communities; design for dignity.
5) Planet – Measure and mitigate environmental and societal costs.
HOW
6) Process – Govern the whole lifecycle.
7) Policy – Anticipate and align with emerging rules.
8) Protections – Guardrails, limits, and a kill switch.
9) Privacy – Minimize, secure, consent.
10) Provenance – Track sources, authorship, accountability.
11) Preparedness – Expect failure; drill response; share learnings.
12) Product Ownership – Name the accountable owner for safety & shutdown.
Reflection
We’ve moved from capability to credibility. The winners will prove value, show their guardrails, and plan for the energy and compute required—human-centered by design, with transparency that builds trust.
With gratitude
Thanks to The Atlantic, Pew Research Center, Epoch AI, Stanford HAI, JAMA Network, Simon Willison, Evolution AI Hub, TechRadar, and TechSpot for high-quality reporting and analysis.
Appreciation to the University of San Francisco · USF School of Education · USF School of Nursing & Health Professions · AMIA · Stanford HAI · CHAI · AAC&U · AAAI for advancing ethical AI education and governance.
About the author
Freddie Seba is an author, speaker, and EdD doctoral candidate (University of San Francisco) focused on Generative AI Ethics & Governance for Leaders. MBA (Yale); MA (Stanford). Former USF faculty and Digital Health Informatics program director; Silicon Valley executive and entrepreneur.
Speaking / Briefings: For keynotes, board workshops, or executive sessions, connect on LinkedIn or visit freddieseba.com.
Transparency & copyright
Drafted and refined with generative tools (ChatGPT, Gemini, Grammarly)—synthesis, structure, and voice are the author’s.
© 2025 Freddie Seba | All rights reserved | GenAI Ethics & Governance for Leaders
Sources & Useful Materials
- The Atlantic — People Are Using AI to Cheat in Job Interviews: https://www.theatlantic.com/technology/2025/10/ai-cheating-job-interviews-fraud/684568/
- Pew Research Center — How People Around the World View AI: https://www.pewresearch.org/global/2025/10/15/how-people-around-the-world-view-ai/
- TechRadar — HONOR AI smartphone discount assistant: https://www.techradar.com/phones/honor-phones/honor-reveals-its-first-self-evolving-ai-smartphone-and-yes-it-has-an-ai-button
- TechSpot — Google and Yale’s AI Made a Major Cancer Discovery: https://www.techspot.com/news/109888-google-yale-new-ai-made-major-cancer-discovery.html
- Simon Willison — Claude Skills: https://simonwillison.net/2025/Oct/16/claude-skills/
- Stanford HAI — Russ Altman Senate Testimony: https://hai.stanford.edu/policy/russ-altmans-testimony-before-the-us-senate-committee-on-health-education-labor-and-pensions
- JAMA Network — AI, Health, and Health Care Today and Tomorrow: https://jamanetwork.com/journals/jama/fullarticle/2840175
- Epoch AI — Power Demands of Frontier AI Training: https://epoch.ai/blog/power-demands-of-frontier-ai-training
