

Dr. Freddie Seba

Issue #35: Predictive Health Gets Real; Transparency Rules Catch Up; How People Actually Use AI

GenAI Ethics & Governance for Leaders

By Freddie Seba — Also on Substack and LinkedIn

Reflection

Predictive health just took a leap from “promising” to “policy-relevant.” Delphi-2M demonstrates cross-border generalization at population scale, while an ECG-based perioperative model recalibrates surgical-risk baselines. Both raise the same leadership task: build lifecycle assurance and communication duties before autonomy. Meanwhile, the largest usage study of ChatGPT reminds us to train for how people actually use AI (practical guidance, information seeking, and writing), not just code. (Sources: Nature; The Hub)

Leadership Snapshot (quick scan)

  • Population-scale predictive health. Delphi-2M trained on ~400k UK Biobank records and externally validated on 1.9M Danish records; it forecasts >1,000 conditions up to ~20 years out. Tremendous promise, and serious questions on consent, cross-border analysis, and downstream use. (The Economist; Nature)
  • Link: The Economist (overview).
  • Transparency is fragmenting in health AI. CHAI’s Transparency Report tracks over 250 state bills that affect documentation, disclosure, monitoring, and incident reporting. If you operate across states, assume a national floor plus local add-ons. (CHAI, PDF)
  • Surgical risk, re-baselined. A Johns Hopkins/BIDMC retrospective (37k patients) reports ~85% accuracy for 30-day post-op MI/stroke/death with a multimodal ECG+EHR model versus ~60% for standard scores. Greater power brings greater responsibility: the model still needs prospective validation. (The Hub)
  • How people actually use ChatGPT. An OpenAI + Duke + Harvard study of ~1.5M chats: non-work use grew from 53% to 70%+ (2024→2025). The big buckets are practical guidance, information seeking, and writing. Train policy and enablement to that reality. (OpenAI, PDF)
  • Innovation velocity & agentic front doors. McKinsey argues that AI’s next S-curve is not just efficiency but invention speed. AT&T is piloting a digital receptionist, and Handshake reports nearly 5× growth in GenAI mentions in job and intern postings since 2023. (McKinsey; AT&T Newsroom; Handshake)
  • AI & medicine partnerships. A Science/AAAS webinar offers governance playbooks for shared data, validation, and IP. (Science)
  • Education adoption data point. Microsoft’s 2025 report: 86% of education orgs now use GenAI; U.S. students reporting they “often use” it rose 26 points and educators 21 points year over year. (Microsoft)

Deep dives (through the governance lens)

1) Predictive health at population scale (Delphi-2M)

Global & Policy. Cross-border validation spotlights portability, longitudinal consent, and obligations when forecasting far-horizon risk. Expect scrutiny of data residency (e.g., Danish data analyzed in-country) and duties to communicate risks and uncertainties. (Nature)

Institutional & Governance. Before piloting, stand up a model registry (including intended use, data sources, validations, subgroup performance, and monitoring intervals), align with the IRB/ethics and patient councils, and define where human oversight is mandatory.
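The registry described above can be sketched as a simple structured record. This is a minimal illustration in Python; the field names and example values are hypothetical, not a standard schema or an actual Delphi-2M deployment.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistryEntry:
    """Illustrative registry record for one clinical model (fields are hypothetical)."""
    model_name: str
    intended_use: str
    data_sources: list
    validations: list           # e.g., external cohorts where performance was checked
    subgroup_performance: dict  # metric per demographic subgroup
    monitoring_interval_days: int
    human_oversight_required: bool = True  # advice-only by default

# Example entry; values are placeholders for illustration only.
entry = ModelRegistryEntry(
    model_name="population-risk-pilot",
    intended_use="Decision support for preventive screening; not autonomous triage",
    data_sources=["UK Biobank (training)", "Danish registry (external validation)"],
    validations=["External validation on an independent national cohort"],
    subgroup_performance={"age_65_plus": 0.81, "age_under_65": 0.84},
    monitoring_interval_days=90,
)
print(entry.model_name, entry.human_oversight_required)
```

Keeping oversight mandatory by default (rather than opt-in) mirrors the "human oversight is mandatory" stance above: a reviewer must make a deliberate, logged decision to relax it.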

Leadership & Practice. Start with prevention workflows (screening nudges, social supports). Treat outputs as decision support; require clinician sign-off. Publish a plain-language explainer of what the model does—and doesn’t.

Source: The Economist; Nature coverage.

2) Health-AI transparency: patchwork (CHAI)

Global & Policy. Over 250 state bills address transparency, bias, monitoring, and audits; national actors face overlapping requirements. Plan for a baseline disclosure and monitoring stack, then layer state-specific elements on top. (CHAI)

Institutional & Governance. Maintain a living transparency folder per clinical model, including: purpose, training/eval data, performance by subgroup, limitations, change log, monitoring plan, incident reporting, and contacts. Map each folder to relevant state rules.
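The checklist above lends itself to a mechanical completeness check. A minimal sketch, assuming a folder is represented as a dictionary; the section names mirror the list above but are not a CHAI-mandated schema.

```python
# Required sections of a "living transparency folder", per the checklist above.
# Names are illustrative, not a regulatory or CHAI schema.
REQUIRED_SECTIONS = [
    "purpose", "training_eval_data", "performance_by_subgroup",
    "limitations", "change_log", "monitoring_plan",
    "incident_reporting", "contacts",
]

def missing_sections(folder: dict) -> list:
    """Return checklist items that are absent or empty in a model's folder."""
    return [s for s in REQUIRED_SECTIONS if not folder.get(s)]

# Hypothetical folder for one clinical model.
folder = {
    "purpose": "Perioperative risk decision support",
    "training_eval_data": "Retrospective EHR + ECG cohort",
    "performance_by_subgroup": {"female": 0.84, "male": 0.86},
    "limitations": "Retrospective only; awaiting prospective validation",
    "change_log": ["v1.0 initial release"],
    "monitoring_plan": "Quarterly drift review",
    "incident_reporting": "governance@example.org",
    "contacts": "Model owner: (named individual)",
}
print(missing_sections(folder))  # an empty list means the folder is complete
```

Running such a check per model, per release, makes "map each folder to relevant state rules" auditable rather than aspirational.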

Leadership & Practice. Publish a versioned notice for patients and clinicians. When models update, push change highlights (what changed, why, how safety is maintained, and the name of the human who approved the change).

Source: CHAI Transparency Report (PDF).

3) ECG-based surgical risk model (~85% vs ~60%)

Global & Policy. Outperforming legacy scores ≠ waiver of oversight. Require prospective validation, fairness audits, and clear accountability when predictions influence care. (The Hub)

Institutional & Governance. Implement lifecycle governance: pre-deployment bias checks, advice-only vs. recommendation guardrails, drift monitoring, clinician training, escalation protocols, and periodic outcome audits.
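The drift-monitoring step above can be reduced to a simple rule: flag the model for review when rolling performance falls a set tolerance below its validated baseline. A minimal sketch; the 0.85 baseline echoes the reported accuracy figure, while the 0.05 tolerance is an assumed policy choice, not a clinical standard.

```python
def needs_review(baseline: float, recent_accuracy: float,
                 tolerance: float = 0.05) -> bool:
    """True when observed performance drifts below (baseline - tolerance).

    baseline:        accuracy established at validation (e.g., ~0.85 here)
    recent_accuracy: accuracy measured on a recent monitoring window
    tolerance:       assumed governance threshold, set by policy
    """
    return recent_accuracy < baseline - tolerance

print(needs_review(0.85, 0.84))  # small dip within tolerance: keep monitoring
print(needs_review(0.85, 0.78))  # large drop: trigger the escalation protocol
```

The threshold belongs in the model's registry entry and change log, so an escalation is traceable to a documented policy rather than an ad hoc judgment.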

Leadership & Practice. Begin with advice-only integration and human sign-off; track outcomes against baseline and publish internal reviews so clinicians see where the model adds value and where it doesn’t. Be open about both.

Sources: JHU research release; BJA journal record.

Sector-specific implications

Higher Education

  • Move from bans to critical adoption, experimentation, and governed enablement: simple guidance at the assignment level, such as “allowed/limited/prohibited,” in addition to syllabus disclosures.
  • Use adoption data and pedagogical experimentation to prioritize practical guidance, info seeking, and honor-code-aligned writing tasks (reflecting real usage). (OpenAI)

Healthcare

  • Treat clinical models as safety-critical systems: prospective validation, subgroup fairness audits, drift monitoring, versioned change-logs, and patient-facing notices.
  • For high-impact models (e.g., surgical risk, population risk prediction), ensure clear human accountability and document the clinical rationale alongside model results. (The Hub)

Financial Services & Other Regulated Sectors

  • Apply time-horizon and materiality thresholds before granting autonomy; maintain model portfolios with portability, audit trails, and stoppage plans.
  • Talent: respond to market signals such as Handshake’s with GenAI-literacy training for managers and entry hires; make responsible use part of performance reviews. (Handshake)

How this aligns with the Seba GenAI Ethics & Governance Framework

  • Narrative & Purpose Alignment. Align critical adoption to mission (patient safety, student learning, fiduciary duty, human-centeredness), not hype or FOMO.
  • Executive Literacy. Reduce GenAI’s leadership knowledge gaps before scale; align board oversight with time-horizon thresholds for autonomy.
  • Lifecycle Governance. Treat models as valuable assets, with approval → monitoring → retraining → retirement, including versioned change logs, registries, and clear human oversight.
  • Human Oversight & Role Clarity. Keep humans accountable for consequential decisions; use advice-only phases and sign-offs.
  • Transparency & Registries. Public-facing disclosures and internal instruments mapped to emerging state requirements (CHAI).

With Gratitude

@University of San Francisco · @USF School of Education · @USF School of Nursing and Health Professions · @AMIA – American Medical Informatics Association · @AAC&U – Association of American Colleges & Universities · @Stanford Human-Centered AI (HAI) · @CHAI – Coalition for Health AI · @University of Illinois Chicago

About Freddie Seba

Freddie Seba is an author, public speaker, and Ed.D. doctoral candidate in Organization & Leadership at the University of San Francisco, focusing on the ethics and governance of Generative AI for leaders. He holds an MBA from Yale and an MA in International Policy from Stanford. A former Digital Health Informatics faculty member and program director (2017–2025), and a former global corporate executive and serial entrepreneur based in the San Francisco Bay Area, he advises universities, health systems, and financial institutions on mission-driven ethics and governance strategy for GenAI.

Transparency & Copyright

Drafted and edited with generative tools (ChatGPT, Gemini, Grammarly) for synthesis and clarity; insights, analysis frameworks, and voice are the author’s. © 2025 Freddie Seba. All rights reserved. For reprints, licensing, or speaking inquiries, contact via LinkedIn or freddieseba.com.

Sources & additional references

  • Delphi-2M overviews and coverage: The Economist; Nature news & paper.
  • CHAI Transparency Report (PDF) + overview.
  • ECG surgical-risk: JHU release; BJA record (Harris et al.).
  • How People Use ChatGPT: an OpenAI/Duke/Harvard study (PDF) and summary.
  • Microsoft 2025 AI in Education report.
  • McKinsey: The next innovation revolution—powered by AI.
  • AT&T digital receptionist (pilot).
  • Handshake: Class of 2026 outlook (GenAI mentions ~5× since 2023).
  • Science/AAAS webinar: AI meets medicine.