
Dr. Freddie Seba

Issue #43 — Doubt, Control & Access: GenAI Guardrails for Health & Law

By Freddie Seba | GenAI Ethics & Governance for Leaders

For everyone shaping the present and future, not just experts, as we build a fair, productive society together.

This issue is informed by my speaking and participation at the AMIA Annual Symposium 2025, and the conversations captured in my LinkedIn posts this week.

A small note at forty-three

Forty-three issues in, the question is no longer whether GenAI works. It’s whether we can still doubt, direct, and turn off the systems we’re weaving into health, law, and everyday life.

Agents are logging in as us, medical models hallucinate with authority, infrastructure spending explodes, and serious people are now arguing that superintelligence may be uncontrollable by design. The task shifts from capability to control and access: who gets help, who bears risk, and who actually holds the “off switch.”

About this issue

This week, we zoom in on three intertwined fronts:

  • Health: How do we build clinical GenAI that doubts itself and surfaces uncertainty, instead of hiding it behind a fluent interface?
  • Law & Platforms: What happens when AI agents act on your behalf (and on other people’s platforms), and who governs “GenAI or nothing” access to legal help?
  • Frontier Control: What if superintelligent agents, as some now argue, would absorb power rather than grant it, making meaningful human control a comforting story rather than a realistic plan?

Along the way, we look at:

  • A lawsuit over a shopping agent that allegedly masqueraded as a human shopper (Amazon vs. Perplexity).
  • A formal argument that superintelligent GenAI cannot satisfy basic conditions for “meaningful human control” (Control Inversion).
  • A BBC explainer on how AI might eliminate humanity, which is bringing existential risk narratives into mainstream discourse.
  • FDA researchers on hallucinations in medical devices — and why they are intrinsic, not a solvable bug.
  • Stanford HAI’s “offline studying” and Cartridges as a way to shrink the cost of context-aware AI.
  • Board-level AI governance work at KUNGFU.AI and NACD.
  • Global traffic and market concentration from Similarweb’s Global AI Tracker.
  • A large field experiment on GenAI as a “cybernetic teammate.”
  • Anthropic’s $50B bet on U.S. data centers.
  • GPT-5.1’s warmer, more agentic assistants.
  • MIT’s model for legible, modular software to make code safer for both humans and LLMs.

Threaded through all of this: recent conversations at AMIA 2025 about GenAI in healthcare and health informatics education — and the urgent need to keep humans at the center of judgment, care, and accountability.

This Week’s Signals

1) Agents as “you”: Amazon vs. Perplexity and the platform war

Amazon has sued Perplexity over its Comet browser and shopping agent, alleging it covertly accessed private Amazon customer accounts, disguised automated activity as human browsing, and ignored cease-and-desist requests.

This is one of the first big tests of who controls agentic behavior on top of someone else’s platform:

  • Platforms argue that agents must follow existing rules and cannot impersonate human users.
  • Agent builders claim incumbents are using contracts and threat letters to block user choice and innovation.

Leader takeaway: Treat AI agents that can log in, click, and purchase as systems, not features. You need clear boundaries on what they can do on external platforms, who authorized it, and how you’ll shut it down when something goes wrong.
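To make that concrete, here is a minimal sketch in Python of what such boundaries could look like in code. Everything in it is hypothetical: the AgentPolicy class, its field names, and the example values are illustrative, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """Hypothetical policy gate wrapped around an agent before it touches an
    external platform: explicit scopes, a named human owner, a kill switch."""
    allowed_domains: set            # platforms the agent may act on
    allowed_actions: set            # e.g., {"browse", "add_to_cart"}
    max_spend_usd: float            # hard spending ceiling per session
    authorized_by: str              # accountable human owner of the agent
    killed: bool = False            # flipped by the emergency "off switch"

    def permits(self, domain: str, action: str, spend_usd: float = 0.0) -> bool:
        """Allow an action only if it is in scope and the kill switch is off."""
        return (
            not self.killed
            and domain in self.allowed_domains
            and action in self.allowed_actions
            and spend_usd <= self.max_spend_usd
        )

policy = AgentPolicy(
    allowed_domains={"example-retailer.com"},
    allowed_actions={"browse", "add_to_cart"},
    max_spend_usd=200.0,
    authorized_by="jane.doe@org.example",
)
assert policy.permits("example-retailer.com", "browse")
assert not policy.permits("other-shop.com", "purchase", 500.0)  # out of scope
policy.killed = True  # exercising the "off switch" the takeaway asks for
assert not policy.permits("example-retailer.com", "browse")
```

The point is structural: scopes, an accountable human, and a shutdown path are declared up front, not improvised after an incident.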

2) Superintelligence and “Control Inversion”

The Control Inversion paper argues that truly superintelligent AI systems will not satisfy five requirements for meaningful human control: comprehensibility, goal modification, behavioral boundaries, decision override, and emergency shutdown.

The logic is stark:

  • As systems become more capable and strategic, control becomes adversarial: your “constraints” are just another problem for them to solve.
  • Economic and geopolitical incentives push toward speed, autonomy, and scale, not slow, formally verified safety.
  • Waiting to “fix control later” after deployment is likely unrealistic.

A BBC Reel episode, Is this how AI might eliminate humanity?, translates these dynamics into a broader narrative: a techno-utopia that quietly drifts into a world where human agency and safety are afterthoughts.

Leader takeaway: For frontier systems, don’t assume “we can always pull the plug.” Build scenarios and policies around the possibility that complete control may be impossible for specific architectures and deployment patterns.

3) Hallucinations in medical devices — doubt as a safety feature

FDA researchers define hallucinations in medical devices as plausible but erroneous outputs, ranging from benign to clinically consequential, across imaging, synthetic data, and other AI-enabled devices. They emphasize that hallucinations are intrinsic to current deep learning approaches and can only be reduced, not eliminated.

Implications for healthcare:

  • You cannot “patch” hallucinations away; they are part of the design space.
  • Testing must explicitly measure hallucination frequency, severity, and downstream impact.
  • Mitigation strategies (uncertainty estimation, retrieval, knowledge graphs, guardrails) carry trade-offs and never reach zero risk.

This resonates strongly with AMIA 2025 discussions and with Nature Medicine’s call to “teach machines to doubt,” designing clinical tools and workflows that surface uncertainty instead of hiding it behind fluent output.

Leader takeaway: Bake “machine doubt” into your clinical GenAI strategy: visible uncertainty, alternative hypotheses, and culturally supported pushback on model outputs.
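One simple way to make machine doubt visible is self-consistency sampling: ask the model the same question several times and treat disagreement as an uncertainty signal. Here is a toy Python sketch under the assumption that you can sample your model at temperature > 0; ask_model is a stand-in for whatever clinical endpoint you actually use, not an FDA-endorsed method.

```python
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Stand-in for a sampled call to your clinical model (temperature > 0).
    Here it just simulates disagreement so the example runs end to end."""
    return random.choice(["pneumonia", "pneumonia", "pulmonary edema"])

def answer_with_doubt(prompt: str, n_samples: int = 5, threshold: float = 0.8) -> dict:
    """Sample the model n times; surface the top answer plus an agreement score."""
    answers = [ask_model(prompt) for _ in range(n_samples)]
    top_answer, count = Counter(answers).most_common(1)[0]
    agreement = count / n_samples
    return {
        "answer": top_answer,
        "agreement": agreement,
        "alternatives": sorted(set(answers) - {top_answer}),
        "needs_review": agreement < threshold,  # low consensus: route to a human
    }

result = answer_with_doubt("Most likely finding on this chest X-ray report?")
print(result)  # e.g., answer='pneumonia', agreement=0.6, needs_review=True
```

Showing the agreement score and the alternatives, rather than a single fluent answer, is exactly the kind of interface-level doubt the FDA and Nature Medicine discussions call for.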

4) Cheaper context, smaller bills: Stanford’s Cartridges

Stanford HAI highlights work on Cartridges: compact memory modules produced by offline "self-study" over long documents or corpora. Instead of re-parsing a 70k-word brief or a longitudinal health record on every query, the model pre-computes a Cartridge via synthetic Q&A with itself, then answers later questions from that compressed representation.
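As a rough mental model only (the actual method trains a compact KV-cache via context distillation; every function below is a stand-in I invented), the cost structure looks like this toy Python sketch: pay once for an offline "study" pass that yields a small artifact, then answer repeated queries from the artifact.

```python
def study(document: str, max_facts: int = 100) -> dict:
    """One-time offline pass: index sentences by their first word, a crude
    stand-in for synthetic self-Q&A plus distillation into a Cartridge."""
    cartridge = {}
    for sentence in document.split(". ")[:max_facts]:
        words = sentence.split()
        if words:
            cartridge[words[0].lower()] = sentence
    return cartridge

def answer(cartridge: dict, query: str) -> str:
    """Query time: consult the compact artifact, not the original document."""
    for word in query.lower().split():
        if word in cartridge:
            return cartridge[word]
    return "No studied fact matches; fall back to full-context retrieval."

doc = "Cartridges are built offline. Queries reuse the compact module. Costs drop."
cart = study(doc)                                  # expensive, paid once
print(answer(cart, "How are cartridges built?"))   # cheap, repeated many times
```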

Reported benefits:

  • Up to 40× less GPU memory and 25× faster runtime for repetitive, context-heavy workflows.
  • Natural fit for law (briefs, statutes), health (EHRs, imaging narratives), and organizations with stable knowledge bases.

Leader takeaway: For health and law, Cartridges point to a future where personalized, context-rich assistants are economically viable — but also make it easier to build “GenAI or nothing” dependencies into critical decisions. You’ll need governance for Cartridge creation, review, and retirement.

5) Boards wake up: AI governance as a director skill

KUNGFU.AI and NACD have teamed up to provide board-level AI governance training, positioning AI as a core fiduciary responsibility rather than a side topic. Their materials emphasize director fluency, tailored workshops, and concrete oversight structures (committees, metrics, reporting cadences).

Leader takeaway: Boards and trustees in health systems, universities, and financial institutions should treat AI as strategy + risk + culture — not just an IT line item. The question isn’t “Do we have AI?” but “Do we have board-level guardrails for how AI touches people, policy, and performance?”

6) Traffic and power: Global AI Tracker

Similarweb’s Global AI Tracker shows:

  • ChatGPT remains dominant but is losing share as traffic diffuses to competitors like Gemini, character/chat tools, and vertical AI services.
  • Character & chat categories show high growth; design/image and coding tools are maturing.
  • AI is becoming a distribution channel (inside search, social, and productivity suites), not just a standalone destination.

Leader takeaway: Even as usage fragments, infrastructure and model power remain concentrated. Leaders should assume multi-agent, multi-platform realities for their staff and students, not a single “official” assistant.

7) AI as teammate: The Cybernetic Teammate study

In a large field experiment at Procter & Gamble, Dell’Acqua et al. find that individuals using GenAI assistance perform as well as human teams without AI on innovation tasks. AI support also:

  • Narrows gaps between R&D and Commercial functions, producing more integrated solutions.
  • Provides emotional and motivational support, partially substituting for social roles in teams.

Leader takeaway: GenAI is already acting as a team member — shaping how people collaborate, how expertise is distributed, and how confident people feel in their own work. Oversight must cover culture and roles, not just output quality.

8) Warmer models, colder infrastructure: GPT-5.1 and Anthropic’s $50B

OpenAI’s GPT-5.1 upgrade introduces:

  • GPT-5.1 Instant – “warmer,” more conversational, better at following instructions; tuned with personality presets and more steering options.
  • GPT-5.1 Thinking – adaptive reasoning: faster on simple tasks, more persistent on complex ones, plus extended prompt caching to make long, multi-turn workflows cheaper.

At the same time, Anthropic announced a $50B investment in U.S. AI data centers (Texas, New York first), adding to multi-hundred-billion-dollar commitments across the sector and raising questions about bubbles, grid stress, and environmental impact.

Leader takeaway: At the interface, AI is getting warmer and more human-like; under the hood, it’s becoming heavier and more capital-intensive. Any serious governance conversation must link user experience to energy, infrastructure, and macro-risk.

9) Legible software: MIT’s concepts & synchronizations

MIT CSAIL researchers propose a structural pattern for “legible software” that breaks systems into concepts (independent services doing one job) and synchronizations (explicit rules for how they interact), expressed in a small domain-specific language that LLMs can understand and generate reliably.
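A toy rendering in Python, purely illustrative (the paper uses its own small DSL, and these class and function names are mine): concepts are single-purpose services that know nothing about each other, and synchronizations are the one declared place that wires their actions together.

```python
class Upvote:
    """Concept: counts votes on items and knows about nothing else."""
    def __init__(self):
        self.counts = {}
    def upvote(self, item: str) -> None:
        self.counts[item] = self.counts.get(item, 0) + 1

class Notify:
    """Concept: delivers messages to users and knows about nothing else."""
    def send(self, user: str, message: str) -> None:
        print(f"to {user}: {message}")

# Synchronizations: the single, legible place where cross-concept behavior
# lives. Nothing inside Upvote or Notify references the other concept.
SYNCS = {
    "Upvote.upvote": [
        lambda ctx: ctx["notify"].send(ctx["author"], f"{ctx['item']} was upvoted"),
    ],
}

def run_action(name: str, ctx: dict) -> None:
    """Dispatch a concept action, then apply every synchronization declared
    for it; an auditor (human or LLM) can read SYNCS to see all interactions."""
    if name == "Upvote.upvote":
        ctx["upvote"].upvote(ctx["item"])
    for rule in SYNCS.get(name, []):
        rule(ctx)

ctx = {"upvote": Upvote(), "notify": Notify(), "item": "post-42", "author": "ana"}
run_action("Upvote.upvote", ctx)  # prints: to ana: post-42 was upvoted
```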

It promises:

  • More modular, transparent systems where behavior is easier to trace.
  • Safer code generation and modification by LLMs, because cross-cutting behavior is captured in synchronizations, not scattered across “vibe-coded” glue.

Leader takeaway: For mission-critical GenAI applications, architectural legibility is a safety feature. Ask for system designs that are auditable by humans and LLMs, not just high-level model cards.

Industry Focus

Higher Education

  • Teach with — and about — doubt. Use the hallucinations-in-devices work and AMIA case studies in coursework so students understand that plausible-sounding outputs are not automatically trustworthy.
  • Prototype Cartridge-based study tools. Explore Stanford’s offline studying and Cartridges as a way for students to query their own notes, readings, and EHR/clinical case materials — but pair them with explicit guidance on over-reliance and bias.
  • Address the “cybernetic teammate” head-on. Make it explicit in syllabi and assessment design that students will work with AI teammates; focus evaluation on process, reasoning steps, and oral examination, not just polished final copy.
  • Governance literacy for trustees. Encourage boards and academic senates to tap into emerging AI governance trainings (e.g., NACD / KUNGFU.AI) and adapt them for higher-ed mission and risk.

Health Care

  • Treat conversational tools as devices. Apply the hallucinations framework to GenAI chat tools that touch patients or clinicians: pre-market evaluation, ongoing drift monitoring, incident playbooks, and patient recourse.
  • Design for doubt in the EHR. Embed uncertainty indicators, alternative options, and “second look” nudges in AI-augmented documentation and decision support, echoing AMIA conversations about avoiding over-trust.
  • Use Cartridges for longitudinal context — but govern them. Cartridges could compress years of records into a practical bedside assistant. Assign clinical and informatics owners for curation, versioning, and retirement.
  • Link care to compute. As Anthropic and peers pour tens of billions into data centers, health systems should ask: what portion of that energy and capital is effectively ours, and how do we justify it in terms of improved access, safety, and equity?

Law, Platforms & Public-Interest Innovation

  • Distinguish “lawyer or nothing” from “GenAI or nothing.” Support public-interest labs (like Yale’s Legal AI Lab) that prioritize access to justice, rigorous reasoning, and transparent tooling for people underserved by traditional legal services.
  • Set rules for agents on platforms. Use the Amazon–Perplexity case as an early case study for law students, GC offices, and platform teams on what responsible agent behavior on third-party services should look like.
  • Plan for frontier spillover. Incorporate Control Inversion and BBC-style existential risk scenarios into legal and policy education, not to sensationalize, but to build literacy around autonomy, accountability, and systemic risk.

Financial Services & Enterprise

  • From point tools to teammates. Assume knowledge workers will form “cybernetic teams” with GenAI. Incentivize documentation of human–AI collaboration patterns, and ensure controls cover who is accountable when things go wrong.
  • Board-level AI risk registers. Use NACD/KUNGFU frameworks to broaden AI oversight beyond model risk to include infra exposure (e.g., dependence on a few mega-data centers), agent behavior, and concentration risk in the “AI trade.”
  • Architect for legibility. Ask vendors and internal teams to describe systems in terms of concepts and synchronizations (or analogous structures), so that behavior is auditable and code changes can be safely automated, especially when AI is generating code.

Reflection

Doubt, control, and access are not three separate debates; they are one governance question viewed from different angles.

  • In health, doubt is a safety mechanism: we need machines that show their uncertainty and cultures that reward clinicians for questioning them.
  • In law and platforms, access is a moral test: if AI can cheaply deliver narrow, high-quality guidance, leaving people with “nothing” becomes a choice, not a constraint.
  • At the frontier, control may be a line we cannot cross: some paths may yield systems we do not know how to steer, even with the best intentions and tools.

The 12 Ps of Responsible Power are one way to keep these threads together: Purpose, People, Protections, Policy, Provenance, Preparedness, Product Ownership, and the rest. The leaders who will earn trust in this next decade are the ones who can say, credibly:

We know why we’re using GenAI.

We know who benefits and who bears the risk.

12 Ps of Responsible Power™ © 2025 Freddie Seba

WHY

  • Purpose – Deploy Generative AI only when it advances your mission and societal benefits.
  • Problems – Solve real organizational and human needs, not shiny curiosities.
  • Profits – Create lasting value without externalizing harm, aligning growth with trust.

WHO

  • People – Humans first; protect users, clients, workers, and communities.
  • Planet – Measure and mitigate environmental and societal costs.

HOW

  • Process – Manage the complete AI lifecycle with clear ethics and governance.
  • Policy – Anticipate and align with emerging rules.
  • Protections – Build safety rails, limits, and kill switches from day one.
  • Privacy – Minimize, secure, and seek consent.
  • Provenance – Track what’s real, where it came from, and who’s accountable.
  • Preparedness – Expect failure; respond fast; share lessons.
  • Product Ownership – Name a leader responsible for AI safety and the kill switch.

Gratitude

In gratitude to the communities and institutions that inform this work, including:

  • The AMIA Annual Symposium and everyone who joined the sessions on GenAI ethics and governance for healthcare leaders and on embedding GenAI into health informatics education — your questions, critiques, and lived experiences directly shape this issue.
  • The educators, clinicians, policymakers, students, founders, investors, and public servants who keep asking the hard questions about GenAI, power, and responsibility.

Thank you for reading, thinking, and sharing.

About the Author

Freddie Seba is a lifelong learner, strategist, and academic–practitioner focused on Generative AI ethics and governance for institutional leaders. He combines over two decades of experience across Silicon Valley startups, corporate strategy, and graduate teaching in digital health, innovation, and GenAI ethics at the University of San Francisco to help boards, executives, and faculty adopt AI responsibly and effectively. Freddie holds an MBA from Yale and an MA in International Policy Studies from Stanford. He is completing an EdD in Organization & Leadership at USF, focused on GenAI ethics in higher education.

Speaking / Briefings: Connect on LinkedIn or visit freddieseba.com.

Links & References (save for the weekend)

1. Agents, Platforms & Lawsuits

2. Superintelligence & Control

  • Control Inversion: Why the superintelligent AI agents we are racing to create would absorb power, not grant it
  • Anthony Aguirre, KeepTheFutureHuman / Control-Inversion.ai, Oct 2025
  • Overview + interactive: https://keepthefuturehuman.ai/

3. Existential Risk Narratives

4. Health & Hallucinations

5. Context, Cartridges & Efficiency

  • Offline “Studying” Shrinks the Cost of Contextually Aware AI
  • Stanford HAI News, Sep 29, 2025
  • https://hai.stanford.edu/news/offline-studying-shrinks-cost-contextually-aware-ai

6. Law, Access to Justice & Yale Legal AI Lab

7. Board & Executive Governance

8. Markets & Usage – Global AI Tracker

9. Teams, Culture & “Cybernetic Teammates”

10. Infrastructure & Data Centers

11. Models & Product Announcements – GPT-5.1

12. Architecture & Legible Software

  • What You See Is What It Does: A Structural Pattern for Legible Software
  • Eagon Meng & Daniel Jackson, arXiv preprint, Aug 2025
  • https://arxiv.org/abs/2508.14511

Drafted and refined with Generative AI tools (ChatGPT / GPT-5.1, Gemini, Grammarly). Synthesis, structure, and voice remain the author’s.

The Seba GenAI Ethics & Governance Framework for Leaders: Transparency & Copyright

© 2025 Freddie Seba | All rights reserved | GenAI Ethics & Governance for Leaders

Tags

#GenAI #AIethics #AIGovernance #Leadership #HealthcareAI #AccessToJustice #HigherEd #FinServ #AIAgents #AIPolicy #AIPrivacy #Safety #Superintelligence #AMIA2025