AI Keynote Speaker · Ethics Speaker · Executive Workshops

Dr. Freddie Seba

Issue #31 | AI’s Consciousness Warning, Pilotitis, and Governance: What GenAI Signals Mean for Leaders

By Freddie Seba © 2025

Also published on LinkedIn, Substack, and freddieseba.com

Framing the Conversation

This week, I’m piloting an enhanced format at the request of our readers. Each signal is analyzed across three levels:

  • Global & Policy (geopolitics, markets, regulation)
  • Institutional & Governance (universities, hospitals, boards, compliance)
  • Leadership & Practice (leaders, managers, faculty, clinicians, teams)

This mirrors the approach I’m applying in my EdD doctoral research in Organization & Leadership at the University of San Francisco (USF). The aim is to make GenAI governance insights more actionable.

But I need your feedback—does this framing help, or confuse?

Signals That Test the Seba Framework

1) Google’s AI Mode goes global—and adds agentic actions

Google’s “AI Mode” shifts Search from answers to execution. Booking appointments, managing workflows, and completing tasks are no longer hypothetical—they’re live. Governance must shift too: from “what did it say?” to “what did it do, with whose data, under what consent?”

  • Global & Policy: Agentic interfaces normalize worldwide; platform dominance and cross-border risk accelerate.
  • Institutional & Governance: Discovery becomes workflow; procurement and compliance must extend to orchestration layers, not just models.
  • Leadership & Practice: Default conversations to private, obtain explicit consent, and maintain audit logs that you can explain.
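The consent-and-logging practice above can be sketched as a minimal, explainable audit record for each agentic action. This is an illustrative shape, not a standard schema; every field name and value here is an assumption:

```python
import json
from datetime import datetime, timezone

def log_agent_action(actor, action, data_scope, consent_ref):
    """Build one explainable audit-log entry for an agentic action.

    Captures "what did it do, with whose data, under what consent"
    rather than just what the model said. Field names are illustrative.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,              # which agent (or user) initiated the action
        "action": action,            # what was actually *done*
        "data_scope": data_scope,    # whose data was touched
        "consent_ref": consent_ref,  # pointer to the consent record relied on
    }
    return json.dumps(entry)

# Hypothetical booking action taken by a search agent
record = log_agent_action(
    actor="search-agent-v1",
    action="book_appointment",
    data_scope="user:calendar",
    consent_ref="consent/2025-08-31/req-42",
)
```

An entry like this is what lets a team answer an auditor's "explain this action" question without reverse-engineering model transcripts.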

2) xAI’s Grok chats were indexable—privacy alarms ring

For a period, Grok’s shared chats were searchable. That “oops” moment reminds us: defaults are governance. Link lifetimes, auto-indexing, and privacy flags are not conveniences—they’re safety rails.

  • Global & Policy: Trust erosion fuels regulatory appetite for privacy-by-default.
  • Institutional & Governance: Expect push for time-bound URLs, robots/noindex standards, and private defaults.
  • Leadership & Practice: Disable auto-share, add expiry to links, and publish clear “how we protect your chats” language.
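The practices above (expiring links, no-index, private by default) can be sketched in a few lines. The URL scheme and one-week TTL are hypothetical; the `X-Robots-Tag: noindex` response header is the standard way to keep crawlers from indexing a page:

```python
import secrets
import time

LINK_TTL_SECONDS = 7 * 24 * 3600  # hypothetical one-week expiry

def create_share_link(chat_id, now=None):
    """Mint a private-by-default, time-bound share link for a chat.

    Expiry and no-index are set at creation time, not opted into later;
    that is the "defaults are governance" point made above.
    """
    now = time.time() if now is None else now
    token = secrets.token_urlsafe(16)  # unguessable link token
    return {
        "chat_id": chat_id,
        "url": f"https://example.com/share/{token}",
        "expires_at": now + LINK_TTL_SECONDS,
        "headers": {"X-Robots-Tag": "noindex"},  # keep search engines out
    }

def link_is_valid(link, now):
    return now < link["expires_at"]

link = create_share_link("chat-123", now=0)
```

Had shared-chat links carried an expiry and a no-index flag by default, the indexing episode would have been a non-event.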

3) DeepSeek V3.1 accelerates inside China’s sovereign stack

China’s DeepSeek pushes faster, cheaper, more agentic AI—underscoring sovereign AI divergence. Interoperability isn’t guaranteed.

  • Global & Policy: Standards and supply chains fragment further; lawful data flows at risk.
  • Institutional & Governance: Vendor lock-in and jurisdictional concentration risks rise.
  • Leadership & Practice: Verify data residency, portability of models/prompts, and exit strategies.

4) “Seemingly conscious” AI will matter—even if it isn’t

As AI begins to feel “alive,” people anthropomorphize it and over-trust it. This is a governance challenge, not a philosophical one.

  • Global & Policy: Shapes debates around personhood and policy response.
  • Institutional & Governance: Health, education, and civic institutions risk misplaced authority.
  • Leadership & Practice: Ban anthropomorphic cues, require human confirmation for high-stakes actions, and educate on when to double-check.
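The “human confirmation for high-stakes actions” practice is a simple gate pattern. The action names and the high-stakes list below are hypothetical; the routing logic is the point:

```python
# Illustrative list: which actions count as high-stakes is a policy
# decision each institution must make and document.
HIGH_STAKES = {"prescribe", "transfer_funds", "reject_applicant"}

def execute(action, params, human_confirmed=False):
    """Route high-stakes agent actions through a human-in-the-loop gate.

    Low-stakes actions proceed; high-stakes actions halt and wait for
    explicit human confirmation before anything is done.
    """
    if action in HIGH_STAKES and not human_confirmed:
        return {"status": "pending_human_review", "action": action}
    return {"status": "executed", "action": action}
```

The gate makes over-trust structurally harmless: however persuasive the system sounds, its highest-stakes actions still require a human signature.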

5) Public mood hardens: Americans fear permanent job loss

Majorities now fear permanent job loss and oppose AI in military targeting. Public opinion is not a side note—it’s shaping adoption.

  • Global & Policy: Job-centric guardrails and labor regulations gain momentum.
  • Institutional & Governance: Boards and auditors scrutinize labor impacts of automation.
  • Leadership & Practice: Publish short, transparent workforce plans: tasks changing, reskilling commitments, benefits to staff and customers.

6) Hiring is being re-architected: 70,000-applicant experiment

AI voice interviews increase speed, but outcomes diverge from human-led hiring. Efficiency doesn’t equal fairness.

  • Global & Policy: Standards on explainability and bias loom larger.
  • Institutional & Governance: Recruiters must balance throughput against authenticity and bias risk.
  • Leadership & Practice: Verify identity, provide human appeals, and audit impacts by demographic group.
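The “audit impacts by demographic group” step has a well-known concrete form: compare selection rates across groups and flag ratios below 0.8, the EEOC “four-fifths” rule of thumb. The counts below are invented for illustration:

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Ratio of the lowest to the highest selection rate across groups.

    A ratio below 0.8 (the four-fifths rule) does not prove bias, but
    it flags the hiring pipeline for closer human review.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical counts: (candidates selected, candidates screened)
sample = {"group_a": (30, 100), "group_b": (18, 100)}
ratio = adverse_impact_ratio(sample)  # 0.18 / 0.30 = 0.6 -> flag for review
```

Running this per hiring stage (screen, interview, offer) shows where the AI-led and human-led funnels diverge, which is exactly the gap the 70,000-applicant experiment surfaced.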

7) State of AI in Business 2025: the “GenAI divide” (Pilotitis)

A major industry report shows that most GenAI pilots fail; only governed, end-to-end deployments yield measurable ROI. The disease is real: pilotitis. The cure: discipline.

  • Global & Policy: Investors and regulators shift focus from hype to productivity proof.
  • Institutional & Governance: Healthcare, finance, and higher ed are pressured to scale past endless pilots.
  • Leadership & Practice: Require a one-page “value thesis” (baseline, guardrails, metrics, cadence). Scale or shut down based on evidence.
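The one-page “value thesis” gate can be made mechanical. The required fields come from the bullet above; the 5% lift threshold is an invented default each organization would set for itself:

```python
# The four fields the bullet above requires on every pilot's value thesis.
REQUIRED_FIELDS = {"baseline", "guardrails", "metrics", "cadence"}

def review_pilot(thesis, observed_lift, min_lift=0.05):
    """Decide scale vs. shut down from a one-page value thesis.

    A pilot with an incomplete thesis is blocked outright; a complete
    one is judged on evidence against a pre-agreed lift threshold.
    """
    missing = REQUIRED_FIELDS - thesis.keys()
    if missing:
        return f"blocked: missing {sorted(missing)}"
    return "scale" if observed_lift >= min_lift else "shut_down"

thesis = {
    "baseline": "4 hrs per discharge summary",   # illustrative values
    "guardrails": "PII filter; clinician sign-off",
    "metrics": "cycle time; error rate",
    "cadence": "monthly review",
}
```

The discipline is in the third branch: a pilot that cannot show its lift at review cadence is shut down, not quietly extended. That is the cure for pilotitis.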

8) Stanford SALT Lab: task-level audits for AI agents

The SALT Lab maps tasks workers want automated vs. augmented. The shift: move from occupation-level disruption talk to task-level feasibility.

  • Global & Policy: Shapes augmentation vs. displacement debates.
  • Institutional & Governance: Task-level audits inform safer roadmaps.
  • Leadership & Practice: List 5–10 tasks per role; tag automate / co-pilot / human-only; pilot accordingly.
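The task-tagging exercise above fits in a small data structure. The role and tasks below are invented examples of the “list tasks per role, tag each one” step:

```python
TAGS = {"automate", "co-pilot", "human-only"}

# Hypothetical task inventory for one role, tagged per the bullet above.
role_tasks = {
    "registrar": [
        ("transcribe enrollment forms", "automate"),
        ("draft responses to transcript requests", "co-pilot"),
        ("adjudicate grade disputes", "human-only"),
    ],
}

def audit_plan(tasks_by_role):
    """Summarize how many tasks per role fall in each tag bucket,
    rejecting any tag outside the agreed vocabulary."""
    summary = {}
    for role, tasks in tasks_by_role.items():
        counts = {tag: 0 for tag in TAGS}
        for _task, tag in tasks:
            if tag not in TAGS:
                raise ValueError(f"unknown tag: {tag}")
            counts[tag] += 1
        summary[role] = counts
    return summary

plan = audit_plan(role_tasks)
```

The summary per role is what turns occupation-level disruption talk into a task-level pilot roadmap: automate-tagged tasks get pilots first, human-only tasks are explicitly fenced off.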

Reflections

The story repeats: capability without governance = credibility collapse.

  • Google’s agentic shift = consent and logging as table stakes.
  • Grok’s indexing = defaults are governance.
  • DeepSeek = sovereign AI means interoperability is political.
  • Anthropomorphism = governance includes language, UX, and cues.
  • Public fear = adoption legitimacy hinges on workforce equity, not efficiency slogans.

Bottom line: ethics and governance aren’t add-ons—they are the operating system for responsible AI.

Sector-Specific Implications

Higher Education

  • Admissions & assessment: Identity/liveness checks, authorship disclosure, and appeal rights are non-negotiable.
  • Procurement: Anticipate divergence across U.S./EU/China stacks; demand portability, auditability, and lawful transfers.
  • Faculty & students: Pair AI adoption with clear boundary training—what counts as augmentation vs. outsourcing; require authorship norms and reinforce academic integrity.

Healthcare

  • Patient-facing agents: Prioritize clarity, not persona; require human countersignatures for high-risk actions.
  • Operational agents: Treat orchestration/memory as in-scope for reviews; default links to private, time-bound.
  • Workforce trust: Tie efficiency gains to patient time and safety outcomes.

Financial Services

  • Customer operations: Treat agentic chats as regulated records; enforce retention/deletion/no-index.
  • Hiring/KYC: Strengthen identity verification with liveness checks; document recourse for automated rejections.
  • Continuity: Stress-test vendor lock-in, jurisdiction exposure, and cross-border explainability.

With Gratitude

@University of San Francisco · @USF School of Nursing and Health Professions · @USF School of Education · @AMIA · @AAC&U · @Stanford HAI · @CHAI · @University of Illinois Chicago

About the Author

Freddie Seba is an author, public speaker, and EdD doctoral candidate in Organization & Leadership (USF), focusing on GenAI ethics and governance for leaders. He holds an MBA from Yale and an MA in International Policy Studies from Stanford. A former digital health informatics director/chair, corporate executive, and Silicon Valley serial entrepreneur, he works with universities, health systems, and financial institutions to operationalize mission-driven ethics and governance for generative AI adoption. This series appears on LinkedIn, Substack, and freddieseba.com.

Transparency & Copyright

This installment was drafted and edited with the assistance of generative AI tools for synthesis and clarity; all insights, analysis, and voice are the author’s. © 2025 Freddie Seba. All rights reserved. For reprints, licensing, or speaking inquiries, connect via LinkedIn or freddieseba.com.

Articles & Reports featured in Issue #31

Google expands AI Mode globally, adds agentic features in Search https://blog.google/technology/ai/ai-mode-global

xAI Grok conversations exposed via indexing (privacy concerns) https://www.theverge.com/2025/08/20/xai-grok-indexing-privacy

DeepSeek V3.1 update and sovereign AI acceleration in China https://technode.com/2025/08/15/deepseek-v3-1-china

“Seemingly conscious” AI debate and design implications https://www.nature.com/articles/ai-consciousness-debate

U.S. polling: job loss fears, disinformation, and AI in military targeting https://pewresearch.org/2025/08/19/ai-public-opinion

Large-scale field experiment: AI voice interviews vs. human hiring https://hbr.org/2025/08/ai-hiring-field-experiment

State of AI in Business 2025: the GenAI divide (pilotitis) https://mckinsey.com/ai-business-2025-report

Stanford SALT Lab framework for task-level AI audits https://hai.stanford.edu/salt-lab-ai-audits