AI Keynote Speaker & Ethics Speaker & Executive Workshops

Dr. Freddie Seba

Issue #47 — AI Architects, Omnibus, and New AI Work. Building the age of thinking machines — without outsourcing responsibility

GenAI Ethics & Governance for Leaders. By Freddie Seba.

Background note: This issue is framed by my successful doctoral dissertation defense on GenAI ethics & governance in higher education—especially what faculty early adopters surface when GenAI meets real incentives, real workflows, and real accountability.

Copyright: © 2025 Freddie Seba. All rights reserved.

Third-party content is linked and credited to the original publishers/authors; any trademarks remain the property of their respective owners.

A quick thanks (and a bridge from Issue #46)

Thank you for reading and sharing Issue #46 — your notes and conversations continue to shape what I prioritize here. This week, the theme tightens: signals. Not hype, not demos—signals from policy interfaces, clinical reasoning, firm creation, labor markets, and capital markets.

For boards & university trustees

If you only read one section this week, make it this one.

Four governance moves that reduce avoidable regret:

1. Treat disclosure as infrastructure, not PR. The FAFSA “red flag” moment isn’t just communications—it’s governance-by-interface.

2. Demand uncertainty pathways in high-stakes GenAI. “Confident wrong” is often a product of incentive design.

3. Assume GenAI lowers startup costs unevenly. Evidence suggests GenAI disproportionately boosts small-firm entry.

4. Read AI balance sheets globally. If “AI exposure” looks cheaper elsewhere, capital and talent will arbitrage the story.

The 12 Ps of Responsible Power © 2025 Freddie Seba (a GenAI ethics and governance checklist, applied to this week’s content)

Use this as your “board/cabinet scan” before scaling any GenAI initiative; a small tracking sketch follows the list:

1. Purpose — What decision, workflow, or outcome is this improving?

2. Principles — What values are non-negotiable (equity, safety, integrity, privacy)?

3. People — Who owns it, runs it, and is accountable for risk?

4. Policies — Acceptable use, data handling, academic integrity, clinical/advising guardrails.

5. Procurement — What do contracts say about data, retention, and auditability?

6. Privacy — What data touches the system; what’s minimized; what’s prohibited?

7. Provenance — Can users see where outputs come from (citations, logs, “what changed”)?

8. Prompting — Safe patterns, role boundaries, staff training (especially for high-stakes use).

9. Performance — Define “good” (accuracy, usefulness, bias, time saved, outcomes).

10. Pitfalls — Hallucination, automation bias, inequity, leakage, vendor lock-in.

11. Preparedness — Monitoring + incident response + rollback plan + near-miss reporting.

12. Public accountability — Could you defend this under disclosure, headlines, or regulators?
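For teams that want to operationalize this scan, here is a minimal sketch of the 12 Ps as a simple tracking structure. It is my own illustration rather than an official tool; the 0–2 scoring scale and the field names are assumptions made for the example.

```python
from dataclasses import dataclass, field

# The 12 Ps as a simple board-scan record. The 0-2 scale
# (0 = not addressed, 1 = partial, 2 = documented and owned)
# is an illustrative assumption, not part of the original checklist.
TWELVE_PS = [
    "Purpose", "Principles", "People", "Policies", "Procurement", "Privacy",
    "Provenance", "Prompting", "Performance", "Pitfalls", "Preparedness",
    "Public accountability",
]

@dataclass
class BoardScan:
    initiative: str
    scores: dict = field(default_factory=lambda: {p: 0 for p in TWELVE_PS})

    def gaps(self):
        # Items scored below 2 are unresolved before scaling.
        return [p for p, s in self.scores.items() if s < 2]

# Example: a hypothetical advising-chatbot initiative.
scan = BoardScan("GenAI advising assistant")
scan.scores.update({"Purpose": 2, "Privacy": 1, "Provenance": 0})
print("Open items before scaling:", scan.gaps())
```

The point is not the code but the discipline: every P gets an owner and a score before anything scales.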

1) The FAFSA “red flag” is a governance signal, not a UI tweak

A new disclosure-style “earnings indicator” inside the FAFSA flow (as reported by Bloomberg) signals that public accountability is moving closer to the decision point—where students apply, choose, and borrow.

Leadership takeaway: If you’re deploying GenAI in enrollment, advising, marketing, or career services, align it to outcomes you can defend under scrutiny—not just scaled messaging.

Link: Bloomberg — Education Department flags colleges with graduates who have low-paying jobs

https://www.bloomberg.com/news/articles/2025-12-12/education-department-flags-colleges-with-graduates-that-have-low-paying-jobs

2) Medicine: “doctor bullshit” and LLM hallucinations share a root cause

The BMJ piece argues that clinician overconfidence beyond evidence and LLM hallucinations can stem from similar pressures to perform certainty—speed, decisiveness, and polished answers—often rewarded by systems even when reality is ambiguous.

Leadership takeaway: Hallucinations aren’t only a model problem. They’re a workflow + incentive problem.

Link: BMJ — Parallel pressures: the common roots of doctor bullshit and large language model hallucinations

https://www.bmj.com/content/391/bmj.r2570
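To make governance move #2 above (demand uncertainty pathways) concrete, here is a minimal sketch of a confidence-gated workflow that escalates shaky answers to a human instead of returning them fluently. The confidence score, the 0.8 threshold, and the wording are hypothetical placeholders, not features of any specific product.

```python
from dataclasses import dataclass

@dataclass
class DraftAnswer:
    text: str
    confidence: float      # hypothetical model- or verifier-supplied score, 0..1
    has_citations: bool    # whether the answer is grounded in retrievable sources

def log_near_miss(answer: DraftAnswer) -> None:
    # In practice this would feed the near-miss channel discussed later in this issue.
    print(f"[near miss] confidence={answer.confidence:.2f} cited={answer.has_citations}")

def route(answer: DraftAnswer, threshold: float = 0.8) -> str:
    """Return the answer only when it clears the uncertainty pathway;
    otherwise escalate to a human reviewer and record a near miss."""
    if answer.confidence >= threshold and answer.has_citations:
        return answer.text
    log_near_miss(answer)
    return "Escalated: this needs human review before it reaches a student or patient."

print(route(DraftAnswer("The dose is 10 mg twice daily.", confidence=0.55, has_citations=False)))
```

The design choice that matters is the default: when the system is unsure, the rewarded behavior is escalation, not polish.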

3) GenAI as “co-founder”: evidence it boosts small-firm entry

The arXiv working paper studies whether GenAI facilitates firm creation, using the release of ChatGPT (November 2022) as a shock and exploiting geographic variation in pre-existing AI-specific human capital. Drawing on universal Chinese firm-registration data through the end of 2024, it finds a surge in new firm formation concentrated in grid cells with stronger AI-specific human capital—driven entirely by small firms—and estimates that this accounts for 6.0% of overall national firm entry.

Link: arXiv PDF — AI as “Co-founder”: GenAI for Entrepreneurship

https://arxiv.org/pdf/2512.06506
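For readers who want the intuition behind a shock combined with geographic variation in exposure, a generic difference-in-differences style specification is sketched below. This is my illustration of that research design, not the paper’s actual estimating equation, and the variable names are assumptions.

```latex
% Generic shock-by-exposure sketch (not the paper's exact model):
%   Entry_{gt} : new-firm entry in geographic grid cell g, period t
%   AIHC_g     : pre-existing AI-specific human capital in cell g
%   Post_t     : indicator for periods after the ChatGPT release (Nov 2022)
\[
  \mathrm{Entry}_{gt}
    = \beta \,(\mathrm{AIHC}_{g} \times \mathrm{Post}_{t})
    + \alpha_{g} + \lambda_{t} + \varepsilon_{gt}
\]
% A positive beta means higher-exposure cells saw a larger post-release
% surge in firm entry than lower-exposure cells.
```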

4) AI balance sheets: Asia’s “cheap AI” should worry American investors

The Economist points to a global repricing question: if AI-linked exposure looks comparatively inexpensive in parts of Asia, the market story becomes less “US inevitability” and more “global arbitrage.”

Link: The Economist — Asia’s inexpensive AI stocks should worry American investors

https://economist.com/finance-and-economics/2025/12/10/asias-inexpensive-ai-stocks-should-worry-american-investors


5) Europe’s Digital Omnibus faces a bumpy road ahead

Almost a month after the European Commission’s Digital Omnibus package landed, the debate is still intense: “simplification” vs. “substantive change.”

What the Commission says it’s doing: streamline and clarify digital rules, including proposals to modernize cookie consent, clarify lawful processing for AI development, and “clarify the definition of personal data” while keeping protections in place.

(See the Commission’s own explainer/FAQ.)

Link: https://digital-strategy.ec.europa.eu/en/faqs/digital-package

What critics worry about: narrowing or re-scoping “personal data” risks turning GDPR protection into an entity-dependent standard, creating practical confusion and new avoidance strategies—especially when combined with AI-friendly legal bases framed as “legitimate interest.”

(See civil society critiques and legal analyses.)

Links:

   •   EFF critique: https://www.eff.org/deeplinks/2025/12/eus-new-digital-package-proposal-promises-red-tape-cuts-guts-gdpr-privacy-rights

   •   TechPolicy.Press critique: https://techpolicy.press/the-eus-digital-omnibus-must-be-rejected-by-lawmakers-here-is-why

   •   Legal analysis (White & Case): https://www.whitecase.com/insight-alert/gdpr-under-revision-key-takeaways-from-digital-omnibus-regulation-proposal

Key governance implication for higher ed & health: if your GenAI program depends on personal data, don’t treat regulatory “simplification” as de-risking. Treat it as volatility: definitions, enforcement expectations, and political narratives can shift faster than your procurement cycle.

6) AI is creating brand new occupations — and raising the value of “human expertise” in unexpected ways

The Digitalist Papers essay by David Autor and Neil Thompson argues that AI won’t just “eliminate jobs”—it will reshape the value of expertise inside occupations, depending on whether AI automates inexpert tasks or expert tasks.

One detail I loved: the authors point out that whole occupations emerge over time—e.g., “web designer,” “social media manager,” “data scientist,” and even “forward-deployed engineer” appearing only recently in official classifications—reminding us that the future of work is not a fixed pie.

Link: https://www.digitalistpapers.com/vol2/autorthompson

Leadership takeaway: What’s “most needed” may be the human layer—judgment, explanation, relationship, accountability—especially when AI lowers the barrier to entry for some tasks while raising expectations for higher-order work.

7) TIME’s 2025 Person of the Year: “The Architects of AI.”

TIME named the “Architects of AI” as its 2025 Person of the Year, spotlighting leaders who imagined, designed, and scaled the systems now reshaping daily life. Coverage highlights figures including Mark Zuckerberg, Lisa Su, Elon Musk, Jensen Huang, Sam Altman, Demis Hassabis, Dario Amodei, and Fei-Fei Li.

Links:

   •   TIME feature: https://time.com/7339685/person-of-the-year-2025-ai-architects/

   •   Reuters coverage: https://www.reuters.com/business/media-telecom/architects-ai-named-times-person-year-2025-12-11/

   •   AP coverage: https://apnews.com/article/77ec65c6792bc99ec2ce1919c5f421ea

Governance takeaway: When media “Person of the Year” attention consolidates around builders, it’s a reminder that systems have authors. That’s why governance can’t be a sidecar. It has to be part of the build.

Field notes: “fantastic bugs” (why leaders should care)

When GenAI breaks in production, it often doesn’t break loudly. It breaks socially:

   •   people stop reporting problems because “it’s good enough,”

   •   people treat fluent output as “the policy,” and

   •   edge cases get pushed onto frontline staff.

Governance move: measure “near misses,” not just accuracy. Create a safe channel for staff to report weirdness without fear of looking “anti-innovation.”
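As one sketch of what measuring near misses could look like in practice, here is a lightweight, blame-free record staff could file in seconds; the categories and fields are assumptions for illustration, not a standard taxonomy.

```python
import csv
from datetime import datetime, timezone

# Illustrative near-miss categories; adapt to your own taxonomy.
CATEGORIES = ["confident-wrong", "policy-drift", "edge-case-pushed-to-staff", "data-leakage"]

def report_near_miss(system: str, category: str, note: str, path: str = "near_misses.csv") -> None:
    """Append an anonymous near-miss record; no reporter identity is stored."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now(timezone.utc).isoformat(), system, category, note])

report_near_miss(
    system="advising-chatbot",
    category="confident-wrong",
    note="Fluent but outdated financial-aid deadline; caught by an advisor.",
)
```

The key properties are that reporting is anonymous by default, takes less than a minute, and is reviewed on a regular cadence so weirdness surfaces before it becomes an incident.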

Gratitude

With appreciation to:

   •   My doctoral dissertation chair and committee (thank you for your rigor, patience, and care)

   •   Family and friends who showed up for the defense and supported the multi-year journey

   •   @University of San Francisco, @USF School of Education, @USF School of Nursing and Health Professions, @Stanford HAI (Human-Centered Artificial Intelligence), @Coalition for Health AI (CHAI), @AMIA (American Medical Informatics Association), @AAC&U (Association of American Colleges & Universities), and @AAAI (Association for the Advancement of Artificial Intelligence), as well as the clinicians, students, trustees, and executives who keep sharing real-world cases and complex questions.

About the author

Freddie Seba is a lifelong learner, strategist, and academic–practitioner focused on Generative AI ethics and governance for institutional leaders. He combines over two decades of experience across Silicon Valley startups, corporate strategy, and graduate teaching in digital health, innovation, and GenAI ethics at the @University of San Francisco to help boards, executives, and faculty adopt AI responsibly and effectively. Freddie holds an MBA from @Yale University and an MA in International Policy Studies from @Stanford University. He is completing an EdD in Organization & Leadership at USF, focused on GenAI ethics in higher education.

My focus: how GenAI changes incentives in real institutions—primarily through the lived experience of faculty early adopters—and what governance can do before small failures become systemic ones.

Speaking / Briefings: Connect on LinkedIn or visit freddieseba.com.

Transparency & Disclaimer

This newsletter is for educational and informational purposes only. It does not provide medical, healthcare, educational, instructional, accreditation, financial, investment, or professional advice. It does not create a clinician–patient, advisor–client, or instructor–student relationship. Leaders and organizations should consult appropriate professionals and institutional governance bodies before making decisions about healthcare, education, financial services, or AI deployment.

•   Not legal, medical, financial, or investment advice.

•   Views are my own and do not represent employers, partners, or affiliated institutions.

•   Source links are shared for credit and context; copyright belongs to the original publishers.

Drafted and refined with Generative AI and assistive tools — including ChatGPT / GPT-5.1, Gemini, Speechify, and Grammarly — with synthesis, structure, and voice remaining the author’s.

References

   •   BMJ: Parallel pressures: the common roots of doctor bullshit and large language model hallucinations

https://www.bmj.com/content/391/bmj.r2570

   •   arXiv (PDF): AI as “Co-founder”: GenAI for Entrepreneurship

https://arxiv.org/pdf/2512.06506

   •   The Economist: Asia’s inexpensive AI stocks should worry American investors

https://economist.com/finance-and-economics/2025/12/10/asias-inexpensive-ai-stocks-should-worry-american-investors

   •   Bloomberg: Education Department flags colleges with graduates that have low-paying jobs

https://www.bloomberg.com/news/articles/2025-12-12/education-department-flags-colleges-with-graduates-that-have-low-paying-jobs

   •   EU Commission FAQ (Digital Omnibus):

https://digital-strategy.ec.europa.eu/en/faqs/digital-package

   •   Digitalist Papers: Beyond Job Displacement: How AI Could Reshape the Value of Human Expertise

https://www.digitalistpapers.com/vol2/autorthompson

   •   TIME: The Architects of AI Are TIME’s 2025 Person of the Year

https://time.com/7339685/person-of-the-year-2025-ai-architects

   •   Reuters (TIME Person of the Year coverage):

https://www.reuters.com/business/media-telecom/architects-ai-named-times-person-year-2025-12-11

   •   AP (TIME Person of the Year coverage):

https://apnews.com/article/77ec65c6792bc99ec2ce1919c5f421ea

   •   EFF critique of the Digital Package:

https://www.eff.org/deeplinks/2025/12/eus-new-digital-package-proposal-promises-red-tape-cuts-guts-gdpr-privacy-rights

   •   TechPolicy.Press critique of the Digital Omnibus:

https://techpolicy.press/the-eus-digital-omnibus-must-be-rejected-by-lawmakers-here-is-why

   •   White & Case analysis (GDPR under revision):

https://www.whitecase.com/insight-alert/gdpr-under-revision-key-takeaways-from-digital-omnibus-regulation-proposal

Copyright: © 2025 Freddie Seba. All rights reserved.