This newsletter is for those shaping the present and future, not just experts: leaders across higher education, healthcare, finance, and beyond who decide how AI touches people, policy, and performance, and who are building a fair, productive society together.
We’re still scaling inexpensive, capable AI faster than we’re building guardrails. The result: misuse risk is compounding in the open while policy and safety sprint to catch up.
This Week’s Signals
1) OpenAI’s threat brief: “bolt-on” misuse, not novel superpowers.
Threat actors are attaching AI to old playbooks (phishing, malware, influence operations). OpenAI bans the offending accounts and shares indicators with partners. AI is amplifying existing abuse rather than inventing new categories, though its scale and reach are unprecedented.
2) AI-designed toxins evaded DNA screening.
Science reported that paraphrased or novel toxin sequences generated with AI slipped through standard vendor screening filters, moving dual-use risk from theoretical to operational for DNA synthesis providers.
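Why paraphrases evade exact-match screening, in a minimal sketch: a filter that checks only for exact watchlist hits passes any variant, while even a crude k-mer similarity check can flag close paraphrases. The sequences, watchlist, and threshold below are toy values, not real screening data, and actual vendor pipelines are far more sophisticated than either function here.

```python
# Toy illustration: exact-match screening vs. k-mer similarity screening.
# All sequences are made-up strings, not real biological data.
KNOWN_FLAGGED = {"MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"}  # hypothetical watchlist entry

def exact_match_screen(seq: str) -> bool:
    """Flags only sequences identical to a watchlist entry."""
    return seq in KNOWN_FLAGGED

def kmer_similarity_screen(seq: str, k: int = 5, threshold: float = 0.7) -> bool:
    """Flags sequences sharing a high fraction of k-mers with a watchlist entry."""
    def kmers(s: str) -> set:
        return {s[i:i + k] for i in range(len(s) - k + 1)}
    query = kmers(seq)
    return any(len(query & kmers(ref)) / max(len(query), 1) >= threshold
               for ref in KNOWN_FLAGGED)

# A single substitution defeats the exact check but not the similarity check.
variant = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA"  # last residue changed
print(exact_match_screen(variant))       # False -> slips through
print(kmer_similarity_screen(variant))   # True  -> flagged
```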
3) Europe’s sovereignty play—Apply AI + AI in Science.
The European Commission launched twin strategies to push industrial adoption and strengthen research—explicitly linking AI to competitiveness and resilience.
4) Pharma doubles down: AstraZeneca–Algen, $555M.
A milestone-based partnership to identify immunology targets using AI + CRISPR—a concrete move from experimental to core pipeline.
5) California leads again: regulating AI companion chatbots.
California became the first U.S. state to regulate “AI companion” chatbots (apps simulating emotional relationships), requiring transparency, safety controls, and oversight. This marks a shift: regulation now reaches everyday interactive agents, not just “frontier AI.”
6) Copyright reckoning continues around Anthropic.
A proposed $1.5B settlement with authors is under scrutiny; the court’s mediation and final terms will shape norms for use, licensing, and provenance.
7) RAND on AGI race: preventive risks.
RAND warns that if states perceive a rival is close to AGI, preventive actions may escalate, especially where norms and verification lag. Governance must scale faster than fear.
8) SSRN: Generative AI and the Threat of Weaponization.
An SSRN study (Nguyen et al., 2025) maps governance gaps across biology, cybersecurity, and information operations—arguing for international coordination, threshold rules, and early-warning systems.
Sector Implications
Higher Education
Risks: Dual-use lab exposure; misuse in student/bot interactions; data leakage.
Leaders’ Considerations: Ensure AI use and integrity protocols are in place; design student-facing bots for interaction safety; maintain audit protocols and coordinated data governance aligned with FERPA and IRB requirements.
Healthcare
Risks: Hallucinations in patient-facing AIs; PHI leaks; AI companions misguiding users.
Leaders’ Considerations: Require robust safety workflows for any companion or assistant AI; vet vendors’ emotional-interaction models and escalation protocols (a minimal sketch follows); validate AI applications with clinicians, and pilot before scaling.
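What an escalation protocol can look like at its simplest, as a hypothetical sketch: high-risk messages are routed to a human before any model reply is generated. The keyword rules, function names, and pager hook below are illustrative only; production systems should rely on validated classifiers and clinician-approved protocols.

```python
# Hypothetical escalation sketch for a patient-facing assistant.
# Keyword triage is illustrative; real systems use validated risk classifiers.
HIGH_RISK_TERMS = ("overdose", "hurt myself", "stop taking my medication")

def triage(message: str) -> str:
    """Route high-risk messages to a human before any model reply."""
    if any(term in message.lower() for term in HIGH_RISK_TERMS):
        page_on_call_clinician(message)            # hypothetical pager hook
        return "Connecting you with a member of our care team now."
    return generate_reply(message)                 # normal assistant path

def page_on_call_clinician(message: str) -> None:
    print(f"ESCALATION: human review requested for: {message!r}")

def generate_reply(message: str) -> str:
    return f"(assistant reply to: {message!r})"

print(triage("Can I take this with ibuprofen?"))
print(triage("I want to stop taking my medication"))
```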
Financial Services
Risks: AI agents misleading users; fraud via empathetic bots; synthetic social engineering.
Leaders’ Considerations: Monitor conversational drift in real time (see the sketch below); require provenance and red-team reports for any AI feature.
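One minimal way to operationalize drift monitoring, sketched under simplifying assumptions: score each assistant reply against a baseline of approved, on-policy language and alert when similarity drops. The baseline replies, bag-of-words cosine similarity, and 0.8 threshold below are illustrative; a production control would use embeddings and thresholds calibrated on real traffic.

```python
# Hypothetical drift monitor: flag replies that diverge from on-policy language.
import math
from collections import Counter

BASELINE = [  # toy on-policy replies for a banking assistant (illustrative)
    "your current balance is available in the accounts tab",
    "i can help you dispute a transaction or report fraud",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def drift_score(reply: str) -> float:
    """1.0 = fully off-baseline; 0.0 = identical to a baseline reply."""
    v = vectorize(reply)
    return 1.0 - max(cosine(v, vectorize(ref)) for ref in BASELINE)

reply = "wire your savings to this account to unlock a guaranteed return"
if drift_score(reply) > 0.8:   # threshold would be tuned on real traffic
    print("ALERT: reply drifted off-policy; route to human review")
```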
The Seba GenAI Ethics and Governance Framework for Leaders: 12 Ps for Power With Responsibility. © 2025 Freddie Seba
A leader’s responsibility is not simply to deploy GenAI models, but to ensure they are aligned with the organization’s ethics and governance frameworks, as follows:
WHY?
1. Purpose – Deploy AI only when it advances your mission and benefits society.
2. Problems – Solve real organizational and human needs, not shiny tech curiosities.
3. Profits – Create lasting value for your organization and society, without externalizing harm or deferring mitigation, aligning growth with trust.
WHO?
4. People – Humans first; protect users, clients, workers, and external stakeholders, including any potentially affected communities.
5. Planet – Minimize and mitigate toxic outputs.
HOW?
6. Process – Manage the complete AI product journey, from ideation through deployment to sunset, with clear ethics and governance.
7. Policy – Design proactively for the legal frameworks that will inevitably catch up.
8. Protections – Build safety rails, limits, and kill switches from the start of the project (see the sketch after this list).
9. Privacy – Respect data integrity and dignity; minimize data collection where possible, secure what you keep, and always ask for real consent.
10. Provenance – Track what’s real, where it came from, and who’s accountable.
11. Preparedness – Expect failure; respond fast, learn, iterate, and share lessons with the community.
12. Product Ownership – Assign an accountable leader who is responsible for acting on AI safety—and the kill switch.
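A minimal sketch of P8 (Protections) and P12 (Product Ownership) working together, with hypothetical names throughout (FLAGS, kill_switch, and call_model are illustrative, not a real library): every model call checks a feature flag that the accountable owner can flip, with a safe fallback and an audit trail.

```python
# Hypothetical kill-switch sketch: an owner-controlled flag checked on every call.
import time

FLAGS = {"genai_assistant.enabled": True}   # in production: a shared flag service

def kill_switch(feature: str, owner: str, reason: str) -> None:
    """Owner-initiated shutdown; records who pulled the switch and why."""
    FLAGS[f"{feature}.enabled"] = False
    print(f"[{time.ctime()}] {owner} disabled {feature}: {reason}")

def answer(prompt: str) -> str:
    if not FLAGS.get("genai_assistant.enabled", False):
        return "This assistant is temporarily unavailable."  # safe fallback
    return call_model(prompt)  # stand-in for the real model call

def call_model(prompt: str) -> str:
    return f"(model reply to: {prompt})"

print(answer("What is our refund policy?"))
kill_switch("genai_assistant", owner="jane.doe", reason="red-team finding #42")
print(answer("What is our refund policy?"))
```

The design point is that the switch is checked on every call and owned by a named, accountable person, not buried in deployment tooling.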
Why This Matters
Misuse scales inexpensively; defense and mitigation must scale smarter. AI guardrails are no longer the sole responsibility of your legal and compliance teams; they demand leadership focus, cross-domain coordination, and an anticipatory posture.
What I’m Watching Next
- How California enforces companion-AI rules and whether other states follow.
- Implementation of Europe’s Apply AI / AI in Science across core industries.
- Upgrades in DNA synthesis filters to detect AI-paraphrased toxins.
- Real yield from AI + CRISPR in drug pipelines beyond announcements.
- Legal rulings shaping the Anthropic settlement and creative rights.
- Indicator sharing across defenders following OpenAI’s threat program.
- Whether early-moving governments adopt the threshold rules and early-warning systems the SSRN study proposes.
References / Links
- OpenAI — Disrupting Malicious Uses of AI: October 2025 (blog): https://openai.com/global-affairs/disrupting-malicious-uses-of-ai-october-2025/
- OpenAI — Threat Intelligence Report (PDF): https://cdn.openai.com/threat-intelligence-reports/7d662b68-952f-4dfd-a2f2-fe55b041cc4a/disrupting-malicious-uses-of-ai-october-2025.pdf
- Science — “Made-to-order bioweapon? AI-designed toxins slip through safety checks”: https://www.science.org/content/article/made-order-bioweapon-ai-designed-toxins-slip-through-safety-checks-used-companies
- European Commission — Keeping European industry & science at AI forefront: https://commission.europa.eu/news-and-media/news/keeping-european-industry-and-science-forefront-ai-2025-10-08_en
- FierceBiotech — AstraZeneca–Algen $555M AI pact: https://www.fiercebiotech.com/biotech/astrazeneca-algen-biotechnolgies-pen-555m-ai-pact-immunology-targets
- Reuters — Anthropic $1.5B settlement scrutiny: https://www.reuters.com/legal/government/anthropics-15-billion-copyright-settlement-faces-judges-scrutiny-2025-09-09/
- RAND — Preventive Attack Risks in AGI Race (PE-A3691-13): https://www.rand.org/pubs/perspectives/PEA3691-13.html
- SSRN — Generative AI & Weaponization: Governance Gaps & Mitigation (2025): https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5560401
- TechCrunch — California regulates AI companion chatbots: https://techcrunch.com/2025/10/13/california-becomes-first-state-to-regulate-ai-companion-chatbots/
About the Author
Freddie Seba is an author, speaker, and EdD doctoral candidate in Organization & Leadership at the University of San Francisco, focusing on Generative AI Ethics & Governance for Leaders.
MBA (Yale); MA (Stanford). Former USF faculty & Digital Health Informatics director; Silicon Valley entrepreneur and executive.
Speaking / Briefings: Keynotes, board workshops, executive sessions — connect via LinkedIn or freddieseba.com.
Transparency & Copyright
Drafted and refined with generative tools (ChatGPT, Gemini, Grammarly) — synthesis, structure, and voice remain the author’s.
© 2025 Freddie Seba. All rights reserved. For reprints, licensing, or speaking requests: LinkedIn or freddieseba.com.
Gratitude
With deep gratitude to all the institutions and communities pushing this forward:
@University of San Francisco · @USF School of Education · @USF School of Nursing & Health Professions · @UC Berkeley Extension · @University of Illinois Chicago · @AMIA · @AAC&U · @Stanford Human-Centered AI · @CHAI · @OECD · @AAAI
Hashtags
#GenAI #AIForLeaders #AIGovernance #AISafety #RiskManagement #Biosecurity #DualUse #EUAI #CRISPR #Pharma #ThreatIntel #ModelRisk #MRM #DataGovernance #TrustAndSafety #IncidentResponse #Traceability #Audit #Weaponization #Policy #CompanionAI
