

Dr. Freddie Seba

Issue #37 — The Paradox of Counter-AI: Guardrails, Misuse, and the Next Frontier

By Freddie Seba | GenAI Ethics & Governance for Leaders

We are scaling cheap, capable AI faster than we are building guardrails. The result: real-world misuse risks are compounding while policies and safety features sprint to catch up.

This week’s signals

1) The threat surface just went biological.

A new study covered in Science shows that AI-designed toxic protein sequences can slip past commercial DNA-synthesis screening for under $50, moving dual-use risk from hypothetical to operational. (Science; Washington Post)

2) RAND: misuse is the multiplier.

Guardrails don’t “travel” with open or replicated models. RAND emphasizes ecosystem defense: detect harmful outputs, deter misuse with legal and economic consequences, and disrupt the supply chains that enable abuse. (RAND)

3) Policy is waking up—starting with California.

Gov. Newsom signed SB-53, a first-in-the-nation frontier-AI law requiring transparency, risk assessments, and incident reporting. It marks a shift from voluntary ethics to enforceable accountability. (Governor of California)

4) The next frontier: genome-scale AI.

At Stanford HAI, Brian Hie showcased genome models (Evo 2) with million-token context windows, accelerating biological discovery while raising new dual-use governance needs. (Stanford HAI)

5) Offense is getting funded.

Dash0 raised a $35M Series A to build AI-native observability tools. Offense iterates hourly; defense still moves by change request. (Dash0)

6) Safety features are still chasing the curve.

OpenAI launched parental controls for ChatGPT, and Google’s 2025 DORA report focuses on secure, reliable software delivery in the AI era. Both are welcome, but reactive. (OpenAI; Google Cloud)

Sector Implications and Leadership Considerations

Higher Education

Risks: academic integrity, dual-use exposure, data leakage.

Leadership Considerations: adopt an AI Use & Integrity framework (clear “allowed/not allowed/why”), inventory AI use in life-science labs with dual-use review, and centralize data governance aligned to FERPA/IRB.

Healthcare

Risks: clinical hallucinations, PHI exposure, BioAI dual-use.

Leadership Considerations: require a Clinical AI Safety Case per system (intended use, failure modes, overrides, subgroup performance), demand vendor transparency on data/red-teaming, and stand up a BioAI Governance Committee.

Financial Services

Risks: model drift, synthetic fraud, code-supply-chain exposure.

Leadership Considerations: extend MRM to LLMs (pre-prod reviews, challenger models), deploy AI-native fraud/anomaly detection with real-time telemetry, and require security attestations for training data and red-team results.

Why this matters

Misuse scales cheaply; defense must scale smarter. Guardrails can’t live only in IT, legal, or compliance. This is an executive priority.

Watching next

About the author

Freddie Seba is an author, speaker, and EdD doctoral candidate (USF) focused on Generative AI Ethics & Governance for Leaders. MBA (Yale); MA (Stanford). Former USF faculty & Digital Health Informatics program director; Silicon Valley exec & entrepreneur.

Speaking / Briefings: For keynotes, board workshops, or exec sessions, connect on LinkedIn or visit freddieseba.com.


Copyright: © 2025 Freddie Seba. All rights reserved.

Sources and Useful Information

Science — “Made-to-order bioweapon? AI-designed toxins slip through safety checks.”

https://www.science.org/content/article/made-order-bioweapon-ai-designed-toxins-slip-through-safety-checks-used-companies

Washington Post coverage — AI-designed toxins & biosecurity gaps

https://www.washingtonpost.com/science/2025/10/02/ai-toxins-biosecurity-risks

RAND — Evaluating the Risks of Preventive Attack in the Race for Advanced AI (PE-A3691-13)

https://www.rand.org/pubs/perspectives/PEA3691-13.html

Governor of California — SB-53 signing announcement (Transparency in Frontier AI Act)

https://www.gov.ca.gov/2025/09/29/governor-newsom-signs-sb-53-advancing-californias-world-leading-artificial-intelligence-industry

Stanford HAI event — Brian Hie: Genome Modeling & Design Across All Domains of Life

https://hai.stanford.edu/events/brian-hie-genome-modeling-design-across-all-domains-of-life

Stanford Data Science listing — Brian Hie seminar (Evo-2)

https://datascience.stanford.edu/events/seminar/brian-hie-genome-modeling-design-across-all-domains-life

bioRxiv — Evo-2 preprint

https://www.biorxiv.org/content/10.1101/2025.02.18.638918v1

Dash0 — $35M Series A announcement

https://www.dash0.com/blog/dash0-raises-usd35-million-series-a

Dash0 — Building Dash0: From Idea to Series A

https://www.dash0.com/blog/building-dash0-from-idea-to-series-a

OpenAI — Introducing Parental Controls

https://openai.com/index/introducing-parental-controls

Google — 2025 DORA Report (Google Blog)

https://blog.google/technology/developers/dora-report-2025
