Freddie Seba headshot

AI Keynote Speaker & Ethics Speaker & Executive Workshops

Dr. Freddie Seba

Issue #32 | Greener Tokens, Stronger Guardrails



By Freddie Seba © 2025 · Also published on LinkedIn, Substack, and freddieseba.com


Article Summaries (ordered by governance impact)

1) The unit-economics reset: cheaper tokens can lower the energy footprint
A sharp price/performance drop for “nano/mini” models shifts the conversation from how much AI costs to where and how inference runs. Right-sized models, smarter routing, and more on-device/edge use can lower energy and grid pressure—if we architect for it. (Source in comment.)
Global & Policy — Scrutiny shifts to carbon reporting, grid impact, and location of compute.
Institutional & Governance — Move high-volume/low-risk tasks (summaries, routing, classification) to nano/mini or edge; keep higher-stakes tasks on vetted stacks.
Leadership & Practice — Default to a small model for routine tasks and escalate only when necessary; at home, use on-device features and turn off unnecessary chat history/logging.

2) Enterprise spend is consolidating—Anthropic is winning more of it
Many large companies now prefer Anthropic’s Claude, especially for coding. Mid-year reads show enterprise dollars concentrating around a few leaders—good for quality/speed, but it raises questions about lock-in and resilience. (Sources in comment.)
Global & Policy — Market concentration heightens competitiveness and antitrust concerns.
Institutional & Governance — Build price and vendor portability into your technology partnerships/platform strategy (routing, evaluation harnesses, fine-tuning/export rights, data residency).
Leadership & Practice — Write a one-page model-portfolio plan: which model for which job, how you’ll switch, and who approves changes.

3) Copyright détente: settlements show IP risk being “priced in”
A major model provider settled with authors over training data claims. Directionally, IP risk is shifting from abstract to managed via settlements, licensing, provenance, and filters. (Source in comment.)
Global & Policy — Expect evolving norms (licensing, opt-outs, provenance) rather than a single final standard.
Institutional & Governance — Treat training-data rights and output protections as separate contract surfaces.
Leadership & Practice — Ask vendors two questions and write the answers into your MSA: (1) What rights do you have to the training data? (2) What output protections/indemnities do you provide?

4) Edge autonomy arrives: NVIDIA’s next-gen “robot brain”
A new Jetson/“Thor” platform enables real-time perception, planning, and control at the edge—enabling more autonomous behavior off-cloud and increased governance in physical spaces. (Source in comment.)
Global & Policy — Productivity gains and safety/liability risks rise together; autonomy standards will tighten.
Institutional & Governance — Manufacturing, logistics, and retail scale “centaur operations” (human + robot) with clear safety accountabilities.
Leadership & Practice — Before pilots: name one accountable owner; post simple, visible rules (e.g., “If the robot hesitates or blocks a walkway, press STOP and call [team]”); log incidents in a shared doc (what happened, where, when, who stopped it, quick fix); do weekly 10-minute safety reviews.

5) AI for kids: from scripted toys to persistent companions
New conversational toys can remember and adapt—raising tough questions about privacy, retention, and children bonding with machines. (Source in comment.)
Global & Policy — Children’s AI is a rising policy priority (privacy, ads, cross-border data).
Institutional & Governance — Toy/ed-tech brands need age-appropriate design, do-not-share-by-default, short retention, and a clear incident (“oops”) playbook.
Leadership & Practice — For parents: turn off cloud logging/sharing; enable parental controls; keep sessions short and supervised; store toys offline when not in use; teach the “talk to a person when unsure” rule.

6) Government safety promises under strain
Fast Company, reporting on a Brown/Harvard/Stanford analysis led by Rishi Bommasani, argues several federal AI safety commitments are unevenly implemented—with weak links on consumer platforms—creating an assurance gap for buyers. (Source in comment.)
Global & Policy — Momentum toward mandated transparency, testing, and independent assurance.
Institutional & Governance — Public buyers and regulated sectors must evidence compliance; attestations aren’t enough.
Leadership & Practice — Map vendor promises to your own tests (simple red-team scenarios, spot checks) and run quarterly reviews with action items.


Reflections

Efficiency ≠ responsibility. The unit-economics reset demonstrates how right-sized models and routing can reduce energy and costs—while expanding the surface area for privacy, safety, and IP exposure. Enterprise spend is consolidating—Anthropic is winning more of it underscores why portability and mission-driven governance must be designed in, not bolted on. Copyright détente signals that rights and responsibilities are being priced in, while Edge autonomy arrives and AI for kids brings AI into physical spaces and children’s rooms, where the governance bar is highest.

The leadership question isn’t “Which model is best?” It’s: What portfolio and governance posture keep us credible when the market tilts, a rights claim lands, products reach children, or autonomy fails in the field?


Sector-Specific Implications

Higher Education (teaching focus)
Teaching practice — Default to nano/mini for routine classroom tasks; document authorship; update syllabi/class policies on data location and logging; be explicit about what’s allowed and why.
Technology partnerships & platform strategy — Bake portability and evaluation harnesses into agreements; specify training-data rights and output protections up front; plan for switching without disrupting courses.

Healthcare
Clinical & operational agents — Treat orchestration (routing, memory, external actions) as in scope for safety; require human countersignature for high-risk steps.
IP & provenance — Separate training-data rights from output indemnities; align with privacy and auditability (chart notes, agentic conversations).

Financial Services
Records & explainability — Treat agentic conversations as regulated records; control retention, deletion, and discoverability.
Technology concentration risk — Stress-test continuity if a preferred vendor changes pricing, access, or legal posture; maintain a multi-model routing layer.


With Gratitude
@University of San Francisco · @USF School of Education · @USF School of Nursing and Health Professions · @AMIA · @AAC&U · @Stanford HAI · @CHAI · @University of Illinois Chicago

About Freddie Seba
Freddie Seba is an author, public speaker, and EdD doctoral candidate in Organization & Leadership at the University of San Francisco, focusing on GenAI Ethics & Governance for Leaders. A former Digital Health Informatics faculty member (8+ years), director/chair, and former global corporate executive and serial entrepreneur in the San Francisco Bay Area, he helps universities, health systems, and financial institutions operationalize mission-driven ethics and governance for generative AI adoption. This series appears on LinkedIn, Substack, and freddieseba.com.

Transparency & Copyright
This installment was drafted and edited using generative AI tools for synthesis and clarity; all insights and voice are the author’s.
© 2025 Freddie Seba. All rights reserved. For reprints, licensing, or speaking inquiries, contact via LinkedIn or freddieseba.com.
Reminder: Full citations and links are posted in the first comment.

Hashtags
#GenAI #AIethics #AIgovernance #Sustainability #EnergyEfficiency #EdgeAI #ChildrensPrivacy #Anthropic #NVIDIA #OpenAI #Google

Sources for Issue #32

1) Cheaper tokens/efficiency
– Menlo Ventures: “2025 Mid-Year LLM Market Update”
https://menlovc.com/perspective/2025-mid-year-llm-market-update/

2) Enterprise consolidation (Anthropic momentum) & market dynamics
– TechCrunch: “Enterprises prefer Anthropic’s AI models over anyone else’s, including OpenAI’s”
https://techcrunch.com/2025/07/31/enterprises-prefer-anthropics-ai-models-over-anyone-elses-including-openais/

3) Copyright settlement
– WIRED: “Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors”
https://www.wired.com/story/anthropic-settles-copyright-lawsuit-authors/

4) Edge autonomy/robot compute
– NVIDIA Blog: “Jetson Thor Unlocks Real-Time Reasoning for General Robotics and Physical AI”
https://blogs.nvidia.com/blog/jetson-thor-physical-ai-edge/

5) AI for kids
– IEEE Spectrum: “AI Barbie Dolls Are Here. Should You Buy One?”
https://spectrum.ieee.org/ai-barbie-dolls

6) Government safety promises under strain
– Fast Company: “Biden-era AI safety promises aren’t holding up—and Apple’s the weakest link”
https://www.fastcompany.com/91389117/biden-era-ai-safety-promises-arent-holding-up-and-apples-the-weakest-link