What we let it become...
- Douglas Adams
Background
- Ex-FBI cybersecurity and defense expert with a focus on distributed systems.
- Decades of work in lawful interception and telecom security standards.
- Stanford-trained mathematician with a dynamical systems mindset.
Implications for AI
- Preference for architectures in which constraints are structural, not bolted on.
- Bias toward mechanisms that can be audited and reasoned about.
- View of AI as infrastructure that must be robust, observable, and governable.
What we let it become... is the path of drift: the path of building systems that quietly drift into pathology, systems that behave in ways no one intended, because no one truly understands what was built in the first place.
Two decades building safety-critical systems for Navy submarines and architecting FBI cybersecurity standards teaches one unavoidable truth: if a system can fail, it will. If it can be exploited, it will be. That's not pessimism. That's Murphy's Law with teeth. This is the pattern: systems designed without adversarial thinking eventually get exploited. Systems designed without structural constraints eventually drift. Every conceivable mode of failure shows up eventually. And several that were, frankly, inconceivable.[1]
[1] I'm told that word doesn't mean what I think it means. I disagree.
There's a good chance the AI alignment problem isn't about training better models. It might be that we're building the wrong architecture in the first place.
We're trying to make giant centralized language models "safe" through better training, better guardrails, better oversight. But you can't bolt safety onto a fundamentally unsafe architecture. Not in submarines. Not in telecom. Not in AI.
That's why my work now focuses on distributed spiking neural networks: architectures in which safety is structural, not added after the fact, and in which alignment isn't a single point of failure inherited from a centralized design.
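To make "structural, not bolted on" concrete, here is a minimal sketch. It assumes a standard leaky integrate-and-fire neuron model; the class name, parameters, and the toy multi-node loop are illustrative assumptions of mine, not a description of the actual system. The point is that the constraints (refractory period, clamped potential) live inside the update rule itself, and each node carries its own copy of them.

```python
# A minimal sketch (illustrative, not the author's implementation): a leaky
# integrate-and-fire neuron whose safety constraints are structural --
# enforced inside the update rule -- rather than added by an external monitor.

from dataclasses import dataclass

@dataclass
class LIFNeuron:
    tau: float = 20.0        # membrane time constant (timesteps)
    v_rest: float = 0.0      # resting potential
    v_thresh: float = 1.0    # spike threshold
    v_max: float = 1.5       # hard structural ceiling on the potential
    refractory: int = 5      # refractory period (timesteps)
    v: float = 0.0           # current membrane potential
    cooldown: int = 0        # remaining refractory timesteps

    def step(self, input_current: float, dt: float = 1.0) -> bool:
        """Advance one timestep; return True if the neuron spikes."""
        if self.cooldown > 0:
            # Structural constraint: during the refractory period the neuron
            # cannot fire, regardless of the input it receives.
            self.cooldown -= 1
            return False
        # Leaky integration toward rest, driven by the input current.
        self.v += dt * (-(self.v - self.v_rest) / self.tau + input_current)
        # Structural constraint: the potential is clamped, so a runaway input
        # cannot push the state outside its designed envelope.
        self.v = min(self.v, self.v_max)
        if self.v >= self.v_thresh:
            self.v = self.v_rest
            self.cooldown = self.refractory
            return True
        return False

# Distribute the computation across independent nodes: each node enforces its
# own constraints locally, so there is no single locus whose failure removes
# the safety properties of the whole.
nodes = [LIFNeuron() for _ in range(4)]
spikes = [[node.step(input_current=0.2) for _ in range(100)] for node in nodes]
print([sum(s) for s in spikes])  # per-node spike counts
```

The design choice the sketch is meant to surface: nothing outside `step` has to behave correctly for the refractory and clamping constraints to hold, which is the difference between a structural property and a bolted-on guardrail.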
My fundamental conclusion? Security must lead and constrain the architecture. Training first, failing, and then surrounding the failures with after-the-fact constraints doesn't work. It can't work.
This worldview shapes my approach to AI alignment: not as a tuning exercise, but as an architectural systems-safety problem.
Also, a day without code is a day without sunshine.
Discussion