Capabilities & Background
Service Areas
Our Framework
Most organizations treat alignment, safety, and security as a single undifferentiated concern. In practice they map to different disciplines, different tooling, and different success metrics. A mature program names them separately, runs them separately, and then stitches the results together into a coherent assurance posture. Anything short of that is theater.
The concept of intent is the scalpel that separates them:
Alignment — Is the system trying to do the right thing? If it fails by design, that's an alignment failure.
Safety — Even when it's trying, can it still cause harm? If it fails by accident, that's a safety failure.
Security — Can someone make it cause harm on purpose? If it fails on command, that's a security failure.
The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong, it usually turns out to be impossible to get at or repair. — Douglas Adams
Our Approach
What we let it become... is the path of drift. The path of building systems that quietly slide into pathology, systems that behave in ways no one intended, because no one really, truly understands what was built in the first place.
Adams understood something fundamental: the systems we declare "safe" are often the most dangerous. Not because they fail more often, but because when they fail, we have no idea how to fix them. We didn't design for failure. We designed for the happy path and crossed our fingers.
This is the pattern that emerges from decades of work on safety-critical systems—from Navy submarines to FBI cybersecurity standards to telecom infrastructure. Systems designed without adversarial thinking eventually get exploited. Systems designed without structural constraints eventually drift. Every conceivable mode of failure shows up eventually. And several that were, frankly, inconceivable.1
1 I'm told that word doesn't mean what I think it means. I disagree.
There's a good chance the AI alignment problem isn't about training better models. It might be that we're building the wrong architecture in the first place.
We're trying to make giant centralized language models "safe" through better training, better guardrails, better oversight. But you can't bolt safety onto a fundamentally unsafe architecture. Not in submarines. Not in telecom. Not in AI.
That's why the focus here is on distributed spiking neural networks—architectures in which safety is structural rather than added after the fact, and in which alignment isn't a single point of failure.
Training first, then failing, then surrounding the failures with constraints after the fact—that approach doesn't work. It can't work. The constraint has to come first. The architecture has to embody the safety property, not merely be trained to approximate it.
This worldview shapes the approach to AI alignment taken here: not as a tuning exercise, but as an architectural systems-safety problem.
One thing is certain: AI will become.
The only question is whether we make it, or merely let it.
The Perspective Behind This Work
These conclusions emerge from two decades of building, hardening, and operationalizing safety-critical systems—U.S. Navy submarine systems, FBI cybersecurity architecture, and international telecom security standards (3GPP SA3-LI).
When you spend that long watching systems fail in ways their designers never imagined, you develop a bias: toward architectures where constraints are structural, not bolted on. Toward mechanisms that can be audited and reasoned about. Toward viewing AI as infrastructure that must be robust, observable, and governable by design.