WRITING
Notes from the build.
Field reports from teams putting AI agents into production. Patterns we keep seeing. Things we got wrong. Things we still do not understand.
Why we stopped calling it AI safety and started calling it bounded authority.
"Safety" frames the problem as the model. "Authority" frames the problem as the operator. One of these gives you a road forward; the other gives you a research agenda.
Read →
What a CFO actually asks before signing off on an autonomous spend agent.
Three questions, every time. None of them are about model accuracy. All of them are about what happens when something goes wrong.
Read →
Six tools, six dashboards, one agent — and still nobody can answer the audit.
The current shape of "production-ready agent" is a stack assembled by exhausted security teams. We think there is a better shape.
Read →
More posts on the way. We publish when we have something to say, not on a calendar.