Agents that act. Actions that are bound.
We build infrastructure for teams putting AI agents into the workflows where the agent has to do something — move money, change production state, file with a regulator — and answer for it.
We're building Primis — a runtime for AI agents that act with bounded authority.
AI agents today are powerful and unreliable. They reason well; they cannot be trusted with budget, with customer accounts, with production systems — not without a human watching every move. The teams trying to deploy them ship six bolt-on tools and a prayer.
Primis is the infrastructure that closes that gap. The team operating the agent declares its purpose and the authority it is allowed to act with. Primis holds the agent inside that declaration. Actions inside the bounds go through. Actions outside the bounds do not. Every consequential action is signed and attributable by default; the audit trail is a property of the system, not a feature you assemble after the fact.
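Primis's actual interface isn't shown on this page. As a minimal sketch of the idea only, assuming a hypothetical `AuthorityGrant` declaration and an HMAC-signed audit record (none of these names are the real API):

```python
import hmac
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorityGrant:
    """Hypothetical declaration of what an agent may do, and on whose behalf."""
    purpose: str
    allowed_actions: frozenset
    spend_limit_usd: float

SIGNING_KEY = b"demo-key"  # illustrative only; a real system would use a per-agent key from a KMS

def authorize(grant: AuthorityGrant, action: str, amount_usd: float):
    """Allow the action only if it falls inside the declared bounds,
    and sign every decision so the audit trail exists by default."""
    inside = action in grant.allowed_actions and amount_usd <= grant.spend_limit_usd
    record = json.dumps(
        {"purpose": grant.purpose, "action": action,
         "amount_usd": amount_usd, "allowed": inside},
        sort_keys=True,
    )
    signature = hmac.new(SIGNING_KEY, record.encode(), hashlib.sha256).hexdigest()
    return inside, record, signature

grant = AuthorityGrant("customer refunds", frozenset({"issue_refund"}), 200.0)
ok, _, _ = authorize(grant, "issue_refund", 50.0)        # inside bounds: allowed
blocked, _, _ = authorize(grant, "wire_transfer", 50.0)  # outside bounds: denied
```

The point of the sketch is the shape, not the code: the bound is declared once, checked on every action, and each decision carries its own signed, attributable record rather than relying on dashboards assembled afterward.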
If any of these sound like your week, we're building for you.
This site is for teams putting AI agents into workflows where the agent has to do something real and someone has to answer for it. Recognize your situation? Tell us about it.
If your AI agent can answer a customer but can't issue the refund, because no one will give a stochastic system unbounded credit authority — we're building for you.
If your cloud cost anomaly detector pages a human at 3am and the runaway resource keeps burning until someone clicks approve — we're building for you.
If your compliance team has a regulatory filing coming due and your only options are spreadsheets, consultants, or an AI that nobody trusts to submit — we're building for you.
If you have an agent in your platform that needs IAM scope tighter than "admin" but broader than "nothing," and the gap is being patched with custom RBAC and a Slack approval queue — we're building for you.
If your security review keeps stalling on the same question — "what does the agent do when it's wrong, and who can prove what it did?" — we're building for you.
If you've shipped agents into procurement, customer ops, or supply chain and the audit trail is six dashboards stitched together by someone on the security team — we're building for you.
If you're putting agents into production and the part that scares you isn't the model, it's the day someone asks what authority you actually gave it — we're building for you.
A small company, building one thing properly.
Medhaksha Labs started as a services company building AI systems and embedded software for other people's businesses. We kept hitting the same gap: the agent demo was easy, the production deployment was the same six bolt-on tools, and nobody could sleep at night with the result running. So we changed what we work on. Today we are full-time on Primis and the workflows it serves.
More about us →
What we're thinking about.
Notes from the build. Patterns we keep seeing. Mistakes we have made and want others to skip.
Why we stopped calling it AI safety and started calling it bounded authority.
“Safety” frames the problem as the model. “Authority” frames the problem as the operator. One of these gives you a road forward; the other gives you a research agenda.
What a CFO actually asks before signing off on an autonomous spend agent.
Three questions, every time. None of them are about model accuracy. All of them are about what happens when something goes wrong.
Six tools, six dashboards, one agent — and still nobody can answer the audit.
The current shape of “production-ready agent” is a stack assembled by exhausted security teams. We think there is a better shape.