Intercept every tool call. Enforce policies. Cap budgets. Detect loops. Write tamper-evident audit logs. All before your agent touches production.
No credit card required · 2-minute setup · MIT licensed
Works with every AI framework that makes tool calls
See It Work
Every tool call is evaluated against your policy. Allowed, denied, or killed — with a full audit trail.
How It Works
Stages 1–4 are fully deterministic. No AI in the critical path. Stage 5 is advisory only — human approval required.
Policy enforcement at the call boundary.
Binary allow or deny. First-match-wins rule evaluation. YAML-defined. P99 < 1ms latency.
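First-match-wins evaluation can be sketched in a few lines. The rule fields below mirror the YAML example in Quick Start, but the evaluator itself is a hypothetical illustration, not the Rust engine's code:

```python
import re

# Hypothetical in-memory rule list for illustration; field names follow the
# YAML example on this page, not necessarily the exact engine schema.
RULES = [
    {"name": "deny-cloud-metadata", "action": "deny",
     "tool_pattern": r"http.*", "url_pattern": r"169\.254\.169\.254.*"},
    {"name": "allow-everything-else", "action": "allow",
     "tool_pattern": r".*", "url_pattern": r".*"},
]

def evaluate(tool: str, url: str = "") -> str:
    """First match wins: scan rules top to bottom, return the first hit."""
    for rule in RULES:
        if re.fullmatch(rule["tool_pattern"], tool) and re.fullmatch(rule["url_pattern"], url):
            return rule["action"]
    return "deny"  # fail-closed default when no rule matches

print(evaluate("http.get", "169.254.169.254/latest/meta-data"))  # deny
print(evaluate("web_search"))                                    # allow
```

Single pass, no backtracking: rule order is the policy, which is what keeps evaluation deterministic and fast.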
Loop-detection heuristics within a run.
Same call fingerprint? Error loops? Stuck retry cycles? Detected immediately — no ML required.
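Fingerprinting is the standard trick here: hash the tool name plus canonicalized arguments, and flag a run when the same fingerprint keeps recurring. A minimal sketch, assuming a simple consecutive-repeat threshold (the engine's actual heuristics may differ):

```python
import hashlib
import json

def fingerprint(tool: str, args: dict) -> str:
    """Stable hash of a tool call: same tool + same args = same fingerprint."""
    payload = json.dumps({"tool": tool, "args": args}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def detect_loop(history: list, fp: str, threshold: int = 3) -> bool:
    """Flag a loop when this call would be the `threshold`-th identical call in a row."""
    recent = history[-(threshold - 1):]
    return len(recent) == threshold - 1 and all(h == fp for h in recent)

history = []
for _ in range(3):
    fp = fingerprint("web_search", {"query": "same thing"})
    if detect_loop(history, fp):
        print("loop detected")  # fires on the third identical call
        break
    history.append(fp)
```

No model inference, no training data: a hash comparison and a counter, which is why detection is immediate.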
Cooldown + corrective context injection.
One structured chance for the agent to self-correct. Bounded: if it fails, escalation is automatic.
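The bounded self-correction of stage 3 can be sketched as a small state machine. `run_step`, `is_looping`, and `escalate` below are hypothetical stand-ins for the engine's real hooks, and the corrective message is illustrative:

```python
# Hedged sketch of stage 3: one corrective chance, then automatic escalation.
def supervise(run_step, is_looping, escalate, max_steps=10):
    corrected = False
    for _ in range(max_steps):  # bounded run
        if is_looping():
            if corrected:
                # Loop persisted after the single corrective chance: escalate.
                escalate("loop persisted after corrective context")
                return "terminated"
            corrected = True
            # Inject corrective context instead of letting the loop continue.
            run_step(inject="You are repeating the same call; change approach or stop.")
            continue
        run_step()
    return "completed"
```

The key property is the `corrected` flag: the agent gets exactly one structured chance, so escalation cannot itself loop.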
Safe termination with evidence preservation.
Clean shutdown, not a process kill. Audit log sealed. Evidence preserved for human review.
AI Supervisor on the observation plane.
Interprets patterns. Proposes policy updates. Escalates to humans. Never in the enforcement critical path.
Advisory only · Human approval required · Self-guarded
Capabilities
A complete safety layer between your AI agents and production systems.
YAML-defined rules. Deny cloud metadata, block unauthorized tools, restrict URLs. First match wins.
Hard limits on cost (USD), tokens, and call count per run. Fail-closed when exceeded.
Fingerprint-based heuristics catch retry loops, error loops, and stuck agents automatically.
Tamper-evident JSONL log with SHA-256 hash chain. CLI-verifiable. Any tampering is detectable.
Runs entirely local. No account. No network. No telemetry. pip install and go.
Engine + CLI + shims are MIT. Use them anywhere. Control plane UI is AGPL-3.0.
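The budget caps described above are fail-closed: the check happens before the call runs, so a run can never overshoot a hard limit. A sketch of that semantic, with limit names mirroring the YAML budget block but an otherwise hypothetical class:

```python
class BudgetExceeded(Exception):
    pass

class Budget:
    """Illustrative fail-closed budget: caps on USD cost and call count per run."""

    def __init__(self, cost_usd: float, calls: int):
        self.cost_limit = cost_usd
        self.call_limit = calls
        self.cost_spent = 0.0
        self.calls_made = 0

    def charge(self, cost_usd: float) -> None:
        """Refuse the call *before* it runs if it would breach either hard cap."""
        if (self.calls_made + 1 > self.call_limit
                or self.cost_spent + cost_usd > self.cost_limit):
            raise BudgetExceeded("hard limit reached; run terminated")
        self.calls_made += 1
        self.cost_spent += cost_usd

b = Budget(cost_usd=5.00, calls=100)
b.charge(4.50)   # fine: under both caps
# b.charge(1.00) would raise BudgetExceeded: 4.50 + 1.00 exceeds the 5.00 cap
```

Fail-closed means the exception fires on the attempt that would cross the line, not after the money is spent.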
Quick Start
Add LoopStorm Guard to your agent in Python or TypeScript. Define your policy in YAML.
Python

```python
from loopstorm import guard

@guard(policy="loopstorm.yaml")
def my_agent():
    # Every tool call is now intercepted,
    # policy-checked, budget-tracked, and logged
    result = call_tool("web_search", {"query": "..."})
    return result
```

TypeScript

```typescript
import { guard } from "loopstorm-ts";

const run = guard({ policy: "loopstorm.yaml" });

// Wrap your agent's tool calls
const result = await run.wrap("web_search", {
  query: "...",
});
```

```yaml
# loopstorm.yaml
agent_role: my-agent

rules:
  - name: deny-cloud-metadata
    action: deny
    tool_pattern: "http.*"
    conditions:
      - field: url
        operator: matches
        pattern: "169.254.169.254.*"

budget:
  cost_usd:
    hard: 5.00
  calls:
    hard: 100
```

Pricing
The open-source engine is free forever. The cloud dashboard gives your team visibility.
Full enforcement engine. No limits. No account. No network dependency.
Everything in Open Source, plus a hosted dashboard for team visibility.
7-day free trial · No credit card required
Self-hosted or managed. Full data sovereignty. AI Supervisor on the observation plane.
Open Source
We believe the best safety tools are built in the open. Inspect every line. Deploy anywhere. Contribute back.
Engine (Rust), shims (Python/TS), and CLI are MIT licensed. PRs welcome.
Deploy the full control plane on your infrastructure. Docker, Kubernetes, or bare metal.
Schema hashes in VERIFY.md. Audit logs are CLI-verifiable. Reproducible builds.
FAQ
How much latency does it add?
Less than 1ms P99 per tool call. The enforcement engine is written in Rust, runs locally, communicating with your process over IPC (a Unix domain socket), and does zero network I/O in Mode 0. Policy evaluation is a single-pass, first-match lookup — no ML inference, no API calls. Your agent won't notice it's there.
How long does setup take?
Under 2 minutes. For Python: pip install loopstorm-py, add @guard(policy="loopstorm.yaml") to your agent, and define your rules. For TypeScript: bun add loopstorm-ts with a similar one-line wrapper. No infrastructure, no account, no config server.
Can I run it fully offline?
Yes — that's Mode 0, and it's the default. The engine, CLI, and shims run entirely locally. No telemetry, no license server, no phone-home. Audit logs stay on disk. You can air-gap it completely. The cloud dashboard (Mode 2) is optional, for teams that want cross-run analytics.
Which frameworks does it work with?
LoopStorm intercepts at the tool-call level, so it works with any framework that makes tool calls: LangChain, LlamaIndex, CrewAI, AutoGen, custom agents, or raw OpenAI/Anthropic SDK usage. Python and TypeScript shims are included. The Rust engine speaks a simple JSON-over-IPC protocol if you need to integrate from another language.
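Integrating from another language means speaking JSON over a local socket. The message shapes below are purely hypothetical — this page doesn't document the actual wire format — but a loopback sketch shows the general pattern:

```python
import json
import socket

def send_request(sock: socket.socket, tool: str, args: dict) -> None:
    """Send one newline-delimited JSON request (hypothetical message shape)."""
    sock.sendall((json.dumps({"type": "tool_call", "tool": tool, "args": args}) + "\n").encode())

# Simulate both ends with a socketpair; in production you would connect
# to the engine's Unix domain socket instead.
client, server = socket.socketpair()
send_request(client, "web_search", {"query": "..."})

request = json.loads(server.recv(4096).decode())          # engine side reads...
server.sendall((json.dumps({"decision": "allow"}) + "\n").encode())  # ...and replies

decision = json.loads(client.recv(4096).decode())["decision"]
print(decision)  # allow
client.close()
server.close()
```

Newline-delimited JSON over a socket needs nothing beyond a JSON library, which is what makes cross-language integration cheap.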
How is this different from guardrails libraries?
Most guardrails libraries validate LLM output text (prompt injection, toxicity). LoopStorm operates at the tool-call boundary — it controls what your agent can do, not what it says. Policy rules, budget caps, and loop detection are deterministic (no AI in the critical path). It's an enforcement layer, not a content filter.
Is the open-source version production-ready?
Yes. The MIT-licensed engine has 67+ unit tests, 11 integration tests, and 4 end-to-end case studies covering budget exhaustion, loop detection, policy deny, and escalation. The audit log uses SHA-256 hash chains that are CLI-verifiable. It's the same engine that powers the cloud version — there's no "lite" edition.
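The hash-chain scheme behind the audit log is a standard construction: each JSONL line commits to the SHA-256 hash of the previous line, so editing any line breaks every link after it. A minimal sketch — not the actual log schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # well-known hash for the first entry

def append_entry(log: list, event: dict) -> None:
    """Append a JSONL line that commits to the previous line's hash."""
    prev = hashlib.sha256(log[-1].encode()).hexdigest() if log else GENESIS
    log.append(json.dumps({"prev": prev, "event": event}, sort_keys=True))

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered line breaks a link."""
    prev = GENESIS
    for line in log:
        if json.loads(line)["prev"] != prev:
            return False
        prev = hashlib.sha256(line.encode()).hexdigest()
    return True

log = []
append_entry(log, {"tool": "web_search", "decision": "allow"})
append_entry(log, {"tool": "http.get", "decision": "deny"})
print(verify(log))  # True

log[0] = log[0].replace("allow", "deny")  # tamper with the first entry
print(verify(log))  # False — the chain no longer links
```

Tampering is detectable, not preventable: the chain can't stop someone editing the file, but any edit is caught on the next verification pass.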
LoopStorm Guard is open source, free, and installs in under a minute. No account required. No network dependency. Just safety.
pip install loopstorm-py · MIT licensed