I built a governance layer for AI agents after watching them fail silently in production
In the last year, I've watched AI agents fail in ways that would have been catastrophic in regulated environments. A clinical triage agent that routed a patient to the wrong care pathway — no log, no audit trail, no rollback, no way to prove what happened or why. A financial reporting agent that executed a transaction based on stale data — silently. No error. No receipt. Just wrong output that passed downstream.
The problem isn't that the agents were bad. The problem is that there was nothing between the agent and the action.
What “governing” an agent actually means
A governance gate sits between the agent's intent and the action. Before the agent does anything, four questions must be answered:
- What is the risk tier of this action?
- Does it violate any policies?
- Is there a human override requirement?
- What's the rollback path if it goes wrong?
If any gate fails, the action doesn't execute. Fail-closed by default.
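The four checks above can be sketched as a single fail-closed evaluation. This is an illustrative sketch, not the SDK's actual implementation; the `ActionRequest` fields and `evaluate_gate` logic are hypothetical stand-ins for the real gate.

```python
from dataclasses import dataclass, field

@dataclass
class ActionRequest:
    action_type: str
    risk_tier: str                       # "low" | "medium" | "high"
    policies_violated: list = field(default_factory=list)
    human_approved: bool = False         # set True once an override is granted
    rollback_plan: str = ""              # empty string = no rollback path

def evaluate_gate(req: ActionRequest) -> str:
    """Return 'allow' or 'deny'. Any failed check denies (fail-closed)."""
    if req.risk_tier not in ("low", "medium", "high"):
        return "deny"                    # unknown risk tier: fail closed
    if req.policies_violated:
        return "deny"                    # at least one policy violated
    if req.risk_tier == "high" and not req.human_approved:
        return "deny"                    # human override required for high risk
    if req.risk_tier != "low" and not req.rollback_plan:
        return "deny"                    # no rollback path for a consequential action
    return "allow"

# A high-risk action without human sign-off is blocked:
print(evaluate_gate(ActionRequest("transfer_funds", "high")))      # deny
# A read-only, low-risk action passes:
print(evaluate_gate(ActionRequest("read_patient_record", "low")))  # allow
```

Note the ordering: every check can only deny, and `allow` is reached only after all four pass, so an unreachable or erroring check defaults to blocking the action.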
In code, a scheduled governed run looks like this:

```python
from dingdawg_loop import schedule_governed

@schedule_governed(
    agent_id="@hipaa-intake",
    cron="0 9 * * *",
    risk_tier="high",
)
def run_intake_agent():
    # This runs daily at 9am.
    # If governance is unreachable → skipped, not silently executed.
    return process_patient_intake()
```
What a governance receipt looks like
Every governed action produces a cryptographically signed receipt. This is what comes back after a patient record read:
```json
{
  "receipt_id": "gov_1a2b3c4d5e6f",
  "agent_id": "@hipaa-intake",
  "action_type": "read_patient_record",
  "decision": "allow",
  "risk_score": 22,
  "explanation": {
    "primary_trigger": "read_only_access",
    "causal_chain": [
      "read_patient_record → read_only_access policy → +8pts",
      "cumulative_score=22 < 40 → decision=allow"
    ],
    "confidence": 0.98
  },
  "ipfs_cid": "bafybeigdyrzt5sfp7udm7hu76uh7y26nf3efuylqabf3oclgtqy55fbzdi"
}
```
The explanation field is generated by an LNN (Liquid Neural Network) — a deterministic causal trace, not a black-box score. You can read it top to bottom and understand exactly why the decision was made. That matters when a regulator asks.
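Because the causal chain is a readable trace rather than an opaque score, a client can sanity-check a receipt before trusting it, e.g. confirm that the reported decision agrees with the reported risk score. A minimal sketch, assuming the field names from the receipt above and the `< 40 → allow` threshold stated in its causal chain (the helper function is hypothetical, not part of the SDK):

```python
# Threshold taken from "cumulative_score=22 < 40 → decision=allow" in the receipt.
ALLOW_THRESHOLD = 40

def receipt_is_consistent(receipt: dict) -> bool:
    """Check that the receipt's decision matches its risk_score under the threshold."""
    expected = "allow" if receipt["risk_score"] < ALLOW_THRESHOLD else "deny"
    return receipt["decision"] == expected

receipt = {
    "receipt_id": "gov_1a2b3c4d5e6f",
    "decision": "allow",
    "risk_score": 22,
}
print(receipt_is_consistent(receipt))  # True
```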
The regulatory context
Colorado SB 205 (enforcement June 30, 2026) and the EU AI Act (August 2026) both require:
- Documented impact assessments for high-risk AI
- Audit trails for consequential decisions
- Human override mechanisms
- Explainability for decisions affecting consumers
Every governance receipt is that documentation. Not a PDF you generate once and forget — a live, pinned, cryptographically verifiable record of every action your agent took. The ipfs_cid in the receipt pins it to IPFS so it can't be modified after the fact, even by you.
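IPFS addresses content by hash, so any change to a pinned receipt produces a different CID. A real CID involves multihash and CIDv1 encoding; the sketch below illustrates only the tamper-evidence property, using a plain SHA-256 over canonically serialized JSON as a simplified analogue, not the actual CID computation.

```python
import hashlib
import json

def content_digest(receipt: dict) -> str:
    # Canonical serialization: sorted keys, fixed separators, so the
    # same content always hashes to the same digest.
    canonical = json.dumps(receipt, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

original = {"receipt_id": "gov_1a2b3c4d5e6f", "decision": "allow", "risk_score": 22}
pinned = content_digest(original)  # the digest recorded at pin time

# Someone edits the record after the fact:
tampered = dict(original, decision="deny")

# The digest no longer matches what was pinned, so the edit is detectable.
print(content_digest(tampered) != pinned)  # True
```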
The open-core model
The governance interface is Apache 2.0 on GitHub: github.com/dingdawg/governance-sdk. Fork it, audit it, run it locally. The LNN inference engine and IPFS pinning are cloud-tier — 25 free calls/day, $49/mo Pro.
The free tier is enough to instrument your most critical agent path and see governance receipts in real production traffic. The paid tier removes rate limits and adds IPFS pinning, receipt search, and webhook delivery for every decision.
Try it
Install in 60 seconds. Claude Code / Cursor MCP config:
```json
{
  "mcpServers": {
    "dingdawg-governance": {
      "command": "npx",
      "args": ["dingdawg-governance"]
    }
  }
}
```
Or Python: `pip install dingdawg-loop`
Score your agent against 12 governance primitives at dingdawg.com/harness.
DingDawg provides AI governance tooling and automated compliance assessment. This post reflects production patterns we've developed and is not legal advice. Consult qualified legal counsel for your specific regulatory obligations.
Instrument your first governed agent
25 free governed calls per day. No credit card. Full receipt on every action.