The accountability gap
Today: "the model recommended approval."
Tomorrow: a sealed record showing which model, what version, what data it saw, what reasoning steps, signed at the moment of decision.
When an AI agent acts on your data, signs a contract, makes a recommendation, or accesses a record — VLI seals what it did, why, and with what context. Every action becomes a tamper-evident receipt your auditors, regulators, and customers can verify forever.
Today: "the model recommended approval."
Tomorrow: a sealed record showing which model, what version, what data it saw, what reasoning steps, signed at the moment of decision.
Today: opaque LLM calls in your logs.
Tomorrow: every agent session begins with ai_begin_session, every decision sealed, every data access registered. Auditable end-to-end.
Today: AI policy lives in PDFs no one reads.
Tomorrow: policy is enforced in the seal — a decision either has its required attestations or it doesn't. The math doesn't bend.
Regulators are asking organizations to prove AI systems behaved correctly — EU AI Act, NIST AI RMF, sector-specific rules in healthcare and finance. The proof has to be portable, tamper-evident, and verifiable without trusting the AI vendor.
VLI is that proof layer. It doesn't try to govern what your AI does. It makes what your AI did impossible to deny later.
Six MCP tools turn cryptographic accountability on:
ai_begin_session: Start a sealed agent session with an attested identity.
ai_seal_decision: Record an agent decision, inputs, reasoning, outputs.
ai_seal_access: Record what data the agent read or wrote.
ai_checkpoint: Anchor session state at intervals so partial work is provable.
ai_end_session: Close and seal the session bundle.
ai_verify: Verify a sealed AI session matches its registered Merkle proof.
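As a rough sketch of how a host application might drive these tools over MCP: the wiring below uses the @modelcontextprotocol/sdk TypeScript client, and the argument shapes passed to each tool are assumptions for illustration, not the server's published schema.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the AI Trust MCP server the same way the Claude Code config below does.
const transport = new StdioClientTransport({
  command: "node",
  args: ["./mcp-ai-trust/src/server.js"],
  env: { VLI_AGENT_ID: "your-agent-identity" },
});

const client = new Client({ name: "audit-demo", version: "0.1.0" });
await client.connect(transport);

// Argument shapes here are illustrative guesses, not the published schema.
await client.callTool({
  name: "ai_begin_session",
  arguments: { agent_id: "did:vli:agent:claude-opus-4-7" },
});

await client.callTool({
  name: "ai_seal_decision",
  arguments: {
    decision: "approve",
    inputs: ["record:123"],
    reasoning: "policy threshold met",
  },
});

await client.callTool({ name: "ai_end_session", arguments: {} });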
A sealed session bundle looks like this:

{
  "session_id": "ai-sess:abc123...",
  "agent_id": "did:vli:agent:claude-opus-4-7",
  "started_at": "2026-04-27T18:00:00Z",
  "ended_at": "2026-04-27T18:42:13Z",
  "decisions": 47,
  "merkle_root": "sha256:...",
  "signature": {"alg": "Ed25519", "value": "sig:..."},
  "registry": {
    "logId": "verifylinkinfra-prod",
    "leafIndex": 9173,
    "auditPath": ["sha256:...", "..."]
  }
}

This bundle works in any ClearKey CLI install. No VLI dependency at verification time.
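Verification itself is plain math. A minimal sketch in TypeScript, assuming RFC 6962-style audit paths where the leaf-index bit at each level picks sibling order, and an Ed25519 public key resolved from the agent's did:vli identity; this mirrors what a verifier does, not ClearKey's actual CLI code:

import { createHash, verify } from "node:crypto";

// Recompute the Merkle root from the session leaf and its audit path.
// Assumes SHA-256 and the common convention that bit i of leafIndex
// says whether the sibling at level i sits on the left or the right.
function rootFromAuditPath(leaf: Buffer, leafIndex: number, auditPath: Buffer[]): Buffer {
  let node = leaf;
  let index = leafIndex;
  for (const sibling of auditPath) {
    const pair = index % 2 === 1 ? [sibling, node] : [node, sibling];
    node = createHash("sha256").update(Buffer.concat(pair)).digest();
    index = Math.floor(index / 2);
  }
  return node;
}

// Check the Ed25519 signature over the sealed bundle bytes.
// `publicKeyPem` would come from resolving the agent's did:vli identity.
function signatureValid(bundleBytes: Buffer, sigBytes: Buffer, publicKeyPem: string): boolean {
  return verify(null, bundleBytes, publicKeyPem, sigBytes);
}

If the recomputed root equals merkle_root and the signature verifies, the bundle is intact and was sealed by the key it claims.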
Add to your Claude Code MCP config:
{
  "mcpServers": {
    "ai-trust": {
      "command": "node",
      "args": ["./mcp-ai-trust/src/server.js"],
      "env": {
        "VLI_REGISTRY_URL": "https://verifylinkinfra.com/registry-api",
        "VLI_ART_API": "https://vliart.com/api",
        "VLI_AGENT_ID": "your-agent-identity"
      }
    }
  }
}

Source: github.com/VerifyLinkInfra-cloud/mcp-ai-trust · Apache 2.0
ART (AI Registry & Trust) is the public-facing companion — a transparency log of AI agents, what they're authorized to do, who delegated that authority, and the trust attestations behind them. Every sealed session via the AI Trust MCP can anchor to ART.
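Anchoring is just a write to the ART log. The sketch below is hypothetical throughout: the /anchors path and the payload fields are invented for illustration, with only the VLI_ART_API base URL taken from the config above.

// Hypothetical: post a sealed session's root to ART so the public log
// can vouch for it. Endpoint path and body shape are assumptions.
const res = await fetch(`${process.env.VLI_ART_API}/anchors`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    session_id: "ai-sess:abc123...",
    merkle_root: "sha256:...",
    agent_id: "did:vli:agent:claude-opus-4-7",
  }),
});
console.log(res.status); // e.g. 201 if the anchor was accepted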
Provable agent behavior, no extra database, no logging hacks. Drop in the MCP and turn it on.
Audit-ready AI by default. Every decision a tamper-evident receipt. Frameworks: EU AI Act, NIST AI RMF, HIPAA AI use, FINRA.
Independent verifiability of agent behavior. ClearKey + the AI Trust MCP = a research substrate that doesn't require trusting the lab.