VerifyLink Infrastructure

Cryptographic accountability for every AI decision.

When an AI agent acts on your data, signs a contract, makes a recommendation, or accesses a record — VLI seals what it did, why, and with what context. Every action becomes a tamper-evident receipt your auditors, regulators, and customers can verify forever.

Three problems VLI solves.

The accountability gap

Today: "the model recommended approval."

Tomorrow: a sealed record showing which model, what version, what data it saw, what reasoning steps, signed at the moment of decision.

The audit-trail gap

Today: opaque LLM calls in your logs.

Tomorrow: every agent session begins with ai_begin_session, every decision sealed, every data access registered. Auditable end-to-end.

The governance gap

Today: AI policy lives in PDFs no one reads.

Tomorrow: policy is enforced in the seal — a decision either has its required attestations or it doesn't. The math doesn't bend.

Why this matters now.

Regulators are asking organizations to prove AI systems behaved correctly — EU AI Act, NIST AI RMF, sector-specific rules in healthcare and finance. The proof has to be portable, tamper-evident, and verifiable without trusting the AI vendor.

VLI is that proof layer. It doesn't try to govern what your AI does. It makes what your AI did impossible to deny later.

Six tools. Drop into Claude Code or any agent.

Cryptographic accountability turns on.

ai_begin_session

Start a sealed agent session with an attested identity.

ai_seal_decision

Record an agent decision: its inputs, reasoning, and outputs.

ai_seal_access

Record what data the agent read or wrote.

ai_checkpoint

Anchor session state at intervals so partial work is provable.

ai_end_session

Close and seal the session bundle.

ai_verify

Verify a sealed AI session matches its registered Merkle proof.

What a sealed AI session looks like.

{
  "session_id":   "ai-sess:abc123...",
  "agent_id":     "did:vli:agent:claude-opus-4-7",
  "started_at":   "2026-04-27T18:00:00Z",
  "ended_at":     "2026-04-27T18:42:13Z",
  "decisions":    47,
  "merkle_root":  "sha256:...",
  "signature":    {"alg": "Ed25519", "value": "sig:..."},
  "registry": {
    "logId":      "verifylinkinfra-prod",
    "leafIndex":  9173,
    "auditPath":  ["sha256:...", "..."]
  }
}

This bundle works in any ClearKey CLI install. No VLI dependency at verification time.

Install + configure

Add to your Claude Code MCP config:

{
  "mcpServers": {
    "ai-trust": {
      "command": "node",
      "args": ["./mcp-ai-trust/src/server.js"],
      "env": {
        "VLI_REGISTRY_URL": "https://verifylinkinfra.com/registry-api",
        "VLI_ART_API":      "https://vliart.com/api",
        "VLI_AGENT_ID":     "your-agent-identity"
      }
    }
  }
}

Looking for the public registry of AI agents?

ART (AI Registry & Trust) is the public-facing companion — a transparency log of AI agents, what they're authorized to do, who delegated that authority, and the trust attestations behind them. Every sealed session via the AI Trust MCP can anchor to ART.

Visit ART →

Who this is for.

AI engineers building agent systems

Provable agent behavior, no extra database, no logging hacks. Drop in the MCP and turn it on.

Compliance & risk officers

Audit-ready AI by default. Every decision a tamper-evident receipt. Frameworks: EU AI Act, NIST AI RMF, HIPAA AI use, FINRA.

AI safety researchers

Independent verifiability of agent behavior. ClearKey + the AI Trust MCP = a research substrate that doesn't require trusting the lab.