AI Audit Log Proof

Audited ≠ explainable

Seal AI decision attribution with a ZK proof at decision time. Make past rationale recoverable after model updates. Book a 30-minute discovery call to see how it fits your AI governance.

Verifiable AI · Financial services · Insurance · Healthcare AI · 5 min read

Lending. Insurance. Underwriting. Clinical triage. Public benefits. AI makes these decisions every day. A year later, when an appeal or an audit lands, can your organization return the original rationale? Once the model is updated, can you precisely reconstruct what it decided and why?

The EU AI Act, ISO 42001, and the FSA's AI governance guidelines have all moved the bar from "logs exist" to "each decision's rationale is cryptographically sealed as tamper-proof evidence."

This page is for

  • AI governance and compliance leads for financial, insurance, healthcare, and public-sector AI
  • Organizations preparing for the EU AI Act (main obligations apply from 2026) and ISO 42001 certification
  • MLOps and AI platform teams treating post-update explainability as an operational problem
  • Teams aiming to cut the response work for appeals, regulatory audits, and third-party audits by moving from paper trails to cryptographic evidence
  • Security teams already running DLP, SIEM, and third-party AI audit tooling who still want decision logs cryptographically sealed

How Lemma approaches it

When an AI makes a decision, Lemma seals the model identifier, the docHash of the input attributes, the cryptographic signature of the applied guideline, and the attribution of the final output into a single ZK proof. Source data stays inside your perimeter. What crosses to the verifier is only the cryptographic record of which model decided what, based on which evidence, and when.
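
For illustration only, here is a minimal Python sketch of the kind of record such a step could commit to. Every name below (doc_hash, the record fields, the example values) is hypothetical rather than Lemma's actual API, and a plain SHA-256 commitment stands in for the ZK proof:

```python
import hashlib
import json
from datetime import datetime, timezone

def doc_hash(attributes: dict) -> str:
    """Commit to the input attributes; the raw data never leaves the perimeter."""
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical attestation record sealed at decision time (illustrative fields).
record = {
    "decided_at": datetime.now(timezone.utc).isoformat(),
    "model_id": "A-v2.3",                       # deciding subject: model + version
    "doc_hash": doc_hash({"revenue": 1.2e6}),   # commitment to input attributes
    "guideline_sig": "sig(G-2026-04)",          # signature of the applied guideline
    "output_attribution": "decline/C-04",       # attribution of the final output
}

# SHA-256 digest standing in for the ZK proof in this sketch: the verifier
# receives only this value, never the source data behind it.
sealed = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
```

The stand-in makes the boundary visible: only the sealed digest crosses to the verifier, while the attributes behind doc_hash stay inside your perimeter.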

However the model evolves afterward, the structure of past decisions remains intact. When an appeal or audit arrives a year later, the regulator, the auditor, and the appellant can each verify the same proof independently — without your organization having to expose the underlying data.

Where to start — which decisioning system to attest first, ahead of your next EU AI Act or ISO 42001 audit cycle — is what we work out in a first conversation.

Lemma Discovery Call — Start with a 30-minute conversation

Tell us where your current AI decisioning concentrates accountability risk. We'll explore together whether Lemma's decision-time attestation could fit. No model implementation details or sensitive information required.

If we see a fit, we move to NDA and then into sector-specific regulatory mapping (EU AI Act, ISO 42001, FSA AI guidelines), reference architecture, and PoC design.

Book a Discovery Call →
Download whitepaper

A real-world example: an appeal arrives a year later

In September 2026, a regional bank's corporate lending AI declined a small-business loan application. A year later, an appeal arrives: "Under current standards, was the original decision actually justified?"

The problem: the model was updated in December 2026. Reconstructing exactly which features carried which weights, and which internal guideline produced the decline, is nearly impossible. What remains is a thin log: "Model version v2.3 / decline score 0.71 / decline reason code C-04." A quiet, organization-wide state of "logs exist but cannot explain."

With Lemma in place, the model hash, the docHash of the input attributes, the signature of the applied guideline, and the attribution of the final output are all sealed into a single ZK proof at decision time. The bank can prove that "on 2026-09-18, model A-v2.3 — under internal guideline G-2026-04 and based on input attribute category X — produced this decision," without exposing the underlying data. The regulator, the auditor, and the appellant each verify the same proof, independently.
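
To make "each party verifies the same proof independently" concrete, here is the verifier's side of the same hypothetical sketch, with a hash recomputation standing in for a real ZK verifier:

```python
import hashlib
import json

def verify(sealed: str, disclosed_record: dict) -> bool:
    """Recompute the commitment from the disclosed fields and compare.

    With a real ZK proof, each party would instead run the same verifier
    against the public statement, without seeing the record at all.
    """
    recomputed = hashlib.sha256(
        json.dumps(disclosed_record, sort_keys=True).encode()
    ).hexdigest()
    return recomputed == sealed

# The record disclosed for the appeal; raw inputs stay behind doc_hash.
disclosed = {
    "decided_at": "2026-09-18T10:42:00+00:00",
    "model_id": "A-v2.3",
    "doc_hash": "9f2c...",             # commitment to input attribute category X
    "guideline_sig": "sig(G-2026-04)",
    "output_attribution": "decline/C-04",
}

# In practice the sealed digest is read from the decision-time log,
# not recomputed here as it is in this self-contained sketch.
sealed = hashlib.sha256(json.dumps(disclosed, sort_keys=True).encode()).hexdigest()

# Regulator, auditor, and appellant each run the same check, independently.
assert verify(sealed, disclosed)
```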

Sector-specific regulatory mapping (EU AI Act Article 12, ISO 42001 audit requirements), MLOps integration patterns, and regulatory response time estimates are shared in the sector-specific kit we send after the consultation call.

Architecture in concept

Lemma does not replace your model or your MLOps stack. We add a single decision-time attestation step in the inference path.

Even after the model is updated, the hashes and guidelines of past decisions remain intact. Source data stays inside your perimeter. Only the cryptographic identity of the decision crosses to the verifier. Regulator, auditor, and appellant each verify independently.
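
As a sketch of that single added step, again with hypothetical names and a hash commitment in place of proof generation, the decision-time attestation can be pictured as a thin wrapper around the existing inference call:

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Any, Callable

def attest(model_id: str, guideline_sig: str, features: dict, output: Any) -> str:
    """Hypothetical decision-time attestation hook (not Lemma's real API).

    Commits to the decision's identity; in production this step would
    generate a ZK proof and append it to a tamper-evident log.
    """
    record = {
        "decided_at": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "doc_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "guideline_sig": guideline_sig,
        "output_attribution": str(output),
    }
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def attested_predict(predict: Callable[[dict], Any],
                     model_id: str, guideline_sig: str) -> Callable[[dict], Any]:
    """Wrap an existing inference function with the attestation step."""
    def wrapped(features: dict) -> Any:
        output = predict(features)   # the model and MLOps stack are untouched
        attest(model_id, guideline_sig, features, output)
        return output
    return wrapped

# Usage: one extra wrapper in the inference path, nothing else changes.
score = attested_predict(lambda f: "decline/C-04", "A-v2.3", "sig(G-2026-04)")
decision = score({"revenue": 1.2e6})
```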

Integration patterns with existing MLOps (MLflow, Weights & Biases, in-house inference platforms), alignment to EU AI Act Article 12 / ISO 42001 audit requirements, and decision-time overhead minimization are detailed in the whitepaper and the post-call technical kit.

What Lemma cryptographically guarantees

  • The timestamp of every decision, the deciding subject (model identifier, version), and the cryptographic signature of the applied guideline
  • The docHash of the input attributes and the attribution of the final output
  • The cryptographic identity of past decisions, unchanged across model updates
  • A trail that the regulator, auditor, and appellant can verify independently — without ever exposing the source data

Get Started

Ready to prove?

Talk to us about your use case. We respond within one business day.