Verifiable AI

Audit Trail


Tamper-evident records of system execution. Essential wherever after-the-fact verification matters: AI decision logs, payment flows, data-access histories.

Definition

Application-side text logs can be rewritten by an administrator and are therefore weak evidence. Tamper-evident audit trails use Merkle trees, transparency logs (Certificate Transparency, SCITT), or blockchain anchoring to make any rewrite detectable.

Structurally, each event is pinned by its docHash and linked to the previous entry. The chain head is periodically anchored to an external surface, so retroactive tampering becomes detectable.
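The chaining idea can be sketched in a few lines. This is a minimal illustration, not Lemma's implementation; the event fields and the `append` helper are invented for the example:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def entry_hash(event: dict, prev_hash: str) -> str:
    """Hash an event together with the previous entry's hash.

    Linking each entry to its predecessor means changing any past
    event changes every hash after it, including the chain head.
    """
    payload = json.dumps(event, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain: list[str], event: dict) -> str:
    """Append an event and return the new chain head."""
    prev = chain[-1] if chain else GENESIS
    head = entry_hash(event, prev)
    chain.append(head)
    return head

chain: list[str] = []
append(chain, {"action": "inference", "seq": 1})
head = append(chain, {"action": "inference", "seq": 2})
# Anchoring step: publish `head` to an external surface (ledger,
# transparency log). A retroactive rewrite of entry 1 would change
# every later hash, so the published head would no longer match.
```

The security of the scheme rests entirely on the anchoring step: the chain alone only proves internal consistency, while the externally published head is what makes rewriting detectable.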

For AI workloads, the minimum content set is (1) the input hash, (2) the model version, (3) the inference timestamp, (4) the output hash, and (5) the human-approval status. This maps directly to EU AI Act Article 12 (record-keeping for high-risk AI systems).
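A minimal sketch of such an entry follows. The field names and the `make_log_entry` helper are illustrative, not a Lemma schema; only raw payloads are hashed, never stored:

```python
import hashlib
from datetime import datetime, timezone

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_log_entry(input_data: bytes, model_version: str,
                   output_data: bytes, approved: bool) -> dict:
    """Build the five-field minimum content set for an AI inference:
    input hash, model version, timestamp, output hash, approval status."""
    return {
        "input_hash": sha256_hex(input_data),
        "model_version": model_version,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "output_hash": sha256_hex(output_data),
        "human_approved": approved,
    }

entry = make_log_entry(b"prompt", "model-v1.2", b"completion", True)
```

Storing hashes rather than payloads keeps the log itself free of personal or proprietary data while still binding each entry to the exact input and output.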

Lemma Oracle implementation

Lemma realizes the audit trail as a commitment chain. Each entry is hash-linked to the prior one, and the head is anchored to a distributed ledger. Selective disclosure lets the auditor see only the attributes that matter (timestamp, model version) while the rest stays closed.
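Selective disclosure can be sketched with salted hash commitments: each attribute is committed separately, and the prover reveals only the chosen values with their salts. This is a simplified stand-in for Lemma's actual mechanism, which the text does not detail; all helper names here are invented:

```python
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    """Salted hash commitment to one attribute value."""
    return hashlib.sha256(salt + value.encode()).hexdigest()

def commit_entry(entry: dict) -> tuple[dict, dict]:
    """Commit each attribute separately. The commitments are public;
    the values and salts stay with the prover."""
    salts = {k: os.urandom(16) for k in entry}
    commitments = {k: commit(str(v), salts[k]) for k, v in entry.items()}
    return commitments, salts

def disclose(entry: dict, salts: dict, keys: list[str]) -> dict:
    """Open only the selected attributes: reveal value and salt."""
    return {k: (str(entry[k]), salts[k]) for k in keys}

def verify(commitments: dict, disclosed: dict) -> bool:
    """Check each revealed (value, salt) pair against its commitment."""
    return all(commit(v, s) == commitments[k]
               for k, (v, s) in disclosed.items())

entry = {"timestamp": "2025-01-01T00:00:00Z", "model_version": "v1.2",
         "input_hash": "…", "output_hash": "…"}
commitments, salts = commit_entry(entry)
# Auditor receives only timestamp and model version; hashes stay closed.
proof = disclose(entry, salts, ["timestamp", "model_version"])
```

The random salt is what keeps undisclosed attributes closed: without it, an auditor could brute-force low-entropy values such as model versions against their commitments.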

The data itself stays hidden; what a zero-knowledge proof establishes is the fact that the inference happened: a specified model, on a specified input, at a specified time. GDPR data minimization and the audit obligation are thus satisfied on the same technical layer.

The same audit-trail substrate covers agent collaboration over A2A and tool calls under MCP.

Get started

Tamper-evident execution history for AI.