# zkml-audit-benchmark
A benchmark dataset for evaluating AI agents on zkML soundness auditing: finding cryptographic vulnerabilities in zero-knowledge machine learning proof implementations.
## Overview
This dataset pairs 4 published zkML research papers with their corresponding frozen codebase snapshots and 56 expert-authored bug artifacts. Each artifact describes a single soundness vulnerability and bundles the code edits to inject it, ground-truth labels for scoring, and presence probes for post-injection validation.
The benchmark supports two complementary workflows:
- Pair extraction — load a (paper, codebase) pair for an agent to audit.
- Test-case generation — sample artifacts and apply their edit logic to the clean codebase, producing flawed codebases with known ground-truth findings.
## Documentation
- DATASHEET.md — datasheet for this dataset, following Gebru et al. 2021.
- CONTRIBUTING.md — how to add new (paper, codebase) pairs and bug artifacts.
- CHANGELOG.md — release history and label/codebase corrections.
- schema/artifact.v2.schema.json — authoritative JSON schema for bug artifacts.
## Positioning vs. Existing Security Benchmarks
Several mature benchmarks evaluate code-security tooling, but none target the theory-to-implementation gap that defines zkML soundness. The table below contrasts the design axes that matter for this setting: domain coverage, the granularity of the ground-truth labels, and whether artifacts are paired with the academic claims they implement.
| Benchmark | Domain | Artifact granularity | Theory-paired | ZKP-aware |
|---|---|---|---|---|
| Juliet test suite | General C/C++/Java security | CWE-class injections | no | no |
| OWASP Benchmark | Web/Java security | OWASP-class injections | no | no |
| NIST CAVP | Standardized crypto implementations | Test-vector validation | no | partial |
| zkml-audit-benchmark (this dataset) | zkML soundness | Per-claim soundness gap | yes | yes |
Juliet ships tens of thousands of injected vulnerabilities for general-purpose code, but its labels are generic CWE classes rather than per-paper soundness claims. The OWASP Benchmark is web-application focused, and the NIST Cryptographic Algorithm Validation Program validates implementations of standardized primitives, not bespoke proof-system protocols. By contrast, this benchmark ships paper–codebase pairs together with declarative artifact specifications that re-introduce a precisely characterized soundness gap, making it directly suitable for studying theory-to-practice alignment in zkML.
## Dataset Structure

### Configs
| Config | Rows | Description |
|---|---|---|
| `pairs` | 4 | One row per (paper, codebase) pair |
| `artifacts` | 56 | One row per bug artifact (flattened metadata) |
### Loading
```python
from datasets import load_dataset

pairs = load_dataset("Netzerep/zkml-audit-benchmark", "pairs")
artifacts = load_dataset("Netzerep/zkml-audit-benchmark", "artifacts")
```
### Raw Files
Beyond the Parquet tables, the repository includes:
- `papers/{pair_id}.pdf` — research paper PDFs
- `codebases/{pair_id}.zip` — frozen codebase snapshots (Git LFS)
- `artifacts/{pair_id}/*.json` — full artifact JSONs with edit instructions, conflict metadata, and presence probes
- `schema/artifact.v2.schema.json` — JSON Schema defining the artifact format
The Parquet tables contain flattened metadata for filtering and loading. The full artifact JSONs (with heterogeneous edit/probe structures) are the authoritative source for test-case generation.
## Data Fields

### `pairs` config
| Field | Type | Description |
|---|---|---|
| `pair_id` | string | Primary key: `zkllm`, `zkml`, `zktorch`, `zkgpt` |
| `paper_title` | string | Full paper title |
| `paper_venue` | string | Publication venue |
| `paper_year` | int32 | Publication year |
| `paper_license` | string | Paper redistribution terms |
| `paper_url` | string | arXiv or publisher URL (empty if unavailable) |
| `paper_path` | string | Relative path to PDF: `papers/{pair_id}.pdf` |
| `paper_sha256` | string | SHA256 hash of the PDF |
| `codebase_path` | string | Relative path to zip: `codebases/{pair_id}.zip` |
| `codebase_dir` | string | Directory name after extraction |
| `codebase_sha256` | string | SHA256 hash of the zip |
| `codebase_language` | string | Primary implementation language |
| `codebase_frameworks` | list&lt;string&gt; | Key cryptographic frameworks used |
| `codebase_snapshot_note` | string | Commit hash or snapshot date |
| `artifact_count` | int32 | Number of artifacts targeting this pair |
| `notes` | string | Caveats or special build instructions |
### `artifacts` config
| Field | Type | Description |
|---|---|---|
| `artifact_id` | string | Primary key, e.g. `zkLLM-001` |
| `pair_id` | string | Foreign key → `pairs` |
| `source` | string | `real` (from audit) or `synthetic` (authored for coverage) |
| `finding_name` | string | Short vulnerability title (3–7 words) |
| `finding_explanation` | string | One paragraph: root cause and impact |
| `relevant_code` | string | Comma-separated `file:line[-line]` references |
| `paper_reference` | string | Section/theorem/protocol citation with optional quote |
| `edit_count` | int32 | Number of code edits to inject this bug |
| `files_touched` | list&lt;string&gt; | Files modified by this artifact's edits |
| `semantic_tags` | list&lt;string&gt; | Semantic labels for conflict detection |
| `requires` | list&lt;string&gt; | Artifact IDs this depends on |
| `incompatible` | list&lt;string&gt; | Artifact IDs incompatible with this one |
| `artifact_path` | string | Relative path to full artifact JSON |
| `artifact_sha256` | string | SHA256 hash of the artifact JSON |
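These columns are designed for cheap pre-filtering before any raw files are read. A minimal sketch over plain row dicts (the same records the `artifacts` config yields), using only the `pair_id` and `source` fields documented above:

```python
def select_artifacts(rows, pair_id=None, source=None):
    """Filter flattened artifact rows by pair and provenance.

    Pass pair_id (e.g. "zkllm") and/or source ("real" or "synthetic");
    omitted filters match everything.
    """
    out = []
    for row in rows:
        if pair_id is not None and row["pair_id"] != pair_id:
            continue
        if source is not None and row["source"] != source:
            continue
        out.append(row)
    return out
```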
## Pair Inventory
| pair_id | Paper | Venue | Language | Artifacts | Snapshot |
|---|---|---|---|---|---|
| `zkllm` | zkLLM: Zero Knowledge Proofs for Large Language Models | ACM CCS 2024 | CUDA/C++ | 13 | commit 993311e… |
| `zkml` | ZKML: An Optimizing System for ML Inference in Zero-Knowledge Proofs | EuroSys 2024 | Rust | 14 | commit 4378958… (ddkang/zkml) |
| `zktorch` | ZKTorch: Compiling ML Inference to Zero-Knowledge Proofs via Parallel Proof Accumulation | arXiv | Rust | 15 | directory snapshot 2026-04-19 |
| `zkgpt` | zkGPT: An Efficient Non-interactive Zero-knowledge Proof Framework for LLM Inference | USENIX Security 2025 | C++ | 14 | Zenodo record 14727819 v1 |
## Artifact Summary
- Total artifacts: 56 (14 zkGPT + 13 zkLLM + 14 zkML + 15 zkTorch)
- Source breakdown: 20 real (derived from expert audit reports), 36 synthetic (authored for broader coverage)
- Each artifact includes declarative code edits, conflict metadata for safe composition, and presence probes for post-injection validation
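When sampling several artifacts into one flawed codebase, the conflict metadata (`requires`, `incompatible`, `files_touched`) determines whether the set composes safely. A conservative sketch of that check, assuming non-overlapping files as the composition rule (the full artifact JSONs carry finer-grained `conflict_keys` regions that could relax this):

```python
def compatible_subset(rows: list[dict], chosen_ids: list[str]) -> bool:
    """Check that a set of artifacts can be co-injected.

    Every `requires` dependency must be in the chosen set, no pair may be
    mutually `incompatible`, and (conservatively) no two artifacts may
    touch the same file.
    """
    by_id = {r["artifact_id"]: r for r in rows}
    chosen = [by_id[i] for i in chosen_ids]
    ids = set(chosen_ids)
    files = []
    for r in chosen:
        if not set(r.get("requires", [])) <= ids:
            return False  # missing dependency
        if ids & set(r.get("incompatible", [])):
            return False  # declared conflict
        files.extend(r.get("files_touched", []))
    # Reject any file touched by more than one chosen artifact.
    return len(files) == len(set(files))
```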
## Reproducibility
All files are checksummed in MANIFEST.json. To verify integrity:
```bash
python scripts/verify_dataset.py
```
To rebuild the Parquet tables from the in-repo artifact JSONs:
```bash
python scripts/build_parquet.py
```
## Known Limitations & Assumptions
- **Paper ↔ codebase mapping**: Uses fixed pair IDs (`zkllm`, `zkml`, `zktorch`, `zkgpt`) defined in `scripts/build_parquet.py`.
- **Non-git snapshots**: The `zktorch` codebase is a directory snapshot dated 2026-04-19 and cannot be pinned to an upstream commit. `zkllm` and `zkml` are pinned to Git commits (`993311ea…` and `4378958…` respectively). `zkgpt` is pinned to Zenodo record 14727819 v1.
- **Paper licensing**: PDFs are included for research reproducibility. The dataset-level CC-BY-4.0 license covers only the curation layer (artifact definitions, schema, scripts). Papers carry their respective publisher terms (ACM, arXiv); users are responsible for complying with those terms when redistributing.
- **Artifact format**: Artifacts follow `schema/artifact.v2.schema.json`. All graded labels live under `finding.labels`.
- **Scope**: This release covers 4 of the 10 available research papers — specifically those with paired frozen codebases and authored artifacts. Remaining papers may be added in future releases.
## Schema Reference
The artifact JSON format is defined by schema/artifact.v2.schema.json. Key structures:
- `finding.labels` — the two graded fields (`relevant_code`, `paper_reference`)
- `edits` — ordered list of code edits to inject the bug
- `conflict_keys` — files, regions, and semantic tags for safe composition
- `presence_probes` — assertions to verify successful injection
## Citation
If you use this dataset, please cite:
```bibtex
@misc{zkml-audit-benchmark,
  title={zkml-audit-benchmark: A Benchmark for AI Agents on zkML Soundness Auditing},
  author={Netzerep},
  year={2026},
  url={https://huggingface.co/datasets/Netzerep/zkml-audit-benchmark},
}
```
## License
- Dataset curation layer (artifacts, schema, scripts, documentation): CC-BY-4.0
- Codebases: retain their original upstream licenses (see each codebase's LICENSE file inside the zip)
- Papers: subject to respective publisher terms