# TitanBench: A Benchmark for Agentic RTL Design and Verification
TitanBench evaluates agentic AI systems on chip design and verification (DV) tasks derived from the OpenTitan open-source SoC. The agent runs inside a containerised OpenTitan source tree and must locate the relevant RTL, register specifications, testplan descriptions, and testbench infrastructure on its own, then submit a solution that survives the upstream UVM regression flow.
This dataset is a read-only mirror of the harness repository. Source of truth: <>. File issues and PRs there.
## Why this benchmark
Verification accounts for 60-70% of engineering effort in modern chip development, and a single bug missed before tape-out can force a respin costing millions of dollars. Existing public hardware benchmarks evaluate single-file RTL generation against author-supplied testbenches (VerilogEval, RTLLM), repository-scale completion via edit similarity (RTL-Repo), or bug repair from known failures (HWE-Bench). TitanBench, by contrast, grades completion-based RTL and stimulus tasks against OpenTitan's production scoreboards, assertions, and functional coverage.
## Composition
The dataset has two top-level directories, each split into stimulus and RTL families:
```
tasks/                                   108 benchmark tasks
├── stimulus/<ip>-tp-<testpoint>/        x 50  stimulus tasks
└── rtl/<ip>-rtl/, <ip>-<module>-rtl/    x 58  RTL tasks

images/                                  docker build recipes for the per-IP / per-task base images
├── stimulus/<ip>/                       x 14  builds titanbench-<ip>:latest
└── rtl/<task>/                          x 31  builds titanbench-<task>-rtl:latest
                                               (the other 27 RTL tasks reuse a
                                               sibling's base via FROM)
```
| Family | Count | Pattern | What the agent does |
|---|---|---|---|
| Stimulus generation | 50 | `tasks/stimulus/<ip>-tp-<testpoint>/` | Writes a UVM `*_vseq.sv` for a redacted testpoint |
| RTL module completion | 58 | `tasks/rtl/<ip>-rtl/`, `tasks/rtl/<ip>-<module>-rtl/` | Reconstructs synthesizable SystemVerilog from a stubbed module + spec |
images/ contains build infrastructure, not benchmark tasks. The harness consumes it via build_images.sh to produce the per-IP and per-task base images that each task's environment/Dockerfile references via FROM. The stimulus and RTL families are independent benchmarks; reviewers may evaluate either independently.
## IPs covered
14 OpenTitan IPs: adc_ctrl, aes, alert_handler, aon_timer, csrng, dma, gpio, hmac, i2c, keymgr, rv_timer, sram_ctrl, uart, usbdev. The subset prioritises industry-standard archetypes (serial protocols, cryptographic datapaths, DMA, memory control, timers, entropy) whose specifications generalise across SoC designs. We exclude OpenTitan-specific blocks (lc_ctrl, rom_ctrl, rv_dm, keymgr_dpe) and niche accelerators (otbn, ascon).
## Per-task statistics
Detailed per-task statistics are in the paper. Headline ranges:
| | Stimulus (n=50) | RTL (n=58) |
|---|---|---|
| Target SLOC | 46 to 597 (mean 228) | 30 to 6484 (mean 673) |
| Registers / module ports | 0 to 17 registers | 5 to 713 ports |
| Visible RTL files | n/a | 6 to 134 (mean 84) |
| Visible spec / register files | n/a | 2 to 14 (mean 7) |
| Oracle coverage bins | 0 to 3096 (mean 348) | n/a |
| Agent runtime budget | 3600 s | 3600 s |
## Task layout (harbor convention)
```
<task-id>/
├── instruction.md                  natural-language prompt for the agent
├── task.toml                       harbor metadata (timeouts, env, tags)
├── environment/
│   ├── Dockerfile                  per-trial container (FROM the per-IP base image)
│   └── docker-compose.yaml
├── solution/                       reference / oracle answer (hidden from agent at runtime)
│   ├── solve.sh                    deploys the oracle into the workspace
│   ├── golden_sequences/           *.sv (stimulus tasks)
│   └── *.sv                        golden RTL (RTL tasks)
└── tests/
    ├── test.sh                     verifier (runs dvsim, scores correctness + coverage)
    └── oracle_coverage_bins.json   (stimulus tasks only)
```
## Task family 1: Stimulus generation
For a given IP, a stimulus task targets one testpoint from the OpenTitan testplan. The agent is dropped into a container built from an ablated copy of the IP's DV environment. Every reference sequence except the base and common ones is deleted, the vseq_list and sim_cfg.hjson are emptied accordingly, and coverage-exclusion files are removed so the agent cannot infer which bins are unreachable. The testbench surface (UVM env, scoreboard, RAL model, virtual interfaces) remains visible because navigating that surface is part of the task: the sequence must extend the right base class, access the right registers, and drive the DUT in a way the scoreboard can predict.
The agent receives an `instruction.md` stating the testpoint's behavioural intent (Stimulus and Checks lists) and a build entry point. It must produce two artefacts: a virtual sequence (`<task>_vseq.sv`) and a per-task simulation config (`agent_sim_cfg.hjson`).
**Scoring.** The verifier inserts both files into a clean OpenTitan checkout, registers the sequence in the IP's `vseq_list` and FuseSoC manifest, merges the per-task config into the master `sim_cfg.hjson`, and runs Xcelium under `--build-mode cov --reseed 1`. Reward is the uniform average of correctness and oracle-normalised coverage:
```
p = 1 if the test compiles, simulates, and reports zero scoreboard failures, else 0
c = |B_oracle ∩ B_agent| / |B_oracle|        (cover-bin intersection ratio)
R = 0.5·p + 0.5·c                            ∈ [0, 1]
```
B_oracle is the bin set hit by the upstream golden sequence at the same seed (precomputed and shipped per task as tests/oracle_coverage_bins.json). Coverage is normalised against oracle bins, not the full testbench universe. A single testpoint is not expected to close every bin.
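For re-implementers, here is a minimal Python sketch of this reward computation. It assumes `oracle_coverage_bins.json` is a flat JSON list of bin names and that the caller has already parsed the agent's hit bins out of the vManager coverage report; the function name and signature are illustrative, not the harness API.

```python
import json

def stimulus_reward(oracle_bins_path: str, agent_bins: set[str],
                    sim_passed: bool) -> float:
    """Uniform average of binary correctness and oracle-normalised coverage."""
    with open(oracle_bins_path) as f:
        oracle_bins = set(json.load(f))  # assumed format: flat list of bin names

    # p: compiles, simulates, and reports zero scoreboard failures
    p = 1.0 if sim_passed else 0.0
    # c: intersection ratio against the oracle bin set, not the full
    # testbench coverage universe
    c = len(oracle_bins & agent_bins) / len(oracle_bins) if oracle_bins else 0.0
    return 0.5 * p + 0.5 * c  # R in [0, 1]
```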
### Example: aes-tp-core_fi
```markdown
# AES: core_fi

> Fault injection targeting the AES core block. Verify that injected faults are
> detected and trigger appropriate error responses.

Write a UVM sequence that extends `aes_base_vseq` to verify the above.

**Output:** `/home/dev/workspace/sequences/aes_core_fi_vseq.sv`
and `/home/dev/workspace/agent_sim_cfg.hjson`

**Run:** `/home/dev/scripts/run_test.sh aes_core_fi`
```
The agent must understand AES fault-injection semantics, locate `aes_base_vseq` and the relevant error-status registers, write a sequence that injects a fault and observes the resulting error response, and produce a `sim_cfg.hjson` selecting the new test name. The verifier scores against 89 oracle coverage bins.
## Task family 2: RTL module completion
Each RTL task targets one synthesizable SystemVerilog module (e.g. `aes_cipher_core`) or a full IP block. The agent works in a containerised workspace with `rtl/`, `pkg/`, `prim/`, `tlul/`, `spec/`, and `run/` directories. All sibling modules within the IP are pre-implemented with their oracle versions; the target module is shipped as a stub (license header, module declaration, and port list retained, body removed). The accompanying `instruction.md` specifies the file path of the stub, lists key behaviours, fixes any hierarchical instance names the testbench refers to, and points to the build entry point (`make compile` in `run/`).
The UVM testbench is hidden from the agent at generation time to prevent reverse-engineering the spec by inspecting test expectations. To compensate, every external specification linked from the OpenTitan documentation is bundled locally under spec/, alongside the relevant typedef files. Internet access is disabled in the container; we observed agents attempting to fetch the upstream OpenTitan repo from GitHub during early experiments.
**Scoring.** The verifier copies the candidate RTL into a clean upstream checkout, commits, and runs the IP's unmodified dvsim simulation config under Xcelium with `--reseed 1`. The test list is filtered to the subset of upstream tests that score 1.0 against the oracle RTL at `reseed=1`, excluding (i) shared CIP infrastructure tests, (ii) security-countermeasure tests requiring bind files we do not ship, (iii) random-reset stress tests that are flaky at a single seed, and (iv) any remaining test whose oracle run does not converge to 1.0. Reward is the fraction of the surviving tests that pass:
```
R = n_passed / n_total        ∈ [0, 1]
```
By construction of this filter, the oracle RTL achieves a perfect score, so any drop below 1.0 is attributable to the agent's implementation.
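The same computation in a hedged Python sketch, assuming per-test pass/fail outcomes have already been parsed out of the dvsim report (names and data shapes here are illustrative):

```python
def rtl_reward(test_passed: dict[str, bool], oracle_passing: set[str]) -> float:
    """Fraction of surviving upstream tests that pass on the candidate RTL.

    oracle_passing: tests that score 1.0 against the oracle RTL at reseed=1,
    after the four exclusion rules described above.
    """
    surviving = [t for t in test_passed if t in oracle_passing]
    if not surviving:
        return 0.0
    return sum(test_passed[t] for t in surviving) / len(surviving)
```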
### Example: aes-rtl (full-IP variant)
```markdown
# RTL Implementation Task: AES Full IP

You are working in `/home/dev/workspace/code/`. The layout is:

    rtl/  -- AES RTL modules (you must implement 24 core modules here)
    prim/ -- OpenTitan primitive library (provided, read-only)
    tlul/ -- TileLink-UL bus modules (provided, read-only)
    pkg/  -- Package files (provided, read-only)
    spec/ -- AES specification (provided, read-only)
    run/  -- Makefile and filelist for compilation

Implement the 24 core AES RTL modules. Stubs with exact port signatures
are provided in rtl/. Read spec/aes_standard.pdf for the AES algorithm
specification.

(... the full instruction lists 24 module names + descriptions across
cipher datapath, control FSMs, and integration utilities. See the
task's instruction.md for the complete spec.)

Build commands:

    cd /home/dev/workspace/code/run
    make compile
```
The full `instruction.md` walks through the 24 modules grouped by datapath, control FSMs, and integration utilities; describes the sparse-encoding (sp2v) FSM convention OpenTitan uses for fault tolerance; and lists boilerplate the agent should not re-implement. Per-module `<ip>-<module>-rtl/` variants target a single module each, with the rest of the IP pre-implemented.
## Headline results
We evaluated three frontier agentic systems with 5 trials per task at `--reseed 1`:
| Model | Harness | Stimulus mean reward | RTL mean reward |
|---|---|---|---|
| Anthropic Claude Opus 4.6 | claude-code | TBD | TBD |
| OpenAI GPT-5.3 Codex | codex (no web search) | TBD | TBD |
| Moonshot Kimi K2.5 | mini-swe-agent (Fireworks) | TBD | TBD |
No model approaches saturation on either benchmark. Reward declines monotonically with the static complexity score on both task families (see paper §4). On stimulus generation, the testbench-surface axis (the size and structure of the surrounding UVM environment) shows the strongest single-axis correlation with reward, exceeding the SLOC axis. The numbers above will be filled in once the model sweep over the six task swap-ins completes.
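For readers reproducing that analysis, the single-axis correlation is straightforward to check with a rank correlation over per-task means. The sketch below assumes a hypothetical per-task results table with one column per complexity axis; the file and column names are not part of the dataset.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per task, mean reward over the 5 trials,
# plus static complexity scores per axis.
df = pd.read_csv("per_task_results.csv")

for axis in ["target_sloc", "tb_surface"]:
    rho, p = spearmanr(df[axis], df["mean_reward"])
    print(f"{axis}: Spearman rho = {rho:+.2f} (p = {p:.3g})")
```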
## Running the benchmark
This dataset is the evaluation contract: task definitions, oracles, verifier specs. Running it requires the harness repo, which provides the OpenTitan submodule, agent runners, and Cadence tool integration:
```bash
# 1. Download the harness tarball from <anonymous-repo-url>, extract, cd in.

# 2. Fetch OpenTitan at the pinned SHA
bash fetch_opentitan.sh

# 3. Pull this dataset (populates tasks/ and images/ at the harness root)
hf download billshockley/titanbench --repo-type dataset \
    --include "tasks/*" --include "images/*" --local-dir .

# 4. Build base images (titanbench-base + 14 stimulus + 31 RTL)
bash build_images.sh

# 5. Run a single task end-to-end
export XCELIUM_PATH=/opt/cadence/installs/XCELIUM2403
export VMANAGER_PATH=/opt/cadence/installs/VMANAGER2403
export LM_LICENSE_FILE=5280@your-license-server
export PYTHONPATH=runners

# Stimulus: -p tasks/stimulus
harbor run -p tasks/stimulus -a claude-code \
    -m anthropic/claude-opus-4-6 \
    -t aes-tp-core_fi --debug -y

# RTL: -p tasks/rtl
harbor run -p tasks/rtl -a claude-code \
    -m anthropic/claude-opus-4-6 \
    -t aes-rtl --debug -y
```
**Required commercial tools (not provided):** Cadence Xcelium 2403+ and Cadence vManager / IMC 2403+. The Xcelium dependency is a hard prerequisite: UVM coverage scoring at the fidelity TitanBench requires is not currently available on open-source simulators.
## Source provenance
- Derived from: OpenTitan (Apache-2.0). The `opentitan/` submodule is pinned to a specific upstream SHA in the harness repo; that SHA is the reproducibility contract.
- Selection: Stimulus testpoints were drawn from each IP's machine-readable `data/<ip>_testplan.hjson`. The 50-task subset was selected per-IP by ranking on `z(golden_loc) + z(registers_touched)` (z-scores computed independently within each IP) and keeping the top-K to match a pre-allocated budget summing to 50; a sketch of this ranking appears below. RTL tasks include one full-IP entry per IP plus per-module sub-tasks for IPs whose top-level module is too large to attempt as a single task. Tasks where the upstream golden artefact does not reach reward = 1.0 at `reseed=1` were excluded; replacement candidates were oracle-validated before inclusion.
- No human annotation: TitanBench is constructed mechanically from upstream artefacts. The `instruction.md` files for stimulus tasks paraphrase the testpoint's existing testplan entry; for RTL tasks they list module names and known interface conventions.
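A minimal sketch of that per-IP ranking, assuming a candidate table with golden-sequence SLOC and touched-register counts plus a pre-allocated per-IP budget. All names here are illustrative, and "top-K" is taken to mean the largest combined z-score.

```python
import pandas as pd

def select_stimulus_tasks(cands: pd.DataFrame, budget: dict[str, int]) -> pd.DataFrame:
    """Keep the top-K candidates per IP by z(golden_loc) + z(registers_touched)."""
    def z(s: pd.Series) -> pd.Series:
        # z-scores computed independently within each IP
        return (s - s.mean()) / s.std(ddof=0)

    def top_k(group: pd.DataFrame) -> pd.DataFrame:
        score = z(group["golden_loc"]) + z(group["registers_touched"])
        return group.assign(score=score).nlargest(budget[group.name], "score")

    # budget values are pre-allocated per IP and sum to 50
    return cands.groupby("ip", group_keys=False).apply(top_k)
```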
## Limitations & Responsible AI
- Language: English instructions only; no multilingual variants.
- Domain restrictions: OpenTitan-specific. Findings on TitanBench should not be assumed to generalise to other RTL families (DSPs, ML accelerators, GPUs) without further evaluation.
- Closed-source tooling: Cadence Xcelium is required to run the verifier. This is a substantial barrier for academic reviewers and we treat it as a Limitation rather than a Mitigation.
- Coverage methodology: We score normalised functional + line/toggle/FSM coverage as exposed by Xcelium / vManager. Formal verification, equivalence checking, and PPA (power, performance, area) are not part of the reward. A perfect TitanBench score does not certify a design for tape-out.
- Selection bias: Tasks skew toward security-critical IPs (AES, KEYMGR, CSRNG, HMAC) because those have the most extensive UVM environments in OpenTitan. Scores on the simpler peripherals should not be taken as predictive of performance on those blocks.
- Contamination risk: OpenTitan source is public. We mitigate by hiding the testbench at RTL-task time, disabling internet in the container, and grading on execution rather than string match, but we cannot fully exclude memorisation effects. Future versions may use unreleased OpenTitan revisions.
- No personal or sensitive data: The dataset is synthetic hardware-engineering content only.
- Synthetic data: false. All artefacts derive from OpenTitan's human-engineered RTL and DV environment.
- Intended use: Comparing agent capability on hardware DV / RTL completion under a controlled budget. Not validated for: model training, formal verification, generalisation to non-UVM flows, or generalisation outside OpenTitan-style CIP designs.
- Social impact: Positive, as a standardised public benchmark for LLM-driven HDL/DV. Negative, as risk of overstating production-readiness if results are reported without the limitations above.
## Maintenance
- Maintainer: TitanBench Team (see citation below for current authorship).
- Issues / PRs: <>.
- Versioning: The dataset is versioned via git commits on this HF repo. The harness repo's submodule pin (`opentitan/`) and build flags determine which tests run; a fixed pair of (HF SHA, harness SHA) is the reproducibility contract.
- Plan: Frozen at submission time for the NeurIPS review window. Post-acceptance updates are batched into tagged releases (e.g. `v1.0`, `v1.1`).
## Citation
```bibtex
@misc{titanbench_2026,
  title={TitanBench: A Benchmark for Agentic RTL Design and Verification},
  author={Anonymous},
  year={2026},
  url={https://huggingface.co/datasets/billshockley/titanbench}
}
```
OpenTitan should be cited separately if you use the dataset in published work; see the OpenTitan project for the canonical bibliographic entry.
## License
Apache-2.0. Tasks derive from the upstream OpenTitan project (also Apache-2.0). See LICENSE for the full text.