# Verifier Challenge Traces — v0.1
Ground-truth-labeled GPU telemetry from real 2×H100 workloads, with honest and adversarial phases. Companion dataset to the Inspector Agents schema-layer prototype: this dataset addresses the translator-layer gap — given raw telemetry from a prover, can a verifier infer what's actually running, even when the prover's labels lie?
v0.1 is a label-flip benchmark. Adversarial phases here perturb the claimed labels but not the raw telemetry. v0.2 will add translator-layer tampering (subsetted NCCL logs, fake-NCCL-during-inference, sub-second interleaved train/infer) — sketches in the source repo's `followup-adv-tests.md`.
## Quickstart
The dataset is an artifact collection, not a single flat table. The two HF-viewable configs are convenience accessors; most users will iterate over the per-phase directories directly.
```python
from huggingface_hub import snapshot_download
local = snapshot_download("jasminexli/verifier-challenge-traces", repo_type="dataset")

# Or load just the per-second nvidia-smi power/util via the viewer config:
from datasets import load_dataset
nvsmi = load_dataset("jasminexli/verifier-challenge-traces", "nvsmi", split="train")
labels = load_dataset("jasminexli/verifier-challenge-traces", "labels", split="train")
```
## Layout
Two hosts, each running the full 8-phase schedule independently and concurrently on RunPod 2×H100 SXM:
```
runs/<UTC_timestamp>-<runpod_pod_id>/
├── provenance.json            # host/GPU/python/git/clock at start AND end
├── workload_labels.jsonl      # one start/end record per phase (claimed vs truth)
├── workload.log               # orchestrator stdout
├── checkpoints/               # gpt_tiny.pt, gpt_small.pt
└── phases/<phase_id>/
    ├── nvsmi.csv              # 1 Hz GPU power/util/mem/clocks/temp (proper CSV)
    ├── dcgm.csv               # 10 Hz dcgmi dmon — see note below
    ├── nccl_<host>_<pid>.log.gz  # one per rank, NCCL_DEBUG=INFO COLL+INIT, gzipped
    ├── netdev.log.gz          # /proc/net/dev @ 1 Hz, gzipped
    ├── stdout.log
    └── stderr.log
```
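Given that layout, walking the per-phase directories is straightforward. A minimal sketch (the helper name and use of `pathlib` are illustrative, not part of the dataset's tooling):

```python
from pathlib import Path

def iter_phases(dataset_root):
    """Yield (run_id, phase_id, phase_dir) for every captured phase.

    Assumes the runs/<run_id>/phases/<phase_id>/ layout shown above;
    this helper is a sketch, not a published API.
    """
    for run_dir in sorted((Path(dataset_root) / "runs").iterdir()):
        phases = run_dir / "phases"
        if not phases.is_dir():
            continue
        for phase_dir in sorted(phases.iterdir()):
            if phase_dir.is_dir():
                yield run_dir.name, phase_dir.name, phase_dir
```

From each yielded `phase_dir` you can then read `nvsmi.csv` with any CSV reader; the gzipped logs open with the standard `gzip` module.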
**Note on `dcgm.csv`:** v0.1 wrote DCGM telemetry from `dcgmi dmon` to a `.csv` extension, but the format is actually space-separated text with repeating `#Entity` header rows (closer to `top` output). It is not standard CSV; HF's auto-viewer cannot parse it. Parse with whitespace splitting and skip lines starting with `#Entity` or `ID`. v0.2 will rename it to `dcgm.dmon`.
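A tolerant parser for that dmon text is short. This sketch assumes only what the note states (whitespace-separated rows, repeating header lines starting with `#Entity` or `ID`) and returns raw token rows, since the exact field list captured is not specified here:

```python
def parse_dmon(text):
    """Split dcgmi-dmon-style output into token rows.

    Skips blank lines and the repeating header/units lines that start
    with '#Entity' or 'ID'. Mapping tokens to field names is left to
    the caller, since it depends on the field list used at capture time.
    """
    rows = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#Entity") or line.startswith("ID"):
            continue
        rows.append(line.split())
    return rows
```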
## Phase schedule
| # | Phase ID | Duration | Truth op | Claimed op | Notes |
|---|---|---|---|---|---|
| 1 | honest_pretrain_tiny | 30 min | training | training | gpt-tiny pretrain on TinyStories |
| 2 | honest_pretrain_small | 30 min | training | training | gpt-small (GPT-2 Small) pretrain |
| 3 | honest_inference_small | 15 min | inference | inference | loads phase-2 ckpt, no_grad gen |
| 4 | honest_finetune_tiny | 20 min | training | training | gpt-tiny finetune on WikiText, frozen embed |
| 5 | idle | 5 min | idle | idle | confound check |
| 6 | adv_train_as_infer | 15 min | training | inference | P1 attack — workload identical to phase 2, label lies |
| 7 | adv_big_as_small | 15 min | training | training | P2 attack — runs gpt-small, claims gpt-tiny |
| 8 | adv_finetune_as_pretrain | 15 min | training | training | P3 attack — workload identical to phase 4, claims pretrain |
Phases 6, 7, 8 are the lies. The lie lives in `claimed.*` of the corresponding `workload_labels.jsonl` start record; `truth.*` is what actually ran.
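Recovering the lying phases programmatically is a short pass over the start records. This sketch assumes only the fields shown above (one JSON object per line, with `event`, `phase_id`, `claimed`, `truth`):

```python
import json

def lying_phases(jsonl_text):
    """Return phase_ids whose start record claims something other than truth."""
    lies = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        rec = json.loads(line)
        if rec.get("event") == "start" and rec.get("claimed") != rec.get("truth"):
            lies.append(rec["phase_id"])
    return lies
```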
## Evaluation protocol — leave-one-host-out
The whole point of running multiple hosts in parallel is to make cross-host generalization measurable. The recommended primary metric is leave-one-host-out (LOHO) AUROC, not pooled.
For each property:
- Train on data from N-1 hosts (all phases included).
- Test on the held-out host.
- Repeat for every host; report mean AUROC and per-split spread.
Per-host evaluation (train and test on the same host) is trivial — the verifier wins by learning the host's fingerprint. The cross-host floor is the actual scientific bar. If a verifier's pooled AUROC is much higher than its LOHO AUROC, it has overfit to host fingerprint, and the gap is itself a useful number to publish.
Baseline (mean nvsmi power per phase + 4 other simple features → logistic regression, P1) achieves mean LOHO AUROC = 1.000 on this dataset — i.e. the simplest possible features trivially separate training from inference. This is the v0.1 floor; serious verifiers should target much harder evaluations and add adversarial perturbations on top.
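The LOHO loop itself is small. A dependency-free sketch for the single-scalar-score case (e.g. mean nvsmi power per phase as the score, training-vs-inference as the label); the AUROC here is the standard rank-based (Mann–Whitney) form, and with a learned verifier you would fit on the N-1 remaining hosts inside the loop before scoring:

```python
def auroc(scores, labels):
    """Rank-based AUROC: fraction of (positive, negative) pairs ranked
    correctly, counting ties as half. Labels are 1/0."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def loho_auroc(samples):
    """samples: iterable of (host_id, score, label) triples.

    For a raw scalar feature no fitting is needed (AUROC depends only on
    ranks), so each split just scores the held-out host; a learned model
    would be fit on the other hosts at the top of this loop.
    """
    samples = list(samples)
    hosts = sorted({h for h, _, _ in samples})
    splits = []
    for held_out in hosts:
        test = [(s, y) for h, s, y in samples if h == held_out]
        splits.append(auroc([s for s, _ in test], [y for _, y in test]))
    return sum(splits) / len(splits), splits
```

With perfectly separable features (as the v0.1 baseline finds for P1), every split returns 1.0 and the mean is 1.0; the per-split list is the "spread" the protocol asks you to report.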
## Caveats
- n=2 hosts. The cross-host axis exists, but variance estimates are weak. v0.2 will scale to ≥3 hosts in distinct RunPod regions for a stronger generalization signal.
- Single-node, NVLink only. No multi-node fabric, no InfiniBand. NCCL traffic is NVLink P2P, not the topology a frontier verifier will encounter.
- Adversarial subset is ~25% of run time. Benchmark evaluations should be class-balanced or report the imbalance.
- v0.1 adversarial phases mostly perturb workload labels. Deeper translator-layer tampering (subsetted NCCL, fake-NCCL-during-inference, sub-second interleaved train/infer) is in the source repo's `followup-adv-tests.md` and will land in v0.2.
- DCGM availability is host-dependent. When the container is unprivileged, a `dcgm.skipped` marker file is written instead of `dcgm.csv`. v0.1 captured DCGM on both hosts, but downstream code should not assume it.
- Clock skew between hosts is recorded in `provenance.json` (`clock_offset_start_us`, `clock_offset_end_us`). Hosts whose offset shifted >1 s during the run should be flagged unreliable for cross-host temporal alignment; v0.1 hosts were stable.
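That last check can be mechanized directly from the provenance fields; a minimal sketch using the field names and 1 s threshold stated above:

```python
def clock_unreliable(provenance, max_drift_us=1_000_000):
    """True if a host's clock offset shifted by more than max_drift_us
    (default 1 s) between run start and end, per the provenance.json
    fields clock_offset_start_us / clock_offset_end_us."""
    drift = abs(provenance["clock_offset_end_us"]
                - provenance["clock_offset_start_us"])
    return drift > max_drift_us
```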
## Source data not redistributed
The training corpora — TinyStories (CDLA-Sharing 1.0) and WikiText-2 (CC-BY-SA 4.0) — are not shipped in this dataset. Only the telemetry and labels generated from them are. Users who want to reproduce the exact inputs should re-download from HuggingFace (`roneneldan/TinyStories`, `wikitext/wikitext-2-raw-v1`).
## Source repository
Code that produced this dataset (orchestrator, capture wrapper, configs, smoke test): https://github.com/jasonhausenloy/inspector-agents/tree/main/gpu-runs
Companion paper / live demo: https://jason.ml/inspector
## License
Telemetry and labels: CC-BY-4.0. Users assume the licensing of any source corpora they re-download.