Dariusfar committed on
Commit d74b16c · verified · 1 Parent(s): dca968b

update dataset card

Files changed (1):
  1. README.md +123 -62

README.md CHANGED
@@ -1,64 +1,125 @@
  ---
- license: apache-2.0
- dataset_info:
-   features:
-   - name: task_id
-     dtype: string
-   - name: paper_id
-     dtype: string
-   - name: analysis_target
-     dtype: string
-   - name: signal_model
-     dtype: string
-   - name: observable
-     dtype: string
-   - name: observable_pretty
-     dtype: string
-   - name: plot_units
-     dtype: string
-   - name: score_mode
-     dtype: string
-   - name: tolerance
-     dtype: float64
-   - name: walltime
-     dtype: string
-   - name: difficulty
-     dtype: string
-   - name: tags
-     list: string
-   - name: instructions_md
-     dtype: string
-   - name: task_toml
-     dtype: string
-   - name: template_yaml
-     dtype: string
-   - name: n_bins
-     dtype: int64
-   - name: paper_pdf
-     dtype: binary
-   - name: paper_pdf_sha256
-     dtype: string
-   - name: paper_pdf_bytes
-     dtype: int64
-   - name: object_efficiencies
-     list:
-     - name: filename
-       dtype: string
-     - name: data
-       dtype: binary
-     - name: sha256
-       dtype: string
-     - name: size_bytes
-       dtype: int64
-   splits:
-   - name: train
-     num_bytes: 7635049
-     num_examples: 10
-   download_size: 6722274
-   dataset_size: 7635049
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
  ---

  ---
+ license: mit
+ task_categories:
+ - text-generation
+ - question-answering
+ language:
+ - en
+ tags:
+ - physics
+ - high-energy-physics
+ - particle-physics
+ - LHC
+ - CMS
+ - benchmark
+ - agentic
+ - llm-agents
+ - tool-use
+ - simulation
+ pretty_name: Collider-Bench
+ size_categories:
+ - n<1K
  ---
+
+ # Collider-Bench
+
+ **Collider-Bench** is a benchmark for evaluating whether LLM agents can reproduce experimental analyses from the Large Hadron Collider (LHC) using only public papers and open scientific software. Such analyses are often difficult to reproduce because the public toolchain only approximates the software used internally by the experimental collaborations, and the published papers inevitably omit implementation details needed for a faithful reconstruction. Agents must therefore rely on physical reasoning, domain knowledge, and trial and error to fill these gaps. Each task requires the agent to turn a published analysis into an executable simulation-and-selection pipeline and to submit predicted collision-event yields in specified signal regions.
+
+ This HuggingFace dataset hosts the **task corpus only** — the agent-facing instructions, the null-filled HEPData-style template the agent fills in, the CMS paper PDF, and the published object-efficiency maps. The **runtime harness, scorer, and hidden reference values** live in the companion GitHub repository:
+
+ 🔗 **https://github.com/dfaroughy/Collider-Bench**
+
+ The reference yields used by the scorer are deliberately not published here — leaking them would let any LLM that ingests HF datasets memorize the answers and defeat the benchmark's blind-test property.
+
+ ## Quick start
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("Dariusfar/ColliderBench", split="train")
+ print(ds)                             # 10 sim tasks
+ print(ds[0]["task_id"], ds[0]["paper_id"])
+ print(ds[0]["instructions_md"][:400]) # what the agent gets shown
+ print(ds[0]["template_yaml"][:400])   # the null-filled template
+ print(ds[0]["paper_pdf"][:8])         # PDF magic bytes (b'%PDF-1.5')
+ ```
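Each row's binary payloads can be verified offline against the recorded checksum fields (`paper_pdf_sha256`, `paper_pdf_bytes`). A minimal sketch of that check, run here on placeholder bytes rather than a real row (swap in `ds[0]` from `load_dataset` for the real thing):

```python
import hashlib

# Placeholder row; a real one comes from
# load_dataset("Dariusfar/ColliderBench", split="train")[0]
row = {"paper_pdf": b"%PDF-1.5 placeholder bytes"}
row["paper_pdf_sha256"] = hashlib.sha256(row["paper_pdf"]).hexdigest()
row["paper_pdf_bytes"] = len(row["paper_pdf"])

def pdf_is_intact(row) -> bool:
    """Re-hash the stored PDF bytes and compare digest and length
    against the row's recorded checksum fields."""
    return (
        hashlib.sha256(row["paper_pdf"]).hexdigest() == row["paper_pdf_sha256"]
        and len(row["paper_pdf"]) == row["paper_pdf_bytes"]
    )

print(pdf_is_intact(row))  # True for an uncorrupted row
```

The same pattern applies to each entry of `object_efficiencies`, which carries its own `sha256` and `size_bytes`.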
+
+ To actually run an agent against a task and score its submission, install the harness from the GitHub repo:
+
+ ```bash
+ git clone https://github.com/dfaroughy/Collider-Bench.git
+ cd Collider-Bench
+ pip install -e ".[dev]"
+ podman pull ghcr.io/dfaroughy/lhc-bench:latest   # MadGraph + Pythia + Delphes + ROOT image
+ export ANTHROPIC_API_KEY=...                     # or OPENAI_API_KEY / GEMINI_API_KEY / DEEPSEEK_API_KEY
+ scripts/run-agent --config configs/anthropics/claude_sonnet.yaml --task sus-16-046_sim-T5Wg
+ ```
+
+ ## Schema (one row per task)
+
+ | Field | Type | Description |
+ |---|---|---|
+ | `task_id` | string | Canonical task identifier, e.g. `sus-16-046_sim-T5Wg` |
+ | `paper_id` | string | CMS analysis identifier, e.g. `CMS-SUS-16-046` |
+ | `analysis_target` | string | Final state under study (`photons`, `single lepton`, `leptons + jets`) |
+ | `signal_model` | string | SUSY simplified-model name + slice (`T5Wg, high-H_T`, `T2tt, compressed`, …) |
+ | `observable` | string | Observable key as used by the scorer (`STgamma`, `MET`, …) |
+ | `observable_pretty` | string | LaTeX-style label (`S_T^gamma`, `E_T^miss`, `p_T^miss`) |
+ | `plot_units` | string | y-axis units of the histogram (`Events/bin`, `Events/GeV`) |
+ | `score_mode` | string | Scoring mode used by the harness (`shape_norm` for sim tasks) |
+ | `tolerance` | float64 | Per-bin tolerance band used in shape-pass-rate diagnostics |
+ | `walltime` | string | Harness walltime budget (e.g. `2h30m`) |
+ | `difficulty` | string | Rough difficulty tag: `easy` / `medium` / `hard` |
+ | `tags` | seq<string> | Free-form metadata tags |
+ | `instructions_md` | string | The full `TASK.md` text — the agent's primary instructions |
+ | `task_toml` | string | Verbatim `task.toml` content (paper id, observable, walltime, tolerance, …) |
+ | `template_yaml` | string | Null-filled HEPData-style YAML the agent fills with predicted bin values |
+ | `n_bins` | int64 | Total bin count across all dependent variables in the template |
+ | `paper_pdf` | binary | Bytes of the CMS analysis paper (publicly available on CDS/INSPIRE-HEP) |
+ | `paper_pdf_sha256` | string | SHA-256 of `paper_pdf` |
+ | `paper_pdf_bytes` | int64 | Length of `paper_pdf` in bytes |
+ | `object_efficiencies` | seq<{filename, data, sha256, size_bytes}> | CMS public detector-efficiency maps (ROOT files) the agent needs to apply during selection |
+
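Since each row bundles everything the agent is given, it can be unpacked into an on-disk working directory. The layout below is illustrative only (the harness in the GitHub repo defines its own); the field names are the schema's:

```python
import tempfile
from pathlib import Path

def materialize_task(row, root):
    """Unpack one dataset row into <root>/<task_id>/ as the files an agent
    would work from (illustrative layout, not the harness's own)."""
    d = Path(root) / row["task_id"]
    d.mkdir(parents=True, exist_ok=True)
    (d / "TASK.md").write_text(row["instructions_md"])
    (d / "task.toml").write_text(row["task_toml"])
    (d / "template.yaml").write_text(row["template_yaml"])
    (d / "paper.pdf").write_bytes(row["paper_pdf"])
    for eff in row["object_efficiencies"]:  # ROOT efficiency maps
        (d / eff["filename"]).write_bytes(eff["data"])
    return d

# Placeholder row with made-up contents; a real one comes from load_dataset(...)
row = {
    "task_id": "sus-16-046_sim-T5Wg",
    "instructions_md": "# Task\n...",
    "task_toml": 'paper_id = "CMS-SUS-16-046"\n',
    "template_yaml": "dependent_variables: []\n",
    "paper_pdf": b"%PDF-1.5",
    "object_efficiencies": [{"filename": "photon_eff.root", "data": b"\x00"}],
}
task_dir = materialize_task(row, tempfile.mkdtemp())
print(sorted(p.name for p in task_dir.iterdir()))
```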
+
+ ## Task corpus
+
+ | Task id | Analysis target | Signal | Observable | Paper |
+ |---|---|---|---|---|
+ | `sus-16-034_sim-TChiWZ` | leptons + jets | `TChiWZ` | $E_T^{\rm miss}$ | CMS-SUS-16-034 |
+ | `sus-16-046_sim-T5Wg` | photons | `T5Wg` | $S_T^{\gamma}$ | CMS-SUS-16-046 |
+ | `sus-16-046_sim-TChiWg` | photons | `TChiWg` | $S_T^{\gamma}$ | CMS-SUS-16-046 |
+ | `sus-16-047_sim-T5Wg_highHT` | photons | `T5Wg`, high-$H_T$ | $p_T^{\rm miss}$ | CMS-SUS-16-047 |
+ | `sus-16-047_sim-T5Wg_lowHT` | photons | `T5Wg`, low-$H_T$ | $p_T^{\rm miss}$ | CMS-SUS-16-047 |
+ | `sus-16-047_sim-T6gg_highHT` | photons | `T6gg`, high-$H_T$ | $p_T^{\rm miss}$ | CMS-SUS-16-047 |
+ | `sus-16-047_sim-T6gg_lowHT` | photons | `T6gg`, low-$H_T$ | $p_T^{\rm miss}$ | CMS-SUS-16-047 |
+ | `sus-16-051_sim-T2tt_SRG` | single lepton | `T2tt` | $E_T^{\rm miss}$ | CMS-SUS-16-051 |
+ | `sus-16-051_sim-T2bW_SRG` | single lepton | `T2bW` | $E_T^{\rm miss}$ | CMS-SUS-16-051 |
+ | `sus-16-051_sim-T2tt_comp` | single lepton | `T2tt`, compressed | $E_T^{\rm miss}$ | CMS-SUS-16-051 |
+
+ ## Scoring
+
+ Each `sim` task asks the agent to reproduce the published per-bin yield distribution. The primary metric used by the scorer is the relative L² distance
+
+ $$d(\hat y, y^\star) = \sqrt{\sum_k (\hat y_k - y_k^\star)^2 \Big/ \sum_k (y_k^\star)^2}$$
+
+ between the agent's bin yields $\hat y$ and the published reference $y^\star$, plus the integrated yield error $\Delta = |\Sigma\hat y - \Sigma y^\star| / \Sigma y^\star$. Diagnostic metrics (RMSLE, Jensen-Shannon divergence, Baker-Cousins shape p-value) are also computed per run.
+
+ Scoring is offline and deterministic — it does **not** require an LLM. See [`ColliderBench/Evals/`](https://github.com/dfaroughy/Collider-Bench/tree/main/ColliderBench/Evals) in the harness repo.
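The two primary metrics are a few lines of code each. A sketch restating the formulas above (the scorer in the GitHub repo is authoritative; the yields here are made-up toy numbers):

```python
import math

def rel_l2(yhat, ystar):
    """Relative L2 distance between predicted and reference bin yields."""
    num = sum((a - b) ** 2 for a, b in zip(yhat, ystar))
    den = sum(b ** 2 for b in ystar)
    return math.sqrt(num / den)

def integrated_yield_error(yhat, ystar):
    """Relative error on the total (integrated) yield."""
    return abs(sum(yhat) - sum(ystar)) / sum(ystar)

# Toy 4-bin example (not taken from any task)
yhat, ystar = [10.0, 5.0, 2.0, 1.0], [12.0, 5.0, 2.5, 0.8]
print(round(rel_l2(yhat, ystar), 3))                  # 0.156
print(round(integrated_yield_error(yhat, ystar), 3))  # 0.113
```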
+
+ ## Citation
+
+ If you use Collider-Bench in your research, please cite:
+
+ ```
+ @misc{colliderbench2026,
+   title  = {Collider-Bench: A benchmark for LHC analysis recasting by LLM agents},
+   author = {Faroughy, Darius A. and contributors},
+   year   = {2026},
+   url    = {https://huggingface.co/datasets/Dariusfar/ColliderBench},
+ }
+ ```
+
+ …and the four underlying CMS papers (CMS-SUS-16-034, -046, -047, -051) as listed in the GitHub repo's [References section](https://github.com/dfaroughy/Collider-Bench#references).
+
+ ## License
+
+ MIT (matches the GitHub repo). The CMS paper PDFs and detector-efficiency maps are reproduced here as published by the CMS Collaboration under the terms of their respective public-data policies.