---
license: mit
language:
- en
tags:
- physics
- cmt
pretty_name: CMT Benchmark
---
# CMT-Benchmark

**CMT-Benchmark** is a specialized dataset of 50 original, research-level problems in Condensed Matter Theory (CMT) designed to evaluate the scientific reasoning, analytical, and computational capabilities of Large Language Models (LLMs).

## Details
### Authors
Haining Pan, James V. Roggeveen, Erez Berg, Juan Carrasquilla, Debanjan Chowdhury, Surya Ganguli, Federico Ghimenti, Juraj Hasik, Henry Hunt, Hong-Chen Jiang, Mason Kamb, Ying-Jer Kao, Ehsan Khatami, Michael J. Lawler, Di Luo, Titus Neupert, Xiaoliang Qi, Michael P. Brenner, Eun-Ah Kim

### Description

Unlike traditional textbook-based benchmarks, CMT-Benchmark consists of bespoke problems authored by a panel of experts. The problems are designed to test whether an AI can function as a scientific research assistant, requiring the synthesis of materials knowledge with theoretical principles. The dataset covers seven major computational and theoretical methods, including Hartree-Fock, Exact Diagonalization, and Quantum Monte Carlo.

* **Funded by:** NSF (Awards 2433348, OAC-2118310, DMR-2237522), US-ONR, the Swiss National Science Foundation, NTT Research, and the U.S. Department of Energy.
* **Language:** English.

### Sources

* **Dataset:** [Hugging Face Repository](https://huggingface.co/datasets/AnonBenchmark322/benchmark_data)
* **Evaluation Code:** [GitHub Repository](https://github.com/JamesRoggeveen/cmt_benchmark_data)
* **Paper:** [CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers](https://arxiv.org/abs/2510.05228)

## Structure

The dataset contains 50 problems, categorized by theoretical/computational method:

* **Exact Diagonalization (ED):** 16% (8 problems)
* **Statistical Mechanics (SM):** 12% (6 problems)
* **Quantum Monte Carlo (QMC):** 12% (6 problems)
* **Hartree-Fock (HF):** 10% (5 problems)
* **DMRG:** 8% (4 problems)
* **PEPS:** 6% (3 problems)
* **Variational Monte Carlo (VMC):** 4% (2 problems)
* **Other:** 32% (16 problems)

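The percentages above are simply each method's count over the 50 problems. A quick arithmetic sanity check (a sketch, not part of the dataset's tooling):

```python
# Per-method problem counts from the breakdown above.
counts = {
    "ED": 8, "SM": 6, "QMC": 6, "HF": 5,
    "DMRG": 4, "PEPS": 3, "VMC": 2, "Other": 16,
}

total = sum(counts.values())
assert total == 50  # counts cover all 50 problems

# Each listed percentage is count / total * 100.
percentages = {method: 100 * n / total for method, n in counts.items()}
print(percentages["ED"])     # 16.0
print(percentages["Other"])  # 32.0
```
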
## Creation
Problems were authored in a collaborative Google Sheets environment using a custom-built extension. Authors iteratively refined problems to identify gaps in LLM reasoning.
* **Technical Note on Infrastructure:** The custom Google Sheets integration and the LaTeX-to-SymPy parsing infrastructure used for automated grading are described in further detail in [Roggeveen et al. (2025)](https://doi.org/10.48550/arXiv.2505.11774).

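The actual pipeline grades answers by parsing LaTeX into SymPy and checking equivalence against the reference (see Roggeveen et al., 2025). As a simplified, hypothetical sketch of the underlying idea, one can spot-check whether two expressions agree at random sample points, using only the standard library and plain Python callables in place of parsed LaTeX:

```python
import math
import random

def numerically_equivalent(f, g, trials=200, tol=1e-9):
    """Spot-check whether two single-variable expressions agree.

    Simplified sketch of equivalence grading: evaluate both callables
    at random points and compare. (The real pipeline works symbolically
    on parsed LaTeX; this is only an illustration of the idea.)
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    for _ in range(trials):
        x = rng.uniform(-3.0, 3.0)
        if not math.isclose(f(x), g(x), rel_tol=tol, abs_tol=tol):
            return False
    return True

# Equivalent forms of the same answer pass; distinct answers fail.
assert numerically_equivalent(lambda x: math.sin(x)**2 + math.cos(x)**2,
                              lambda x: 1.0)
assert numerically_equivalent(lambda x: (x + 1)**2,
                              lambda x: x**2 + 2*x + 1)
assert not numerically_equivalent(lambda x: x**2, lambda x: x**3)
```
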
## Model Performance Summary

The following table summarizes the overall Pass@1 rates (%) as reported in the paper.

| Model | Overall Pass@1 (%) |
| --- | --- |
| GPT-4o | 2.0 |
| GPT-4.1 | 4.0 |
| GPT-5 | 30.0 |
| GPT-5-mini | 24.0 |
| GPT-5-nano | 14.0 |
| GPT-o3 | 26.0 |
| GPT-o4-mini | 18.0 |
| Gemini 2.0 Flash | 10.0 |
| Gemini 2.5 Flash | 4.0 |
| Gemini 2.5 Pro | 14.0 |
| Claude 3.7 Sonnet | 6.0 |
| Claude 4.1 Sonnet | 2.0 |
| Claude 4.0 Sonnet | 6.0 |
| Claude 4.1 Opus | 8.0 |
| Claude 4.0 Opus | 10.0 |
| DeepSeek v3 | 4.0 |
| LLaMA Maverick | 12.0 |

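Pass@1 is conventionally the fraction of problems solved on a single attempt; the standard unbiased pass@k estimator of Chen et al. (2021) reduces to exactly that when one sample is drawn per problem. A minimal sketch (the paper's exact evaluation protocol may differ):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that
    at least one of k draws from n samples, of which c are correct,
    is correct."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single sample per problem (n = k = 1), pass@1 is just the
# fraction of problems solved: e.g. 15 of 50 solved -> 30.0%.
per_problem = [pass_at_k(1, c, 1) for c in [1] * 15 + [0] * 35]
print(100 * sum(per_problem) / len(per_problem))  # 30.0
```
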
## Citation

**BibTeX:**

```bibtex
@misc{pan2025cmtbenchmark,
  doi = {10.48550/ARXIV.2510.05228},
  url = {https://arxiv.org/abs/2510.05228},
  author = {Pan, Haining and Roggeveen, James V. and Berg, Erez and Carrasquilla, Juan and Chowdhury, Debanjan and Ganguli, Surya and Ghimenti, Federico and Hasik, Juraj and Hunt, Henry and Jiang, Hong-Chen and Kamb, Mason and Kao, Ying-Jer and Khatami, Ehsan and Lawler, Michael J. and Luo, Di and Neupert, Titus and Qi, Xiaoliang and Brenner, Michael P. and Kim, Eun-Ah},
  keywords = {Machine Learning (cs.LG), Artificial Intelligence (cs.AI), FOS: Computer and information sciences},
  title = {CMT-Benchmark: A Benchmark for Condensed Matter Theory Built by Expert Researchers},
  publisher = {arXiv},
  year = {2025},
  copyright = {Creative Commons Zero v1.0 Universal}
}
```