dddanielddd Cursor committed
Commit ef02ccb · 1 Parent(s): 8abaf02

commit CMPhysBench

Co-authored-by: Cursor <cursoragent@cursor.com>

Files changed (2):
  1. CMPhysBench.json +0 -0
  2. README.md +39 -0
README.md ADDED
---
language:
- en
license: apache-2.0
size_categories:
- n=520
task_categories:
- question-answering
- text-generation
pretty_name: CMPhysBench
tags:
- Condensed Matter Physics
- physics
- benchmark
---

# CMPhysBench: A Benchmark for Evaluating Large Language Models in Condensed Matter Physics

> 🎉🎉🎉 This paper has been accepted at ICLR 2026.

[![Paper](https://img.shields.io/badge/Paper-B31B1B?logo=arxiv)](https://arxiv.org/abs/2508.18124)&nbsp;&nbsp;&nbsp;[![Code](https://img.shields.io/badge/Code-8A2BE2?logo=github)](https://github.com/CMPhysBench/CMPhysBench)&nbsp;&nbsp;&nbsp;[![Data](https://img.shields.io/badge/Data-FFD700?logo=huggingface)](https://huggingface.co/datasets/weidawang/CMPhysBench)&nbsp;&nbsp;&nbsp;[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](https://github.com/CMPhysBench/CMPhysBench/blob/main/LICENSE)

We introduce **CMPhysBench**, a novel **Bench**mark designed to assess the proficiency of Large Language Models (LLMs) in **C**ondensed **M**atter **Phys**ics. CMPhysBench comprises more than 520 meticulously curated graduate-level questions covering both representative subfields and foundational theoretical frameworks of condensed matter physics, such as magnetism, superconductivity, and strongly correlated systems. To ensure a deep understanding of the problem-solving process, we focus exclusively on calculation problems, requiring LLMs to independently generate comprehensive solutions. Leveraging tree-based representations of expressions, we also introduce the Scalable Expression Edit Distance (SEED) score, which awards fine-grained (non-binary) partial credit and yields a more accurate assessment of the similarity between a prediction and the ground truth. Our results show that even the best model, Grok-4, reaches only an average SEED score of 36 and 28% accuracy on CMPhysBench, underscoring a significant capability gap, especially in this practical, frontier domain relative to traditional physics.
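
For intuition, here is a minimal sketch of one way a SEED-style scorer can be built: parse both answers into expression trees, compute a tree edit distance, and normalize it into partial credit. This is an illustrative sketch, not the authors' implementation (see the linked code repository for that); it assumes `sympy` for parsing and the third-party `zss` package for Zhang-Shasha tree edit distance, and the size-based normalization is a placeholder choice.

```python
# SEED-style scorer sketch (NOT the official implementation).
# Assumes: pip install sympy zss
import sympy
from zss import Node, simple_distance  # Zhang-Shasha tree edit distance

def to_tree(expr) -> Node:
    """Convert a SymPy expression into a zss tree; leaves keep their value."""
    label = expr.func.__name__ if expr.args else str(expr)
    node = Node(label)
    for arg in expr.args:
        node.addkid(to_tree(arg))
    return node

def tree_size(node: Node) -> int:
    return 1 + sum(tree_size(c) for c in node.children)

def seed_style_score(pred: str, truth: str) -> float:
    """Map tree edit distance to a 0-100 partial-credit score."""
    p_expr, t_expr = sympy.sympify(pred), sympy.sympify(truth)
    if sympy.simplify(p_expr - t_expr) == 0:
        return 100.0  # symbolically equivalent: full credit
    p_tree, t_tree = to_tree(p_expr), to_tree(t_expr)
    dist = simple_distance(p_tree, t_tree)
    # Normalizing by the larger tree keeps the score in [0, 100];
    # the paper's exact normalization may differ.
    return max(0.0, 100.0 * (1.0 - dist / max(tree_size(p_tree), tree_size(t_tree))))

# A near-miss answer (wrong coefficient) earns partial rather than zero credit:
print(seed_style_score("h**2*k**2/(2*m)", "h**2*k**2/(4*m)"))
```

The point of the tree view is that a single wrong node (a coefficient, an exponent) costs one edit rather than invalidating the whole answer, which is exactly the partial credit a binary exact-match metric cannot give.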

<div align="center">
<img src="https://raw.githubusercontent.com/CMPhysBench/CMPhysBench/main/imgs/CMPhysBench.png" width="1000"/>
</div>
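
To inspect the data itself, the snippet below is a minimal loading sketch, not an official recipe: it assumes the repository's `CMPhysBench.json` has been downloaded locally and is readable by the Hugging Face `datasets` JSON loader, and it prints the schema rather than assuming field names.

```python
# Loading sketch (assumed workflow, not an official snippet).
# Assumes: pip install datasets, with CMPhysBench.json in the working directory.
from datasets import load_dataset

ds = load_dataset("json", data_files="CMPhysBench.json", split="train")
print(len(ds))      # the description above reports 520+ questions
print(ds.features)  # inspect the actual schema instead of guessing field names
print(ds[0])        # one graduate-level calculation problem
```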

## Citations

```bibtex
@article{wang2025cmphysbench,
  title={CMPhysBench: A Benchmark for Evaluating Large Language Models in Condensed Matter Physics},
  author={Wang, Weida and Huang, Dongchen and Li, Jiatong and Yang, Tengchao and Zheng, Ziyang and Zhang, Di and Han, Dong and Chen, Benteng and Luo, Binzhao and Liu, Zhiyu and others},
  journal={arXiv preprint arXiv:2508.18124},
  year={2025}
}
```