Commit b63b7fa (verified, parent 59d0c5c) by zhibei1204: Update README.md
# PhysReason: A Comprehensive Benchmark towards Physics-Based Reasoning

[![arXiv](https://img.shields.io/badge/arXiv-2502.12054-b31b1b.svg)](https://arxiv.org/abs/2502.12054)
[![Dataset](https://img.shields.io/badge/🤗%20Hugging%20Face-Dataset-yellow)](https://huggingface.co/datasets/zhibei1204/PhysReason)
[![Project Page](https://img.shields.io/badge/🌐%20Project-Page-blue)](https://dxzxy12138.github.io/PhysReason/)

> **PhysReason is accepted to the ACL 2025 main conference**

## 📋 Overview

PhysReason is a comprehensive physics-based reasoning benchmark of **1,200 physics problems** spanning multiple domains, with a mix of knowledge-based (25%) and reasoning-based (75%) questions. The benchmark addresses a critical gap in evaluating large language models: physics-based reasoning, which requires applying physics theorems and constraints in complex problem-solving scenarios.

## ✨ Key Features

- **📊 Dataset Size**: 1,200 carefully curated physics problems
- **🎯 Problem Types**: Strategic mix of knowledge-based (25%) and reasoning-based (75%) questions
- **📚 Theorem Coverage**: 147 physics theorems
- **🎨 Visual Content**: 81% of problems include diagrams and visual elements
- **📈 Difficulty Levels**: Four tiers: Knowledge, Easy, Medium, Hard
- **🔄 Step-by-step Solutions**: An average of 8.1 solution steps per problem (15.6 for hard problems)
- **🌍 Multi-modal**: Supports both text and image inputs

## 🔧 Data Collection

Our data collection process is designed to yield high-quality, challenging problems:

- **📖 Sources**: Global college entrance exams and international physics competitions
- **⚙️ Process**: Standardized with the MinerU framework for consistent formatting
- **✅ Quality Control**: Two-phase translation with expert verification
- **🔍 Filtering**: Easily searchable problems are excluded to prevent data leakage
- **📊 Classification**: Difficulty levels assigned by solving-time and theorem-complexity analysis
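The classification step above can be pictured with a small sketch. The thresholds below are illustrative assumptions for the two signals (solving time and theorem count), not the values used for PhysReason:

```python
# Hypothetical difficulty classifier combining solving time and theorem count.
# Thresholds are illustrative only; PhysReason's actual cutoffs are in the paper.

def classify_difficulty(solving_time_min: float, num_theorems: int) -> str:
    """Assign a difficulty tier from estimated solving time and theorem complexity."""
    if num_theorems <= 1 and solving_time_min <= 5:
        return "Knowledge"
    if num_theorems <= 2 and solving_time_min <= 10:
        return "Easy"
    if num_theorems <= 4 and solving_time_min <= 20:
        return "Medium"
    return "Hard"

print(classify_difficulty(3, 1))   # -> Knowledge
print(classify_difficulty(25, 5))  # -> Hard
```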

## 📊 Benchmark Comparison

| Benchmark | Multi-modal | Size | Knowledge | Question Type | Avg. T (Q) | Step-by-step | Avg. T (S) | Avg. S |
|----------------|-------------|------|-----------|---------------|------------|--------------|------------|--------|
| JEEBench | ❌ | 123 | CEE | OE,MC | 169.7 | - | - | - |
| MMLU-Pro | ❌ | 1299 | COL | MC | 52.1 | - | - | - |
| GPQA | ❌ | 227 | PH.D. | OE | 111.4 | ❌ | 197.2 | 3.6 |
| SciEval | ❌ | 1657 | - | OE,MC | 154.5 | - | - | - |
| SciBench | ✅ | 295 | COL | OE | 80.5 | ❌ | 315.9 | 2.8 |
| MMMU | ✅ | 443 | COL | OE,MC | 53.8 | - | - | - |
| ScienceQA | ✅ | 617 | K1-K12 | MC | 13.3 | ❌ | 63.0 | 2.4 |
| OlympiadBench | ✅ | 2334 | COMP | OE | 222.0 | ❌ | 199.8 | 3.7 |
| EMMA | ✅ | 156 | - | MC | 109.5 | - | - | - |
| **Ours-Knowledge** | ✅ | 300 | CEE+COMP | OE | 163.7 | ✅ | 196.5 | 3.3 |
| **Ours-Easy** | ✅ | 300 | CEE+COMP | OE | 171.2 | ✅ | 241.5 | 5.0 |
| **Ours-Medium** | ✅ | 300 | CEE+COMP | OE | 229.2 | ✅ | 391.3 | 8.4 |
| **Ours-Hard** | ✅ | 300 | CEE+COMP | OE | 340.9 | ✅ | 936.1 | 15.6 |
| **Ours-Full** | ✅ | 1200 | CEE+COMP | OE | 226.3 | ✅ | 441.3 | 8.1 |

## 🔍 Evaluation Framework

We introduce the **Physics Solution Auto Scoring (PSAS)** framework, with two complementary evaluation approaches:

### PSAS-A (Answer-Level Evaluation)

- **Sub-question Assessment**: Evaluates the answer to each sub-question independently
- **LLM-based Extraction**: Uses a language model to extract final answers from responses
- **Semantic Verification**: Checks semantic consistency between extracted and ground-truth answers
- **Weighted Scoring**: Weights each sub-question by the length of its solution steps
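The weighted score can be sketched as follows. This is a minimal illustration, assuming per-sub-question correctness flags and step counts; the field names and exact weighting rule are our assumptions, not the framework's:

```python
# Illustrative sketch of PSAS-A-style weighted scoring: each sub-question
# contributes in proportion to its number of solution steps.
# Field names ("correct", "num_steps") are hypothetical.

def psas_a_score(sub_questions: list[dict]) -> float:
    """Step-count-weighted fraction of correctly answered sub-questions."""
    total_steps = sum(q["num_steps"] for q in sub_questions)
    if total_steps == 0:
        return 0.0
    earned = sum(q["num_steps"] for q in sub_questions if q["correct"])
    return earned / total_steps

subs = [
    {"correct": True,  "num_steps": 3},   # short sub-question, solved
    {"correct": False, "num_steps": 7},   # long sub-question, missed
]
print(psas_a_score(subs))  # -> 0.3
```

Under this weighting, missing a long, multi-step sub-question costs more than missing a short one.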

### PSAS-S (Step-Level Evaluation)

Provides detailed step-by-step assessment through four phases:

1. **Data Extraction**: Parses model responses and reference solutions
2. **Scoring**: Evaluates the correctness of each reasoning step
3. **First Error Detection**: Identifies where a model first deviates from correct reasoning
4. **Error Analysis**: Classifies errors into four key bottlenecks:
   - Physics Theorem Application
   - Physics Process Understanding
   - Calculation
   - Physics Condition Analysis

## 🚀 Usage

### Core Evaluation Files

- `answer_evaluation_with_ds_ch_prompt.py`: Answer-level evaluation with Chinese prompts
- `answer_evaluation_with_ds_en_prompt.py`: Answer-level evaluation with English prompts
- `format_result_ds.py`: Normalizes unstable outputs into a stable, consistent format
- `step_evaluation_with_ds_ch_prompt.py`: Step-level evaluation with Chinese prompts
- `step_evaluation_with_ds_en_prompt.py`: Step-level evaluation with English prompts

## 📈 Experimental Results

### Non-O-like Models Performance

| Model | Input | Knowledge | Easy | Medium | Hard | Avg. |
|-------------------|-------|-------------|-------------|-------------|-------------|-------------|
| Qwen2VL-72B | Q, I | 41.92/62.47 | 24.04/45.26 | 15.97/36.13 | 4.83/24.23 | 16.96/42.88 |
| InternVL2.5-78B | Q, I | 28.34/64.71 | 24.16/50.69 | 17.72/38.56 | 9.71/25.95 | 19.98/45.89 |
| GPT-4o | Q, I | 50.71/65.82 | 33.87/51.98 | 22.73/42.36 | 11.03/24.71 | 29.58/47.23 |
| Deepseek-V3-671B | Q, IC | 55.86/66.14 | 40.06/52.77 | 26.63/44.02 | 13.73/26.87 | 34.07/48.42 |
| Claude-3.5-Sonnet | Q, I | 54.14/66.45 | 41.35/55.85 | 28.14/44.86 | 15.11/28.51 | 34.69/49.88 |
| Gemini-2.0-Flash | Q, I | 65.08/75.04 | 54.84/68.60 | 39.79/55.67 | 21.99/38.39 | 45.20/60.40 |
| Gemini-2.0-Pro | Q, I | 67.99/79.01 | 55.43/71.47 | 44.29/57.74 | 23.81/42.66 | 47.88/62.74 |

### O-like Models Performance

| Model | Input | Knowledge | Easy | Medium | Hard | Avg. |
|------------------------------------|-------|-------------|-------------|-------------|-------------|-------------|
| o1-mini | Q, IC | 53.90/65.74 | 35.21/52.26 | 22.24/40.19 | 10.61/26.80 | 30.49/47.18 |
| QvQ-72B | Q, I | 62.44/70.92 | 53.74/64.65 | 28.18/54.88 | 14.30/36.47 | 32.67/57.66 |
| Gemini-2.0-Flash-Thinking-1206 | Q, I | 65.35/77.20 | 51.89/67.49 | 44.43/58.95 | 27.14/45.48 | 47.20/63.07 |
| QwQ-32B | Q, IC | 62.03/76.28 | 54.92/71.08 | 43.64/62.14 | 22.99/42.19 | 45.89/63.87 |
| GLM-Zero | Q, IC | 64.95/80.36 | 54.11/71.54 | 41.32/63.67 | 23.04/47.46 | 46.52/65.76 |
| o3-mini-high | Q, IC | 70.67/83.61 | 67.20/81.95 | 45.31/64.57 | 30.12/47.23 | 53.32/69.34 |
| Gemini-2.0-Flash-Thinking-0121 | Q, I | 73.44/84.15 | 63.17/75.94 | 50.41/66.60 | 31.90/48.47 | 54.73/69.73 |
| **Deepseek-R1** | Q, IC | **75.11/85.91** | **65.08/79.81** | **54.84/72.02** | **31.95/51.50** | **56.75/73.26** |

### PhysReason-mini Results

| Model | K. | E. | M. | H. | Avg. |
|------------------------------------|-------|-------|-------|-------|-------|
| o1-mini | 54.80 | 30.33 | 15.41 | 7.92 | 27.11 |
| QvQ-72B | 51.17 | 37.10 | 29.83 | 22.13 | 35.06 |
| QwQ-32B | 64.40 | 50.07 | 38.88 | 27.45 | 45.20 |
| Gemini-2.0-Flash-Thinking-1206 | 71.47 | 49.97 | 36.83 | 22.97 | 45.42 |
| GLM-Zero | 72.70 | 50.17 | 43.42 | 24.70 | 47.75 |
| o1 | 72.47 | 53.37 | 49.31 | 25.32 | 50.12 |
| o3-mini-high | 71.10 | 63.20 | 47.02 | 31.93 | 53.31 |
| Gemini-2.0-Flash-Thinking-0121 | 76.33 | 56.87 | 51.85 | 32.61 | 54.42 |
| **Deepseek-R1** | **85.17** | **60.77** | **47.24** | **33.23** | **56.60** |
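As a quick arithmetic check, the Avg. column appears to be the unweighted mean of the four tier scores (the tiers are equally sized). For example, for Deepseek-R1 on PhysReason-mini:

```python
# Sanity check: Avg. = mean of the four difficulty-tier scores.
scores = {"Knowledge": 85.17, "Easy": 60.77, "Medium": 47.24, "Hard": 33.23}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # -> 56.6, matching the table's 56.60
```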

## 🔑 Key Findings

- **Performance Gap**: Even the best-performing models score below 60% on answer-level evaluation
- **Difficulty Scaling**: Performance drops sharply from knowledge questions (75.11%) to hard problems (31.95%)
- **O-like Model Advantage**: Models with enhanced reasoning capabilities perform markedly better
- **Multi-modal Benefits**: Visual content enhances model understanding and performance
- **Four Critical Bottlenecks** identified through step-level evaluation:
  1. Physics Theorem Application
  2. Physics Process Understanding
  3. Calculation Accuracy
  4. Physics Condition Analysis

## 📝 Citation

If you find PhysReason useful in your research, please cite our paper:

```bibtex
@article{zhang2025physreason,
  title={{PhysReason}: A comprehensive benchmark towards physics-based reasoning},
  author={Zhang, Xinyu and Dong, Yuxuan and Wu, Yanrui and Huang, Jiaxing and Jia, Chengyou and Fernando, Basura and Shou, Mike Zheng and Zhang, Lingling and Liu, Jun},
  journal={arXiv preprint arXiv:2502.12054},
  year={2025}
}
```

## 📄 License

This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.

## 📧 Contact

We welcome contributions to PhysReason! Please contact us for details on how to get involved.

---

**🔗 Quick Links:**

- [📄 Paper](https://arxiv.org/abs/2502.12054)
- [🤗 Dataset](https://huggingface.co/datasets/zhibei1204/PhysReason)
- [🌐 Project Page](https://dxzxy12138.github.io/PhysReason/)
- [💻 GitHub Repository](https://github.com/dxzxy12138/PhysReason)