Improve dataset card: Add task categories, tags, HF paper link, detailed usage, and evaluation results

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +179 -17
README.md CHANGED
@@ -1,34 +1,45 @@
 ---
 license: apache-2.0
-
 configs:
 - config_name: default
   data_files:
   - split: seed
     path:
-    - "amgm_seed.parquet"
-    - "cauchy_seed.parquet"
-    - "misc_seed.parquet"
   - split: type1
     path:
-    - "amgm_type1.parquet"
-    - "cauchy_type1.parquet"
-    - "misc_type1.parquet"
   - split: type2
     path:
-    - "amgm_type2.parquet"
-    - "cauchy_type2.parquet"
-    - "misc_type2.parquet"
   - split: mix
     path:
-    - "comp2_100.parquet"
   - split: real
     path:
-    - "real.parquet"
 ---

 <div align="center">
-<h1> <a href="https://arxiv.org">Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving on Inequalities</a></h1>
 </div>

 <div align="center">
@@ -40,20 +51,171 @@ configs:

 </div>

 ## Introduction

-We introduce Ineq-Comp, a benchmark built from elementary inequalities through systematic transformations, including variable duplication, algebraic rewriting, and multi-step composition. Although these problems remain easy for humans, we find that most provers&mdash;including Goedel, STP, and Kimina-7B&mdash;struggle significantly. DeepSeek-Prover-V2 shows relative robustness&mdash;possibly because it is trained to decompose the problems into sub-problems&mdash;but still suffers a 20% performance drop (pass@32). Strikingly, performance remains poor for all models even when formal proofs of the constituent parts are provided in context, revealing that the source of weakness is indeed in compositional reasoning. Our results expose a persisting gap between the generalization behavior of current AI provers and human mathematical intuition.

-## Quick Start

-The proof of the seed problems and the evaluation scripts can be found at https://github.com/haoyuzhao123/LeanIneqComp

 ## Ineq-Comp Benchmark

-We provide 5 splits: seed, type1, type2, mix, and real. For seed, type1, and type2 splits, each contains 75 problems. mix split contains 100 problems generated by Ineq-Mix, and real split contains 50 real-world inequality problems. Please refer to the github repo for more fine-grained splits.

 ## Citation

 ```{bibtex}
 @article{zhao2025ineq,
 title={Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving on Inequalities},

 ---
 license: apache-2.0
+task_categories:
+- mathematical-reasoning
+- automated-theorem-proving
+- text-generation
+tags:
+- mathematics
+- theorem-proving
+- inequalities
+- compositional-reasoning
+- benchmarking
+- proof-generation
+- lean-4
 configs:
 - config_name: default
   data_files:
   - split: seed
     path:
+    - amgm_seed.parquet
+    - cauchy_seed.parquet
+    - misc_seed.parquet
   - split: type1
     path:
+    - amgm_type1.parquet
+    - cauchy_type1.parquet
+    - misc_type1.parquet
   - split: type2
     path:
+    - amgm_type2.parquet
+    - cauchy_type2.parquet
+    - misc_type2.parquet
   - split: mix
     path:
+    - comp2_100.parquet
   - split: real
     path:
+    - real.parquet
 ---

 <div align="center">
+<h1> <a href="https://huggingface.co/papers/2505.12680">Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving on Inequalities</a></h1>
 </div>

 <div align="center">

 </div>

+## Paper
+[Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving on Inequalities](https://huggingface.co/papers/2505.12680)
+
+## Code / Project Page
+[https://github.com/haoyuzhao123/LeanIneqComp](https://github.com/haoyuzhao123/LeanIneqComp)
+
 ## Introduction

+We introduce Ineq-Comp, a benchmark built from elementary inequalities through systematic transformations, including variable duplication, algebraic rewriting, and multi-step composition. Although these problems remain easy for humans, we find that most provers&mdash;including Goedel, STP, and Kimina-7B&mdash;struggle significantly. DeepSeek-Prover-V2 shows relative robustness&mdash;possibly because it is trained to decompose problems into sub-problems&mdash;but still suffers a 20% performance drop (pass@32). Strikingly, performance remains poor for all models even when formal proofs of the constituent parts are provided in context, revealing that the weakness lies in compositional reasoning itself. Our results expose a persistent gap between the generalization behavior of current AI provers and human mathematical intuition.
+
+## Sample Usage
+
+The proofs of the seed problems and the evaluation scripts can be found at https://github.com/haoyuzhao123/LeanIneqComp.
+
+### Environment Setup
+
+**Lean 4 Environment**
+
+The Lean 4 environment and the corresponding Mathlib version used in this project follow [DeepSeek-Prover-V1.5](https://github.com/deepseek-ai/DeepSeek-Prover-V1.5). Please first install the correct Lean 4 and Mathlib versions following the [environment setup guide](https://github.com/deepseek-ai/DeepSeek-Prover-V1.5/blob/main/README.md#4-setup-environment).
+
+**Copy Data and Testing Scripts**
+
+After installing the Lean 4 environment, copy the `benchmark/` and `scripts_eval/` folders into the parent folder where you built Mathlib. You should end up with the following file structure (only the important folders are shown):
+
+```text
+parent_folder/
+├── benchmark/
+├── configs/
+├── mathlib4/
+├── prover/
+└── scripts_eval/
+```
+
+### General-Purpose Models
+
+Run the following command to test DeepSeek-R1-Distill-Qwen-32B without the thinking block (chat template) under pass@32. This tests the model on the 25 seed problems from Ineq-AMGM.
+
+```sh
+bash scripts_eval/inference_2gpu.sh -i benchmark/amgm_seed.jsonl -m deepseek-ai/DeepSeek-R1-Distill-Qwen-32B -o results/amgm_seed_r1-distill-qwen-32b_nothink -n 32
+```
+
+The script will (1) run inference with vLLM and extract the Lean code (requires 2 GPUs), then (2) submit the code to the REPL to be verified by the Lean 4 compiler (no GPU needed).
+
+For DeepSeek-R1-Distill-Qwen-32B with the thinking block, run:
+
+```sh
+bash scripts_eval/inference_think_2gpu.sh -i benchmark/amgm_seed.jsonl -m deepseek-ai/DeepSeek-R1-Distill-Qwen-32B -o results/amgm_seed_r1-distill-qwen-32b -n 32
+```
+
+### Whole-Proof Generation Methods
+
+To test DeepSeek-Prover-V1.5-RL, Goedel-Prover-SFT, or STP, use the same script as for DeepSeek-R1-Distill-Qwen-32B without the thinking block, changing the model to your target model (STP shown as an example):
+
+```sh
+bash scripts_eval/inference_2gpu.sh -i benchmark/amgm_seed.jsonl -m kfdong/STP_model_Lean -o results/amgm_seed_stp -n 32
+```
+
+For Kimina-Prover-Preview-Distill-7B, run the following script:
+
+```sh
+bash scripts_eval/inference_kimina_2gpu.sh -i benchmark/amgm_seed.jsonl -m AI-MO/Kimina-Prover-Preview-Distill-7B -o results/amgm_seed_kimina-7b -n 32
+```
+
+For DeepSeek-Prover-V2-7B, run the following script:
+
+```sh
+bash scripts_eval/inference_dsprover2_2gpu.sh -i benchmark/amgm_seed.jsonl -m deepseek-ai/DeepSeek-Prover-V2-7B -o results/amgm_seed_dsprover2-7b -n 32
+```
+
+Note 1: all the scripts can technically run on 2 H100 80GB GPUs. However, we recommend 4 H100 80GB GPUs when testing DeepSeek-R1-Distill-Qwen-32B, Kimina-Prover-Preview-Distill-7B, and DeepSeek-Prover-V2-7B, especially with generation lengths above 16K, since some vLLM versions may run out of GPU memory with only 2 GPUs.
+
+Note 2: we highly recommend splitting the job into smaller ones, especially when testing DeepSeek-R1-Distill-Qwen-32B, Kimina-Prover-Preview-Distill-7B, and DeepSeek-Prover-V2-7B, or when testing models under a high budget (pass@3200). We include a SLURM header in each script for better parallelization with more GPU resources; please refer to the scripts for details.
+
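One way to realize the job splitting recommended above is to shard the benchmark JSONL before launching, one SLURM job per shard. The helper below is a generic sketch, not part of the repo's scripts; it only assumes one JSON object per line, as in `benchmark/amgm_seed.jsonl`.

```python
import json
from pathlib import Path

def shard_jsonl(src: str, n_shards: int, out_dir: str) -> list:
    """Round-robin split a JSONL benchmark file into n_shards smaller jobs."""
    lines = Path(src).read_text().splitlines()
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_paths = []
    for i in range(n_shards):
        shard = lines[i::n_shards]  # round-robin keeps shard sizes balanced
        path = Path(out_dir) / f"{Path(src).stem}_shard{i}.jsonl"
        path.write_text("\n".join(shard) + "\n")
        out_paths.append(str(path))
    return out_paths
```

Each shard file can then be passed separately via the `-i` flag of the inference scripts.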
+
+### DeepSeek-Prover-V1.5-RL + RMaxTS
+
+The experiments for DeepSeek-Prover-V1.5-RL + RMaxTS can be reproduced with exactly the same command as in the original DeepSeek-Prover-V1.5 repo, after changing the dataset to the benchmark dataset (`benchmark/amgm_seed.jsonl`) in the `configs/RMaxTS.py` file:
+
+```sh
+python -m prover.launch --config=configs/RMaxTS.py --log_dir=logs/RMaxTS_results
+```
+
+Please refer to the [DeepSeek-Prover-V1.5](https://github.com/deepseek-ai/DeepSeek-Prover-V1.5/tree/main#5-quick-start) codebase for more details.
+
+### InternLM2.5-StepProver+BF
+
+The evaluation code is based on the [InternLM GitHub repo](https://github.com/InternLM/InternLM-Math/tree/main), in particular the [evaluation code](https://github.com/InternLM/InternLM-Math/tree/main/minif2f), with only minimal modifications. Please follow that repo to install the correct Lean 4 and other corresponding package versions, especially LeanDojo.
+
+After installing the environment, substitute MiniF2F with [LeanIneqComp-Dojo](https://github.com/haoyuzhao123/LeanIneqComp-Dojo), the GitHub repo of Ineq-Comp that is traceable by LeanDojo and adapted to the Lean 4 version used by InternLM2.5-StepProver.

 ## Ineq-Comp Benchmark

+We provide 5 splits: seed, type1, type2, mix, and real. The seed, type1, and type2 splits each contain 75 problems; the mix split contains 100 problems generated by Ineq-Mix, and the real split contains 50 real-world inequality problems. Please refer to the GitHub repo for more fine-grained splits. All the data of our Ineq-Comp benchmark, including the proofs of the seed problems, can be found in the `benchmark` folder of the [GitHub repository](https://github.com/haoyuzhao123/LeanIneqComp) and is also available on [Hugging Face](https://huggingface.co/datasets/zzzzzhy/Ineq-Comp).
+
+- For Ineq-Simp, which contains Ineq-AMGM, Ineq-Cauchy, and Ineq-MISC, the problems can be found in `benchmark/amgm_*.jsonl`, `benchmark/cauchy_*.jsonl`, and `benchmark/misc_*.jsonl`.
+- `benchmark/comp2_100.jsonl` contains 100 inequality problems generated with Ineq-Mix by randomly composing two seed problems from Ineq-Simp.
+- `benchmark/real.jsonl` contains 50 real-world inequality problems.
+- The proofs for the 75 seed problems can be found in the `benchmark/full_answer/` folder. We include proofs for two Lean 4 versions: 4.9, on which our main experiments are based (also the version used by DeepSeek-Prover-V1.5), and 4.18, the stable Lean 4 version used by the [interactive Lean 4 web editor](https://live.lean-lang.org/) at the time this benchmark was curated.
+
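On the Hugging Face side, each split in the front matter is assembled from several parquet shards (e.g. `amgm_seed.parquet`, `cauchy_seed.parquet`, and `misc_seed.parquet` for `seed`). The sketch below shows that assembly with pandas, using synthetic stand-in frames in place of `pd.read_parquet(...)` calls; the column names are illustrative assumptions, not the dataset's actual schema.

```python
import pandas as pd

# Stand-ins for the three per-family shards of the "seed" split;
# in practice each would come from pd.read_parquet("amgm_seed.parquet"), etc.
amgm = pd.DataFrame({"name": ["amgm_1"], "statement": ["theorem amgm_1 : ..."]})
cauchy = pd.DataFrame({"name": ["cauchy_1"], "statement": ["theorem cauchy_1 : ..."]})
misc = pd.DataFrame({"name": ["misc_1"], "statement": ["theorem misc_1 : ..."]})

# A split is simply the concatenation of its shards
seed = pd.concat([amgm, cauchy, misc], ignore_index=True)
print(len(seed))  # 3
```

Loading through `datasets.load_dataset("zzzzzhy/Ineq-Comp", split="seed")` should yield the merged split directly.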
+## Custom Problem Generation through Ineq-Mix
+
+Our Ineq-Mix framework, which can generate more problems given a pool of base problems, can be found in the `composition/` folder.
+
+`original_problems.jsonl` includes 65 seed problems that can be composed or transformed. The composition rules, the variable-level algebraic transformation rules, and the problem-level algebraic transformation rules are defined in `composition/comp_op.py`, `composition/algebraic_op.py`, and `composition/algebraic_whole_op.py`, respectively. Please refer to `composition/mix.py` for more details.
+
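For intuition only (this toy is not the repo's implementation), a 2-way composition in the spirit of Ineq-Mix can be sketched as renaming the second problem's variables apart and conjoining the two statements. The variable names and renaming map below are hypothetical.

```python
import re

def rename_vars(stmt: str, mapping: dict) -> str:
    # Replace whole-word single-letter variables according to mapping
    return re.sub(r"\b[a-z]\b", lambda m: mapping.get(m.group(0), m.group(0)), stmt)

def compose(p1: str, p2: str) -> str:
    # Rename p2's variables so the two problems share no variables, then conjoin
    q2 = rename_vars(p2, {"a": "x", "b": "y", "c": "z"})
    return f"({p1}) ∧ ({q2})"

print(compose("a + b >= 2 * (a * b) ** (1 / 2)", "a ** 2 + b ** 2 >= 2 * a * b"))
```

The real rules in `composition/comp_op.py` additionally apply algebraic transformations at the variable and problem level.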
+## Evaluation Results
+
+### General-Purpose Models
+| Method | Budget | AM-GM Seed | AM-GM I | AM-GM II | Cauchy Seed | Cauchy I | Cauchy II | Misc Seed | Misc I | Misc II |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **DeepSeek-R1-Distill-Qwen-32B (w/o thinking)** | 32 | 48.2<sub>1.9</sub> | 3.5<sub>3.3</sub> | 16.2<sub>3.0</sub> | 28.0<sub>3.3</sub> | 17.0<sub>3.2</sub> | 15.0<sub>3.0</sub> | 41.4<sub>3.7</sub> | 13.4<sub>4.5</sub> | 15.4<sub>4.4</sub> |
+| | 64 | 49.0<sub>1.7</sub> | 6.5<sub>4.1</sub> | 18.4<sub>2.4</sub> | 30.6<sub>3.2</sub> | 19.5<sub>2.8</sub> | 16.8<sub>2.7</sub> | 44.5<sub>3.2</sub> | 17.7<sub>4.0</sub> | 20.2<sub>4.8</sub> |
+| | 128 | 49.9<sub>2.0</sub> | 10.6<sub>4.2</sub> | 20.0<sub>2.5</sub> | 32.6<sub>2.9</sub> | 21.8<sub>3.2</sub> | 19.0<sub>2.6</sub> | 47.4<sub>3.1</sub> | 21.1<sub>3.7</sub> | 25.4<sub>4.2</sub> |
+| | 3200 | 52.0 | 40.0 | 36.0 | 44.0 | 32.0 | 28.0 | 52.0 | 36.0 | 36.0 |
+| **DeepSeek-R1-Distill-Qwen-32B (w thinking)** | 32 | 48.8<sub>1.6</sub> | 10.9<sub>3.8</sub> | 21.1<sub>3.1</sub> | 42.9<sub>2.5</sub> | 27.0<sub>3.4</sub> | 18.4<sub>2.4</sub> | 50.5<sub>2.3</sub> | 18.9<sub>4.6</sub> | 22.0<sub>4.0</sub> |
+| | 64 | 49.5<sub>1.9</sub> | 14.5<sub>4.4</sub> | 23.0<sub>3.4</sub> | 44.5<sub>2.4</sub> | 30.3<sub>2.9</sub> | 20.6<sub>2.3</sub> | 51.9<sub>0.6</sub> | 23.7<sub>4.9</sub> | 26.2<sub>3.1</sub> |
+| | 128 | 50.9<sub>2.1</sub> | 19.2<sub>4.1</sub> | 26.1<sub>4.3</sub> | 46.2<sub>2.3</sub> | 32.6<sub>2.7</sub> | 22.1<sub>2.0</sub> | 52.0<sub>0.0</sub> | 28.0<sub>3.9</sub> | 29.4<sub>2.7</sub> |
+| | 3200 | 60.0 | 44.0 | 44.0 | 56.0 | 40.0 | 24.0 | 52.0 | 36.0 | 40.0 |
+
+### Whole-Proof Generation Methods
+| Method | Budget | AM-GM Seed | AM-GM I | AM-GM II | Cauchy Seed | Cauchy I | Cauchy II | Misc Seed | Misc I | Misc II |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **DeepSeek-Prover-V1.5-RL** | 32 | 48.1<sub>3.0</sub> | 0.0<sub>0.4</sub> | 8.2<sub>1.5</sub> | 14.9<sub>3.2</sub> | 2.9<sub>1.8</sub> | 4.4<sub>1.4</sub> | 40.2<sub>2.8</sub> | 12.4<sub>1.1</sub> | 12.2<sub>2.5</sub> |
+| | 64 | 50.6<sub>2.9</sub> | 0.1<sub>0.6</sub> | 9.0<sub>1.7</sub> | 17.0<sub>2.7</sub> | 3.7<sub>1.1</sub> | 5.0<sub>1.9</sub> | 42.1<sub>2.3</sub> | 12.7<sub>1.7</sub> | 13.8<sub>2.9</sub> |
+| | 128 | 52.2<sub>2.1</sub> | 0.2<sub>0.8</sub> | 9.8<sub>2.0</sub> | 18.7<sub>2.7</sub> | 4.0<sub>0.0</sub> | 6.1<sub>2.3</sub> | 43.2<sub>1.6</sub> | 13.3<sub>2.2</sub> | 16.2<sub>2.9</sub> |
+| | 3200 | 60.0 | 4.0 | 24.0 | 24.0 | 4.0 | 12.0 | 44.0 | 20.0 | 28.0 |
+| **Goedel-Prover-SFT** | 32 | 48.6<sub>2.9</sub> | 0.4<sub>1.2</sub> | 14.0<sub>3.2</sub> | 34.8<sub>2.5</sub> | 12.4<sub>3.5</sub> | 21.5<sub>3.4</sub> | 47.0<sub>1.7</sub> | 14.4<sub>3.1</sub> | 24.6<sub>1.9</sub> |
+| | 64 | 50.6<sub>2.6</sub> | 0.8<sub>1.6</sub> | 16.6<sub>2.8</sub> | 36.2<sub>1.9</sub> | 15.8<sub>3.4</sub> | 24.6<sub>2.9</sub> | 47.8<sub>0.9</sub> | 16.6<sub>2.5</sub> | 25.5<sub>1.9</sub> |
+| | 128 | 52.2<sub>1.4</sub> | 1.3<sub>1.9</sub> | 18.6<sub>2.2</sub> | 37.1<sub>1.8</sub> | 19.4<sub>2.9</sub> | 26.9<sub>1.8</sub> | 48.0<sub>0.0</sub> | 17.9<sub>2.6</sub> | 26.4<sub>2.5</sub> |
+| | 3200 | 60.0 | 4.0 | 24.0 | 40.0 | 32.0 | 28.0 | 48.0 | 24.0 | 36.0 |
+| **STP (w/o miniF2F valid)** | 32 | 59.1<sub>1.9</sub> | 14.3<sub>4.4</sub> | 23.2<sub>4.5</sub> | 35.2<sub>2.4</sub> | 14.6<sub>2.7</sub> | 16.0<sub>2.6</sub> | 55.6<sub>1.3</sub> | 12.6<sub>5.0</sub> | 27.6<sub>3.6</sub> |
+| | 64 | 60.1<sub>0.6</sub> | 18.5<sub>4.1</sub> | 28.2<sub>4.6</sub> | 36.8<sub>2.4</sub> | 16.7<sub>2.8</sub> | 17.3<sub>2.7</sub> | 56.0<sub>0.0</sub> | 17.8<sub>4.9</sub> | 31.0<sub>4.1</sub> |
+| | 128 | 60.3<sub>1.1</sub> | 24.3<sub>4.1</sub> | 33.0<sub>3.6</sub> | 37.9<sub>2.6</sub> | 18.4<sub>3.0</sub> | 18.9<sub>3.3</sub> | 56.0<sub>0.0</sub> | 24.0<sub>4.4</sub> | 33.9<sub>4.1</sub> |
+| | 3200 | 64.0 | 44.0 | 40.0 | 44.0 | 24.0 | 28.0 | 56.0 | 36.0 | 40.0 |
+| **Kimina-Prover-Preview-Distill-7B** | 32 | 59.4<sub>4.1</sub> | 11.7<sub>5.4</sub> | 45.2<sub>3.7</sub> | 46.9<sub>4.5</sub> | 27.0<sub>2.6</sub> | 27.7<sub>3.3</sub> | 44.2<sub>1.3</sub> | 18.1<sub>3.9</sub> | 35.8<sub>2.0</sub> |
+| | 64 | 64.1<sub>4.6</sub> | 19.4<sub>5.9</sub> | 48.6<sub>2.4</sub> | 52.7<sub>4.3</sub> | 28.8<sub>2.5</sub> | 30.2<sub>2.8</sub> | 44.6<sub>1.4</sub> | 22.3<sub>2.9</sub> | 36.8<sub>2.0</sub> |
+| | 128 | 69.4<sub>4.2</sub> | 28.2<sub>5.4</sub> | 50.6<sub>2.2</sub> | 57.6<sub>3.6</sub> | 30.4<sub>3.0</sub> | 32.0<sub>1.6</sub> | 45.1<sub>1.8</sub> | 25.6<sub>2.5</sub> | 37.6<sub>2.5</sub> |
+| | 3200 | 80.0 | 44.0 | 64.0 | 68.0 | 52.0 | 36.0 | 52.0 | 32.0 | 44.0 |
+| **DeepSeek-Prover-V2-7B** | 32 | 75.0<sub>4.4</sub> | 58.6<sub>4.0</sub> | 52.5<sub>4.5</sub> | 64.6<sub>4.1</sub> | 33.0<sub>2.3</sub> | 35.0<sub>2.3</sub> | 59.1<sub>2.9</sub> | 49.3<sub>3.4</sub> | 38.8<sub>4.4</sub> |
+| | 64 | 80.7<sub>5.3</sub> | 62.1<sub>4.5</sub> | 57.4<sub>4.0</sub> | 68.3<sub>3.1</sub> | 34.7<sub>2.7</sub> | 36.6<sub>2.3</sub> | 61.7<sub>2.5</sub> | 51.6<sub>2.9</sub> | 43.7<sub>4.2</sub> |
+| | 128 | 85.8<sub>5.4</sub> | 65.9<sub>5.3</sub> | 61.4<sub>3.7</sub> | 71.0<sub>2.0</sub> | 36.3<sub>3.6</sub> | 37.9<sub>2.6</sub> | 64.0<sub>1.6</sub> | 53.3<sub>3.1</sub> | 49.9<sub>4.3</sub> |
+| | 3200 | 96.0 | 84.0 | 76.0 | 76.0 | 52.0 | 48.0 | 68.0 | 64.0 | 64.0 |
+
+### Tree-Search Methods
+| Method | Budget | AM-GM Seed | AM-GM I | AM-GM II | Cauchy Seed | Cauchy I | Cauchy II | Misc Seed | Misc I | Misc II |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **DeepSeek-Prover-V1.5-RL + RMaxTS** | 1×3200 | 60.0<sub>0.0</sub> | 3.0<sub>1.7</sub> | 22.0<sub>2.0</sub> | 24.0<sub>0.0</sub> | 8.0<sub>2.8</sub> | 13.0<sub>3.3</sub> | 44.0<sub>0.0</sub> | 14.0<sub>3.5</sub> | 29.0<sub>1.7</sub> |
+| | 2×3200 | 60.0<sub>0.0</sub> | 6.0<sub>2.0</sub> | 26.0<sub>2.0</sub> | 24.0<sub>0.0</sub> | 10.0<sub>2.0</sub> | 16.0<sub>0.0</sub> | 44.0<sub>0.0</sub> | 16.0<sub>4.0</sub> | 32.0<sub>0.0</sub> |
+| | 4×3200 | 60.0 | 8.0 | 28.0 | 24.0 | 12.0 | 20.0 | 44.0 | 20.0 | 36.0 |
+| **InternLM2.5-StepProver + BF** | 1×32×600 | 30.8<sub>3.1</sub> | 0.0<sub>0.0</sub> | 2.5<sub>3.1</sub> | 12.0<sub>0.0</sub> | 0.0<sub>0.0</sub> | 1.2<sub>1.9</sub> | 34.0<sub>2.0</sub> | 2.2<sub>2.0</sub> | 17.0<sub>3.9</sub> |
+| | 4×32×600 | 38.0<sub>4.5</sub> | 0.0<sub>0.0</sub> | 9.0<sub>3.3</sub> | 12.0<sub>0.0</sub> | 0.0<sub>0.0</sub> | 3.0<sub>1.7</sub> | 36.0<sub>0.0</sub> | 5.0<sub>1.7</sub> | 21.0<sub>1.7</sub> |
+| | 16×32×600 | 44.0 | 0.0 | 24.0 | 12.0 | 0.0 | 4.0 | 36.0 | 8.0 | 24.0 |
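The tables above report pass@k. A common way to compute such numbers from n sampled proofs per problem, of which c verify, is the standard unbiased estimator; the snippet below is a sketch of that estimator and is an assumption about the aggregation, not code taken from the repo.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n proofs sampled, c of them verified correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct proof
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(2, 1, 1))  # 0.5
```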
 
 ## Citation

+If you find our work helpful, please consider starring ⭐ us and citing:
+
 ```{bibtex}
 @article{zhao2025ineq,
 title={Ineq-Comp: Benchmarking Human-Intuitive Compositional Reasoning in Automated Theorem Proving on Inequalities},