linyongver committed · Commit 401cf03 · verified · 1 parent: 8eca7f0

Update README.md

Files changed (1): README.md (+6 −2)

README.md CHANGED
@@ -44,8 +44,8 @@ license: mit
 
 ## 1. Introduction
 
-We introduce Goedel-Prover, an open-source large language model (LLM) that achieves the state-of-the-art (SOTA) performance in automated formal proof generation for mathematical problems. The key challenge in this field is the scarcity of formalized math statements and proofs, which we tackle in the following ways. We train statement formalizers to translate the natural language math problems from Numina into formal language (Lean 4), creating a dataset of 1.64 million formal statements. LLMs are used to check that the formal statements accurately preserve the content of the original natural language problems. We then iteratively build a large dataset of formal proofs by training a series of provers. Each prover succeeds in proving many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. The final prover outperforms all existing open-source models in whole-proof generation. On the miniF2F benchmark, it achieves a 57.6% success rate (Pass@32), exceeding the previous best open-source model by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by earlier works.
-
+We introduce Goedel-Prover, an open-source language model that achieves state-of-the-art performance in automated formal proof generation for mathematical problems. A key challenge in this field is the scarcity of formalized mathematical statements and proofs, which we address through the following approaches. First, we train statement formalizers to translate natural language math problems from Numina into the formal language Lean 4, and use an LLM to verify that the formal statements accurately preserve the content of the original problems. This results in a dataset of 1.64 million formal statements. We then iteratively build a large dataset of formal proofs by training a series of provers: each prover is able to prove many statements that the previous ones could not, and these new proofs are added to the training set for the next prover. Despite using only supervised fine-tuning, our final prover (fine-tuned on DeepSeek-Prover-V1.5-Base) significantly outperforms the previous best open-source model, DeepSeek-Prover-V1.5-RL, which uses reinforcement learning (RL). On the miniF2F benchmark, our model achieves a success rate of 57.6% (Pass@32), surpassing DeepSeek-Prover-V1.5-RL by 7.6%. On PutnamBench, Goedel-Prover successfully solves 7 problems (Pass@512), ranking first on the leaderboard. Furthermore, it generates 29.7K formal proofs for Lean Workbook problems, nearly doubling the 15.7K produced by prior work. We provide extensive discussion of our training methodology, highlighting the key design choices that contribute to Goedel-Prover's strong performance. We then explore direct preference optimization (DPO) and other forms of reinforcement learning on top of Goedel-Prover-SFT, improving success to over 60% (Pass@32) on miniF2F. Additionally, we fully open source our code, model, and formalized statements to facilitate future research.
+
 <p align="center">
 <img width="100%" src="performance.png">
 </p>
@@ -65,10 +65,14 @@ We introduce Goedel-Prover, an open-source large language model (LLM) that achie
 | DeepSeek-Prover-V1.5-SFT | 32 | 48.2% |
 | DeepSeek-Prover-V1.5-RL | 32 | 50.0% |
 | **Goedel-Prover-SFT** | **32** | **57.6%** |
+| **Goedel-Prover-DPO** | **32** | **60.2%** |
+| **Goedel-Prover-RL** | **32** | **60.5%** |
 |------------------------|------------------|------------------|
 | DeepSeek-Prover-V1.5-SFT | 3200 | 53.3% |
 | DeepSeek-Prover-V1.5-RL | 3200 | 54.9% |
 | **Goedel-Prover-SFT** | **3200** | **62.7%** |
+| **Goedel-Prover-DPO** | **3200** | **63.2%** |
+| **Goedel-Prover-RL** | **3200** | **65.0%** |
 |------------------------|------------------|------------------|
 | DeepSeek-Prover-V1.5-SFT | 25600 | 55.8% |
 | DeepSeek-Prover-V1.5-RL | 25600 | 58.5% |
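
The introduction describes an expert-iteration-style loop: each prover adds newly found proofs to the training set, and the next prover trained on that larger set can prove more statements. A toy sketch of that feedback loop (purely illustrative — `expert_iteration`, the `skill` variable, and the difficulty model are assumptions, not the paper's training code, which fine-tunes an LLM and verifies candidates with Lean):

```python
def expert_iteration(difficulties, rounds, base_skill=1, gain=1):
    """Toy model of the iterative proof-data loop.

    A prover of strength `skill` proves statements with difficulty <= skill.
    Each round, newly proved statements join the "training set", and the next
    prover's strength grows with the amount of new data (a stand-in for SFT
    on the enlarged proof corpus). All names and dynamics are hypothetical.
    """
    proved = set()
    skill = base_skill
    for _ in range(rounds):
        # statements the current prover can now solve but earlier ones could not
        new = {d for d in difficulties if d <= skill} - proved
        proved |= new
        # "train" the next prover on the enlarged proof set
        skill += gain * len(new)
    return proved
```

For example, with difficulties `[1, 2, 3, 10]` and three rounds, each round unlocks one more statement while the hardest remains out of reach — mirroring how successive provers in the paper prove statements their predecessors could not.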
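
The Pass@N column above counts a problem as solved when at least one of N sampled whole proofs is accepted by the Lean verifier. A minimal illustrative helper (an assumption about the metric's definition, not the repository's evaluation code):

```python
def pass_at_n(attempts):
    """Fraction of problems with at least one verified proof among N samples.

    `attempts` maps each problem to a list of booleans, one per sampled
    proof, True when the verifier accepts it. Hypothetical helper for
    illustration only.
    """
    solved = sum(1 for runs in attempts.values() if any(runs))
    return solved / len(attempts)
```

So a model that verifies one proof for "p1" out of 32 samples but none for "p2" would score Pass@32 = 0.5 on that two-problem set.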