ubowang committed
Commit 0906889 · verified · 1 Parent(s): b68243b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -31,7 +31,7 @@ Instead of learning from reference answers (as in supervised fine-tuning) or rew
 - **Outperforms RLVR and Full SFT with 20× Less Compute:** One-Shot CFT outperforms both one-shot Reinforcement Learning with Verifiable Rewards (RLVR) and full-dataset supervised fine-tuning, while requiring only 5 GPU hours on a 7B model—offering a much more efficient and stable training alternative.
 - **Robust Across Seeds and Model Scales:** One-Shot CFT remains effective across different seed problem choices and model sizes—from 1.5B to 14B parameters—demonstrating strong generalization and scalability.
 
- **This specific model is the One-Shot CFT variant trained based on [Qwen2.5-7B-Math](https://huggingface.co/Qwen/Qwen2.5-Math-7B) with [DSR-CFT-p0](https://huggingface.co/datasets/TIGER-Lab/One-Shot-CFT-Data) dataset.**
+ **This specific model is the One-Shot CFT variant trained based on [Qwen2.5-1.5B-Math](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) with [DSR-CFT-p0](https://huggingface.co/datasets/TIGER-Lab/One-Shot-CFT-Data) dataset.**
 
 
 ## Main Results