wetsoledrysoul committed on
Commit 8c28d64 · verified · 1 Parent(s): f8afa3e

Update README.md

Files changed (1)
  1. README.md +38 -52
README.md CHANGED
@@ -1,69 +1,55 @@
  ---
- base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  library_name: transformers
- model_name: dpo_DeepSeek-R1-Distill-Qwen-14B
- tags:
- - generated_from_trainer
- - dpo
- - trl
- licence: license
  ---

- # Model Card for dpo_DeepSeek-R1-Distill-Qwen-14B

- This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline

- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="wetsoledrysoul/dpo_DeepSeek-R1-Distill-Qwen-14B", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```

- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/CodeShield/CerebRM-DPO/runs/h0aacdm3)

-
- This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
-
- ### Framework versions
-
- - TRL: 0.24.0
- - Transformers: 4.56.1
- - Pytorch: 2.7.1
- - Datasets: 4.0.0
- - Tokenizers: 0.22.0

  ## Citations

- Cite DPO as:
-
- ```bibtex
- @inproceedings{rafailov2023direct,
- title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
- author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
- year = 2023,
- booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
- url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
- editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
- }
- ```
-
- Cite TRL as:
-
  ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
  }
  ```
 
  ---
  library_name: transformers
+ license: cc-by-nc-sa-4.0
+ datasets:
+ - Aletheia-Bench/Aletheia-DPO
+ base_model:
+ - deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  ---

+ <font size=3><div align='center'>
+ [[**🤗 Model & Dataset**](https://huggingface.co/Aletheia-Bench)]
+ [[**📊 Code**](https://github.com/insait-institute/aletheia)]
+ [[**📖 Paper**](https://arxiv.org/)]
+ </div></font>

+ # Aletheia: What Makes RLVR For Code Verifiers Tick?
+ Multi-domain thinking verifiers trained via Reinforcement Learning from Verifiable Rewards (RLVR) are a prominent fixture of the Large Language Model (LLM) post-training pipeline, owing to their ability to robustly rate and rerank model outputs. However, the adoption of such verifiers for code generation has been comparatively sparse, with execution feedback constituting the dominant signal. Nonetheless, code verifiers remain valuable for judging model outputs in scenarios where execution feedback is hard to obtain, and they are a potentially powerful addition to the code-generation post-training toolbox. To this end, we create and open-source Aletheia, a controlled testbed that enables execution-grounded evaluation of code verifiers' robustness across disparate policy models and covariate shifts. We examine the components of the RLVR-based verifier training recipe that are widely credited for its success: (1) intermediate thinking traces, (2) learning from negative samples, and (3) on-policy training. While our experiments confirm the overall optimality of RLVR, we uncover important opportunities to simplify the recipe. In particular, despite code verification exhibiting positive training- and inference-time scaling, on-policy learning stands out as the key component at small verifier sizes, and thinking-based training emerges as the most important component at larger scales.

+ ## 📦 Model Zoo
+ We release fine-tuned code verifiers at the 1.5B, 7B, and 14B scales, trained with the following algorithms:
+ | Algorithm | Thinking | Negatives | Online | Description |
+ | :--- | :---: | :---: | :---: | :--- |
+ | **GRPO-Think** | ✅ | ✅ | ✅ | Standard GRPO-style approach to training verifiers. |
+ | **GRPO-Instruct** | ❌ | ✅ | ✅ | RLVR training without intermediate thinking traces. |
+ | **RAFT** | ✅ | ❌ | ✅ | On-policy rejection-sampling fine-tuning using only positive reasoning samples. |
+ | **Think-DPO** | ✅ | ✅ | ❌ | Offline preference optimization using pre-collected thinking traces. |
+ | **Batch-online GRPO** | ✅ | ✅ | ⚠️ | Semi-online training where the generator policy is synced every 4 steps. |
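
+ A minimal sketch of querying one of the released verifiers is shown below. The checkpoint id (`Aletheia-Bench/GRPO-Think-7B`) and the verification prompt format are assumptions for illustration, not confirmed by this card; check the Aletheia-Bench organization page for the actual repositories.

+ ```python
+ from transformers import pipeline
+
+ # Hypothetical checkpoint id; see the Aletheia-Bench org for the released verifiers.
+ verifier = pipeline("text-generation", model="Aletheia-Bench/GRPO-Think-7B", device="cuda")
+
+ problem = "Write a function that returns the sum of the digits of a non-negative integer."
+ candidate = "def digit_sum(n):\n    return sum(int(d) for d in str(n))"
+
+ # Assumed verification prompt: ask the model to judge the candidate against the problem.
+ prompt = f"Problem:\n{problem}\n\nCandidate solution:\n{candidate}\n\nIs the candidate correct?"
+ output = verifier([{"role": "user", "content": prompt}], max_new_tokens=512, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```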

+ ## 🎁 Datasets
+ The Aletheia dataset collection is available on HuggingFace and includes:

+ * **Aletheia-Train**: A training set of coding problems, each paired with 2-5 candidate code snippets of which exactly one is correct.
+ * **Aletheia-DPO**: A companion dataset to Aletheia-Train, containing "chosen" and "rejected" responses for each instance. The chosen response identifies the correct code snippet, while the rejected responses do not.
+ * **Aletheia-Heldout**: A fully in-distribution test set.
+ * **Aletheia-Strong**: An OOD test set whose candidates are generated by stronger models.
+ * **Aletheia-Hard**: An OOD test set where the comparison between candidates is more difficult.
+ * **Aletheia-Adv**: An OOD test set whose candidates are adversarially modified to exploit common LLM biases.
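
+ The preference data can be loaded directly, and a rough sketch of Think-DPO-style offline training with TRL's `DPOTrainer` follows. The dataset id comes from this card's metadata, but the split name, column layout, and hyperparameters below are assumptions, not the paper's actual recipe.

+ ```python
+ from datasets import load_dataset
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ from trl import DPOConfig, DPOTrainer
+
+ # Dataset id taken from this card's metadata; the split name is an assumption.
+ dpo = load_dataset("Aletheia-Bench/Aletheia-DPO", split="train")
+ print(dpo[0].keys())  # expect DPO-style prompt / chosen / rejected fields
+
+ # Offline preference optimization in the spirit of Think-DPO (illustrative settings only),
+ # starting from the base model listed in this card's metadata.
+ model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-14B"
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+
+ trainer = DPOTrainer(
+     model=model,
+     args=DPOConfig(output_dir="think-dpo-verifier", beta=0.1),
+     train_dataset=dpo,
+     processing_class=tokenizer,
+ )
+ trainer.train()
+ ```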

+ ## 💡 Intended uses
+ * **RLHF / RLAIF**: a plug-and-play reward function for code-generation policy optimization (see the sketch below).
+ * **Automated evaluation**: an LLM-as-a-judge for a variety of code-related tasks.
+ * **Research**: studying the effects of thinking traces, on-policy learning, and negative samples in training successful code verifiers.
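
+ A minimal sketch of wrapping a verifier as a scalar reward, assuming its verdict can be parsed from the generated text (the checkpoint id and verdict format are assumptions):

+ ```python
+ from transformers import pipeline
+
+ # Hypothetical checkpoint id; the 'correct'/'incorrect' verdict format is an assumption.
+ verifier = pipeline("text-generation", model="Aletheia-Bench/GRPO-Think-7B", device="cuda")
+
+ def code_reward(problem: str, candidate: str) -> float:
+     """Return 1.0 if the verifier judges the candidate correct, else 0.0."""
+     prompt = (
+         f"Problem:\n{problem}\n\nCandidate solution:\n{candidate}\n\n"
+         "Answer 'correct' or 'incorrect'."
+     )
+     out = verifier([{"role": "user", "content": prompt}],
+                    max_new_tokens=512, return_full_text=False)[0]["generated_text"]
+     verdict = out.strip().lower()
+     return 1.0 if "incorrect" not in verdict and "correct" in verdict else 0.0
+ ```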
 
 
 
 
 
 
  ## Citations

  ```bibtex
+ @article{aletheia2025,
+   title={Aletheia: What Makes RLVR For Code Verifiers Tick?},
+   author={Venkatkrishna, Vatsal and Paul, Indraneil and Gurevych, Iryna},
+   journal={arXiv preprint},
+   year={2025}
  }
  ```