Shekswess committed
Commit e3d7855 · verified · Parent: ad78a7b

Update README.md

Files changed (1): README.md (+1 -43)
README.md CHANGED
@@ -14,22 +14,8 @@ licence: license
  This model is a fine-tuned version of [Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8](https://huggingface.co/Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8).
  It has been trained using [TRL](https://github.com/huggingface/trl).
 
- ## Quick start
-
- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="Shekswess/tiny-think-dpo-math-stem-apo_zero-beta0_3-lr3e-6-e1-bs8", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
  ## Training procedure
 
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bokicasheks-loka/tiny_think/runs/7aerg1ht)
-
-
  This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
 
  ### Framework versions
@@ -38,32 +24,4 @@ This model was trained with DPO, a method introduced in [Direct Preference Optim
  - Transformers: 4.57.5
  - Pytorch: 2.9.0+cu128
  - Datasets: 4.5.0
- - Tokenizers: 0.22.2
-
- ## Citations
-
- Cite DPO as:
-
- ```bibtex
- @inproceedings{rafailov2023direct,
-     title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
-     author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
-     year      = 2023,
-     booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
-     url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
-     editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
- }
- ```
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
-     title        = {{TRL: Transformer Reinforcement Learning}},
-     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
-     year         = 2020,
-     journal      = {GitHub repository},
-     publisher    = {GitHub},
-     howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
+ - Tokenizers: 0.22.2
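For context on the "Training procedure" section the README keeps: a minimal sketch of how such a DPO run is typically launched with TRL's `DPOTrainer`. This is not the author's training script; the hyperparameters are only guesses decoded from the model name (`apo_zero-beta0_3-lr3e-6-e1-bs8`), and the one-row preference dataset is a placeholder, neither is confirmed by this commit.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

# SFT checkpoint named in the README as the base of this DPO run.
base = "Shekswess/tiny-think-sft-math-stem-loss-nll-bf16-lr2e-5-e2-bs8"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder preference data; a real run would load a full
# prompt/chosen/rejected dataset instead.
train_dataset = Dataset.from_dict({
    "prompt": ["What is 2 + 2?"],
    "chosen": ["2 + 2 = 4."],
    "rejected": ["2 + 2 = 5."],
})

# Hyperparameters guessed from the model name
# (apo_zero, beta 0.3, lr 3e-6, 1 epoch, batch size 8); unverified.
args = DPOConfig(
    output_dir="tiny-think-dpo",
    loss_type="apo_zero",
    beta=0.3,
    learning_rate=3e-6,
    num_train_epochs=1,
    per_device_train_batch_size=8,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```

With no explicit `ref_model`, `DPOTrainer` uses a frozen copy of the initial model as the reference, which keeps the preference objective anchored to the SFT checkpoint.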
 