Tags: Question Answering · Transformers · Safetensors · English · doge · text-generation · trl · sft · dpo · custom_code
Commit a012470 (verified) by JingzeShi
Parent(s): fc13473

Update README.md

Files changed (1):
  1. README.md +17 -17
README.md CHANGED
@@ -27,9 +27,9 @@ tags:
   <a href="https://discord.gg/P2yYH95N" target="_blank" style="margin: 2px;">
     <img alt="Discord" src="https://img.shields.io/badge/Discord-Small%20Doges-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
   </a>
-  <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
+  <!-- <a href="https://arxiv.org/abs/2412.11834" target="_blank" style="margin: 2px;">
     <img alt="arXiv" src="https://img.shields.io/static/v1?label=arXiv&message=2412.11834&color=B31B1B&logo=arXiv" style="display: inline-block; vertical-align: middle;"/>
-  </a>
+  </a> -->
   <a href="https://github.com/SmallDoges/small-doge" target="_blank" style="margin: 2px;">
     <img alt="GitHub" src="https://img.shields.io/badge/GitHub-SmallDoge-181717?logo=github" style="display: inline-block; vertical-align: middle;"/>
   </a>
@@ -90,16 +90,18 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 **SFT**:
 | Model | Training Data | Epochs | Context Length | LR | Batch Size | Precision |
 |---|---|---|---|---|---|---|
-| [Doge-20M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-20M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
-| [Doge-60M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-60M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
-| [Doge-160M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-160M-Instruct-SFT) | [HuggingFaceTB/smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 4e-4 | 0.25M | bfloat16 |
+| [Doge-20M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-20M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 8e-4 | 0.25M | bfloat16 |
+| [Doge-60M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-60M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 6e-4 | 0.25M | bfloat16 |
+| [Doge-160M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-160M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 4e-4 | 0.25M | bfloat16 |
+| [Doge-320M-Instruct-SFT](https://huggingface.co/SmallDoge/Doge-320M-Instruct-SFT) | [smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smoltalk) | 2 | 2048 | 2e-4 | 0.25M | bfloat16 |
 
 **DPO**:
 | Model | Training Data | Epochs | Context Length | LR | Batch Size | Precision |
 |---|---|---|---|---|---|---|
-| [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
-| [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |
-| [Doge-160M-Instruct](https://huggingface.co/SmallDoge/Doge-160M-Instruct) | [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 4e-5 | 0.125M | bfloat16 |
+| [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 8e-5 | 0.125M | bfloat16 |
+| [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 6e-5 | 0.125M | bfloat16 |
+| [Doge-160M-Instruct](https://huggingface.co/SmallDoge/Doge-160M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 4e-5 | 0.125M | bfloat16 |
+| [Doge-320M-Instruct](https://huggingface.co/SmallDoge/Doge-320M-Instruct) | [ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) | 2 | 1024 | 2e-5 | 0.125M | bfloat16 |
 
 
 **Evaluation**:
@@ -108,8 +110,8 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 |---|---|---|---|---|---|---|---|
 | [Doge-20M-Instruct](https://huggingface.co/SmallDoge/Doge-20M-Instruct) | 7.3 | 26.3 | 18.3 | 29.2 | 57.8 | 27.8 | 142 |
 | [Doge-60M-Instruct](https://huggingface.co/SmallDoge/Doge-60M-Instruct) | 7.4 | 27.5 | 27.7 | 37.5 | 61.4 | 32.1 | 62 |
-| [SmolLM-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct) | 12.2 | 28.3 | 25.2 | 33.9 | 64.0 | 38.9 | 30 |
 | [Doge-160M-Instruct](https://huggingface.co/SmallDoge/Doge-160M-Instruct) | 16.8 | 29.7 | 29.1 | 42.8 | 64.1 | 37.1 | 28 |
+| [Doge-320M-Instruct](https://huggingface.co/SmallDoge/Doge-320M-Instruct) | 28.5 | 30.3 | 31.9 | 51.7 | 71.0 | 50.6 | 16 |
 
 
 **Procedure**:
@@ -130,13 +132,11 @@ We build the Doge-Instruct by first SFT on [SmolTalk](https://huggingface.co/dat
 ## Citation
 
 ```bibtex
-@misc{shi2024wonderfulmatrices,
-      title={Wonderful Matrices: Combining for a More Efficient and Effective Foundation Model Architecture},
-      author={Jingze Shi and Bingheng Wu},
-      year={2024},
-      eprint={2412.11834},
-      archivePrefix={arXiv},
-      primaryClass={cs.LG},
-      url={https://arxiv.org/abs/2412.11834},
+@misc{smalldoges,
+      title={SmallDoges: A Family of Dynamic UltraFast Small Language Models},
+      author={Jingze Shi and Yifan Wu and Bingheng Wu and Yuyu Luo},
+      year={2025},
+      month={March},
+      url={https://github.com/SmallDoges/small-doge}
 }
 ```
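The SFT rows in the hunk above map almost one-to-one onto TRL's `SFTConfig`. Below is a minimal sketch for the Doge-160M row, not the authors' actual script (the real recipes live in the small-doge repo): the base checkpoint name `SmallDoge/Doge-160M`, the smoltalk `all` subset, and the per-device/accumulation split of the 0.25M-token batch are assumptions.

```python
# Hypothetical SFT reproduction sketch for the Doge-160M table row.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "SmallDoge/Doge-160M"  # assumed base checkpoint; Doge uses custom code
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base, trust_remote_code=True)

# The table cites the smoltalk dataset; the "all" subset is an assumption.
dataset = load_dataset("HuggingFaceTB/smoltalk", "all", split="train")

config = SFTConfig(
    output_dir="doge-160m-instruct-sft",
    num_train_epochs=2,              # Epochs column
    learning_rate=4e-4,              # LR column, 160M row
    max_length=2048,                 # Context Length column (max_seq_length in older TRL)
    bf16=True,                       # Precision column
    per_device_train_batch_size=8,   # illustrative split of the 0.25M-token batch
    gradient_accumulation_steps=16,  # the card does not state the actual split
)

trainer = SFTTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` in older TRL releases
)
trainer.train()
```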
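The DPO rows follow the same pattern: each `-Instruct-SFT` checkpoint is preference-tuned on ultrafeedback_binarized via `DPOConfig`/`DPOTrainer`. Again a sketch under stated assumptions for the 160M row; the batch split is illustrative, and `train_prefs` is that dataset's standard preference split.

```python
# Hypothetical DPO sketch: preference-tune the SFT checkpoint from the table.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

sft_ckpt = "SmallDoge/Doge-160M-Instruct-SFT"  # from the SFT table above
tokenizer = AutoTokenizer.from_pretrained(sft_ckpt, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(sft_ckpt, trust_remote_code=True)

# ultrafeedback_binarized ships prompt/chosen/rejected conversations.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")

config = DPOConfig(
    output_dir="doge-160m-instruct",
    num_train_epochs=2,              # Epochs column
    learning_rate=4e-5,              # LR column, 160M row
    max_length=1024,                 # Context Length column
    bf16=True,                       # Precision column
    per_device_train_batch_size=4,   # illustrative split of the 0.125M batch
    gradient_accumulation_steps=16,
)

# With ref_model unset, TRL clones the policy as the frozen DPO reference.
trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```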