We build Doge by pre-training on [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus).
> NOTE: If you want to continue pre-training this model, you can find the unconverged checkpoint [here](https://huggingface.co/JingzeShi/Doge-20M-checkpoint).
> NOTE: These models have not been fine-tuned for instruction following; the instruction-tuned model is [here](https://huggingface.co/JingzeShi/Doge-20M-Instruct).
| Model | MMLU | TriviaQA | ARC-E | ARC-C | PIQA | HellaSwag | OBQA | Winogrande | tokens / s on CPU |
|---|---|---|---|---|---|---|---|---|---|
| [Doge-20M](https://huggingface.co/JingzeShi/Doge-20M) | 25.43 | 0.03 | 36.83 | 22.78 | 58.38 | 27.25 | 25.60 | 50.20 | 142 |
| [Doge-60M](https://huggingface.co/JingzeShi/Doge-60M) | 26.41 | 0.18 | 50.46 | 25.34 | 61.43 | 31.45 | 28.00 | 50.75 | 62 |
> All evaluations are done using five-shot settings, without additional training on the benchmarks.
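Loading these checkpoints should follow the standard `transformers` pattern. The sketch below is an assumption, not verbatim from this README: it presumes the repositories ship custom modeling code on the Hub (hence `trust_remote_code=True`) and uses an arbitrary prompt — check the model card for the exact, supported usage.

```python
# Minimal inference sketch for Doge-20M (assumed usage; the custom Doge
# architecture likely requires trust_remote_code=True to load its modeling code).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "JingzeShi/Doge-20M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Tokenize an arbitrary prompt and generate a short continuation.
inputs = tokenizer("Hey, how are you doing today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern applies to `JingzeShi/Doge-60M` and, for chat-style use, the instruction-tuned `JingzeShi/Doge-20M-Instruct` checkpoint.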