Update README.md
README.md CHANGED

@@ -68,7 +68,7 @@ For local inference, you can use `llama.cpp`, `ONNX`, `MLX` and `MLC`. You can f
 
 ## Evaluation
 
-In this section, we report the evaluation results of SmolLM3
+In this section, we report the evaluation results of the SmolLM3 model. All evaluations are zero-shot unless stated otherwise, and we use [lighteval](https://github.com/huggingface/lighteval) to run them. For the Ruler 64k evaluation, we apply YaRN to the Qwen models with 32k context to extrapolate the context length.
 
 We highlight the best score in bold and underline the second-best score.
 
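The added paragraph notes that YaRN is applied to the 32k-context Qwen models so they can be scored on Ruler at 64k. A minimal sketch of the corresponding scaling arithmetic, assuming the Hugging Face `transformers`-style `rope_scaling` keys (the key names are a convention from that library, not taken from this commit):

```python
# Sketch: YaRN settings to stretch a 32k native context to the 64k
# required by Ruler 64k. Assumption: the transformers-style
# rope_scaling dict convention ("rope_type", "factor",
# "original_max_position_embeddings").
original_ctx = 32_768            # Qwen's native context window
target_ctx = 64 * 1024           # 64k context needed for Ruler 64k

yarn_config = {
    "rope_type": "yarn",
    "factor": target_ctx / original_ctx,  # 2.0: extrapolation ratio
    "original_max_position_embeddings": original_ctx,
}
print(yarn_config["factor"])
```

This dict would typically be set on the model config (together with an enlarged `max_position_embeddings`) before loading the checkpoint; the exact plumbing depends on the model and library version.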