loubnabnl (HF Staff) committed on
Commit 3f70f57 · verified · 1 Parent(s): 598d619

Update README.md

Files changed (1): README.md (+5 −4)
README.md CHANGED
@@ -17,10 +17,11 @@ This is the home for smol models (SmolLM & SmolVLM) and high quality pre-trainin
 - [Stack-Edu](https://huggingface.co/datasets/HuggingFaceTB/stack-edu): the best open code pretraining dataset with educational code in 15 programming languages.
 - [SmolLM2 models](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M and 1.7B
 - [SmolVLM2](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct): a family of small **Video and Vision** models in three sizes: 2.2B, 500M and 256M. Blog post available [here](https://huggingface.co/blog/smolvlm2).
+- [SmolLM3](https://huggingface.co/HuggingFaceTB/SmolLM3-3B): SOTA 3B model with dual **reasoning**, supports 6 languages and long context with strong function calling. SmolLM3 Engineering Blueprint available [here](https://huggingface.co/datasets/HuggingFaceTB/smollm3-blueprint/blob/main/smollm3-blueprint.pdf)
 
-**News 🗞️**
-- **SmolLM3**: SOTA 3B model with dual reasoning, supports 6 languages and long context with strong function calling: [HuggingFaceTB/SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B)
-- SmolLM3 Engineering Blueprint available [here](https://huggingface.co/datasets/HuggingFaceTB/smollm3-blueprint/blob/main/smollm3-blueprint.pdf).
 <div align="center">
 <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/UXSo9zzL7PFvrLCAQfcnz.png" width="700"/>
-</div>
+</div>
+
+**News 🗞️**
+- **The Smol Training Playbook**: a comprehensive guide to training world-class LLMs [HuggingFaceTB/smol-training-playbook](https://huggingface.co/spaces/HuggingFaceTB/smol-training-playbook)