eliebak (HF Staff) committed · verified
Commit: d37c9b9 · 1 Parent(s): b6b408d

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -35,7 +35,7 @@ SmolLM3 is a 3B parameter language model designed to push the boundaries of smal
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/Zcm_016pWeyFr_uIkT7Ki.png)
 
-The model is a decoder-only transformer using GQA and NoRope, it was pretrained on 11.2T tokens with a staged curriculum of web, code, math and reasoning data. Post-training included midtraining on 140B reasoning tokens followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).
+The model is a decoder-only transformer using GQA and NoPE; it was pretrained on 11.2T tokens with a staged curriculum of web, code, math and reasoning data. Post-training included midtraining on 140B reasoning tokens followed by supervised fine-tuning and alignment via Anchored Preference Optimization (APO).
 
 ### Key features
 - Instruct model optimized for **hybrid reasoning**
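For context on the two architectural terms in the corrected line: GQA (grouped-query attention) lets several query heads share one key/value head, and a NoPE layer simply applies attention without any positional encoding (no RoPE applied to queries/keys). The sketch below is a minimal pure-Python illustration of both ideas together; the head counts and dimensions are toy values chosen for clarity, not SmolLM3's actual configuration.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gqa_attention(q_heads, k_heads, v_heads):
    """Grouped-query attention with no positional encoding (NoPE-style).

    q_heads: [n_q][seq][d], k_heads/v_heads: [n_kv][seq][d], n_q % n_kv == 0.
    Each group of n_q // n_kv query heads shares one K/V head. Because no
    rotary embedding is applied to q/k, this is what a NoPE layer computes.
    Shapes and layout are illustrative only.
    """
    n_q, n_kv = len(q_heads), len(k_heads)
    assert n_q % n_kv == 0
    group = n_q // n_kv
    d = len(q_heads[0][0])
    out = []
    for h, q in enumerate(q_heads):
        # Query head h reads the shared K/V head for its group.
        k, v = k_heads[h // group], v_heads[h // group]
        head_out = []
        for qi in q:
            scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
            w = softmax(scores)
            head_out.append([sum(wj * vj[t] for wj, vj in zip(w, v)) for t in range(d)])
        out.append(head_out)
    return out
```

With 4 query heads over 2 K/V heads, heads 0–1 attend through K/V head 0 and heads 2–3 through K/V head 1, which is the memory saving GQA provides over full multi-head attention.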