loubnabnl (HF Staff) committed on
Commit 27b5dbe · verified · 1 Parent(s): dee4608

Update README.md

Files changed (1): README.md (+7 −8)
README.md CHANGED
@@ -23,8 +23,8 @@ language:
 
 1. [Model Summary](#model-summary)
 2. [Evaluation](#evaluation)
-3. [Limitations](#limitations)
-4. [Training](#training)
+3. [Training](#training)
+4. [Limitations](#limitations)
 5. [License](#license)
 
 ## Model Summary
@@ -161,13 +161,8 @@ Evaluation results in reasoning mode for SmolLM3 and Qwen3 models:
 | Graduate-level reasoning | GPQA Diamond | **41.7** | 39.9 | **55.3** |
 | Instruction following | IFEval | 71.2 | **74.2** | **85.4** |
 | Alignment | MixEval Hard | 30.8 | **33.9** | **38.0** |
-| Alignment | AlpacaEval | **53.7** | 43.4 | **61.6** |
 | Multilingual Q&A | Global MMLU | **64.1** | 62.3 | **73.3** |
 
-## Limitations
-
-SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
-
 ## Training
 
 ### Model
@@ -183,12 +178,16 @@ SmolLM3 can produce text on a variety of topics, but the generated content may n
 - **Data processing framework:** [datatrove](https://github.com/huggingface/datatrove)
 - **Evaluation framework:** [lighteval](https://github.com/huggingface/lighteval)
 - **Postraining Framework:** [TRL](https://github.com/huggingface/trl)
-
+
 ### Open resources
 Here is an infographic with all the training details [TODO].
 - The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9) and those used in mid-training and pos-training can be found here [TODO]
 - The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.
 
+## Limitations
+
+SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
+
 ## License
 [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)