Update README.md
README.md
CHANGED
@@ -188,6 +188,38 @@ Here is an infographic with all the training details [TODO].
- The datasets used for pretraining can be found in this [collection](https://huggingface.co/collections/HuggingFaceTB/smollm3-pretraining-datasets-685a7353fdc01aecde51b1d9) and those used in mid-training and post-training can be found here [TODO]
- The training and evaluation configs and code can be found in the [huggingface/smollm](https://github.com/huggingface/smollm) repository.

## Agentic Usage

SmolLM3 supports tool calling! Just pass your list of tools under the `xml_tools` argument (for standard tool calling) or `python_tools` (for tools the model calls as Python functions inside a `<code>` snippet).
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM3-3B"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

messages = [
    {
        "role": "user",
        "content": "Hello! How is the weather today in Copenhagen?"
    }
]

# The chat template injects the tool schemas into the prompt so the model
# can emit a structured call to get_weather.
inputs = tokenizer.apply_chat_template(
    messages,
    enable_thinking=False,
    xml_tools=[{"name": "get_weather", "description": "Get the weather in a city", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "The city to get the weather for"}}}}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt"
)

# Give generation an explicit token budget; the library default can cut the tool call short.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
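The `python_tools` path should follow the same pattern with only the keyword argument changed. The sketch below continues from the snippet above and is an assumption-laden variant: it presumes `python_tools` accepts the same JSON-schema tool descriptions as `xml_tools`, and `get_time` is a hypothetical tool used only for illustration.

```python
# Continuing from the snippet above (tokenizer and model already loaded).
# Assumption: python_tools takes the same JSON-schema descriptions as xml_tools;
# get_time is a hypothetical tool, shown only for illustration.
messages = [{"role": "user", "content": "What time is it in Copenhagen?"}]

inputs = tokenizer.apply_chat_template(
    messages,
    enable_thinking=False,
    python_tools=[{"name": "get_time", "description": "Get the current time in a city", "parameters": {"type": "object", "properties": {"city": {"type": "string", "description": "The city to get the time for"}}}}],
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=256)
# With python_tools, the model is expected to call the tool as a Python
# function inside a <code> snippet rather than as an XML-style tool call.
print(tokenizer.decode(outputs[0]))
```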
## Limitations
SmolLM3 can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.