Instructions for using QuantFactory/SmolLM2-360M-GGUF with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use QuantFactory/SmolLM2-360M-GGUF with Transformers:
# Load model directly
from transformers import AutoModel

# GGUF repos need an explicit gguf_file; the Q2_K file name is taken from the llama-cpp-python example below
model = AutoModel.from_pretrained(
    "QuantFactory/SmolLM2-360M-GGUF",
    gguf_file="SmolLM2-360M.Q2_K.gguf",
    dtype="auto",
)
- llama-cpp-python
How to use QuantFactory/SmolLM2-360M-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/SmolLM2-360M-GGUF",
    filename="SmolLM2-360M.Q2_K.gguf",
)
output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
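If you prefer chat-style prompts, a minimal follow-up sketch using llama-cpp-python's high-level chat API (the message content here is only an example) looks like this:

# Chat-style inference with the same Llama object; formatting follows the chat template embedded in the GGUF
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Write a one-sentence bedtime story."}
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])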
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/SmolLM2-360M-GGUF with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
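Once llama-server is running (via any of the install paths above), you can query its OpenAI-compatible endpoint from Python. A minimal sketch, assuming the default port 8080 and the requests library (adjust the port if you started the server with --port):

# Query the local llama-server OpenAI-compatible API (assumes default port 8080)
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "model": "QuantFactory/SmolLM2-360M-GGUF",  # llama-server serves the single loaded model
        "messages": [{"role": "user", "content": "Say hello in one sentence."}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])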
- LM Studio
- Jan
- Ollama
How to use QuantFactory/SmolLM2-360M-GGUF with Ollama:
ollama run hf.co/QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
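After the ollama run command above has pulled the model, you can also call the local Ollama server from Python. A minimal sketch, assuming Ollama's default port 11434 and the requests library:

# Call the local Ollama API (assumes the default port 11434 and that the model has been pulled)
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/QuantFactory/SmolLM2-360M-GGUF:Q4_K_M",
        "prompt": "Once upon a time,",
        "stream": False,
    },
)
print(resp.json()["response"])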
- Unsloth Studio
How to use QuantFactory/SmolLM2-360M-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/SmolLM2-360M-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/SmolLM2-360M-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/SmolLM2-360M-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/SmolLM2-360M-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/SmolLM2-360M-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/SmolLM2-360M-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.SmolLM2-360M-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/SmolLM2-360M-GGUF
This is a quantized version of HuggingFaceTB/SmolLM2-360M, created using llama.cpp.
Original Model Card
SmolLM2
Model Summary
SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They are capable of solving a wide range of tasks while being lightweight enough to run on-device.
SmolLM2 demonstrates significant advances over its predecessor SmolLM1, particularly in instruction following, knowledge, and reasoning. The 360M model was trained on 4 trillion tokens using a diverse dataset combination: FineWeb-Edu, DCLM, and The Stack, along with new filtered datasets we curated and will release soon. We developed the instruct version through supervised fine-tuning (SFT) using a combination of public datasets and our own curated datasets. We then applied Direct Preference Optimization (DPO) using UltraFeedback.
The instruct model additionally supports tasks such as text rewriting, summarization and function calling thanks to datasets developed by Argilla such as Synth-APIGen-v0.1.
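For the instruct variant, a minimal chat-style sketch (assuming the HuggingFaceTB/SmolLM2-360M-Instruct checkpoint and the chat template shipped with its tokenizer) looks like this:

# Chat with the instruct variant via its tokenizer's chat template (assumes a CUDA device; use "cpu" otherwise)
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "HuggingFaceTB/SmolLM2-360M-Instruct"
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is gravity?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)
outputs = model.generate(input_ids, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))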
How to use
pip install transformers
Running the model on CPU/GPU/multi GPU
- Using full precision
# pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "HuggingFaceTB/SmolLM2-360M"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
- Using torch.bfloat16
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
checkpoint = "HuggingFaceTB/SmolLM2-360M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for fp16 use `torch_dtype=torch.float16` instead
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16)
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to("cuda")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
>>> print(f"Memory footprint: {model.get_memory_footprint() / 1e6:.2f} MB")
Memory footprint: 723.56 MB
Evaluation
In this section, we report the evaluation results of SmolLM2. All evaluations are zero-shot unless stated otherwise, and we use lighteval to run them.
Base Pre-Trained Model
| Metrics | SmolLM2-360M | Qwen2.5-0.5B | SmolLM-360M |
|---|---|---|---|
| HellaSwag | 54.5 | 51.2 | 51.8 |
| ARC (Average) | 53.0 | 45.4 | 50.1 |
| PIQA | 71.7 | 69.9 | 71.6 |
| MMLU (cloze) | 35.8 | 33.7 | 34.4 |
| CommonsenseQA | 38.0 | 31.6 | 35.3 |
| TriviaQA | 16.9 | 4.3 | 9.1 |
| Winogrande | 52.5 | 54.1 | 52.8 |
| OpenBookQA | 37.4 | 37.4 | 37.2 |
| GSM8K (5-shot) | 3.2 | 33.4 | 1.6 |
Instruction Model
| Metric | SmolLM2-360M-Instruct | Qwen2.5-0.5B-Instruct | SmolLM-360M-Instruct |
|---|---|---|---|
| IFEval (Average prompt/inst) | 41.0 | 31.6 | 19.8 |
| MT-Bench | 3.66 | 4.16 | 3.37 |
| HellaSwag | 52.1 | 48.0 | 47.9 |
| ARC (Average) | 43.7 | 37.3 | 38.8 |
| PIQA | 70.8 | 67.2 | 69.4 |
| MMLU (cloze) | 32.8 | 31.7 | 30.6 |
| BBH (3-shot) | 27.3 | 30.7 | 24.4 |
| GSM8K (5-shot) | 7.43 | 26.8 | 1.36 |
Limitations
SmolLM2 models primarily understand and generate content in English. They can produce text on a variety of topics, but the generated content may not always be factually accurate, logically consistent, or free from biases present in the training data. These models should be used as assistive tools rather than definitive sources of information. Users should always verify important information and critically evaluate any generated content.
Training
Model
- Architecture: Transformer decoder
- Pretraining tokens: 4T
- Precision: bfloat16
Hardware
- GPUs: 64 H100
Software
- Training Framework: nanotron
License
Apache 2.0
Citation
@misc{allal2024SmolLM2,
title={SmolLM2 - with great data, comes great performance},
author={Loubna Ben Allal and Anton Lozhkov and Elie Bakouch and Gabriel Martín Blázquez and Lewis Tunstall and Agustín Piqueres and Andres Marafioti and Cyril Zakka and Leandro von Werra and Thomas Wolf},
year={2024},
}
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit