Paper: SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model (arXiv:2502.02737)
LightweightLLM-135M is a compact language model designed for on-device use and low-resource environments. Derived from the SmolLM2 family, it offers strong instruction-following and reasoning capabilities at only 135M parameters.
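To get started, install the transformers library: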
pip install transformers
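The following snippet loads the checkpoint and generates a short completion; set device to "cuda" for GPU or "cpu" for CPU: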
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "4lph4v3rs3/lightweightl-LLM-135M"
device = "cuda"  # or "cpu"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

# Encode a prompt and generate a short continuation
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
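For lower memory usage, the model can be loaded in bfloat16 with automatic device placement (device_map="auto" requires the accelerate package):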
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "4lph4v3rs3/lightweightl-LLM-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Load in bfloat16 and let Accelerate place the weights automatically
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Send inputs to the device the model was placed on
inputs = tokenizer.encode("Gravity is", return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
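Since the model targets instruction following, a chat-style prompt may work better than a raw completion prompt. Below is a minimal sketch that assumes the tokenizer ships a chat template, as SmolLM2 instruct checkpoints do; if this checkpoint defines none, fall back to the plain prompts above:

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "4lph4v3rs3/lightweightl-LLM-135M"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Assumes the tokenizer defines a chat template; this call raises if it does not
messages = [{"role": "user", "content": "Explain gravity in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))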
LightweightLLM-135M has been benchmarked on common NLP tasks using zero-shot evaluation, showing strong performance for a 135M-parameter model in instruction following, commonsense reasoning, and text generation.
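As an illustration, a comparable zero-shot run can be set up with a harness such as EleutherAI's lm-evaluation-harness. This sketch assumes lm-eval is installed (pip install lm-eval) and uses hellaswag as an example task, not necessarily a task from the original evaluation:

import lm_eval

# Zero-shot evaluation of the checkpoint on an example task
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=4lph4v3rs3/lightweightl-LLM-135M",
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"])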
As with any small language model, outputs can be inaccurate or incomplete; use them as an assistive tool and verify important information independently.
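If you use this model, please cite the SmolLM2 paper: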
@misc{allal2025smollm2smolgoesbig,
title={SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model},
author={Loubna Ben Allal and others},
year={2025},
eprint={2502.02737},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2502.02737}
}