---
license: apache-2.0
base_model:
- HuggingFaceTB/SmolLM3-3B
pipeline_tag: text-generation
library_name: transformers
---
# SmolLM3-3B • Quantized

## 🚀 Model Description
This is an int8 quantized version of [SmolLM3-3B](https://huggingface.co/HuggingFaceTB/SmolLM3-3B), an efficient open-source 3B-parameter LLM. It retains near state-of-the-art multilingual reasoning and long-context performance (up to 128k tokens) while substantially reducing memory usage and inference cost, enabling deployment on mid-range GPUs and edge devices.
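As a back-of-envelope check on the memory claim, the weight footprint alone can be estimated directly (a sketch assuming exactly 3e9 parameters and counting weights only; activations and the KV cache add overhead on top):

```python
# Rough weight-memory estimate for a ~3B-parameter model.
# Illustrative assumptions: exactly 3e9 parameters, 1 GB = 1e9 bytes.
params = 3e9
bf16_gb = params * 2 / 1e9   # 2 bytes per bf16 weight
int8_gb = params * 1 / 1e9   # 1 byte per int8 weight
savings = 1 - int8_gb / bf16_gb
print(f"bf16: {bf16_gb:.1f} GB, int8: {int8_gb:.1f} GB, savings: {savings:.0%}")
# → bf16: 6.0 GB, int8: 3.0 GB, savings: 50%
```

Quantizing activations as well reduces runtime memory further, which is where the upper end of the savings range comes from.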
## 📏 Quantization Details
- Library: torchao
- Precision: int8 weights and activations
- Benefits: roughly 50–75% lower VRAM usage than full precision, allowing the model to fit on 12–16 GB GPUs, with minimal quality loss on reasoning, coding, and long-context tasks
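For intuition, symmetric per-tensor int8 quantization (the general scheme behind int8 weight quantization; this standalone sketch is illustrative and is not the torchao API) maps each value to an 8-bit integer via a single scale factor:

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: scale = max|x| / 127."""
    scale = max(abs(v) for v in values) / 127
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate real values from int8 codes."""
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.003, 0.9, -0.5]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
max_err = max(abs(a - w) for a, w in zip(approx, weights))
# Rounding error is bounded by half a quantization step (scale / 2).
assert max_err <= scale / 2 + 1e-9
```

In practice torchao applies this kind of transform per tensor (or per channel) to the model's linear layers, storing int8 codes plus scales instead of 16-bit floats.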
## 🎯 Intended Use
Ideal for:
- Scenarios requiring fast LLM inference under constrained VRAM (e.g. small servers or laptops)
- Multilingual reasoning tasks, chain-of-thought logic, and long-context document understanding
- Deployments of dual-mode (think/no_think) conversational agents
- Research into efficient LLM deployment and quantization techniques
## ⚠️ Limitations
- Slight performance loss compared to full-precision SmolLM3‑3B
- Performance should be benchmarked in your target environment before production use
- Continues to exhibit standard LLM risks: hallucination, bias, etc.
- Quantized-model quality may vary across languages and context lengths