## About This Model
This model is HuggingFaceTB/SmolLM2-135M converted to GGUF format for use with llama.cpp, Ollama, LM Studio, and other compatible inference engines.
| Property | Value |
|---|---|
| Base Model | HuggingFaceTB/SmolLM2-135M |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Created With | QuantLLM |
## Quick Start

### Option 1: Python (llama-cpp-python)
```python
from llama_cpp import Llama

# Load the model
llm = Llama.from_pretrained(
    repo_id="codewithdark/SmolLM2-135M-GGUF",
    filename="SmolLM2-135M-GGUF.Q4_K_M.gguf",
)

# Generate text
output = llm(
    "Write a short story about a robot learning to paint:",
    max_tokens=256,
    echo=True
)
print(output["choices"][0]["text"])
```
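`Llama.from_pretrained` forwards extra keyword arguments to the `Llama` constructor, so you can tune how the model is loaded. A minimal sketch; the values below are illustrative, not required:

```python
# Optional: illustrative loading options (adjust to your hardware)
llm = Llama.from_pretrained(
    repo_id="codewithdark/SmolLM2-135M-GGUF",
    filename="SmolLM2-135M-GGUF.Q4_K_M.gguf",
    n_ctx=2048,       # context window size
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is installed
)
```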
### Option 2: Ollama
```bash
# Download the model
huggingface-cli download codewithdark/SmolLM2-135M-GGUF SmolLM2-135M-GGUF.Q4_K_M.gguf --local-dir .

# Create Modelfile
echo 'FROM ./SmolLM2-135M-GGUF.Q4_K_M.gguf' > Modelfile

# Import to Ollama
ollama create smollm2-135m-gguf -f Modelfile

# Chat with the model
ollama run smollm2-135m-gguf
```
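The Modelfile can also carry default generation settings for the imported model. A minimal sketch; the parameter values are illustrative:

```bash
# Optional: Modelfile with default generation settings (illustrative values)
cat > Modelfile <<'EOF'
FROM ./SmolLM2-135M-GGUF.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 2048
EOF

ollama create smollm2-135m-gguf -f Modelfile
```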
### Option 3: LM Studio
- Download the `.gguf` file from the Files tab above
- Open LM Studio → My Models → Add Model
- Select the downloaded file
- Start chatting!
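Besides the chat UI, LM Studio can expose the loaded model over an OpenAI-compatible local server. A minimal sketch, assuming the local server is enabled, the model is already loaded, and the server listens on LM Studio's default port 1234:

```python
# Query LM Studio's OpenAI-compatible local server (assumes default port 1234
# and that the model is already loaded in LM Studio)
import requests

resp = requests.post(
    "http://localhost:1234/v1/completions",
    json={
        "prompt": "Write a short story about a robot learning to paint:",
        "max_tokens": 256,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["text"])
```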
### Option 4: llama.cpp CLI
```bash
# Download
huggingface-cli download codewithdark/SmolLM2-135M-GGUF SmolLM2-135M-GGUF.Q4_K_M.gguf --local-dir .

# Run inference
./llama-cli -m SmolLM2-135M-GGUF.Q4_K_M.gguf -p "Hello! " -n 128
```
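llama.cpp also ships `llama-server`, which serves the model over HTTP instead of running a one-off prompt. A minimal sketch; the port is illustrative:

```bash
# Serve the model over HTTP with llama.cpp's built-in server (port is illustrative)
./llama-server -m SmolLM2-135M-GGUF.Q4_K_M.gguf --port 8080
```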
## Model Details
| Property | Value |
|---|---|
| Original Model | HuggingFaceTB/SmolLM2-135M |
| Format | GGUF |
| Quantization | Q4_K_M |
| License | apache-2.0 |
| Export Date | 2026-04-29 |
| Exported By | QuantLLM v2.1 |
## Quantization Details
This model uses Q4_K_M quantization:
| Property | Value |
|---|---|
| Type | Q4_K_M |
| Bits | 4-bit |
| Quality | 🟢 Recommended - Best quality/size balance |
### All Available GGUF Quantizations
| Type | Bits | Quality | Best For |
|---|---|---|---|
| Q2_K | 2-bit | 🔴 Lowest | Extreme size constraints |
| Q3_K_M | 3-bit | 🟠 Low | Very limited memory |
| Q4_K_M | 4-bit | 🟢 Good | Most users (recommended) |
| Q5_K_M | 5-bit | 🟢 High | Quality-focused |
| Q6_K | 6-bit | 🔵 Very High | Near-original |
| Q8_0 | 8-bit | 🔵 Excellent | Maximum quality |
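If a different size/quality trade-off suits your hardware, other quantizations can be downloaded the same way as in the Quick Start. The filename below is taken from the benchmark table further down, so confirm the exact filename against the repo's Files tab first:

```bash
# Example: fetch the higher-quality Q8_0 file instead (verify the filename in the Files tab)
huggingface-cli download codewithdark/SmolLM2-135M-GGUF SmolLM2-135M.Q8_0.gguf --local-dir .
```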
## Created with QuantLLM
Convert any model to GGUF, ONNX, or MLX in one line!
```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("HuggingFaceTB/SmolLM2-135M")

# Export to any format
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```
Documentation · Report Issue · Request Feature
## Benchmark Results (QuantLLM v2.1)
Exported with QuantLLM from HuggingFaceTB/SmolLM2-135M (134.5M params).
| Quantization | File | Size | Compression vs FP32 |
|---|---|---|---|
| Q2_K | SmolLM2-135M.Q2_K.gguf | 84.1 MB | 6.1x |
| Q4_K_M (recommended) | SmolLM2-135M.Q4_K_M.gguf | 100.6 MB | 5.1x |
| Q8_0 | SmolLM2-135M.Q8_0.gguf | 138.1 MB | 3.7x |
FP32 baseline: 541.6 MB (SafeTensors)