---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
language:
- en
tags:
- quantllm
- transformers
- safetensors
pipeline_tag: text-generation
---

<div align="center">

# 🤗 SmolLM2-135M-QuantLLM

**HuggingFaceTB/SmolLM2-135M** converted to **SafeTensors** format

<a href="https://github.com/codewithdark-git/QuantLLM">⭐ Star QuantLLM on GitHub</a>

</div>

---

## 📋 About This Model

This model is **[HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M)** converted to the **SafeTensors** format for use with Hugging Face Transformers and PyTorch.

| Property | Value |
|----------|-------|
| **Base Model** | [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) |
| **Format** | SafeTensors |
| **Quantization** | None (Full Precision) |
| **License** | apache-2.0 |
| **Created With** | [QuantLLM](https://github.com/codewithdark-git/QuantLLM) |

## 🚀 Quick Start

### With Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")
tokenizer = AutoTokenizer.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")

# Generate text
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### With QuantLLM

```python
from quantllm import TurboModel

# Load with automatic optimization
model = TurboModel.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")

# Generate
response = model.generate("Write a poem about coding")
print(response)
```

### Requirements

```bash
pip install transformers torch

# Optional: only needed for the QuantLLM loader shown above
# (assumes the package is published on PyPI as "quantllm")
pip install quantllm
```

## 📊 Model Details

| Property | Value |
|----------|-------|
| **Original Model** | [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) |
| **Format** | SafeTensors |
| **Quantization** | Full Precision |
| **License** | `apache-2.0` |
| **Export Date** | 2026-04-29 |
| **Exported By** | [QuantLLM v2.1](https://github.com/codewithdark-git/QuantLLM) |

---

## ⚡ Created with QuantLLM

<div align="center">

**Convert any model to GGUF, ONNX, or MLX in one line!**

```python
from quantllm import turbo

# Load any HuggingFace model
model = turbo("HuggingFaceTB/SmolLM2-135M")

# Export to a quantized GGUF (Q4_K_M is a GGUF quantization preset)
model.export("gguf", quantization="Q4_K_M")

# Push to HuggingFace
model.push("your-repo", format="gguf")
```

<a href="https://github.com/codewithdark-git/QuantLLM">
  <img src="https://img.shields.io/github/stars/codewithdark-git/QuantLLM?style=social" alt="GitHub Stars">
</a>

**[📖 Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** ·
**[🐛 Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** ·
**[💡 Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**

</div>

## 📦 Export Details

Exported with [QuantLLM](https://github.com/codewithdark-git/QuantLLM) from `HuggingFaceTB/SmolLM2-135M` (134.5M params).

| Property | Value |
|----------|-------|
| **Format** | SafeTensors |
| **Size** | 541.6 MB |
| **Parameters** | 134.5M |
| **Dtype** | float32 |

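As a sanity check, the reported file size follows from the parameter count and dtype in the table: 134.5M float32 parameters occupy roughly 538 MB of raw weight data, in line with the 541.6 MB file (the small gap is plausibly rounding of the parameter count plus SafeTensors header metadata). A quick back-of-the-envelope sketch:

```python
# Estimate weight storage: parameters x bytes per parameter (figures from the table above).
params = 134_500_000      # 134.5M parameters
bytes_per_param = 4       # float32 = 4 bytes per weight

size_mb = params * bytes_per_param / 1e6
print(f"Estimated size: {size_mb:.1f} MB")  # ~538 MB of raw weights
```

Halving `bytes_per_param` to 2 gives the approximate in-memory footprint (~269 MB) you would expect when loading the checkpoint in float16 or bfloat16.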