---
license: apache-2.0
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
language:
- en
tags:
- quantllm
- transformers
- safetensors
pipeline_tag: text-generation
---
<div align="center">
# πŸ€— SmolLM2-135M-QuantLLM
**HuggingFaceTB/SmolLM2-135M** converted to **SAFETENSORS** format
[![QuantLLM](https://img.shields.io/badge/πŸš€_Made_with-QuantLLM-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)
[![Format](https://img.shields.io/badge/Format-SAFETENSORS-blue?style=for-the-badge)]()
<a href="https://github.com/codewithdark-git/QuantLLM">⭐ Star QuantLLM on GitHub</a>
</div>
---
## πŸ“– About This Model
This model is **[HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M)** converted to **SafeTensors** format for use with HuggingFace Transformers and PyTorch.
| Property | Value |
|----------|-------|
| **Base Model** | [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) |
| **Format** | SAFETENSORS |
| **Quantization** | None (Full Precision) |
| **License** | apache-2.0 |
| **Created With** | [QuantLLM](https://github.com/codewithdark-git/QuantLLM) |
## πŸš€ Quick Start
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")
tokenizer = AutoTokenizer.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")
# Generate text
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### With QuantLLM
```python
from quantllm import TurboModel
# Load with automatic optimization
model = TurboModel.from_pretrained("codewithdark/SmolLM2-135M-QuantLLM")
# Generate
response = model.generate("Write a poem about coding")
print(response)
```
### Requirements
```bash
pip install transformers torch
```
The QuantLLM examples additionally require the [QuantLLM](https://github.com/codewithdark-git/QuantLLM) package; see its repository for installation instructions.
## πŸ“Š Model Details
| Property | Value |
|----------|-------|
| **Original Model** | [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M) |
| **Format** | SAFETENSORS |
| **Quantization** | Full Precision |
| **License** | `apache-2.0` |
| **Export Date** | 2026-04-29 |
| **Exported By** | [QuantLLM v2.1](https://github.com/codewithdark-git/QuantLLM) |
---
## πŸš€ Created with QuantLLM
<div align="center">
[![QuantLLM](https://img.shields.io/badge/πŸš€_QuantLLM-Ultra--fast_LLM_Quantization-orange?style=for-the-badge)](https://github.com/codewithdark-git/QuantLLM)
**Convert any model to GGUF, ONNX, or MLX in one line!**
```python
from quantllm import turbo
# Load any HuggingFace model
model = turbo("HuggingFaceTB/SmolLM2-135M")
# Export to any format (Q4_K_M is a GGUF quantization preset)
model.export("gguf", quantization="Q4_K_M")
# Push to HuggingFace
model.push("your-repo", format="gguf")
```
<a href="https://github.com/codewithdark-git/QuantLLM">
<img src="https://img.shields.io/github/stars/codewithdark-git/QuantLLM?style=social" alt="GitHub Stars">
</a>
**[πŸ“š Documentation](https://github.com/codewithdark-git/QuantLLM#readme)** Β·
**[πŸ› Report Issue](https://github.com/codewithdark-git/QuantLLM/issues)** Β·
**[πŸ’‘ Request Feature](https://github.com/codewithdark-git/QuantLLM/issues)**
</div>
## πŸ“Š Export Details
Exported with [QuantLLM](https://github.com/codewithdark-git/QuantLLM) from `HuggingFaceTB/SmolLM2-135M` (134.5M params).
| Property | Value |
|----------|-------|
| **Format** | SafeTensors |
| **Size** | 541.6 MB |
| **Parameters** | 134.5M |
| **Dtype** | float32 |
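The listed size is consistent with the parameter count and dtype: float32 weights take 4 bytes per parameter, so 134.5M parameters come to roughly 538 MB. The back-of-the-envelope check can be sketched as:

```python
# Rough size estimate for a full-precision (float32) export:
# each parameter occupies 4 bytes.
params = 134_500_000               # ~134.5M parameters
bytes_per_param = 4                # float32
size_mb = params * bytes_per_param / 1e6  # decimal megabytes

print(f"{size_mb:.1f} MB")  # 538.0 MB
```

This is close to the 541.6 MB listed above; the small difference presumably comes from rounding of the parameter count and file metadata.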
### How to use