# Qwen2.5-0.5B - BitsAndBytes Quantized Model

This is a 4-bit BitsAndBytes quantization of Qwen/Qwen2.5-0.5B for use with Hugging Face Transformers.
## Model Details
- Base Model: Qwen/Qwen2.5-0.5B
- Quantization: BitsAndBytes 4-bit
- Framework: transformers (CUDA)
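As rough arithmetic (ignoring quantization constants, unquantized layers, and activation memory), storing weights at 4 bits takes a quarter of the fp16 footprint. A small illustrative calculation for a 0.5B-parameter model:

```python
# Illustrative weight-memory estimate only; real footprints also include
# activations, the KV cache, and per-block quantization metadata.
PARAMS = 0.5e9  # nominal parameter count for a 0.5B model


def weight_gb(bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return PARAMS * bits_per_param / 8 / 1e9


fp16_gb = weight_gb(16)  # full half-precision weights: ~1.0 GB
int4_gb = weight_gb(4)   # 4-bit quantized weights:     ~0.25 GB
print(f"fp16: {fp16_gb:.2f} GB, 4-bit: {int4_gb:.2f} GB")
```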
## Usage with transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "BondingAI/Qwen2.5-0.5B-bnb-4bit",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("BondingAI/Qwen2.5-0.5B-bnb-4bit")
```
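Once the model and tokenizer are loaded as above, text can be generated with the standard `generate` API. A minimal sketch (assumes a CUDA-capable GPU and the `bitsandbytes` package installed; the prompt and sampling settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForCausalLM.from_pretrained(
    "BondingAI/Qwen2.5-0.5B-bnb-4bit",
    quantization_config=bnb_config,
    device_map="auto",  # places the quantized weights on the available GPU
)
tokenizer = AutoTokenizer.from_pretrained("BondingAI/Qwen2.5-0.5B-bnb-4bit")

# Tokenize a prompt and move the input tensors to the model's device.
prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding for a short continuation; adjust max_new_tokens as needed.
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```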
## License

Please refer to the original Qwen/Qwen2.5-0.5B model card for licensing information.