# isfs/Qwen3.5-2B-Base-int4
This is a 4-bit quantized version of Qwen/Qwen3.5-2B-Base.

The weights in this repository are already quantized to 4 bits, significantly reducing disk footprint and memory usage compared to the original BF16 model.
## Model Details
- Base Model: Qwen/Qwen3.5-2B-Base
- Quantization: BitsAndBytes (NF4, Double Quantization)
- Compute Dtype: bfloat16
## Usage
You must install `bitsandbytes` and `transformers`.
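For example (package names from the text above; `accelerate` is also commonly required for `device_map="auto"`):

```shell
pip install -U transformers bitsandbytes accelerate
```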
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "isfs/Qwen3.5-2B-Base-int4"

# The weights are already quantized and the quantization settings are
# stored in the repository's config, so no BitsAndBytesConfig is needed here.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```
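Continuing from the snippet above, a quick generation sanity check might look like this (the prompt and generation parameters are illustrative, not part of this repository):

```python
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Greedy decoding of a short continuation; adjust max_new_tokens as needed.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```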