isfs/Qwen3.5-2B-Base-int4

This is a 4-bit quantized version of Qwen/Qwen3.5-2B-Base.

The weights in this repository are already quantized to 4-bit, significantly reducing disk size and memory usage compared to the original BF16 model.

Model Details

  • Base Model: Qwen/Qwen3.5-2B-Base
  • Quantization: BitsAndBytes (NF4, Double Quantization)
  • Compute Dtype: bfloat16

Usage

Install `bitsandbytes` and `transformers`, plus `accelerate` (required for `device_map="auto"`).

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "isfs/Qwen3.5-2B-Base-int4"

# The weights are already quantized and the quantization settings are stored
# in the repository's config, so no BitsAndBytesConfig needs to be passed:
# the model loads directly in 4-bit.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    trust_remote_code=True
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
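
Once loaded, the model can be used for ordinary text generation. A minimal sketch (the prompt and `max_new_tokens` value are illustrative, not part of this repository):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "isfs/Qwen3.5-2B-Base-int4"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

# Tokenize an illustrative prompt, move it to the model's device,
# and generate a short completion.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that this is a base model, so it continues text rather than following chat-style instructions.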