Tags: Text-to-Image, Transformers, Safetensors, Hunyuan, text-generation, hunyuan, quantization, nf4, comfyui, custom-nodes, autoregressive, DiT, HunyuanImage-3.0, bitsandbytes, 4bit, custom_code, 4-bit precision
Instructions to use dong16/HunyuanImage-3-NF4-v2 with libraries, inference providers, notebooks, and local apps.

How to use dong16/HunyuanImage-3-NF4-v2 with Transformers:

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "dong16/HunyuanImage-3-NF4-v2",
    trust_remote_code=True,
    dtype="auto",
)
```
Quantization config file (366 Bytes, revision 94ab947):

```json
{
  "quantization_method": "bitsandbytes_nf4",
  "load_in_4bit": true,
  "bnb_4bit_quant_type": "nf4",
  "bnb_4bit_use_double_quant": true,
  "bnb_4bit_compute_dtype": "torch.bfloat16",
  "expected_vram_gb": 45,
  "notes": "Load with BitsAndBytesConfig for NF4 quantization. Attention layers kept in full precision.",
  "attention_layers_quantized": false
}
```
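The fields in this file correspond directly to `transformers.BitsAndBytesConfig` arguments. A minimal sketch of that mapping, assuming the JSON above (the helper name `to_bnb_kwargs` and the inlined config string are illustrative, not part of the repo). The compute dtype is stored as the string `"torch.bfloat16"`; at load time you would resolve it to a real dtype with `getattr(torch, name)`:

```python
import json

# Quantization config from the file above, inlined for illustration.
CONFIG_JSON = """
{
  "quantization_method": "bitsandbytes_nf4",
  "load_in_4bit": true,
  "bnb_4bit_quant_type": "nf4",
  "bnb_4bit_use_double_quant": true,
  "bnb_4bit_compute_dtype": "torch.bfloat16",
  "expected_vram_gb": 45,
  "attention_layers_quantized": false
}
"""

def to_bnb_kwargs(raw: str) -> dict:
    """Map the JSON fields to BitsAndBytesConfig keyword arguments."""
    cfg = json.loads(raw)
    # "torch.bfloat16" -> "bfloat16"; in real code pass getattr(torch, dtype_name).
    dtype_name = cfg["bnb_4bit_compute_dtype"].split(".")[-1]
    return {
        "load_in_4bit": cfg["load_in_4bit"],
        "bnb_4bit_quant_type": cfg["bnb_4bit_quant_type"],
        "bnb_4bit_use_double_quant": cfg["bnb_4bit_use_double_quant"],
        "bnb_4bit_compute_dtype": dtype_name,
    }

kwargs = to_bnb_kwargs(CONFIG_JSON)
print(kwargs["bnb_4bit_quant_type"])  # nf4
```

You would then pass `BitsAndBytesConfig(**kwargs)` (with the dtype resolved to `torch.bfloat16`) as `quantization_config=` to `from_pretrained`. Note that per the config, attention layers are kept in full precision and the expected VRAM footprint is about 45 GB.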