Instructions for using zai-org/GLM-4.6V-Flash with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use zai-org/GLM-4.6V-Flash with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="zai-org/GLM-4.6V-Flash")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("zai-org/GLM-4.6V-Flash")
model = AutoModelForImageTextToText.from_pretrained("zai-org/GLM-4.6V-Flash")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use zai-org/GLM-4.6V-Flash with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "zai-org/GLM-4.6V-Flash"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-4.6V-Flash",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/zai-org/GLM-4.6V-Flash
```
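The curl call above can also be made from Python with only the standard library. A minimal sketch: `build_payload` (an illustrative helper name, not part of any API) mirrors the OpenAI chat-completions body the server expects, and `send` shows the request but is not executed here since it needs a running server.

```python
import json
import urllib.request

def build_payload(model: str, prompt: str, image_url: str) -> dict:
    # Mirrors the OpenAI-compatible chat-completions body from the curl example
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

def send(payload: dict, endpoint: str = "http://localhost:8000/v1/chat/completions") -> dict:
    # Requires a running vLLM server; shown for completeness, not called here
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = build_payload(
    "zai-org/GLM-4.6V-Flash",
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
```

With a server running, `send(payload)` returns the parsed JSON response.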
- SGLang
How to use zai-org/GLM-4.6V-Flash with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "zai-org/GLM-4.6V-Flash" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-4.6V-Flash",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "zai-org/GLM-4.6V-Flash" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "zai-org/GLM-4.6V-Flash",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use zai-org/GLM-4.6V-Flash with Docker Model Runner:
```shell
docker model run hf.co/zai-org/GLM-4.6V-Flash
```
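The vLLM and SGLang servers above both answer in the OpenAI chat-completions shape, so the assistant text can be extracted the same way for either. A minimal stdlib sketch; the `sample` body below is a stub illustrating the response shape, not real model output:

```python
import json

def extract_reply(raw: str) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion response."""
    data = json.loads(raw)
    return data["choices"][0]["message"]["content"]

# Stubbed response body (shape only; the content string is illustrative):
sample = json.dumps({
    "choices": [{"message": {"role": "assistant", "content": "A statue on an island."}}]
})
print(extract_reply(sample))  # → A statue on an island.
```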
It runs well on a Colab T4 with 4-bit quantization:
```python
import torch
from transformers import BitsAndBytesConfig, AutoProcessor, Glm4vForConditionalGeneration
from PIL import Image

MODEL_PATH = "zai-org/GLM-4.6V-Flash"

# Configure 4-bit quantization
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained(MODEL_PATH, trust_remote_code=True)
model = Glm4vForConditionalGeneration.from_pretrained(
    MODEL_PATH,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)

# --- Read the image from a local file ---
image_path = "/content/Grayscale_8bits_palette_sample_image.png"  # change this to your image path
image = Image.open(image_path).convert("RGB")

# Build a message: image + text
messages = [
    {
        "role": "user",
        "content": [
            # No url needed, since the image is local
            {"type": "image"},
            {"type": "text", "text": "Describe this image in detail."},
        ],
    }
]

# Prepare the inputs: images + text
chat_template = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    text=chat_template,
    images=[image],
    return_tensors="pt",
)

# Move the tensors to the model's device (GPU or CPU)
inputs = {k: v.to(model.device) for k, v in inputs.items()}
inputs.pop("token_type_ids", None)

# Generate the text (the description)
with torch.no_grad():
    generated_ids = model.generate(**inputs, max_new_tokens=51)

output = processor.decode(
    generated_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)
print(output)
```
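As a rough sanity check on why 4-bit fits a T4: NF4 stores about half a byte per weight, so weight memory is roughly parameters × 0.5 bytes, with activations, the KV cache, and quantization overhead on top. A back-of-the-envelope helper; the parameter count 9 below is purely an illustrative figure, not the model's confirmed size:

```python
def approx_4bit_weights_gib(n_params_billion: float) -> float:
    """Approximate weight memory in GiB for NF4: ~0.5 bytes per parameter."""
    return n_params_billion * 1e9 * 0.5 / 1024**3

# e.g. a 9B-parameter checkpoint (illustrative number):
print(round(approx_4bit_weights_gib(9), 2))  # → 4.19
```

Well under a T4's 16 GiB, which is why the `load_in_4bit=True` configuration above works where a full-precision load would not.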
Example output:

> Got it, let's describe this black-and-white image of a parrot. First, the main subject is a parrot, likely a macaw or similar species, with a prominent crest on its head. The parrot is perched on a
The example above was run with:

```shell
!pip install transformers==5.0.0rc0
```

