Image-Text-to-Text
Transformers
Safetensors
mllama
conversational
text-generation-inference
4-bit precision
bitsandbytes
Instructions for using SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4 with libraries, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
pipe(text=messages)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4")
model = AutoModelForImageTextToText.from_pretrained("SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4 with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
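If you prefer a Python client over curl, the vLLM server above exposes an OpenAI-compatible API, so the openai package can be pointed at it. A minimal sketch, assuming the server is running locally on port 8000 as started above (the api_key value is a placeholder; vLLM does not require one by default):

# Query the local vLLM server through the OpenAI-compatible chat completions endpoint
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)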
- SGLang
How to use SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4 with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'

Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'

- Docker Model Runner
How to use SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4 with Docker Model Runner:
docker model run hf.co/SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4
Converted from meta-llama/Llama-3.2-11B-Vision-Instruct using bitsandbytes NF4 (4-bit) quantization, without double quantization.
Requires bitsandbytes to load.
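For reference, a conversion along these lines reproduces that setup. This is a minimal sketch, not the exact script used for this repository; the compute dtype and output path are assumptions, while the NF4 and no-double-quantization settings match the description above.

from transformers import MllamaForConditionalGeneration, BitsAndBytesConfig
import torch

# NF4 4-bit quantization without double quantization, matching the settings above.
# The compute dtype is an assumption; the card does not state which one was used.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = MllamaForConditionalGeneration.from_pretrained(
    "meta-llama/Llama-3.2-11B-Vision-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
model.save_pretrained("Llama-3.2-11B-Vision-Instruct-nf4")  # hypothetical output path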
Example usage for image captioning:
from transformers import MllamaForConditionalGeneration, AutoProcessor, BitsAndBytesConfig
from PIL import Image
import time
# Load model
model_id = "SeanScripts/Llama-3.2-11B-Vision-Instruct-nf4"
model = MllamaForConditionalGeneration.from_pretrained(
model_id,
use_safetensors=True,
device_map="cuda:0"
)
# Load processor
processor = AutoProcessor.from_pretrained(model_id)
# Caption a local image (could use a more specific prompt)
IMAGE = Image.open("test.png").convert("RGB")
PROMPT = """<|begin_of_text|><|start_header_id|>user<|end_header_id|>
Caption this image:
<|image|><|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""
inputs = processor(IMAGE, PROMPT, return_tensors="pt").to(model.device)
prompt_tokens = len(inputs['input_ids'][0])
print(f"Prompt tokens: {prompt_tokens}")
t0 = time.time()
generate_ids = model.generate(**inputs, max_new_tokens=256)
t1 = time.time()
total_time = t1 - t0
generated_tokens = len(generate_ids[0]) - prompt_tokens
time_per_token = generated_tokens/total_time
print(f"Generated {generated_tokens} tokens in {total_time:.3f} s ({time_per_token:.3f} tok/s)")
output = processor.decode(generate_ids[0][prompt_tokens:]).replace('<|eot_id|>', '')
print(output)
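The raw prompt string above hand-writes the Llama 3.2 special tokens; the processor's chat template can build the same prompt for you. A minimal sketch, reusing the model, processor, and IMAGE objects from the example above:

# Build the prompt via the chat template instead of writing special tokens by hand
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Caption this image:"},
    ]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(IMAGE, prompt, return_tensors="pt").to(model.device)
generate_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(generate_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))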
You can get a set of ComfyUI custom nodes for running this model here: https://github.com/SeanScripts/ComfyUI-PixtralLlamaVision