Instructions to use HuggingFaceM4/idefics2-8b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use HuggingFaceM4/idefics2-8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="HuggingFaceM4/idefics2-8b")

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForImageTextToText.from_pretrained("HuggingFaceM4/idefics2-8b")
```
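A minimal end-to-end sketch of the direct-loading path is shown below. The image URL and prompt text are placeholders, and loading with device_map="auto" assumes the accelerate package is installed; idefics2's chat template expects one {"type": "image"} entry per image passed to the processor.

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForImageTextToText.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,
    device_map="auto",  # requires the accelerate package
)

# Placeholder image; any PIL image (or a local file) works
image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)

# idefics2 chat format: one {"type": "image"} placeholder per image, plus the text prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What do we see in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```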
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use HuggingFaceM4/idefics2-8b with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "HuggingFaceM4/idefics2-8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```bash
docker model run hf.co/HuggingFaceM4/idefics2-8b
```
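For programmatic access, here is a minimal sketch using the openai Python client against the local vLLM server started above; the port and prompt mirror the curl example, and api_key can be any placeholder string unless the server is configured to enforce one.

```python
from openai import OpenAI

# Point the OpenAI client at the local vLLM server (OpenAI-compatible API)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="HuggingFaceM4/idefics2-8b",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```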
- SGLang
How to use HuggingFaceM4/idefics2-8b with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "HuggingFaceM4/idefics2-8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "HuggingFaceM4/idefics2-8b" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "HuggingFaceM4/idefics2-8b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
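The SGLang server exposes the same OpenAI-compatible endpoints, so the Python client sketch shown for vLLM above can be reused by pointing it at the SGLang port (assuming the default port 30000 from the commands above).

```python
from openai import OpenAI

# Same pattern as the vLLM sketch, just targeting the SGLang server's port
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
completion = client.completions.create(
    model="HuggingFaceM4/idefics2-8b", prompt="Once upon a time,", max_tokens=512
)
print(completion.choices[0].text)
```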
- Docker Model Runner
How to use HuggingFaceM4/idefics2-8b with Docker Model Runner:
```bash
docker model run hf.co/HuggingFaceM4/idefics2-8b
```
CUDA out of memory
CUDA out of memory. Tried to allocate 12.78 GiB (GPU 0; 15.73 GiB total capacity; 11.21 GiB already allocated; 2.47 GiB free; 12.19 GiB reserved in total by PyTorch)
I have a cluster of 4 GPUs with 16 GB each.
GPU memory usage after loading the model (MiB used / total per GPU):
0: 9246/16300
1: 9246/16300
2: 9246/16300
3: 8038/16300
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", cache_dir=".../.cache/huggingface/hub")
model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b", cache_dir=".../.cache/huggingface/hub",
                                               device_map="auto", torch_dtype=torch.float16)
I changed the code so that the weights are distributed across the GPUs, but when I run
generated_ids = model.generate(**inputs, max_new_tokens=60)
GPU memory usage while running this code (MiB used / total per GPU):
0: 12700/16300
1: 9246/16300
2: 9246/16300
3: 8038/16300
I am getting an error here: CUDA out of memory. Tried to allocate 12.78 GiB (GPU 0; 15.73 GiB total capacity; 11.21 GiB already allocated; 2.47 GiB free; 12.19 GiB reserved in total by PyTorch)
Thank you:)
I tried AutoProcessor.from_pretrained with do_image_splitting=False, but I am still getting the same error.
I am trying to implement a chat model, but I am getting errors at this line:
inputs = {k: v.to("cuda") for k, v in inputs.items()}
How can I distribute the data across the different GPUs, or run generation in a low-bit (quantized) mode?
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
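Two things that often help in this situation (a sketch, not something tested in this thread): cap how much of GPU 0 the weights may occupy with max_memory, so the activations created during generate have room on the device that hosts the first shard, and/or load the checkpoint in 4-bit via bitsandbytes. The memory limits below are illustrative and need tuning; prompt, image1 and image2 are the same variables as in the snippets above.

```python
import torch
from transformers import AutoProcessor, AutoModelForVision2Seq, BitsAndBytesConfig

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b", do_image_splitting=False)

# Option 1: keep fp16 but reserve headroom on GPU 0, where generation activations accumulate
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "8GiB", 1: "15GiB", 2: "15GiB", 3: "15GiB"},  # illustrative limits
)

# Option 2: load the weights in 4-bit instead (requires the bitsandbytes package)
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    device_map="auto",
    quantization_config=quant_config,
)

# With device_map="auto", send the inputs to the device of the model's first shard
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt").to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=60)
```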
Error Solved!
I had forgotten the do_image_splitting=False parameter; after adding it, it works well.