MLP-koni/Llama-3.2-11B-Vision-Instruct_document

Image-Text-to-Text · Transformers · Safetensors · mllama · conversational · text-generation-inference

Instructions for using MLP-koni/Llama-3.2-11B-Vision-Instruct_document with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.

  • Libraries
  • Transformers

    How to use MLP-koni/Llama-3.2-11B-Vision-Instruct_document with Transformers:

    # Use a pipeline as a high-level helper
    from transformers import pipeline
    
    pipe = pipeline("image-text-to-text", model="MLP-koni/Llama-3.2-11B-Vision-Instruct_document")
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
                {"type": "text", "text": "What animal is on the candy?"}
            ]
        },
    ]
    pipe(text=messages)

    # Load model directly
    from transformers import AutoProcessor, AutoModelForImageTextToText
    
    processor = AutoProcessor.from_pretrained("MLP-koni/Llama-3.2-11B-Vision-Instruct_document")
    model = AutoModelForImageTextToText.from_pretrained("MLP-koni/Llama-3.2-11B-Vision-Instruct_document")
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
                {"type": "text", "text": "What animal is on the candy?"}
            ]
        },
    ]
    inputs = processor.apply_chat_template(
        messages,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(model.device)
    
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
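
    The checkpoint is roughly 21 GB of weights in bfloat16 (see the shard sizes in the file listing below), so loading it in the default float32 will not fit on most single GPUs. A minimal sketch, assuming accelerate is installed, that loads the weights in bfloat16 and lets Transformers place layers across the available devices automatically:

    # Memory-conscious loading: bfloat16 weights, automatic device placement.
    # Assumes `pip install accelerate`; adjust dtype/device_map to your hardware.
    import torch
    from transformers import AutoProcessor, AutoModelForImageTextToText

    model_id = "MLP-koni/Llama-3.2-11B-Vision-Instruct_document"
    processor = AutoProcessor.from_pretrained(model_id)
    model = AutoModelForImageTextToText.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # halves memory relative to float32
        device_map="auto",           # spread layers across GPU(s) and CPU as needed
    )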
  • Notebooks
  • Google Colab
  • Kaggle
  • Local Apps
  • vLLM

    How to use MLP-koni/Llama-3.2-11B-Vision-Instruct_document with vLLM:

    Install vLLM from pip and serve the model
    # Install vLLM from pip:
    pip install vllm
    # Start the vLLM server:
    vllm serve "MLP-koni/Llama-3.2-11B-Vision-Instruct_document"
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:8000/v1/chat/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "MLP-koni/Llama-3.2-11B-Vision-Instruct_document",
    		"messages": [
    			{
    				"role": "user",
    				"content": [
    					{
    						"type": "text",
    						"text": "Describe this image in one sentence."
    					},
    					{
    						"type": "image_url",
    						"image_url": {
    							"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
    						}
    					}
    				]
    			}
    		]
    	}'
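
    The vLLM server exposes an OpenAI-compatible API, so the same request can be sent from Python with the official openai client. A minimal sketch, assuming the server started above is listening on localhost:8000; the api_key value is a placeholder (vLLM only enforces it if the server is configured to require a key):

    # Query the local vLLM server through its OpenAI-compatible endpoint.
    # Assumes `pip install openai` and a running server on localhost:8000.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="MLP-koni/Llama-3.2-11B-Vision-Instruct_document",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                    },
                ],
            }
        ],
    )
    print(response.choices[0].message.content)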
  • SGLang

    How to use MLP-koni/Llama-3.2-11B-Vision-Instruct_document with SGLang:

    Install SGLang from pip and serve the model
    # Install SGLang from pip:
    pip install sglang
    # Start the SGLang server:
    python3 -m sglang.launch_server \
        --model-path "MLP-koni/Llama-3.2-11B-Vision-Instruct_document" \
        --host 0.0.0.0 \
        --port 30000
    # Call the server using curl (OpenAI-compatible API):
    curl -X POST "http://localhost:30000/v1/chat/completions" \
    	-H "Content-Type: application/json" \
    	--data '{
    		"model": "MLP-koni/Llama-3.2-11B-Vision-Instruct_document",
    		"messages": [
    			{
    				"role": "user",
    				"content": [
    					{
    						"type": "text",
    						"text": "Describe this image in one sentence."
    					},
    					{
    						"type": "image_url",
    						"image_url": {
    							"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
    						}
    					}
    				]
    			}
    		]
    	}'
    Use Docker images
    docker run --gpus all \
        --shm-size 32g \
        -p 30000:30000 \
        -v ~/.cache/huggingface:/root/.cache/huggingface \
        --env "HF_TOKEN=<secret>" \
        --ipc=host \
        lmsysorg/sglang:latest \
        python3 -m sglang.launch_server \
            --model-path "MLP-koni/Llama-3.2-11B-Vision-Instruct_document" \
            --host 0.0.0.0 \
            --port 30000
    # Call the server using curl exactly as shown above (OpenAI-compatible API on port 30000).
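
    Because the SGLang server speaks the same OpenAI-compatible protocol, it can also be queried from Python without any extra SDK. A minimal sketch using requests, assuming one of the servers above is listening on localhost:30000:

    # POST a chat-completions request to the local SGLang server.
    # Assumes `pip install requests` and a running server on localhost:30000.
    import requests

    payload = {
        "model": "MLP-koni/Llama-3.2-11B-Vision-Instruct_document",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {
                        "type": "image_url",
                        "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                    },
                ],
            }
        ],
    }
    resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])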
  • Docker Model Runner

    How to use MLP-koni/Llama-3.2-11B-Vision-Instruct_document with Docker Model Runner:

    docker model run hf.co/MLP-koni/Llama-3.2-11B-Vision-Instruct_document

You need to agree to share your contact information to access this model.

This repository is publicly accessible, but you have to accept the conditions before you can access its files and content. Log in or sign up on Hugging Face to review the conditions and request access.

Gated model: you can list the files below, but you cannot download them until access is granted.
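
Once access has been granted, downloads of the gated files require your Hugging Face access token. A minimal sketch using huggingface_hub; the token value is a placeholder, and exporting the HF_TOKEN environment variable (as in the Docker example above) works just as well:

    # Authenticate so that from_pretrained() can download the gated checkpoint.
    # The token below is a placeholder; create one at huggingface.co/settings/tokens.
    from huggingface_hub import login

    login(token="hf_xxx")  # alternatively, set the HF_TOKEN environment variable

    # After logging in, the snippets above can fetch the gated files, e.g.
    # AutoProcessor.from_pretrained("MLP-koni/Llama-3.2-11B-Vision-Instruct_document")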

Preview of files found in this repository
  • .gitattributes · 1.57 kB · Upload processor, 6 months ago
  • README.md · 5.17 kB · Upload processor, 6 months ago
  • chat_template.jinja · 4.85 kB · Upload processor, 6 months ago
  • config.json · 1.88 kB · Upload MllamaForConditionalGeneration, 6 months ago
  • generation_config.json · 210 Bytes · Upload MllamaForConditionalGeneration, 6 months ago
  • model-00001-of-00005.safetensors · 4.99 GB · Upload MllamaForConditionalGeneration, 6 months ago
  • model-00002-of-00005.safetensors · 4.97 GB · Upload MllamaForConditionalGeneration, 6 months ago
  • model-00003-of-00005.safetensors · 4.92 GB · Upload MllamaForConditionalGeneration, 6 months ago
  • model-00004-of-00005.safetensors · 5 GB · Upload MllamaForConditionalGeneration, 6 months ago
  • model-00005-of-00005.safetensors · 1.47 GB · Upload MllamaForConditionalGeneration, 6 months ago
  • model.safetensors.index.json · 89.5 kB · Upload MllamaForConditionalGeneration, 6 months ago
  • preprocessor_config.json · 477 Bytes · Upload processor, 6 months ago
  • special_tokens_map.json · 454 Bytes · Upload processor, 6 months ago
  • tokenizer.json · 17.2 MB · Upload processor, 6 months ago
  • tokenizer_config.json · 50.8 kB · Upload processor, 6 months ago