Instructions to use docling-project/SmolDocling-256M-preview with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use docling-project/SmolDocling-256M-preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="docling-project/SmolDocling-256M-preview")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("docling-project/SmolDocling-256M-preview")
model = AutoModelForImageTextToText.from_pretrained("docling-project/SmolDocling-256M-preview")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
- Notebooks
- Google Colab
- Kaggle
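In the Transformers example above, the final `print` decodes only `outputs[0][inputs["input_ids"].shape[-1]:]` because `generate` returns the prompt tokens followed by the newly generated tokens in one sequence; slicing at the prompt length keeps just the continuation. A toy sketch of that slicing with made-up token IDs (the IDs below are illustrative, not real vocabulary entries):

```python
# generate() returns prompt tokens followed by new tokens in one sequence.
prompt_ids = [101, 2023, 2003]     # made-up prompt token IDs
new_ids = [7592, 2088, 102]        # made-up generated token IDs
output_ids = prompt_ids + new_ids  # plays the role of outputs[0]

# Keep only the generated part, mirroring
# outputs[0][inputs["input_ids"].shape[-1]:]
generated = output_ids[len(prompt_ids):]
print(generated)  # → [7592, 2088, 102]
```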
- Local Apps
- vLLM
How to use docling-project/SmolDocling-256M-preview with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "docling-project/SmolDocling-256M-preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "docling-project/SmolDocling-256M-preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker
```shell
docker model run hf.co/docling-project/SmolDocling-256M-preview
```
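The curl request above can also be issued from Python, since the server speaks the OpenAI-compatible chat-completions format. A minimal stdlib-only sketch: `build_chat_payload` is a hypothetical helper (not part of vLLM or this model's tooling) that assembles the same JSON body; the endpoint URL matches the default port shown above, and actually sending the request requires the server to be running:

```python
import json

# Hypothetical helper that builds the same OpenAI-compatible chat payload
# as the curl example above (model name and image URL come from this page).
def build_chat_payload(model: str, prompt: str, image_url: str) -> dict:
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_chat_payload(
    "docling-project/SmolDocling-256M-preview",
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
body = json.dumps(payload)

# To send it (requires the server started above to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(payload["model"])
```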
- SGLang
How to use docling-project/SmolDocling-256M-preview with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "docling-project/SmolDocling-256M-preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "docling-project/SmolDocling-256M-preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "docling-project/SmolDocling-256M-preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "docling-project/SmolDocling-256M-preview",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use docling-project/SmolDocling-256M-preview with Docker Model Runner:
```shell
docker model run hf.co/docling-project/SmolDocling-256M-preview
```
Commit History
- Update README.md (607718f, verified)
- Update README.md (83c9645, verified)
- update load_from_doctags usage (f443b0a, verified)
- Update README.md (4634ad6, verified)
- Upload ONNX weights (#7) (11fa51a, verified)
- Update README.md -> Finetune of SmolVLM-256M-Instruct not Idefics (#9) (95f09ca, verified)
- Add MLX example link (#4) (cb6aeee, verified)
- fix: Correct license to CDLA-permissive-2.0 (#5) (5be1f5a, verified)
- Update README.md (9086eeb, verified)
- Upload model.safetensors (668322a, verified)
- Upload Idefics3ForConditionalGeneration (97633d4, verified)
- Update README.md (34a37cc, verified)
- Update README.md (8477e02, verified)
- Update README.md (5196877, verified)
- Update README.md (a6ad18b, verified)
- Update README.md (ec6ba0f, verified)
- Update README.md (b7182b2, verified)
- Update README.md (fd16117, verified)
- Update README.md (60e3cdc, verified)
- Update README.md (a3dd76f, verified)
- Update README.md (904b2d3, verified)
- Update README.md (3e8f36f, verified)
- Update README.md (a4c943f, verified)
- Update README.md (e9cfa1f, verified)
- Update README.md (b8d9a03, verified)
- Update README.md (f432cdc, verified)
- Update README.md (e3c14a7, verified)
- pre-release (11) (95e922e)
- Ahmed Nassar (AHN@zurich.ibm.com) committed on