Instructions for using SynLayers/Bbox-caption-8b with libraries, inference providers, notebooks, and local apps. Use the sections below to get started.
- Libraries
- Transformers
How to use SynLayers/Bbox-caption-8b with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="SynLayers/Bbox-caption-8b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("SynLayers/Bbox-caption-8b")
model = AutoModelForImageTextToText.from_pretrained("SynLayers/Bbox-caption-8b")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SynLayers/Bbox-caption-8b with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SynLayers/Bbox-caption-8b"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SynLayers/Bbox-caption-8b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
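If you prefer Python to curl, the server speaks the OpenAI-compatible API, so the official `openai` client works against it. A minimal sketch, assuming the default port used by `vllm serve` above:

```python
from openai import OpenAI

# Point the client at the local vLLM server started above.
# vLLM ignores the API key unless one is configured, so a placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="SynLayers/Bbox-caption-8b",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```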
- SGLang
How to use SynLayers/Bbox-caption-8b with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SynLayers/Bbox-caption-8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SynLayers/Bbox-caption-8b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images:
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "SynLayers/Bbox-caption-8b" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SynLayers/Bbox-caption-8b",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use SynLayers/Bbox-caption-8b with Docker Model Runner:
```sh
docker model run hf.co/SynLayers/Bbox-caption-8b
```
---
library_name: transformers
tags:
- vision-language-model
- image-decomposition
---
# SynLayers

This repository contains the assets behind SynLayers, our two-stage image decomposition system.

At the root is the bbox-caption model. Given one image, it predicts:

- a whole-image caption
- bounding boxes for visible objects or layers
The same repo also includes the Stage 2 SynLayers pipeline, which performs the layer decomposition.
If you want the easiest way to try the full system, please use our public demo:

[SynLayers/synlayers](https://huggingface.co/spaces/SynLayers/synlayers)

This repo is not meant to be used as a single generic `DiffusionPipeline(prompt)` model.

The full SynLayers pipeline is:

1. bounding-box + whole-image caption prediction
2. layer decomposition into transparent RGBA outputs

If you only want the Stage 1 model at the repo root, you can load it with `transformers`:
```python
from transformers import AutoProcessor, Qwen3VLForConditionalGeneration

model = Qwen3VLForConditionalGeneration.from_pretrained(
    "SynLayers/Bbox-caption-8b",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("SynLayers/Bbox-caption-8b")
```
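Once loaded, the Stage 1 model can be queried like any chat-style VLM. Below is a minimal inference sketch reusing the chat-template pattern from the Transformers snippet earlier on this page; the image URL and prompt wording are illustrative placeholders, and the exact format in which the model returns its caption and boxes follows its training setup rather than anything shown here.

```python
messages = [
    {
        "role": "user",
        "content": [
            # Placeholder image; any local path or URL works here.
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            # Illustrative prompt, not necessarily the wording the model was trained on.
            {"type": "text", "text": "Caption this image and give bounding boxes for its layers."},
        ],
    },
]

# Tokenize the chat, generate, and decode only the newly generated tokens.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```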
For more details of our implementation, please check out our paper: https://arxiv.org/abs/2605.15167
If you find our work useful, please consider citing:

```bibtex
@misc{wu2026doessyntheticlayereddesign,
    title={Does Synthetic Layered Design Data Benefit Layered Design Decomposition?},
    author={Kam Man Wu and Haolin Yang and Qingyu Chen and Yihu Tang and Jingye Chen and Qifeng Chen},
    year={2026},
    eprint={2605.15167},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2605.15167},
}
```
Thanks for trying SynLayers.