Instructions to use microsoft/Florence-2-large with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use microsoft/Florence-2-large with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="microsoft/Florence-2-large", trust_remote_code=True)

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
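For end-to-end inference with the loaded model and processor, a minimal sketch based on the model card's sample usage is shown below (assumes a recent transformers version; the <OD> task prompt and the example image URL are placeholders to swap for your own data):

import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForImageTextToText

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("microsoft/Florence-2-large", trust_remote_code=True).to(device)

# Florence-2 is steered with task prompts such as <OD>, <CAPTION>, <OCR>, <OCR_WITH_REGION>
prompt = "<OD>"
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg"  # example image, replace with your own
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
# post_process_generation parses the raw output into labels and <loc_*> boxes for the given task
parsed = processor.post_process_generation(generated_text, task=prompt, image_size=(image.width, image.height))
print(parsed)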
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use microsoft/Florence-2-large with vLLM:
Install from pip and serve the model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/Florence-2-large"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-large",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
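The same OpenAI-compatible endpoint can be called from Python instead of curl; a minimal sketch using the openai client is shown here (the api_key value is a placeholder, since a default vLLM server does not check it):

# pip install openai
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is ignored by a default vLLM server

completion = client.completions.create(
    model="microsoft/Florence-2-large",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)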
Use Docker

docker model run hf.co/microsoft/Florence-2-large
- SGLang
How to use microsoft/Florence-2-large with SGLang:
Install from pip and serve the model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "microsoft/Florence-2-large" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-large",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
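As with vLLM, the SGLang server exposes an OpenAI-compatible API, so the curl call above can also be issued from Python; a small sketch with the requests library, mirroring the same payload:

# pip install requests
import requests

response = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "microsoft/Florence-2-large",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])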
Use Docker images

docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "microsoft/Florence-2-large" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-large",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'

- Docker Model Runner
How to use microsoft/Florence-2-large with Docker Model Runner:
docker model run hf.co/microsoft/Florence-2-large
Clarification on JSON Lines Dataset for Multi-Task Fine-Tuning of Florence-2
Hi everyone,
I came across the notebook discussing how to fine-tune Florence-2 for Object Detection, and I have a question regarding the structure of the JSON Lines dataset when fine-tuning for multiple tasks.
Specifically, how should the dataset be formatted if I want to fine-tune for more than one task?
Should the prefix field be a list of task string IDs, while the suffix field contains a list of strings that represent the answers for each task? For example, would the following structure be correct?
{
"prefix": ["<OD>", "<OCR>"],
"suffix": [
"ace of hearts<loc_345><loc_315><loc_582><loc_721>2 of hearts<loc_709><loc_115><loc_888><loc_509>3 of hearts<loc_529><loc_228><loc_735><loc_613>4 of hearts<loc_98><loc_421><loc_415><loc_845>",
"answer_for_ocr"
]
}
Additionally, is there a guide available on how to format datasets for each task?
I appreciate any guidance on this!
Thank you!
@mariaac looking into it
Hi @Andyrasika ,
What about finetuning for OCR with region?
Should the dataset be formatted in a similar way to the OD dataset?
{"image": "path/to/image.png", "prefix": "<OCR_WITH_REGION>", "suffix": "US<loc_895><loc_505><loc_965><loc_555>Hugging Face<loc_647><loc_164><loc_747><loc_216>"}
I haven't found any example of fine-tuning for this task.