How to use microsoft/Florence-2-base with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="microsoft/Florence-2-base", trust_remote_code=True)
```
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
model = AutoModelForImageTextToText.from_pretrained("microsoft/Florence-2-base", trust_remote_code=True)
```
How to use microsoft/Florence-2-base with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "microsoft/Florence-2-base"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
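The same request can be made from Python. A minimal sketch that builds the identical request body; the `requests` call is left commented out so the snippet runs without a live server, and the URL simply mirrors the server started above:

```python
import json

def completion_payload(model: str, prompt: str,
                       max_tokens: int = 512, temperature: float = 0.5) -> dict:
    """Assemble the body of an OpenAI-compatible /v1/completions request."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = completion_payload("microsoft/Florence-2-base", "Once upon a time,")
body = json.dumps(payload)

# To send it against the running server:
#   import requests
#   r = requests.post("http://localhost:8000/v1/completions",
#                     headers={"Content-Type": "application/json"}, data=body)
print(body)
```

The identical payload also works against an SGLang server; only the port changes (30000 instead of 8000).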
How to use microsoft/Florence-2-base with Docker Model Runner:
```shell
docker model run hf.co/microsoft/Florence-2-base
```
How to use microsoft/Florence-2-base with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "microsoft/Florence-2-base" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
```shell
# Or run the SGLang server in Docker:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "microsoft/Florence-2-base" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "microsoft/Florence-2-base",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Hello,
Is there an image pixel size parameter that we can use? The outputs I'm getting are small even though the input is large. Would pixel size affect the accuracy?
Thanks for the great work!
Wanis