Instructions to use aimagelab/ReflectiVA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use aimagelab/ReflectiVA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="aimagelab/ReflectiVA")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("aimagelab/ReflectiVA", dtype="auto")
```
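If you load the model directly rather than through the pipeline, you still need a processor and a generation loop. The snippet below is a minimal sketch, assuming the checkpoint works with the standard `AutoProcessor` / `AutoModelForImageTextToText` classes and exposes a chat template; the ReflectiVA repository documents the officially supported loading and inference code, so prefer that if these auto classes do not apply.

```python
# Minimal inference sketch (assumption: the checkpoint loads through the standard
# image-text-to-text auto classes; see the ReflectiVA repository for the official code).
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "aimagelab/ReflectiVA"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, dtype="auto", device_map="auto")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

# Build model inputs from the chat template, then generate and decode only the new tokens.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```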
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use aimagelab/ReflectiVA with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "aimagelab/ReflectiVA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aimagelab/ReflectiVA",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
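Once the vLLM server is up, any OpenAI-compatible client can call it from Python. A minimal sketch using the `openai` package (installed separately); the base URL, port, and placeholder API key below are assumptions matching the server defaults shown above:

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server was started with the defaults shown above (port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # vLLM ignores the key by default

response = client.chat.completions.create(
    model="aimagelab/ReflectiVA",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```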
Use Docker
```shell
docker model run hf.co/aimagelab/ReflectiVA
```
- SGLang
How to use aimagelab/ReflectiVA with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "aimagelab/ReflectiVA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aimagelab/ReflectiVA",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
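The same OpenAI-compatible endpoint can also be called from Python without an extra SDK. A minimal sketch using `requests`, assuming the server defaults shown above (port 30000); adjust the host and port if you changed them:

```python
# POST a chat completion request to the local SGLang server (OpenAI-compatible API).
# Assumes the launch_server defaults shown above.
import requests

payload = {
    "model": "aimagelab/ReflectiVA",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}},
            ],
        }
    ],
}
resp = requests.post("http://localhost:30000/v1/chat/completions", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```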
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "aimagelab/ReflectiVA" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "aimagelab/ReflectiVA",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
- Docker Model Runner
How to use aimagelab/ReflectiVA with Docker Model Runner:
```shell
docker model run hf.co/aimagelab/ReflectiVA
```
Add links to Github repository, project page and dataset #1
opened by nielsr (HF Staff)
README.md CHANGED
````diff
@@ -1,8 +1,9 @@
 ---
 library_name: transformers
-pipeline_tag: image-text-to-text
 license: apache-2.0
+pipeline_tag: image-text-to-text
 ---
+
 # Model Card: Reflective LLaVA (ReflectiVA)
 
 Multimodal LLMs (MLLMs) are the natural extension of large language models to handle multimodal inputs, combining text and image data.
@@ -20,7 +21,7 @@ superior performance compared to existing methods.
 
 In this model space, you will find the Overall Model (stage two) weights of ```ReflectiVA```.
 
-For more information, visit our [ReflectiVA repository](https://github.com/aimagelab/ReflectiVA).
+For more information, visit our [ReflectiVA repository](https://github.com/aimagelab/ReflectiVA), our [project page](https://aimagelab.github.io/ReflectiVA/) and the [dataset](https://huggingface.co/datasets/aimagelab/ReflectiVA-Data).
 
 ## Citation
 If you make use of our work, please cite our repo:
````