How to use FreedomIntelligence/HuatuoGPT-Vision-7B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="FreedomIntelligence/HuatuoGPT-Vision-7B")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("FreedomIntelligence/HuatuoGPT-Vision-7B", dtype="auto")
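For text-only prompts, the loaded checkpoint can be driven through generate directly. A minimal sketch, assuming the repository loads with the standard AutoTokenizer/AutoModelForCausalLM classes (the model is LLaVA-based, so image inputs are better handled through the project's own chat interface shown further below):
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "FreedomIntelligence/HuatuoGPT-Vision-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, dtype="auto")

# Tokenize a plain-text prompt and decode the generated continuation
inputs = tokenizer("Who are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))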
How to use FreedomIntelligence/HuatuoGPT-Vision-7B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "FreedomIntelligence/HuatuoGPT-Vision-7B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "FreedomIntelligence/HuatuoGPT-Vision-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
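Because vLLM exposes an OpenAI-compatible API, the same server can also be called from Python with the OpenAI client. A minimal sketch, assuming pip install openai and the server running locally as started above:
from openai import OpenAI

# A default vLLM server does not verify the API key, but the client requires some value
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="FreedomIntelligence/HuatuoGPT-Vision-7B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)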
How to use FreedomIntelligence/HuatuoGPT-Vision-7B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "FreedomIntelligence/HuatuoGPT-Vision-7B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "FreedomIntelligence/HuatuoGPT-Vision-7B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
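The SGLang endpoint is OpenAI-compatible as well, so it can also be called from Python. A minimal sketch using the requests library, assuming the server is running locally on port 30000:
import requests

# POST a chat completion request to the locally running SGLang server
resp = requests.post(
    "http://localhost:30000/v1/chat/completions",
    json={
        "model": "FreedomIntelligence/HuatuoGPT-Vision-7B",
        "messages": [{"role": "user", "content": "What is the capital of France?"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])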
# Or run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "FreedomIntelligence/HuatuoGPT-Vision-7B" \
--host 0.0.0.0 \
--port 30000
How to use FreedomIntelligence/HuatuoGPT-Vision-7B with Docker Model Runner:
docker model run hf.co/FreedomIntelligence/HuatuoGPT-Vision-7B
HuatuoGPT-Vision is a multimodal LLM for medical applications, built with the PubMedVision dataset. HuatuoGPT-Vision-7B is trained from Qwen2-7B using the LLaVA-v1.5 architecture.
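For reference, the PubMedVision data is published on the Hugging Face Hub and can be inspected with the datasets library. A minimal sketch, assuming standard configurations and a train split (names may differ):
from datasets import get_dataset_config_names, load_dataset

# List the dataset's configurations, then stream one of them instead of downloading everything
configs = get_dataset_config_names("FreedomIntelligence/PubMedVision")
print(configs)
ds = load_dataset("FreedomIntelligence/PubMedVision", configs[0], split="train", streaming=True)
print(next(iter(ds)))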
To run inference with the project's own chat interface, clone the repository and run the following from its root directory:
git clone https://github.com/FreedomIntelligence/HuatuoGPT-Vision.git

from cli import HuatuoChatbot

query = 'What does the picture show?'
image_paths = ['image_path1']  # replace with the path(s) to your image(s)

bot = HuatuoChatbot(huatuogpt_vision_model_path)  # loads the model; pass the local path (or Hub ID) of HuatuoGPT-Vision-7B
output = bot.inference(query, image_paths)  # generates a response for the query and image(s)
print(output)  # prints the model output
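huatuogpt_vision_model_path above is a placeholder. One way to obtain a local copy of the weights is a snapshot download from the Hub; a minimal sketch using huggingface_hub:
from huggingface_hub import snapshot_download

# Download the model repository once and reuse the returned local directory as the model path
huatuogpt_vision_model_path = snapshot_download("FreedomIntelligence/HuatuoGPT-Vision-7B")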
Citation:
@misc{chen2024huatuogptvisioninjectingmedicalvisual,
title={HuatuoGPT-Vision, Towards Injecting Medical Visual Knowledge into Multimodal LLMs at Scale},
author={Junying Chen and Ruyi Ouyang and Anningzhe Gao and Shunian Chen and Guiming Hardy Chen and Xidong Wang and Ruifei Zhang and Zhenyang Cai and Ke Ji and Guangjun Yu and Xiang Wan and Benyou Wang},
year={2024},
eprint={2406.19280},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2406.19280},
}