Instructions to use OpenGVLab/Mini-InternVL2-4B-DA-Medical with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use OpenGVLab/Mini-InternVL2-4B-DA-Medical with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    trust_remote_code=True,
)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

Or load the model directly:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    trust_remote_code=True,
    dtype="auto",
)
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OpenGVLab/Mini-InternVL2-4B-DA-Medical with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OpenGVLab/Mini-InternVL2-4B-DA-Medical"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/OpenGVLab/Mini-InternVL2-4B-DA-Medical
```
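The OpenAI-compatible endpoint above can also be called from Python instead of curl. A minimal sketch using only the standard library, assuming the vLLM server from the steps above is running on `localhost:8000` (`post_chat` is a hypothetical helper name, not part of vLLM):

```python
import json
import urllib.request

# The same OpenAI-style chat payload as in the curl example above.
payload = {
    "model": "OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                },
            ],
        }
    ],
}


def post_chat(payload, base_url="http://localhost:8000"):
    """POST the payload to the chat completions endpoint and return the parsed JSON."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# post_chat(payload)  # requires the vLLM server above to be running
```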
- SGLang
How to use OpenGVLab/Mini-InternVL2-4B-DA-Medical with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OpenGVLab/Mini-InternVL2-4B-DA-Medical" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OpenGVLab/Mini-InternVL2-4B-DA-Medical" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OpenGVLab/Mini-InternVL2-4B-DA-Medical",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use OpenGVLab/Mini-InternVL2-4B-DA-Medical with Docker Model Runner:
```shell
docker model run hf.co/OpenGVLab/Mini-InternVL2-4B-DA-Medical
```
Wrong tokenizer files: "expected str, bytes or os.PathLike object, not NoneType"
Running either the 4B or the 2B version raises an exception for me, "expected str, bytes or os.PathLike object, not NoneType", when trying to load the tokenizer.
I think the issue is that the LlamaTokenizer expects a tokenizer.model, but this repo contains a tokenizer.json instead.
Here's how I solved it:
- Run the model once, so it has all the files downloaded
- Copy over tokenizer.model (the symlink in snapshots) and the blob file it points to (in blobs) from the non-adapted version of InternVL2 (either the 2B or 4B repo). Copy the blob first, so the symlink resolves.
- Delete tokenizer.model in the .no_exist directory (it was cached as missing during the initial download run)
Results seem to be good, but if you could add the "real" tokenizer.model, that would be great. Thanks!
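The three cache steps above can be sketched against a throwaway directory that mimics the Hugging Face hub cache layout (blobs/ holds content-addressed files, snapshots/ holds symlinks into blobs/, .no_exist/ caches missing files). All hashes and paths here are made up for illustration; substitute the real entries under ~/.cache/huggingface/hub:

```shell
set -e
base=$(mktemp -d)

# Fake cache of the non-adapted InternVL2 repo: one blob plus its snapshot symlink.
mkdir -p "$base/models--OpenGVLab--InternVL2-4B/blobs" \
         "$base/models--OpenGVLab--InternVL2-4B/snapshots/abc123"
echo "tokenizer-bytes" > "$base/models--OpenGVLab--InternVL2-4B/blobs/deadbeef"
ln -s ../../blobs/deadbeef \
      "$base/models--OpenGVLab--InternVL2-4B/snapshots/abc123/tokenizer.model"

# Fake cache of the adapted repo, including the .no_exist marker left by the first run.
mkdir -p "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/blobs" \
         "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/snapshots/def456" \
         "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/.no_exist/def456"
touch "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/.no_exist/def456/tokenizer.model"

# Step 1: copy the blob first, so the snapshot symlink will resolve.
cp "$base/models--OpenGVLab--InternVL2-4B/blobs/deadbeef" \
   "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/blobs/deadbeef"

# Step 2: recreate the snapshot symlink pointing at the copied blob.
ln -s ../../blobs/deadbeef \
      "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/snapshots/def456/tokenizer.model"

# Step 3: delete the cached "missing file" marker in .no_exist.
rm "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/.no_exist/def456/tokenizer.model"

# The symlink now resolves to the copied tokenizer blob.
cat "$base/models--OpenGVLab--Mini-InternVL2-4B-DA-Medical/snapshots/def456/tokenizer.model"
```

The blob-before-symlink order matters only because a symlink to a missing target is dangling; the same result can be reached in either order as long as both files end up in place.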
I cannot find tokenizer.model in the snapshots folder.
It's in the 2B or 4B repos I linked.
I use 4B but couldn't find tokenizer.model in the snapshots folder. In your second step, "Copy over tokenizer.model (in snapshots) and the file it points to (in blobs) ... from the non-adapted version of InternVL2", I have no idea which files to copy.