Image-Text-to-Text
Transformers
Safetensors
English
gemma4
text-generation-inference
unsloth
conversational
icd-coding
clinical-nlp
medical-ai
Instructions to use nikhil061307/gemma4-e4b-icd-coding with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use nikhil061307/gemma4-e4b-icd-coding with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nikhil061307/gemma4-e4b-icd-coding")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("nikhil061307/gemma4-e4b-icd-coding")
model = AutoModelForImageTextToText.from_pretrained("nikhil061307/gemma4-e4b-icd-coding")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
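The stock snippets above use a generic image prompt. Since this fine-tune is tagged for ICD coding, a text-only clinical prompt may be closer to its intended use. A minimal sketch; the prompt wording is purely illustrative, as the exact input format expected by this fine-tune is not documented here:

```python
# Hypothetical text-only prompt for the ICD-coding use case; the prompt wording is illustrative.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="nikhil061307/gemma4-e4b-icd-coding")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Suggest the most relevant ICD-10 codes for: "
                        "55-year-old with type 2 diabetes mellitus and stage 3 chronic kidney disease.",
            },
        ],
    },
]

print(pipe(text=messages))
```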
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use nikhil061307/gemma4-e4b-icd-coding with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "nikhil061307/gemma4-e4b-icd-coding"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nikhil061307/gemma4-e4b-icd-coding",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
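Because the vLLM server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the `openai` package is installed and the server above is running on localhost:8000 (vLLM does not check the API key by default, so a placeholder value is used):

```python
from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="nikhil061307/gemma4-e4b-icd-coding",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```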
Use Docker
```bash
docker model run hf.co/nikhil061307/gemma4-e4b-icd-coding
```
- SGLang
How to use nikhil061307/gemma4-e4b-icd-coding with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "nikhil061307/gemma4-e4b-icd-coding" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nikhil061307/gemma4-e4b-icd-coding",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
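The SGLang server is OpenAI-compatible as well, so streaming responses from Python works like any other OpenAI-style endpoint. A minimal sketch, assuming the `openai` package is installed and the server above is running on port 30000:

```python
from openai import OpenAI

# SGLang serves an OpenAI-compatible API; port 30000 matches the launch command above
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="nikhil061307/gemma4-e4b-icd-coding",
    messages=[{"role": "user", "content": "Describe the Statue of Liberty in one sentence."}],
    stream=True,
)

# Print tokens as they arrive
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```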
Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "nikhil061307/gemma4-e4b-icd-coding" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "nikhil061307/gemma4-e4b-icd-coding",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
- Unsloth Studio
How to use nikhil061307/gemma4-e4b-icd-coding with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nikhil061307/gemma4-e4b-icd-coding to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for nikhil061307/gemma4-e4b-icd-coding to start chatting
```
Using HuggingFace Spaces for Unsloth
No setup required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for nikhil061307/gemma4-e4b-icd-coding to start chatting.
Load model with FastModel
```python
# Install Unsloth first: pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="nikhil061307/gemma4-e4b-icd-coding",
    max_seq_length=2048,
)
```
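After loading, generation follows the usual Transformers pattern. A minimal sketch, assuming the returned tokenizer exposes a chat template; the clinical prompt is illustrative:

```python
# Minimal generation sketch using the model and tokenizer loaded above;
# the prompt wording is illustrative, not a documented format for this fine-tune.
messages = [{"role": "user", "content": "Suggest ICD-10 codes for acute bronchitis with wheezing."}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```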
- Docker Model Runner
How to use nikhil061307/gemma4-e4b-icd-coding with Docker Model Runner:
```bash
docker model run hf.co/nikhil061307/gemma4-e4b-icd-coding
```
Tokenizer/processor configuration (file size: 4,340 bytes):
```json
{
"audio_token": "<|audio|>",
"backend": "tokenizers",
"boa_token": "<|audio>",
"boi_token": "<|image>",
"bos_token": "<bos>",
"eoa_token": "<audio|>",
"eoc_token": "<channel|>",
"eoi_token": "<image|>",
"eos_token": "<turn|>",
"eot_token": "<turn|>",
"escape_token": "<|\"|>",
"etc_token": "<tool_call|>",
"etd_token": "<tool|>",
"etr_token": "<tool_response|>",
"extra_special_tokens": [
"<|video|>"
],
"image_token": "<|image|>",
"is_local": false,
"mask_token": "<mask>",
"model_max_length": 131072,
"model_specific_special_tokens": {
"audio_token": "<|audio|>",
"boa_token": "<|audio>",
"boi_token": "<|image>",
"eoa_token": "<audio|>",
"eoc_token": "<channel|>",
"eoi_token": "<image|>",
"eot_token": "<turn|>",
"escape_token": "<|\"|>",
"etc_token": "<tool_call|>",
"etd_token": "<tool|>",
"etr_token": "<tool_response|>",
"image_token": "<|image|>",
"soc_token": "<|channel>",
"sot_token": "<|turn>",
"stc_token": "<|tool_call>",
"std_token": "<|tool>",
"str_token": "<|tool_response>",
"think_token": "<|think|>"
},
"pad_token": "<pad>",
"padding_side": "right",
"processor_class": "Gemma4Processor",
"response_schema": {
"properties": {
"content": {
"type": "string"
},
"role": {
"const": "assistant"
},
"thinking": {
"type": "string"
},
"tool_calls": {
"items": {
"properties": {
"function": {
"properties": {
"arguments": {
"additionalProperties": {},
"type": "object",
"x-parser": "gemma4-tool-call"
},
"name": {
"type": "string"
}
},
"type": "object",
"x-regex": "call\\:(?P<name>\\w+)(?P<arguments>\\{.*\\})"
},
"type": {
"const": "function"
}
},
"type": "object"
},
"type": "array",
"x-regex-iterator": "<\\|tool_call>(.*?)<tool_call\\|>"
}
},
"type": "object",
"x-regex": "(\\<\\|channel\\>thought\\n(?P<thinking>.*?)\\<channel\\|\\>)?(?P<content>(?:(?!\\<\\|tool_call\\>)(?!\\<turn\\|\\>).)+)?(?P<tool_calls>\\<\\|tool_call\\>.*\\<tool_call\\|\\>)?(?:\\<turn\\|\\>)?"
},
"soc_token": "<|channel>",
"sot_token": "<|turn>",
"stc_token": "<|tool_call>",
"std_token": "<|tool>",
"str_token": "<|tool_response>",
"think_token": "<|think|>",
"tokenizer_class": "GemmaTokenizer",
"unk_token": "<unk>",
"chat_template": "{{ bos_token }}{%- if messages[0]['role'] == 'system' -%}\n {%- set first_user_prefix = messages[0]['content'] + '\n\n' -%}\n {%- set loop_messages = messages[1:] -%}\n{%- else -%}\n {%- set first_user_prefix = \"\" -%}\n {%- set loop_messages = messages -%}\n{%- endif -%}\n{%- for message in loop_messages -%}\n {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}\n {{ raise_exception(\"Conversation roles must alternate user/assistant/user/assistant/...\") }}\n {%- endif -%}\n {%- if (message['role'] == 'assistant') -%}\n {%- set role = \"model\" -%}\n {%- else -%}\n {%- set role = message['role'] -%}\n {%- endif -%}\n {{ '<|turn>' + role + '\n' + (first_user_prefix if loop.first else \"\") }}\n {%- if role == \"model\" -%}\n {{ '<|channel>thought\n<channel|>' }}\n {%- endif -%}\n {%- if message['content'] is string -%}\n {{ message['content'] | trim }}\n {%- elif message['content'] is iterable -%}\n {%- for item in message['content'] -%}\n {%- if item['type'] == 'audio' -%}\n {{ '<|audio|>' }}\n {%- elif item['type'] == 'image' -%}\n {{ '<|image|>' }}\n {%- elif item['type'] == 'video' -%}\n {{ '<|video|>' }}\n {%- elif item['type'] == 'text' -%}\n {{ item['text'] | trim }}\n {%- endif -%}\n {%- endfor -%}\n {%- else -%}\n {{ raise_exception(\"Invalid content type\") }}\n {%- endif -%}\n {{ '<turn|>\n' }}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{'<|turn>model\n'}}\n{%- endif -%}\n"
}
```
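To see how the turn and channel tokens above are assembled into a prompt, the chat template can be rendered without tokenizing. A minimal sketch, assuming `transformers` is installed and the repository is accessible; the message content is illustrative:

```python
from transformers import AutoProcessor

# The processor carries the chat_template shown in the configuration above
processor = AutoProcessor.from_pretrained("nikhil061307/gemma4-e4b-icd-coding")

messages = [
    {
        "role": "user",
        "content": [{"type": "text", "text": "Suggest ICD-10 codes for community-acquired pneumonia."}],
    },
]

# Renders the raw prompt string: each turn is wrapped in <|turn> ... <turn|>,
# and add_generation_prompt=True appends the opening '<|turn>model' marker
prompt = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
print(prompt)
```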