Instructions to use google/translategemma-4b-it with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use google/translategemma-4b-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/translategemma-4b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/translategemma-4b-it")
model = AutoModelForImageTextToText.from_pretrained("google/translategemma-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/translategemma-4b-it with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/translategemma-4b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/translategemma-4b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/google/translategemma-4b-it
```
- SGLang
How to use google/translategemma-4b-it with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "google/translategemma-4b-it" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/translategemma-4b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "google/translategemma-4b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/translategemma-4b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use google/translategemma-4b-it with Docker Model Runner:
```shell
docker model run hf.co/google/translategemma-4b-it
```
Support for Mossi (mos) language code in TranslateGemma chat template
Hi folks,
I'm currently fine-tuning using TranslateGemma for French → Mossi translation and ran into a question about language support.
When using tokenizer.apply_chat_template, the template expects source_lang_code and target_lang_code to be present in its internal language mapping. However, the code for Mossi (mos) does not appear to be included, which results in a Jinja error:
```
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'mos'
```
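The failure mode is easy to reproduce outside the template: the chat template resolves language codes through an internal mapping, and an unlisted code fails the lookup. This sketch mimics that behavior with a plain dict (the mapping name and contents below are illustrative, not TranslateGemma's actual internals) and shows a pre-flight check you can run before calling apply_chat_template:

```python
# Illustrative stand-in for the template's internal language mapping;
# the real mapping lives inside chat_template.jinja.
SUPPORTED_LANGS = {"en": "English", "fr": "French", "de": "German"}

def resolve_lang(code: str) -> str:
    """Look up a language code, failing the same way the template does."""
    if code not in SUPPORTED_LANGS:
        # Analogous to jinja2's UndefinedError for a missing attribute
        raise KeyError(f"'dict object' has no attribute '{code}'")
    return SUPPORTED_LANGS[code]

print(resolve_lang("fr"))  # a listed code resolves fine
# resolve_lang("mos")      # an unlisted code raises, like the error above
```

Checking the code against the mapping up front turns a confusing Jinja traceback into an explicit, catchable error.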
My questions:
Is Mossi (mos) officially supported by TranslateGemma?
If not, is there a recommended language code or workaround for low-resource languages not listed in the template?
Thanks a lot for your work on TranslateGemma.
Best regards,
Mahamadi
Hi @madoss ,
Is this the language you are referring to: Mooré (mos)? I found it in the technical report for this model; please refer to Table 5: https://arxiv.org/pdf/2601.09012
If yes, it's currently paired only with English.
Hi @srikanta-221 , thanks for your response. The language I am referring to is Mooré (mos).
Is the corresponding code en-MO?
Hi,
Unfortunately no. Since the language is only paired with English, trying to translate directly from French to mos will throw an error. And no, 'en-mo' is a different language code, not related to Mooré.
Even though the language is paired with English in both directions, it is not listed in chat_template.jinja, hence the error. This means the model can translate English to mos and mos to English, but it was not specifically trained with dedicated datasets like the other languages.
There are two things you can do. You can define your pipeline to first translate from French to English and then from English to mos; for the reverse direction, apply the same two-step approach.
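The two-step pivot described above can be wrapped in a small helper. This is a sketch under stated assumptions: the `translate` callable, `pivot_translate` name, and its signature are all illustrative; in practice `translate` would be a thin wrapper around the Transformers pipeline or apply_chat_template calls shown earlier.

```python
def pivot_translate(translate, text, src, tgt, pivot="en"):
    """Translate src -> tgt via a pivot language (English by default).

    `translate` is any callable (text, source_code, target_code) -> str,
    e.g. a wrapper around a TranslateGemma generate call.
    """
    intermediate = translate(text, src, pivot)   # e.g. fr -> en
    return translate(intermediate, pivot, tgt)   # e.g. en -> mos

# Usage with a stub translator (a real one would call the model):
fake = lambda text, s, t: f"[{s}->{t}] {text}"
print(pivot_translate(fake, "Bonjour", "fr", "mos"))
# [en->mos] [fr->en] Bonjour
```

Note that pivoting compounds errors from both hops, so quality on the low-resource leg (en -> mos) will bound the overall quality.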
Or you can fine-tune the model for your custom language. It currently fails because you are passing 'mos' directly as the language code. Instead, you can define a variable, assign it a language code of your choosing, prepare a dataset, and follow the usual fine-tuning steps with LoRA.
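For the dataset-preparation step, one way to sidestep the template's language mapping is to spell the language codes out in the prompt text itself. The prompt format and helper below are a minimal sketch of this idea, not TranslateGemma's official training format:

```python
CUSTOM_LANG_CODE = "mos"  # your chosen code for Mooré

def make_training_example(src_text: str, tgt_text: str,
                          src_code: str = "fr",
                          tgt_code: str = CUSTOM_LANG_CODE) -> dict:
    """Build one supervised pair for LoRA fine-tuning.

    Rather than relying on the chat template's internal language mapping
    (which lacks 'mos'), the codes appear directly in the prompt string.
    """
    return {
        "prompt": f"Translate from {src_code} to {tgt_code}: {src_text}",
        "completion": tgt_text,
    }

example = make_training_example("Bonjour", "<Mooré translation here>")
print(example["prompt"])
# Translate from fr to mos: Bonjour
```

Pairs in this shape can then be fed to a standard LoRA fine-tuning loop (e.g. via peft and a Trainer), as outlined in the linked guides.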
Please refer here for a starter guide on the same. Please note that this is my own implementation, not an official one. There is no official model-specific fine-tuning example yet, so please stay tuned!
https://huggingface.co/google/translategemma-4b-it/discussions/4
This contains relevant details on top of which you can build on.
Also please refer to the Generic Guide for Fine tuning Gemma models: https://ai.google.dev/gemma/docs/tune
Thank you!
Thank you for your support. I will refer to the startup guide for fine-tuning it for my use case.