Support for Mossi (mos) language code in TranslateGemma chat template

#14
by madoss - opened

Hi folks,

I'm currently fine-tuning using TranslateGemma for French → Mossi translation and ran into a question about language support.

When using tokenizer.apply_chat_template, the template expects source_lang_code and target_lang_code to be present in its internal language mapping. However, the code for Mossi (mos) does not appear to be included.

This results in a Jinja error like:

jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'mos'
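For context, this error comes from Jinja's strict-undefined handling when a language code is looked up in a mapping that does not contain it. Here is a minimal, self-contained reproduction; the dict contents are illustrative stand-ins, not TranslateGemma's actual mapping:

```python
from jinja2 import Template, StrictUndefined, exceptions

# Illustrative stand-in for the template's internal language mapping;
# the real mapping lives inside TranslateGemma's chat_template.jinja.
template = Template(
    "Translate from {{ lang_names[source_lang_code] }} "
    "to {{ lang_names[target_lang_code] }}:",
    undefined=StrictUndefined,
)

caught = ""
try:
    template.render(
        lang_names={"fr": "French", "en": "English"},  # 'mos' is absent
        source_lang_code="fr",
        target_lang_code="mos",
    )
except exceptions.UndefinedError as err:
    caught = str(err)

print(caught)  # 'dict object' has no attribute 'mos'
```

Any language code missing from the mapping fails the same way, which is why this affects low-resource languages in general and not just mos.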

My questions:

  1. Is Mossi (mos) officially supported by TranslateGemma?

  2. If not, is there a recommended language code or workaround for low-resource languages not listed in the template?

Thanks a lot for your work on TranslateGemma.

Best regards,
Mahamadi

Hi @madoss ,
Is this the language you are referring to: Mooré (mos)? I got this from the technical report of this model; please refer to Table 5: https://arxiv.org/pdf/2601.09012
If yes, it's currently paired only with English.

Hi @srikanta-221, thanks for your response. The language I am referring to is Mooré (mos).
Is the corresponding code en-MO?

Hi,
Unfortunately not. Since the language is paired only with English, trying to translate directly from French to mos will throw errors. And no, 'en-mo' is a different language code, not related to yours.
Even though the language is paired bidirectionally with English, it is not listed in chat_template.jinja, hence the error. The model has the ability to translate English to mos and mos to English, but it was not specifically trained with dedicated datasets like the other languages.
There are two things you can do. You can define your pipeline to translate from French to English and then from English to mos; if you want both directions, follow the same approach in reverse.
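The pivot approach can be sketched as a simple composition of two translation calls. Here, `translate` is a hypothetical stand-in for whatever function wraps the actual model call (e.g. `apply_chat_template` plus `generate`); only the composition logic is the point:

```python
from typing import Callable

# Hypothetical signature: translate(text, source_lang_code, target_lang_code) -> str.
# In practice this would wrap tokenizer.apply_chat_template + model.generate.
TranslateFn = Callable[[str, str, str], str]

def pivot_translate(text: str, src: str, tgt: str, pivot: str,
                    translate: TranslateFn) -> str:
    """Translate src -> tgt via an intermediate pivot language (English here)."""
    intermediate = translate(text, src, pivot)   # e.g. fr -> en
    return translate(intermediate, pivot, tgt)   # e.g. en -> mos

# Usage with a real model-backed `translate` function:
# mossi_text = pivot_translate(french_text, "fr", "mos", pivot="en", translate=...)
```

Note that pivoting compounds translation errors from the two hops, so quality will generally be lower than a direct pair.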
Or you can fine-tune the model for a custom language. It currently fails because you are passing 'mos' directly as the language code. You can define a variable and assign it a language code of your liking, prepare a dataset, and then follow the usual fine-tuning steps with LoRA.
Please refer here for a startup guide on this. Note that this is my own implementation, not an official one. There is no official model-specific fine-tuning example yet, so please stay tuned!
https://huggingface.co/google/translategemma-4b-it/discussions/4
This contains relevant details on top of which you can build.
Also please refer to the Generic Guide for Fine tuning Gemma models: https://ai.google.dev/gemma/docs/tune
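The fine-tuning route above boils down to not hard-coding 'mos' into the template lookup, and instead carrying the language code as your own variable while preparing the dataset. A minimal sketch of that data-preparation step follows; the record layout and prompt wording are assumptions for illustration, not TranslateGemma's official format, so adapt them to the linked startup guide:

```python
# Assumed custom language code variable, as suggested above.
CUSTOM_LANG_CODE = "mos"     # Mooré / Mossi
CUSTOM_LANG_NAME = "Mossi"

def build_record(source_text: str, target_text: str) -> dict:
    """Build one supervised fine-tuning example as a chat-style pair.

    The prompt wording is illustrative; the language name is spelled
    out in plain text so no template language mapping is consulted.
    """
    return {
        "messages": [
            {"role": "user",
             "content": f"Translate from French to {CUSTOM_LANG_NAME}: {source_text}"},
            {"role": "assistant", "content": target_text},
        ],
        "target_lang_code": CUSTOM_LANG_CODE,
    }

# Placeholder pair; substitute your real parallel French-Mossi data.
pairs = [("<French sentence>", "<Mossi translation>")]
dataset = [build_record(fr, mos) for fr, mos in pairs]
```

From here, the usual steps apply: tokenize the records, attach a LoRA adapter, and train as in the generic Gemma fine-tuning guide linked above.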
Thank you!

Thank you for your support. I will refer to the startup guide for fine-tuning it for my use case.
