TranslateGemma Terminology Control and Context Support for Scientific Translation
#17
by EliasKim
Hi, I am currently using TranslateGemma for scientific / medical paper translation and would like to understand whether the model supports terminology control and document-level context beyond basic prompting.
I have several questions:
- Hotword / Forced Terminology Support
Does TranslateGemma support any built-in mechanism for forced terminology, keyword biasing, or lexicon constraints during generation, similar to hotwords in Whisper?
For example, can I steer the model toward preferred translations such as the following (see the sketch after this list)?
- sarcopenia -> 근감소증
- muscle fat infiltration -> 근육 지방 침윤
- lean muscle volume -> 제지방 근육 부피
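To make this concrete, the sketch below is the kind of soft biasing I have in mind. It only uses the generic Transformers `sequence_bias` argument, nothing TranslateGemma-specific, and the checkpoint id is a guess on my part:

```python
# Rough sketch only: generic Transformers sequence biasing, nothing
# TranslateGemma-specific. The checkpoint id is a guess; substitute the real repo.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/translategemma-4b-it"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

GLOSSARY = {
    "sarcopenia": "근감소증",
    "muscle fat infiltration": "근육 지방 침윤",
    "lean muscle volume": "제지방 근육 부피",
}

def term_ids(text: str) -> tuple[int, ...]:
    # Token ids of the target term. Tokenization can differ mid-sentence
    # (leading-space variants), so this is only approximate.
    return tuple(tokenizer(text, add_special_tokens=False).input_ids)

# Positive values nudge decoding toward these token sequences without hard-forcing them.
sequence_bias = {term_ids(t): 6.0 for t in GLOSSARY.values()}

messages = [{"role": "user", "content":
             "Translate to Korean: Sarcopenia is defined by loss of muscle mass."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, sequence_bias=sequence_bias, max_new_tokens=256)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```

If there is a supported way to hard-enforce terms rather than just bias toward them, that would be even better.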
- Context / Reference Text Injection
Is there any supported parameter or recommended input format for providing additional context during translation?
Examples:
- paper abstract
- domain background
- previous paragraph
- document summary
- glossary
I would like the model to use this context for disambiguation and for keeping terminology consistent across the document.
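Since I haven't found a documented parameter for this, below is the ad-hoc prompt layout I'm experimenting with. The bracketed field names are entirely my own invention, not a documented TranslateGemma input format:

```python
# Ad-hoc prompt layout of my own; not a documented TranslateGemma format.
def build_prompt(source_text: str, abstract: str,
                 previous_paragraph: str, glossary: dict[str, str]) -> str:
    glossary_lines = "\n".join(f"- {src} -> {tgt}" for src, tgt in glossary.items())
    return (
        "Translate the text below from English to Korean.\n"
        "Use the context and glossary for disambiguation and consistent terminology.\n\n"
        f"[Abstract]\n{abstract}\n\n"
        f"[Previous paragraph]\n{previous_paragraph}\n\n"
        f"[Glossary]\n{glossary_lines}\n\n"
        f"[Text to translate]\n{source_text}"
    )

prompt = build_prompt(
    source_text="Lean muscle volume declined in the sarcopenia cohort.",
    abstract="We studied age-related changes in muscle composition.",
    previous_paragraph="Participants underwent MRI-based body composition analysis.",
    glossary={"sarcopenia": "근감소증", "lean muscle volume": "제지방 근육 부피"},
)
```

Is something like this close to the intended usage, or does the model expect a particular schema?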
- Generation Parameters Beyond Prompting
Besides prompt engineering, are there Hugging Face Transformers generation parameters that can help with terminology consistency or contextual translation? I'm thinking of hooks like the following (see the sketch after this list):
- `force_words_ids` (forced tokens; as I understand it, this requires beam search)
- `forced_decoder_ids`
- prefix constraints via `prefix_allowed_tokens_fn`
- `logits_processor`
- `bad_words_ids`
- constrained decoding in general
- custom token biasing (e.g. `sequence_bias`)
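To show what I mean by these hooks, here is a sketch using standard `generate()` arguments. `GlossaryBiasProcessor` is a hypothetical helper of my own, and I don't know how well any of this interacts with TranslateGemma:

```python
# Standard Transformers decoding hooks; GlossaryBiasProcessor is my own
# hypothetical helper, and the checkpoint id is a guess.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          LogitsProcessor, LogitsProcessorList)

MODEL_ID = "google/translategemma-4b-it"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

class GlossaryBiasProcessor(LogitsProcessor):
    """Adds a flat bonus to the logits of glossary token ids at every decoding step."""
    def __init__(self, token_ids: list[int], bonus: float = 4.0):
        self.token_ids = token_ids
        self.bonus = bonus

    def __call__(self, input_ids: torch.LongTensor,
                 scores: torch.FloatTensor) -> torch.FloatTensor:
        scores[:, self.token_ids] = scores[:, self.token_ids] + self.bonus
        return scores

glossary_ids = tokenizer("근감소증", add_special_tokens=False).input_ids
# Block an unwanted near-synonym; bad_words_ids expects List[List[int]].
bad_words_ids = tokenizer(["근위축"], add_special_tokens=False).input_ids

messages = [{"role": "user", "content":
             "Translate to Korean: Sarcopenia accelerates with age."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(
    inputs,
    logits_processor=LogitsProcessorList([GlossaryBiasProcessor(glossary_ids)]),
    bad_words_ids=bad_words_ids,
    max_new_tokens=256,
)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```

My understanding is that `force_words_ids` additionally requires beam search (`num_beams > 1`), which is why I haven't tried it with sampling.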
- Multi-turn / Memory / Document Translation
For translating long scientific documents, what is the best practice for preserving context across multiple sentences or paragraphs?
Should previous translated segments or previous source paragraphs be included in the prompt?
Does TranslateGemma support document-level translation behavior, or is sentence-by-sentence prompting recommended?
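As a stopgap I've been using a rolling window like the sketch below; the window size and the prompt wording are arbitrary choices on my part:

```python
# Stopgap rolling-context loop; window size and wording are my own choices,
# and the checkpoint id is a guess.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/translategemma-4b-it"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def translate_document(paragraphs: list[str], window: int = 2) -> list[str]:
    translations: list[str] = []
    for i, para in enumerate(paragraphs):
        # Carry the last few source/target pairs so terminology stays consistent.
        context = "\n\n".join(
            f"Source: {paragraphs[j]}\nTranslation: {translations[j]}"
            for j in range(max(0, i - window), i)
        )
        prompt = (
            "Translate the new paragraph from English to Korean, staying "
            "consistent with the earlier translations.\n\n"
            f"{context}\n\nNew paragraph: {para}"
        )
        messages = [{"role": "user", "content": prompt}]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)
        out = model.generate(inputs, max_new_tokens=512)
        translations.append(
            tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True)
        )
    return translations
```

I'm also unsure whether carrying both source and target context is better than target-only context.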
- Recommended Best Practice for Scientific Translation
For domains like medicine, biology, and clinical research, what is the recommended method to improve:
- terminology consistency
- abbreviation handling
- paragraph coherence
- accurate technical translation
If possible, could you also share a Hugging Face Transformers implementation example for these use cases? The sketches above are what I've pieced together so far, but I'm not confident any of it is the intended usage.
Thank you.