# TranslateBlue v2 (GGUF Q4_K_M)
A 29-language translation model with an emphasis on African languages, distributed in GGUF format for use with llama.cpp and compatible runtimes (iOS, Android, desktop).
## Model description
- Base model: Qwen3-4B-Instruct
- Format: GGUF, quantized with Q4_K_M
- Size: ~2.3 GB
- Training: LoRA fine-tuning on parallel translation data (10,000 steps, 16 LoRA layers)
- Training data: 563,986 sentence pairs from 29 languages
## Intended use
- Text translation between the supported languages, especially to/from African languages
- Offline translation in mobile and desktop apps (e.g. TranslateBlue)
- Batch or interactive translation via llama.cpp or bindings (Swift, Python, etc.)
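For batch use, a thin wrapper around any text-generation callable is enough. The sketch below is illustrative (the `translate_batch` name and `generate` callable are assumptions, not part of this model's API); it only composes the documented prompt format and collects outputs.

```python
def translate_batch(generate, jobs):
    """Run a batch of translations through any generate(prompt) -> str
    callable (e.g. a llama-cpp-python Llama wrapper). Each job is a
    (source_language, target_language, text) triple."""
    results = []
    for src, tgt, text in jobs:
        # Build the instruction-style prompt format this model expects.
        prompt = f"Translate from {src} to {tgt}:\n\n{text}"
        results.append(generate(prompt).strip())
    return results
```

With llama-cpp-python, `generate` could be a small lambda that calls the `Llama` instance and extracts `choices[0]["text"]`.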
## Supported languages (29)
| Code | Language | Code | Language | Code | Language |
|---|---|---|---|---|---|
| sw | Swahili | ha | Hausa | yo | Yoruba |
| ig | Igbo | am | Amharic | zu | Zulu |
| xh | Xhosa | af | Afrikaans | so | Somali |
| rw | Kinyarwanda | sn | Shona | tw | Twi |
| ee | Ewe | wo | Wolof | ny | Chichewa |
| ti | Tigrinya | nso | Northern Sotho | tn | Tswana |
| om | Oromo | ve | Venda | nd | Ndebele |
| ar | Arabic | fr | French | pt | Portuguese |
| es | Spanish | de | German | zh | Chinese |
| ja | Japanese | ko | Korean | en | English |
## Limitations
- Best for short to medium sentences; very long texts may lose quality.
- Low-resource pairs may be less accurate than high-resource ones.
- No built-in language detection; source and target languages should be specified in the prompt.
## How to use

### Prompt format
Use a clear translation instruction, for example:

```
Translate from English to Swahili:

Hello, how are you?
```

Or:

```
Translate from French to Hausa:

Bonjour, comment allez-vous?
```
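Since the model has no built-in language detection, it can help to build prompts programmatically from ISO codes. The helper below is a sketch (the `build_prompt` name is hypothetical, and the code-to-name mapping is abridged from the table above); it simply mirrors the documented prompt format.

```python
# Abridged from the supported-languages table; extend with the full 29 codes.
LANGUAGE_NAMES = {"en": "English", "sw": "Swahili", "fr": "French", "ha": "Hausa"}

def build_prompt(src_code, tgt_code, text):
    """Compose the instruction prompt in the exact format shown above,
    using full language names rather than ISO codes."""
    return f"Translate from {LANGUAGE_NAMES[src_code]} to {LANGUAGE_NAMES[tgt_code]}:\n\n{text}"
```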
### With llama.cpp (command line)

```bash
llama-cli -m translateblue-v2-q4_k_m.gguf \
  -p "Translate from English to Swahili:\n\nHello, how are you?" \
  -n 64 --temp 0.3
```
### With Python (llama-cpp-python)

```python
from llama_cpp import Llama

# Load the quantized model (adjust the path as needed).
llm = Llama(model_path="translateblue-v2-q4_k_m.gguf")

# Same prompt format as the CLI example; a low temperature keeps output literal.
out = llm(
    "Translate from English to Swahili:\n\nHello, how are you?",
    max_tokens=64,
    temperature=0.3,
)
print(out["choices"][0]["text"])
```
### In iOS / Swift (e.g. TranslateBlue)

The model is registered as TranslateBlue v2 (GGUF). Once downloaded via the app, it runs through the built-in LlamaCppService using the same prompt format as above.
## Training details
| Setting | Value |
|---|---|
| Base model | Qwen3-4B-Instruct |
| Method | LoRA |
| LoRA layers | 16 |
| Steps | 10,000 |
| Training samples | 563,986 |
| Validation loss | ~2.5 |
## License
MIT.
## Citation
If you use this model in research or a product, please cite the base model (Qwen3) and the TranslateBlue project as appropriate.