## Model Overview
The Riva-Translate-4B-Instruct Neural Machine Translation model translates text across 12 languages: English (en), German (de), European Spanish (es-ES), LATAM Spanish (es-US), French (fr), Brazilian Portuguese (pt-BR), Russian (ru), Simplified Chinese (zh-CN), Traditional Chinese (zh-TW), Japanese (ja), Korean (ko), and Arabic (ar).
This model is based on a decoder-only Transformer architecture. It is a fine-tuned version of a 4B base model that was pruned and distilled from [nvidia/Mistral-NeMo-Minitron-8B-Base](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Base) using our LLM compression technique. The model was trained with multi-stage continued pretraining (CPT) and supervised fine-tuning (SFT). It uses tiktoken as its tokenizer and supports a context length of 8K tokens.
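Because of the 8K-token context limit, inputs should be checked (or truncated) before translation. A minimal sketch of such a guard, where `fits_in_context` is a hypothetical helper and the whitespace tokenizer is only a stand-in — the model's actual tiktoken tokenizer will produce different counts:

```python
# Sketch: guard inputs against the model's 8K-token context window.
# NOTE: whitespace_tokenize is a stand-in for illustration only; the real
# model tokenizes with tiktoken, so counts will differ in practice.
MAX_CONTEXT_TOKENS = 8192


def whitespace_tokenize(text: str) -> list[str]:
    """Crude stand-in tokenizer: split on whitespace."""
    return text.split()


def fits_in_context(text: str, limit: int = MAX_CONTEXT_TOKENS) -> bool:
    """Return True if the tokenized input fits within the context window."""
    return len(whitespace_tokenize(text)) <= limit
```

For an accurate check, the same pattern applies with the model's real tokenizer substituted for the whitespace stand-in.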
This model is ready for commercial use.

**Model Developer:** NVIDIA