Update README.md
README.md CHANGED

```diff
@@ -3,17 +3,21 @@ license: other
 license_name: nvidia-open-model-license-agreement
 license_link: >-
   https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
-
 library_name: transformers
 base_model:
-- nvidia/Mistral-NeMo-
+- nvidia/Mistral-NeMo-12B-Base
 ---
 
 # Riva-Translate-4B-Instruct
 
+## 🚀 **Announcement**
+We’re excited to introduce the latest update to our Riva-Translate-4B-Instruct model!
+Explore **[nvidia/Riva-Translate-4B-Instruct-v1.1](https://huggingface.co/nvidia/Riva-Translate-4B-Instruct-v1.1)** to experience improved translation quality and enhanced performance.
+
+
 ## Model Overview
 The Riva-Translate-4B-Instruct Neural Machine Translation model translates text in 12 languages: English (en), German (de), European Spanish (es-ES), LATAM Spanish (es-US), French (fr), Brazilian Portuguese (pt-BR), Russian (ru), Simplified Chinese (zh-CN), Traditional Chinese (zh-TW), Japanese (ja), Korean (ko), and Arabic (ar).
-This model was developed based on the decoder-only Transformer architecture. It is a fine-tuned version of a 4B Base model that was pruned and distilled from [nvidia/Mistral-NeMo-
+This model is based on the decoder-only Transformer architecture. It is a fine-tuned version of a 4B base model that was pruned and distilled from [nvidia/Mistral-NeMo-12B-Base](https://huggingface.co/nvidia/Mistral-NeMo-12B-Base) using our LLM compression technique. The model was trained with multi-stage continued pretraining (CPT) and supervised fine-tuning (SFT), uses tiktoken as its tokenizer, and supports a context length of 8K tokens.
 
 
 **Model Developer:** NVIDIA
```
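The supported-language list in the updated card maps naturally to a small lookup table keyed by the locale codes the card uses. A minimal sketch (the dictionary and helper below are illustrative, assembled from the card's list, not part of any official API):

```python
# Supported languages of Riva-Translate-4B-Instruct, as listed in the model card.
SUPPORTED_LANGUAGES = {
    "en": "English",
    "de": "German",
    "es-ES": "European Spanish",
    "es-US": "LATAM Spanish",
    "fr": "French",
    "pt-BR": "Brazilian Portuguese",
    "ru": "Russian",
    "zh-CN": "Simplified Chinese",
    "zh-TW": "Traditional Chinese",
    "ja": "Japanese",
    "ko": "Korean",
    "ar": "Arabic",
}


def is_supported(code: str) -> bool:
    """Check whether a locale code is one of the model's 12 supported languages."""
    return code in SUPPORTED_LANGUAGES


print(len(SUPPORTED_LANGUAGES))  # → 12
```

Note that the regional variants (es-ES vs. es-US, zh-CN vs. zh-TW, pt-BR) are distinct entries, so a validation step should match the full code rather than only the two-letter language prefix.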