Created using Olive-AI:

```shell
olive optimize --model_name_or_path nvidia/Riva-Translate-4B-Instruct --output_path models/riva_onnx --precision int4 --block_size 64
```
Model tree for TrentB/Riva-Translate-4B-Instruct-ONNX
- Base model: nvidia/Mistral-NeMo-12B-Base
- Finetuned: nvidia/Riva-Translate-4B-Instruct