OpusMT-Zh-En: Optimized for Qualcomm Devices

OpusMT-Zh-En is a neural machine translation model that translates Chinese text into English. It is based on the Marian transformer architecture and has been optimized for edge inference by splitting the network into separate encoder and decoder components with modified attention mechanisms. The model accepts input sequences of up to 256 tokens and generates the English translation autoregressively.
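The encoder/decoder split matters for on-device latency: the encoder runs once per input, while the decoder is invoked once per generated token. The toy functions below are stand-ins (hypothetical, not the real OpusMT weights) that sketch this inference pattern:

```python
EOS = 0  # stand-in end-of-sequence token id

def encoder(src_tokens):
    # Stand-in: the real encoder returns hidden states for all source tokens.
    return [t * 2 for t in src_tokens]

def decoder(enc_states, prev_tokens):
    # Stand-in: the real decoder attends over enc_states and the tokens
    # generated so far, then emits the next token. Here we just echo the
    # encoder states one step at a time, then EOS.
    step = len(prev_tokens)
    return enc_states[step] if step < len(enc_states) else EOS

def translate(src_tokens, max_len=256):
    enc_states = encoder(src_tokens)   # encoder: exactly one pass
    out = []
    for _ in range(max_len):           # decoder: one pass per output token
        tok = decoder(enc_states, out)
        if tok == EOS:
            break
        out.append(tok)
    return out

print(translate([3, 1, 4]))  # -> [6, 2, 8]
```

Because of this loop, decoder latency dominates for long outputs, which is why the two components are profiled separately below.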

This is based on the implementation of OpusMT-Zh-En found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can also use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Version | Download |
|---|---|---|---|---|
| VOICE_AI | float | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.45 | Download |
| VOICE_AI | float | Snapdragon® X2 Elite | QAIRT 2.45 | Download |
| VOICE_AI | float | Snapdragon® X Elite | QAIRT 2.45 | Download |
| VOICE_AI | float | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® QCS8550 (Proxy) | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® SA8775P | QAIRT 2.45 | Download |
| VOICE_AI | float | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® SA7255P | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® SA8295P | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® QCS9075 | QAIRT 2.45 | Download |
| VOICE_AI | float | Qualcomm® QCS8450 (Proxy) | QAIRT 2.45 | Download |

For more device-specific assets and performance metrics, visit OpusMT-Zh-En on Qualcomm® AI Hub.

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See our repository for OpusMT-Zh-En on GitHub for usage instructions.
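Following the pattern used by the Qualcomm® AI Hub Models library, an export is typically driven from the command line. The module name `opusmt_zh_en` and the flag values below are assumptions for illustration; check the repository for the exact identifiers supported by this model:

```
pip install qai-hub-models

# Hypothetical module name and options -- verify against the GitHub repo.
python -m qai_hub_models.models.opusmt_zh_en.export \
    --device "Snapdragon 8 Elite QRD" \
    --target-runtime qnn
```

Exporting with custom weights or input shapes follows the same entry point with the model-specific options documented in the repository.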

Model Details

Model Type: Text generation

Model Stats:

  • Model checkpoint: Helsinki-NLP/opus-mt-zh-en
  • Input: Chinese text
  • Max input sequence length: 256 tokens
  • Max output sequence length: 256 tokens
  • Number of parameters (encoder): ~74M
  • Model size (encoder) (float): ~280 MB
  • Number of parameters (decoder): ~74M
  • Model size (decoder) (float): ~280 MB
  • Number of encoder layers: 6
  • Number of decoder layers: 6
  • Attention heads: 8
  • Hidden dimension: 512

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| decoder | VOICE_AI | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.154 | 1 - 11 | NPU |
| decoder | VOICE_AI | float | Snapdragon® 8 Elite Mobile | 2.4 | 0 - 9 | NPU |
| decoder | VOICE_AI | float | Snapdragon® X2 Elite | 2.208 | 12 - 12 | NPU |
| decoder | VOICE_AI | float | Snapdragon® X Elite | 3.075 | 12 - 12 | NPU |
| decoder | VOICE_AI | float | Snapdragon® 8 Gen 3 Mobile | 2.854 | 12 - 20 | NPU |
| decoder | VOICE_AI | float | Qualcomm® QCS8275 (Proxy) | 6.608 | 12 - 19 | NPU |
| decoder | VOICE_AI | float | Qualcomm® QCS8550 (Proxy) | 3.457 | 12 - 14 | NPU |
| decoder | VOICE_AI | float | Qualcomm® SA8775P | 4.201 | 12 - 20 | NPU |
| decoder | VOICE_AI | float | Qualcomm® QCS9075 | 3.941 | 12 - 26 | NPU |
| decoder | VOICE_AI | float | Qualcomm® QCS8450 (Proxy) | 4.512 | 12 - 21 | NPU |
| decoder | VOICE_AI | float | Qualcomm® SA7255P | 6.608 | 12 - 19 | NPU |
| decoder | VOICE_AI | float | Qualcomm® SA8295P | 4.479 | 12 - 17 | NPU |
| decoder | VOICE_AI | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.4 | 0 - 9 | NPU |
| encoder | VOICE_AI | float | Snapdragon® 8 Elite Gen 5 Mobile | 1.749 | 0 - 10 | NPU |
| encoder | VOICE_AI | float | Snapdragon® 8 Elite Mobile | 2.233 | 0 - 9 | NPU |
| encoder | VOICE_AI | float | Snapdragon® X2 Elite | 2.141 | 0 - 0 | NPU |
| encoder | VOICE_AI | float | Snapdragon® X Elite | 3.907 | 0 - 0 | NPU |
| encoder | VOICE_AI | float | Snapdragon® 8 Gen 3 Mobile | 2.64 | 0 - 7 | NPU |
| encoder | VOICE_AI | float | Qualcomm® QCS8275 (Proxy) | 12.645 | 0 - 8 | NPU |
| encoder | VOICE_AI | float | Qualcomm® QCS8550 (Proxy) | 3.63 | 0 - 2 | NPU |
| encoder | VOICE_AI | float | Qualcomm® SA8775P | 4.735 | 0 - 9 | NPU |
| encoder | VOICE_AI | float | Qualcomm® QCS9075 | 4.569 | 0 - 8 | NPU |
| encoder | VOICE_AI | float | Qualcomm® QCS8450 (Proxy) | 5.032 | 0 - 9 | NPU |
| encoder | VOICE_AI | float | Qualcomm® SA7255P | 12.645 | 0 - 8 | NPU |
| encoder | VOICE_AI | float | Qualcomm® SA8295P | 5.349 | 0 - 5 | NPU |
| encoder | VOICE_AI | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.233 | 0 - 9 | NPU |
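Since the encoder runs once and the decoder runs once per generated token, the per-component timings above can be combined into an end-to-end latency estimate. The sketch below uses the Snapdragon® 8 Elite Gen 5 Mobile figures; the fixed per-token decoder cost is a simplifying assumption:

```python
T_ENC, T_DEC = 1.749, 2.154  # ms, Snapdragon 8 Elite Gen 5 Mobile (table above)

def translation_latency_ms(n_output_tokens):
    # One encoder pass, then one decoder pass per generated token.
    return T_ENC + n_output_tokens * T_DEC

print(f"{translation_latency_ms(20):.1f} ms")  # -> 44.8 ms for a 20-token output
```

For short sentences this keeps whole-sentence translation well under 100 ms on the fastest chipsets listed, though real latency also includes tokenization and any host/NPU transfer overhead not captured here.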

License

  • The license for the original implementation of OpusMT-Zh-En can be found here.
