OpusMT-En-Es: Optimized for Mobile Deployment

OpusMT English to Spanish neural machine translation model based on MarianMT transformer architecture

The OpusMT English-to-Spanish translation model is a neural machine translation system for translating English text into Spanish. It is based on the Marian transformer architecture and has been optimized for edge inference by splitting the network into separate encoder and decoder components with modified attention mechanisms. The model accepts input sequences of up to 256 tokens and generates accurate Spanish translations, performing robustly on real-world translation tasks.
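
For reference, the underlying checkpoint can be exercised off target with Hugging Face transformers. A minimal sketch, independent of the on-device tooling in this repository, assuming transformers and PyTorch are installed:

from transformers import MarianMTModel, MarianTokenizer

# Load the pre-trained checkpoint this model is derived from
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es")

# Tokenize English input (respecting the 256-token limit) and translate
inputs = tokenizer(["How are you today?"], return_tensors="pt", truncation=True, max_length=256)
output_ids = model.generate(**inputs, max_length=256)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))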

This model is an implementation of OpusMT-En-Es found here.

This repository provides scripts to run OpusMT-En-Es on Qualcomm® devices. More details on model performance across various devices can be found here.

Model Details

  • Model Type: Text generation
  • Model Stats:
    • Model checkpoint: Helsinki-NLP/opus-mt-en-es
    • Input: English text
    • Max input sequence length: 256 tokens
    • Max output sequence length: 256 tokens
    • Number of parameters (OpusMTEncoder): ~74M
    • Model size (OpusMTEncoder) (float): ~280 MB
    • Number of parameters (OpusMTDecoder): ~74M
    • Model size (OpusMTDecoder) (float): ~280 MB
    • Number of encoder layers: 6
    • Number of decoder layers: 6
    • Attention heads: 8
    • Hidden dimension: 512
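
The encoder and decoder parameter counts above can be sanity-checked against the original checkpoint. A rough sketch using Hugging Face transformers (an assumption here: the ~74M figures count the shared embedding table in each component):

from transformers import MarianMTModel

model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-es")

# Per-component parameter counts; the shared source/target embedding
# matrix is reachable from both the encoder and the decoder
encoder_params = sum(p.numel() for p in model.model.encoder.parameters())
decoder_params = sum(p.numel() for p in model.model.decoder.parameters())
print(f"Encoder: ~{encoder_params / 1e6:.0f}M, Decoder: ~{decoder_params / 1e6:.0f}M")

The table below reports on-device profiling results for each component across devices and runtimes.
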
| Model | Precision | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| OpusMTEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 12.78 ms | 6 - 181 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 12.545 ms | 0 - 139 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 5.097 ms | 6 - 341 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.772 ms | 0 - 171 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 3.66 ms | 0 - 2 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 3.523 ms | 0 - 193 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 6.073 ms | 0 - 114 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.881 ms | 6 - 181 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 19.331 ms | 0 - 139 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 12.78 ms | 6 - 181 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 12.545 ms | 0 - 139 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 5.533 ms | 6 - 179 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 5.269 ms | 0 - 139 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.881 ms | 6 - 181 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 19.331 ms | 0 - 139 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.67 ms | 0 - 341 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.547 ms | 0 - 175 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 4.165 ms | 15 - 328 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTEncoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 2.256 ms | 0 - 355 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 2.137 ms | 0 - 142 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 3.536 ms | 0 - 334 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTEncoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | TFLITE | 1.774 ms | 0 - 315 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTEncoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | QNN_DLC | 1.623 ms | 0 - 145 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 2.862 ms | 0 - 298 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 3.856 ms | 0 - 0 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTEncoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 5.821 ms | 109 - 109 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | TFLITE | 6.339 ms | 0 - 424 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | QCS8275 (Proxy) | Qualcomm® QCS8275 (Proxy) | QNN_DLC | 5.363 ms | 6 - 242 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | TFLITE | 4.706 ms | 0 - 426 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | QCS8450 (Proxy) | Qualcomm® QCS8450 (Proxy) | QNN_DLC | 4.01 ms | 2 - 247 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | TFLITE | 3.448 ms | 0 - 3 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | QNN_DLC | 2.89 ms | 2 - 438 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | QCS8550 (Proxy) | Qualcomm® QCS8550 (Proxy) | ONNX | 3.818 ms | 12 - 15 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | TFLITE | 4.116 ms | 0 - 292 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | QCS9075 (Proxy) | Qualcomm® QCS9075 (Proxy) | QNN_DLC | 3.501 ms | 6 - 243 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | SA7255P ADP | Qualcomm® SA7255P | TFLITE | 6.339 ms | 0 - 424 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | SA7255P ADP | Qualcomm® SA7255P | QNN_DLC | 5.363 ms | 6 - 242 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | SA8295P ADP | Qualcomm® SA8295P | TFLITE | 4.75 ms | 0 - 284 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | SA8295P ADP | Qualcomm® SA8295P | QNN_DLC | 4.082 ms | 6 - 234 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | SA8775P ADP | Qualcomm® SA8775P | TFLITE | 4.116 ms | 0 - 292 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | SA8775P ADP | Qualcomm® SA8775P | QNN_DLC | 3.501 ms | 6 - 243 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | TFLITE | 2.651 ms | 0 - 571 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | QNN_DLC | 2.175 ms | 0 - 258 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 Mobile | ONNX | 2.931 ms | 0 - 415 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTDecoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | TFLITE | 2.309 ms | 0 - 485 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | QNN_DLC | 1.976 ms | 0 - 234 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | Samsung Galaxy S25 | Snapdragon® 8 Elite For Galaxy Mobile | ONNX | 2.58 ms | 0 - 442 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTDecoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | TFLITE | 2.17 ms | 0 - 434 MB | NPU | OpusMT-En-Es.tflite |
| OpusMTDecoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | QNN_DLC | 1.867 ms | 0 - 243 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | Snapdragon 8 Elite Gen 5 QRD | Snapdragon® 8 Elite Gen 5 Mobile | ONNX | 2.387 ms | 1 - 405 MB | NPU | OpusMT-En-Es.onnx.zip |
| OpusMTDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN_DLC | 2.751 ms | 6 - 6 MB | NPU | OpusMT-En-Es.dlc |
| OpusMTDecoder | float | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 3.195 ms | 161 - 161 MB | NPU | OpusMT-En-Es.onnx.zip |

Installation

Install the package via pip:

# NOTE: 3.10 <= PYTHON_VERSION < 3.14 is supported.
pip install "qai-hub-models[opus-mt-en-es]"

Configure Qualcomm® AI Hub Workbench to run this model on a cloud-hosted device

Sign in to Qualcomm® AI Hub Workbench with your Qualcomm® ID. Once signed in, navigate to Account -> Settings -> API Token.

With this API token, you can configure your client to run models on cloud-hosted devices.

qai-hub configure --api_token API_TOKEN

Navigate to docs for more information.
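
To confirm the client is configured, you can list the cloud-hosted devices available to your account. A quick check using the qai_hub Python client:

import qai_hub as hub

# Print the names of all devices currently available on AI Hub
for device in hub.get_devices():
    print(device.name)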

Demo off target

The package contains a simple end-to-end demo that downloads pre-trained weights and runs this model on a sample input.

python -m qai_hub_models.models.opus_mt_en_es.demo

The above demo runs a reference implementation of pre-processing, model inference, and post-processing.

NOTE: If you want to run this demo in a Jupyter Notebook or a Google Colab-like environment, add the following to your cell instead of the command above.

%run -m qai_hub_models.models.opus_mt_en_es.demo
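
Alternatively, the demo can be driven from Python. A minimal sketch, assuming the demo module exposes a main() entry point (the common pattern across qai-hub-models demo scripts):

from qai_hub_models.models.opus_mt_en_es.demo import main

# Runs the same end-to-end demo (download weights, translate a sample input)
main()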

Run model on a cloud-hosted device

In addition to the demo, you can run the model on a cloud-hosted Qualcomm® device. The export script does the following:

  • Runs a performance check on-device on a cloud-hosted device.
  • Downloads compiled assets that can be deployed on-device for Android.
  • Checks accuracy between PyTorch and on-device outputs.

python -m qai_hub_models.models.opus_mt_en_es.export

How does this work?

This export script leverages Qualcomm® AI Hub to optimize, validate, and deploy this model on-device. Let's go through each step below in detail:

Step 1: Compile model for on-device deployment

To compile a PyTorch model for on-device deployment, we first trace the model in memory using torch.jit.trace and then call the submit_compile_job API.

import torch

import qai_hub as hub
from qai_hub_models.models.opus_mt_en_es import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S25")

# Trace model
input_spec = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(
    torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()]
)

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_spec,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
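
Once compilation finishes, the compiled asset can also be saved locally, for example to inspect it or package it into an app. A small sketch; the output filename is just an illustration:

# Download the compiled model to a local file
target_model.download("opus_mt_en_es.tflite")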

Step 2: Performance profiling on cloud-hosted device

After compiling the model in Step 1, it can be profiled on-device using the target_model. Note that this script runs the model on a device automatically provisioned in the cloud. Once the job is submitted, you can navigate to the provided job URL to view a variety of on-device performance metrics.

profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
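
Besides the job URL, profile results can be fetched programmatically once the job completes. A minimal sketch using the qai_hub client:

# Download the raw profile data as a dictionary (waits for the job to finish)
profile_data = profile_job.download_profile()
print(profile_data.keys())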
        

Step 3: Verify on-device accuracy

To verify the accuracy of the model on-device, you can run on-device inference on sample input data on the same cloud-hosted device.

input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()

With the output of the model, you can compute metrics such as PSNR or relative error, or spot-check the output against the expected output.
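
For instance, a minimal PSNR helper could look like the following. The pairing of on-device tensors with PyTorch reference outputs (reference_outputs below) is a hypothetical layout you would adapt to your own reference run:

import numpy as np

def psnr(ref, test, eps=1e-10):
    # Peak signal-to-noise ratio in dB; higher values mean closer agreement
    ref = np.asarray(ref, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - test) ** 2)
    return 20.0 * np.log10(np.abs(ref).max() / (np.sqrt(mse) + eps))

# on_device_output maps each output name to a list of numpy arrays;
# reference_outputs is a hypothetical dict with the same layout, produced
# by running the PyTorch model on the same sample inputs:
# for name, arrays in on_device_output.items():
#     print(name, psnr(reference_outputs[name][0], arrays[0]))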

Note: On-device profiling and inference require access to Qualcomm® AI Hub Workbench. Sign up for access.

Deploying compiled model to Android

The models can be deployed using multiple runtimes:

  • TensorFlow Lite (.tflite export): This tutorial provides a guide to deploy the .tflite model in an Android application.

  • QNN (.so export): This sample app provides instructions on how to use the .so shared library in an Android application.

View on Qualcomm® AI Hub

Get more details on OpusMT-En-Es's performance across various devices here. Explore all available models on Qualcomm® AI Hub.

License

  • The license for the original implementation of OpusMT-En-Es can be found here.
