xlm-roberta-base - LiteRT

This is a LiteRT (formerly TensorFlow Lite) conversion of FacebookAI/xlm-roberta-base for efficient on-device inference.

Model Details

| Property | Value |
|----------|-------|
| Original Model | FacebookAI/xlm-roberta-base |
| Format | LiteRT (.tflite) |
| File Size | 1060.0 MB |
| Task | Multilingual Feature Extraction (100 languages) |
| Max Sequence Length | 128 |
| Output Dimension | 768 |
| Pooling Mode | N/A (full hidden states) |
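
The shapes listed above can be verified directly against the .tflite file. A minimal sketch (exact tensor names, order, and dtypes depend on the conversion):

from ai_edge_litert.interpreter import Interpreter

# Inspect the converted model's I/O signature to confirm the table above
interpreter = Interpreter(model_path="FacebookAI_xlm-roberta-base.tflite")
interpreter.allocate_tensors()
for d in interpreter.get_input_details():
    print(d["name"], d["shape"], d["dtype"])      # expected: two [1, 128] int64 inputs
print(interpreter.get_output_details()[0]["shape"])  # expected: [1, 128, 768]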

Performance

Benchmarked on AMD CPU (WSL2):

| Metric | Value |
|--------|-------|
| Inference Latency | 74.6 ms |
| Throughput | 13.4 inferences/sec |
| Cosine Similarity vs Original | 1.0000 ✅ |
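
For context, a figure like this can be reproduced by timing repeated invocations. A sketch, assuming interpreter, input_details, and tokenizer are set up as in the Quick Start below (the run count of 50 is an arbitrary choice):

import time
import numpy as np

# Prepare one fixed 128-token input, then average the latency over many runs
encoded = tokenizer("Hello, world!", padding="max_length", max_length=128,
                    truncation=True, return_tensors="np")
interpreter.set_tensor(input_details[0]["index"], encoded["input_ids"].astype(np.int64))
interpreter.set_tensor(input_details[1]["index"], encoded["attention_mask"].astype(np.int64))

runs = 50
start = time.perf_counter()
for _ in range(runs):
    interpreter.invoke()
latency_ms = (time.perf_counter() - start) / runs * 1000
print(f"Mean latency: {latency_ms:.1f} ms ({1000 / latency_ms:.1f} inferences/sec)")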

Quick Start

import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import AutoTokenizer

# Load model and tokenizer
interpreter = Interpreter(model_path="FacebookAI_xlm-roberta-base.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/xlm-roberta-base")

def get_hidden_states(text: str) -> np.ndarray:
    """Get hidden states for input text."""
    encoded = tokenizer(
        text,
        padding="max_length",
        max_length=128,
        truncation=True,
        return_tensors="np"
    )

    # Input tensor order can vary between conversions; match on
    # input_details[i]["name"] if these indices don't line up.
    interpreter.set_tensor(input_details[0]["index"], encoded["input_ids"].astype(np.int64))
    interpreter.set_tensor(input_details[1]["index"], encoded["attention_mask"].astype(np.int64))
    interpreter.invoke()

    return interpreter.get_tensor(output_details[0]["index"])

# Example
hidden = get_hidden_states("Hello, world!")
cls_embedding = hidden[0, 0, :]  # first-token (<s>/CLS) embedding for classification
print(f"Hidden shape: {hidden.shape}")  # (1, 128, 768)

Files

  • FacebookAI_xlm-roberta-base.tflite - The LiteRT model file

Conversion Details

  • Conversion Tool: ai-edge-torch
  • Conversion Date: 2026-01-12
  • Source Framework: PyTorch → LiteRT
  • Validation: Cosine similarity 1.0000 vs original
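
For reference, a PyTorch → LiteRT conversion of this kind looks roughly like the sketch below. This is an illustration under assumed settings (batch size 1, sequence length 128, a hypothetical Wrapper class), not the exact script used for this model:

import torch
import ai_edge_torch
from transformers import AutoModel

# The converter expects plain tensor outputs, so a thin wrapper unpacks
# the Hugging Face ModelOutput into last_hidden_state.
class Wrapper(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, input_ids, attention_mask):
        return self.model(input_ids=input_ids,
                          attention_mask=attention_mask).last_hidden_state

model = Wrapper(AutoModel.from_pretrained("FacebookAI/xlm-roberta-base")).eval()
sample = (torch.zeros(1, 128, dtype=torch.long),   # input_ids
          torch.ones(1, 128, dtype=torch.long))    # attention_mask
edge_model = ai_edge_torch.convert(model, sample)
edge_model.export("FacebookAI_xlm-roberta-base.tflite")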

Intended Use

  • Mobile Applications: On-device semantic search, RAG systems
  • Edge Devices: IoT, embedded systems, Raspberry Pi
  • Offline Processing: Privacy-preserving inference
  • Low-latency Applications: Real-time processing
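
As a small illustration of the semantic-search use case, candidate texts can be ranked by cosine similarity between embeddings. This sketch reuses the hypothetical mean_pool helper from the Quick Start section and works across languages:

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = mean_pool("Where can I buy coffee?")
docs = [
    "Кофейня за углом открыта до 20:00",  # Russian: "The café around the corner is open until 8 pm"
    "The library closes at noon",
    "Ein Café gibt es gleich nebenan",    # German: "There is a café right next door"
]
ranked = sorted(docs, key=lambda d: cosine(query, mean_pool(d)), reverse=True)
print(ranked[0])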

Limitations

  • Fixed sequence length (128 tokens); longer inputs must be truncated or chunked (see the sketch after this list)
  • CPU inference (GPU delegate requires setup)
  • Tokenizer loaded separately from original model
  • Float32 precision
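
One way around the fixed 128-token window is to split longer inputs into chunks and average the per-chunk embeddings. embed_long below is a hypothetical helper built on the mean_pool sketch above; the stride of 126 leaves room for the special tokens added on re-encoding:

def embed_long(text: str, stride: int = 126) -> np.ndarray:
    """Hypothetical helper: embed texts longer than 128 tokens by
    chunking and averaging per-chunk mean-pooled embeddings."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + stride] for i in range(0, len(ids), stride)]
    # Decode each chunk back to text so mean_pool can re-encode it
    embs = [mean_pool(tokenizer.decode(chunk)) for chunk in chunks]
    return np.mean(embs, axis=0)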

License

This model inherits the license of the original FacebookAI/xlm-roberta-base model (MIT).

Citation

@article{conneau2019unsupervised,
    title={Unsupervised Cross-lingual Representation Learning at Scale},
    author={Conneau, Alexis and Khandelwal, Kartikay and others},
    journal={arXiv preprint arXiv:1911.02116},
    year={2019}
}

Acknowledgments

Converted by Bombek1.
