RoBERTa: A Robustly Optimized BERT Pretraining Approach
Paper: [arXiv:1907.11692](https://arxiv.org/abs/1907.11692)
This is a LiteRT (formerly TensorFlow Lite) conversion of FacebookAI/roberta-base for efficient on-device inference.
| Property | Value |
|---|---|
| Original Model | FacebookAI/roberta-base |
| Format | LiteRT (.tflite) |
| File Size | 473.7 MB |
| Task | Feature Extraction / Classification Base |
| Max Sequence Length | 128 |
| Output Dimension | 768 |
| Pooling Mode | N/A (Full hidden states) |
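The shapes in the table can be verified directly from the converted file. A minimal sketch, assuming the `.tflite` file sits in the working directory under the name listed in the Files section below; the exact tensor names and dtypes reported depend on the conversion:

```python
from ai_edge_litert.interpreter import Interpreter

# Load the converted model and inspect its tensor signatures.
interpreter = Interpreter(model_path="FacebookAI_roberta-base.tflite")
interpreter.allocate_tensors()

# Inputs: expect (1, 128) int64 tensors for input_ids and attention_mask.
for detail in interpreter.get_input_details():
    print(detail["name"], detail["shape"], detail["dtype"])

# Output: expect (1, 128, 768) float32 hidden states.
for detail in interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```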
Benchmarked on an AMD CPU (WSL2):
| Metric | Value |
|---|---|
| Inference Latency | 81.2 ms |
| Throughput | 12.3 inferences/sec |
| Cosine Similarity vs Original | 1.0000 ✅ |
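A rough sketch of how the latency and throughput numbers could be reproduced, using the `get_hidden_states` helper from the usage example below; the warm-up and run counts here are arbitrary choices, not the exact benchmark setup:

```python
import time

# Warm up so tensor allocation and caches do not skew the measurement.
for _ in range(5):
    get_hidden_states("The quick brown fox jumps over the lazy dog.")

# Time repeated single-batch invocations.
runs = 50
start = time.perf_counter()
for _ in range(runs):
    get_hidden_states("The quick brown fox jumps over the lazy dog.")
elapsed = time.perf_counter() - start

print(f"Latency: {elapsed / runs * 1000:.1f} ms")        # compare with the table above
print(f"Throughput: {runs / elapsed:.1f} inferences/sec")
```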
```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import AutoTokenizer

# Load model and tokenizer
interpreter = Interpreter(model_path="FacebookAI_roberta-base.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

def get_hidden_states(text: str) -> np.ndarray:
    """Get hidden states for input text."""
    encoded = tokenizer(
        text,
        padding="max_length",
        max_length=128,
        truncation=True,
        return_tensors="np",
    )
    interpreter.set_tensor(input_details[0]["index"], encoded["input_ids"].astype(np.int64))
    interpreter.set_tensor(input_details[1]["index"], encoded["attention_mask"].astype(np.int64))
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])

# Example
hidden = get_hidden_states("Hello, world!")
cls_embedding = hidden[0, 0, :]  # CLS token for classification
print(f"Hidden shape: {hidden.shape}")  # (1, 128, 768)
```
FacebookAI_roberta-base.tflite - the LiteRT model file

This model inherits the license from the original FacebookAI/roberta-base. If you use it, please cite the original work:
```bibtex
@article{liu2019roberta,
  title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author={Liu, Yinhan and Ott, Myle and others},
  journal={arXiv preprint arXiv:1907.11692},
  year={2019}
}
```
Converted by Bombek1