---
tags:
- bert
- transformers
- litert
- tflite
- edge
- on-device
license: mit
base_model: FacebookAI/roberta-base
pipeline_tag: feature-extraction
---

# roberta-base - LiteRT

This is a [LiteRT](https://ai.google.dev/edge/litert) (formerly TensorFlow Lite) conversion of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) for efficient on-device inference.

## Model Details

| Property | Value |
|----------|-------|
| **Original Model** | [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) |
| **Format** | LiteRT (.tflite) |
| **File Size** | 473.7 MB |
| **Task** | Feature Extraction / Classification Base |
| **Max Sequence Length** | 128 |
| **Output Dimension** | 768 |
| **Pooling Mode** | N/A (full hidden states are returned; see the pooling example at the end of this card) |

## Performance

Benchmarked on an AMD CPU (WSL2):

| Metric | Value |
|--------|-------|
| **Inference Latency** | 81.2 ms |
| **Throughput** | 12.3 inferences/sec |
| **Cosine Similarity vs. Original** | 1.0000 ✅ |

## Quick Start

```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import AutoTokenizer

# Load model and tokenizer
interpreter = Interpreter(model_path="FacebookAI_roberta-base.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

def get_hidden_states(text: str) -> np.ndarray:
    """Get hidden states for input text."""
    encoded = tokenizer(
        text,
        padding="max_length",
        max_length=128,
        truncation=True,
        return_tensors="np"
    )
    interpreter.set_tensor(input_details[0]["index"], encoded["input_ids"].astype(np.int64))
    interpreter.set_tensor(input_details[1]["index"], encoded["attention_mask"].astype(np.int64))
    interpreter.invoke()
    return interpreter.get_tensor(output_details[0]["index"])

# Example
hidden = get_hidden_states("Hello, world!")
cls_embedding = hidden[0, 0, :]  # <s> (CLS) token for classification
print(f"Hidden shape: {hidden.shape}")  # (1, 128, 768)
```

## Files

- `FacebookAI_roberta-base.tflite` - The LiteRT model file

## Conversion Details

- **Conversion Tool**: [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch)
- **Conversion Date**: 2026-01-12
- **Source Framework**: PyTorch → LiteRT
- **Validation**: Cosine similarity of 1.0000 vs. the original model

## Intended Use

- **Mobile Applications**: On-device semantic search, RAG systems
- **Edge Devices**: IoT, embedded systems, Raspberry Pi
- **Offline Processing**: Privacy-preserving inference
- **Low-latency Applications**: Real-time processing

## Limitations

- Fixed sequence length (128 tokens); longer inputs must be truncated or chunked (see the sketch at the end of this card)
- CPU inference by default (the GPU delegate requires additional setup)
- The tokenizer is not bundled and must be loaded separately from the original model
- Float32 precision (not quantized)

## License

This model inherits the license of the original model:

- **License**: MIT ([source](https://huggingface.co/FacebookAI/roberta-base))

## Citation

```bibtex
@article{liu2019roberta,
  title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author={Liu, Yinhan and Ott, Myle and others},
  journal={arXiv preprint arXiv:1907.11692},
  year={2019}
}
```

## Acknowledgments

- Original model by [FacebookAI](https://huggingface.co/FacebookAI)
- Conversion using [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch)

---

*Converted by [Bombek1](https://huggingface.co/Bombek1)*
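
## Example: Sentence Embeddings via Mean Pooling

The model returns full hidden states rather than a pooled vector, so for semantic-search or RAG use cases you need to reduce the `(1, 128, 768)` output to a single embedding yourself. The sketch below shows one common approach (attention-mask-weighted mean pooling); it is not part of the original conversion. `get_sentence_embedding` and `cosine_similarity` are hypothetical helpers that assume the `tokenizer` and `get_hidden_states` from the Quick Start are already in scope.

```python
import numpy as np

def get_sentence_embedding(text: str) -> np.ndarray:
    """Mean-pool token hidden states into a single 768-d vector.

    Hypothetical helper; reuses `tokenizer` and `get_hidden_states`
    from the Quick Start above.
    """
    # Re-tokenize to recover the attention mask (padding positions = 0)
    encoded = tokenizer(
        text, padding="max_length", max_length=128,
        truncation=True, return_tensors="np"
    )
    hidden = get_hidden_states(text)               # (1, 128, 768)
    mask = encoded["attention_mask"][..., None]    # (1, 128, 1)
    summed = (hidden * mask).sum(axis=1)           # sum over non-padding tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None) # avoid division by zero
    return (summed / counts)[0]                    # (768,)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: compare two sentences
emb1 = get_sentence_embedding("The cat sits on the mat.")
emb2 = get_sentence_embedding("A cat is resting on a rug.")
print(f"Similarity: {cosine_similarity(emb1, emb2):.3f}")
```

Note that roberta-base is a general-purpose encoder, not a model fine-tuned for sentence similarity, so mean-pooled embeddings are a reasonable baseline rather than a tuned retrieval model.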
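
## Example: Handling Texts Longer Than 128 Tokens

Because the sequence length is fixed at 128, longer documents are silently truncated by the Quick Start code. A simple workaround is to split the text into token windows, embed each window, and average the results. This is a heuristic sketch, not part of the original conversion; `embed_long_text` is a hypothetical helper that reuses the `tokenizer` and the `get_sentence_embedding` helper from the pooling example above.

```python
import numpy as np

def embed_long_text(text: str, window: int = 120) -> np.ndarray:
    """Embed text longer than 128 tokens by chunking and averaging.

    Hypothetical helper; `window` leaves headroom for the <s>/</s>
    special tokens that the tokenizer adds back for each chunk.
    """
    # Tokenize without special tokens, then split into fixed-size windows
    token_ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [token_ids[i:i + window] for i in range(0, len(token_ids), window)]
    # Decode each window back to text and embed it with the 128-token model.
    # (Decode/re-encode round-trips are not exact at chunk boundaries; this
    # is acceptable for a coarse document embedding.)
    embeddings = [
        get_sentence_embedding(tokenizer.decode(chunk)) for chunk in chunks
    ]
    return np.mean(embeddings, axis=0)

# Usage (assumes `long_doc` holds a document longer than 128 tokens):
# vec = embed_long_text(long_doc)  # -> shape (768,)
```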