---
tags:
- bert
- transformers
- litert
- tflite
- edge
- on-device
license: mit
base_model: FacebookAI/roberta-base
pipeline_tag: feature-extraction
---

# roberta-base - LiteRT

This is a [LiteRT](https://ai.google.dev/edge/litert) (formerly TensorFlow Lite) conversion of [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) for efficient on-device inference.

## Model Details

| Property | Value |
|----------|-------|
| **Original Model** | [FacebookAI/roberta-base](https://huggingface.co/FacebookAI/roberta-base) |
| **Format** | LiteRT (.tflite) |
| **File Size** | 473.7 MB |
| **Task** | Feature Extraction / Classification Base |
| **Max Sequence Length** | 128 |
| **Output Dimension** | 768 |
| **Pooling Mode** | N/A (full hidden states) |

## Performance

Benchmarked on an Intel CPU (WSL2):

| Metric | Value |
|--------|-------|
| **Inference Latency** | 81.2 ms |
| **Throughput** | 12.3 inferences/sec |
| **Cosine Similarity vs Original** | 1.0000 ✅ |

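The figures above can be reproduced with a simple wall-clock loop. The sketch below times an arbitrary callable; it is demonstrated with a stand-in NumPy workload, since the real interpreter call depends on the local model file (you would pass e.g. `lambda: get_hidden_states("Hello, world!")` instead):

```python
import time

import numpy as np


def benchmark(fn, warmup: int = 3, runs: int = 20) -> dict:
    """Time a callable and report mean latency (ms) and throughput (runs/sec)."""
    for _ in range(warmup):  # warm caches and trigger any lazy initialization
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start
    mean_ms = elapsed / runs * 1000
    return {"latency_ms": mean_ms, "throughput_per_sec": 1000 / mean_ms}


# Stand-in workload; replace with the interpreter call for real numbers
x = np.random.rand(128, 768).astype(np.float32)
stats = benchmark(lambda: x @ x.T)
print(f"{stats['latency_ms']:.2f} ms, {stats['throughput_per_sec']:.1f}/sec")
```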
## Quick Start

```python
import numpy as np
from ai_edge_litert.interpreter import Interpreter
from transformers import AutoTokenizer

# Load model and tokenizer
interpreter = Interpreter(model_path="FacebookAI_roberta-base.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

def get_hidden_states(text: str) -> np.ndarray:
    """Return the full hidden states for the input text."""
    encoded = tokenizer(
        text,
        padding="max_length",
        max_length=128,
        truncation=True,
        return_tensors="np"
    )

    # Input tensor order can differ between exports; check
    # input_details[i]["name"] if these indices do not match your model.
    interpreter.set_tensor(input_details[0]["index"], encoded["input_ids"].astype(np.int64))
    interpreter.set_tensor(input_details[1]["index"], encoded["attention_mask"].astype(np.int64))
    interpreter.invoke()

    return interpreter.get_tensor(output_details[0]["index"])

# Example
hidden = get_hidden_states("Hello, world!")
cls_embedding = hidden[0, 0, :]  # first-token (<s>, RoBERTa's CLS equivalent) for classification
print(f"Hidden shape: {hidden.shape}")  # (1, 128, 768)
```

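Since the model returns full hidden states rather than a pooled vector, a common way to get a single sentence embedding is attention-mask-weighted mean pooling. The sketch below uses synthetic arrays; with the real model you would pass `hidden` and `encoded["attention_mask"]` from the snippet above:

```python
import numpy as np


def mean_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, ignoring padding positions.

    hidden_states: (batch, seq_len, dim); attention_mask: (batch, seq_len).
    """
    mask = attention_mask[..., np.newaxis].astype(np.float32)  # (batch, seq, 1)
    summed = (hidden_states * mask).sum(axis=1)                # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)             # avoid divide-by-zero
    return summed / counts


# Synthetic example: batch of 1, seq_len 4 (last 2 positions padded), dim 3
hidden = np.arange(12, dtype=np.float32).reshape(1, 4, 3)
mask = np.array([[1, 1, 0, 0]])
print(mean_pool(hidden, mask))  # mean of the first two token vectors: [[1.5 2.5 3.5]]
```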
77
+ ## Files
78
+
79
+ - `FacebookAI_roberta-base.tflite` - The LiteRT model file
80
+
81
+ ## Conversion Details
82
+
83
+ - **Conversion Tool**: [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch)
84
+ - **Conversion Date**: 2026-01-12
85
+ - **Source Framework**: PyTorch → LiteRT
86
+ - **Validation**: Cosine similarity 1.0000 vs original
87
+
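The validation figure above compares the converted model's output against the original's. Cosine similarity over the flattened hidden states can be computed as follows (a sketch, with `a` and `b` standing in for the two models' outputs):

```python
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two arrays, flattened to vectors."""
    a = a.ravel().astype(np.float64)
    b = b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


# Identical outputs give similarity 1.0
a = np.random.rand(1, 128, 768)
print(f"{cosine_similarity(a, a.copy()):.4f}")  # 1.0000
```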
## Intended Use

- **Mobile Applications**: On-device semantic search, RAG systems
- **Edge Devices**: IoT, embedded systems, Raspberry Pi
- **Offline Processing**: Privacy-preserving inference
- **Low-latency Applications**: Real-time processing

## Limitations

- Fixed sequence length (128 tokens); longer inputs are truncated
- CPU inference by default (the GPU delegate requires additional setup)
- Tokenizer must be loaded separately from the original model
- Float32 precision (no quantization applied)

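One way around the fixed 128-token limit is to split long token sequences into overlapping windows and run the model on each window separately. A minimal sketch over plain token-id lists; the `window` and `stride` values are illustrative, not part of the model:

```python
from typing import List


def sliding_windows(token_ids: List[int], window: int = 128, stride: int = 64) -> List[List[int]]:
    """Split a token-id sequence into overlapping fixed-size windows.

    Each window holds at most `window` tokens; consecutive windows overlap by
    `window - stride` tokens so context is preserved across boundaries.
    """
    if len(token_ids) <= window:
        return [token_ids]
    chunks = []
    for start in range(0, len(token_ids), stride):
        chunks.append(token_ids[start:start + window])
        if start + window >= len(token_ids):
            break
    return chunks


# 300 tokens -> windows of 128 tokens with 64-token overlap
ids = list(range(300))
chunks = sliding_windows(ids)
print([len(c) for c in chunks])  # [128, 128, 128, 108]
```

Each window's embeddings (or pooled vectors) can then be aggregated, e.g. by averaging, depending on the downstream task.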
## License

This model inherits the license of the original:

- **License**: MIT ([source](https://huggingface.co/FacebookAI/roberta-base))

## Citation

```bibtex
@article{liu2019roberta,
  title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
  author={Liu, Yinhan and Ott, Myle and others},
  journal={arXiv preprint arXiv:1907.11692},
  year={2019}
}
```

## Acknowledgments

- Original model by [FacebookAI](https://huggingface.co/FacebookAI)
- Conversion using [ai-edge-torch](https://github.com/google-ai-edge/ai-edge-torch)

---

*Converted by [Bombek1](https://huggingface.co/Bombek1)*