---
language:
- multilingual
- af
- sq
- ar
- an
- hy
- ast
- az
- ba
- eu
- bar
- be
- bn
- inc
- bs
- br
- bg
- my
- ca
- ceb
- ce
- zh
- cv
- hr
- cs
- da
- nl
- en
- et
- fi
- fr
- gl
- ka
- de
- el
- gu
- ht
- he
- hi
- hu
- is
- io
- id
- ga
- it
- ja
- jv
- kn
- kk
- ky
- ko
- la
- lv
- lt
- roa
- nds
- lm
- mk
- mg
- ms
- ml
- mr
- mn
- min
- ne
- new
- nb
- nn
- oc
- fa
- pms
- pl
- pt
- pa
- ro
- ru
- sco
- sr
- hr
- scn
- sk
- sl
- aze
- es
- su
- sw
- sv
- tl
- tg
- th
- ta
- tt
- te
- tr
- uk
- ud
- uz
- vi
- vo
- war
- cy
- fry
- pnb
- yo
tags:
- onnx
- awesome-align
- word-alignment
- bert
- int8
pipeline_tag: feature-extraction
license: apache-2.0
datasets:
- wikipedia
---

# Awesome-Align mBERT (ONNX INT8 Quantized)

This repository contains a **quantized INT8** version of **bert-base-multilingual-cased**, specifically optimized for word alignment using the **awesome-align** methodology.

### Model Details

* **Base Model:** `bert-base-multilingual-cased`
* **Truncation:** Truncated to the **first 8 layers** (the optimal "sweet spot" for word alignment).
* **Format:** ONNX INT8 (Quantized)
* **Size:** **~150 MB** (approx. 75% smaller than the FP32 version).
* **Optimization:** Quantized using `torchao` and `Optimum` with settings optimized for **ARM64/Apple Silicon (M1/M2/M3)**; an illustrative export/quantization sketch follows below.

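For reference, the following is a minimal sketch of how a model like this *could* be produced: truncate mBERT to its first 8 layers, export to ONNX, and apply dynamic INT8 quantization with Optimum's `ORTQuantizer`. This is **not** the exact script used for this repository; the directory names, opset version, and the ARM64 quantization preset are illustrative assumptions.

```python
# Sketch only: truncate mBERT to 8 layers, export to ONNX, and quantize to INT8.
# Directory names, opset version, and quantization preset are illustrative
# assumptions, not the exact settings used to build this repository.
import os

import torch
from torch import nn
from transformers import AutoTokenizer, BertModel
from optimum.onnxruntime import ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Load mBERT and keep only the first 8 encoder layers.
model = BertModel.from_pretrained("bert-base-multilingual-cased", return_dict=False)
model.encoder.layer = nn.ModuleList(model.encoder.layer[:8])
model.config.num_hidden_layers = 8
model.eval()

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
dummy = tokenizer("word alignment example", return_tensors="pt")

# Export the truncated model to ONNX with dynamic batch/sequence axes.
os.makedirs("model_fp32", exist_ok=True)
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model_fp32/model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=14,
)

# Dynamic INT8 quantization with a per-channel ARM64 preset.
quantizer = ORTQuantizer.from_pretrained("model_fp32", file_name="model.onnx")
qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=True)
quantizer.quantize(save_dir="model_int8", quantization_config=qconfig)
```
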
### Performance (MacBook Air M1 Benchmark)

| Metric | FP32 | INT8 (This Model) |
| --- | --- | --- |
| **Average Latency** | ~65 ms / sentence | **~38 ms / sentence** |
| **Speedup** | 1x | **~1.7x Faster** |
| **Accuracy** | Baseline | Identical Links |

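Latency numbers of this kind can be reproduced with a simple wall-clock loop around `session.run`. The snippet below is a generic measurement sketch (not the exact benchmark script behind the table above): it averages the runtime over repeated single-sentence inputs after a short warm-up.

```python
# Generic latency sketch (not the exact benchmark used above): average the
# wall-clock time of session.run over repeated single-sentence inputs.
import time

import onnxruntime as ort
from transformers import AutoTokenizer

session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
tokenizer = AutoTokenizer.from_pretrained("cstr/awesome-align-onnx-int8")

encoded = tokenizer("This is a typical benchmark sentence.", return_tensors="np")
feed = {"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]}

# Warm-up runs so one-time initialization does not skew the measurement.
for _ in range(5):
    session.run(None, feed)

n_runs = 100
start = time.perf_counter()
for _ in range(n_runs):
    session.run(None, feed)
elapsed = time.perf_counter() - start
print(f"Average latency: {elapsed / n_runs * 1000:.1f} ms / sentence")
```
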
### Usage

This model is intended to be used with `onnxruntime` on CPU for maximum efficiency. Alignments are calculated using cosine similarity and mutual argmax (intersection).

```python
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

# 1. Load model and tokenizer
# Point to your local download or the Hub ID
model_id = "cstr/awesome-align-onnx-int8"
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
tokenizer = AutoTokenizer.from_pretrained(model_id)

def get_word_embeddings(words):
    # Tokenize with subword mapping (is_split_into_words is critical)
    encoded = tokenizer(words, is_split_into_words=True, return_tensors="np")

    # Track which subwords belong to which original word index
    word_map = []
    for i, w in enumerate(words):
        sub_tokens = tokenizer.tokenize(w) or [tokenizer.unk_token]
        word_map.extend([i] * len(sub_tokens))

    # Run inference
    outputs = session.run(None, {
        "input_ids": encoded["input_ids"],
        "attention_mask": encoded["attention_mask"]
    })

    # Slicing: [batch 0, drop CLS/SEP, all hidden features]
    embeddings = outputs[0][0, 1:-1, :]
    return embeddings, word_map

def align(src_words, tgt_words):
    # Get embeddings and subword-to-word maps
    src_embeds, src_map = get_word_embeddings(src_words)
    tgt_embeds, tgt_map = get_word_embeddings(tgt_words)

    # Compute cosine similarity between all source/target subword pairs
    src_norm = src_embeds / np.linalg.norm(src_embeds, axis=-1, keepdims=True)
    tgt_norm = tgt_embeds / np.linalg.norm(tgt_embeds, axis=-1, keepdims=True)
    similarity = np.dot(src_norm, tgt_norm.T)

    # Mutual argmax (intersection) logic for high precision
    best_tgt_for_src = np.argmax(similarity, axis=1)
    best_src_for_tgt = np.argmax(similarity, axis=0)

    alignment = set()
    for i, j in enumerate(best_tgt_for_src):
        if best_src_for_tgt[j] == i:
            alignment.add((src_map[i], tgt_map[j]))

    return sorted(alignment)

# Example usage
src = ["I", "will", "go", "to", "the", "hospital"]
tgt = ["Ich", "werde", "ins", "Krankenhaus", "gehen"]
links = align(src, tgt)

print(f"Alignment Links: {links}")
```

### Technical Notes

* **Subword Handling**: This model is based on mBERT, which uses WordPiece tokenization. The usage script above maps sub-tokens back to their original word indices so that alignments are reported word-to-word (see the short example below).
* **CPU Optimization**: The INT8 quantization uses **per-channel** asymmetric quantization, which is highly efficient for the ARM NEON instruction set on Apple Silicon.
* **Layer 8 Extraction**: Only the first 8 layers were exported to ONNX to reduce computational overhead and disk space without sacrificing alignment quality.

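As a quick illustration of the subword mapping, the snippet below shows how each WordPiece sub-token is tagged with the index of the word it came from. The printed splits are purely illustrative and depend on the mBERT vocabulary.

```python
# Illustration of the word_map built in the usage script above: each WordPiece
# sub-token is tagged with the index of the word it came from. The exact splits
# depend on the mBERT vocabulary, so the printed pieces may differ.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

words = ["Ich", "werde", "ins", "Krankenhaus", "gehen"]
for i, w in enumerate(words):
    pieces = tokenizer.tokenize(w) or [tokenizer.unk_token]
    print(i, w, pieces)  # e.g. 3 Krankenhaus ['Kranken', '##haus'] (vocabulary-dependent)
```
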
Original model card follows:

---

# BERT multilingual base model (cased)

Pretrained model on the top 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective.
It was introduced in [this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is case-sensitive: it makes a difference
between english and English.

Disclaimer: The team releasing BERT did not write a model card for this model, so this model card has been written by
the Hugging Face team.

## Model description

226
+ BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means
227
+ it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
228
+ publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
229
+ was pretrained with two objectives:
230
+
231
+ - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run
232
+ the entire masked sentence through the model and has to predict the masked words. This is different from traditional
233
+ recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
234
+ GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
235
+ sentence.
236
+ - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes
237
+ they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
238
+ predict if the two sentences were following each other or not.
239
+
240
+ This way, the model learns an inner representation of the languages in the training set that can then be used to
241
+ extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a
242
+ standard classifier using the features produced by the BERT model as inputs.
243
+
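As a minimal illustration of that feature-based approach, the sketch below extracts the `[CLS]` representation from BERT and fits a scikit-learn classifier on top of it. The toy sentences and labels are placeholders, not a real dataset.

```python
# Minimal sketch of the feature-based approach described above: use the [CLS]
# representation from BERT as fixed features for a standard classifier.
# The toy sentences and labels below are placeholders, not a real dataset.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

sentences = ["I love this movie.", "This film was terrible.", "Great acting!", "Awful plot."]
labels = [1, 0, 1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors="pt")
    features = model(**encoded).last_hidden_state[:, 0, :].numpy()  # [CLS] vectors

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```
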
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation you should look at models like GPT2.

### How to use

You can use this model directly with a pipeline for masked language modeling:

```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-multilingual-cased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] Hello I'm a model model. [SEP]",
  'score': 0.10182085633277893,
  'token': 13192,
  'token_str': 'model'},
 {'sequence': "[CLS] Hello I'm a world model. [SEP]",
  'score': 0.052126359194517136,
  'token': 11356,
  'token_str': 'world'},
 {'sequence': "[CLS] Hello I'm a data model. [SEP]",
  'score': 0.048930276185274124,
  'token': 11165,
  'token_str': 'data'},
 {'sequence': "[CLS] Hello I'm a flight model. [SEP]",
  'score': 0.02036019042134285,
  'token': 23578,
  'token_str': 'flight'},
 {'sequence': "[CLS] Hello I'm a business model. [SEP]",
  'score': 0.020079681649804115,
  'token': 14155,
  'token_str': 'business'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = BertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-multilingual-cased')
model = TFBertModel.from_pretrained("bert-base-multilingual-cased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list
[here](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages).

## Training procedure

### Preprocessing

The texts are tokenized using WordPiece with a shared vocabulary of 110,000 tokens. The languages with a
larger Wikipedia are under-sampled and the ones with lower resources are oversampled. For languages like Chinese,
Japanese Kanji and Korean Hanja that are written without spaces, spaces are added around every character in the CJK
Unicode range.

The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is
a consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.

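A quick way to see this input format is to encode a sentence pair with the released tokenizer, as in the illustrative snippet below (the example sentences are placeholders):

```python
# Sketch: how a sentence pair maps onto the [CLS] ... [SEP] ... [SEP] input
# format described above, using the released tokenizer.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
encoded = tokenizer("Sentence A goes here.", "Sentence B goes here.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', ..., '[SEP]', ..., '[SEP]'] -- encoded["token_type_ids"] marks which
# tokens belong to sentence A (0) and which to sentence B (1).
```
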
The details of the masking procedure for each sentence are the following (a minimal sketch is shown after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the remaining 10% of cases, the masked tokens are left as is.

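As referenced above, here is a minimal, self-contained sketch of that 80/10/10 masking scheme applied to a list of token ids. The mask id, vocabulary size, and example ids are placeholders, and the random-replacement branch is simplified (it does not guarantee the sampled token differs from the original).

```python
# Minimal sketch of the 80/10/10 masking procedure described above, applied to
# a list of token ids. Mask id, vocab size, and example ids are placeholders.
import random

def mask_tokens(token_ids, mask_id, vocab_size, mask_prob=0.15):
    """Return (masked_ids, labels) where labels is -100 for unmasked positions."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if random.random() >= mask_prob:
            continue  # 85% of tokens are left untouched and carry no label
        labels[i] = tok  # the model must predict the original token here
        r = random.random()
        if r < 0.8:                    # 80%: replace with [MASK]
            masked[i] = mask_id
        elif r < 0.9:                  # 10%: replace with a random token (simplified)
            masked[i] = random.randrange(vocab_size)
        # remaining 10%: keep the original token
    return masked, labels

masked, labels = mask_tokens([101, 2023, 2003, 1037, 7953, 102], mask_id=103, vocab_size=110000)
print(masked, labels)
```
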
338
+ ### BibTeX entry and citation info
339
+
340
+ ```bibtex
341
+ @article{DBLP:journals/corr/abs-1810-04805,
342
+ author = {Jacob Devlin and
343
+ Ming{-}Wei Chang and
344
+ Kenton Lee and
345
+ Kristina Toutanova},
346
+ title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
347
+ Understanding},
348
+ journal = {CoRR},
349
+ volume = {abs/1810.04805},
350
+ year = {2018},
351
+ url = {http://arxiv.org/abs/1810.04805},
352
+ archivePrefix = {arXiv},
353
+ eprint = {1810.04805},
354
+ timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
355
+ biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
356
+ bibsource = {dblp computer science bibliography, https://dblp.org}
357
+ }
358
+ ```