vysri committed 36bb25c (verified · parent: ef73c69): Create README.md
## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (Safetensors Checkpoint)

MobileBERT is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks.
See [here](https://huggingface.co/google/mobilebert-uncased) for the original model checkpoint in TensorFlow. This repository is simply that checkpoint converted to the safetensors format.

## Example usage in `transformers`

```python
from transformers import MobileBertTokenizer, MobileBertForMaskedLM
import torch

# The tokenizer comes from the original Google checkpoint; the weights from this repo.
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMaskedLM.from_pretrained(
    "vysri/mobilebert-uncased-pytorch"
)
model.eval()

sentence = "The capital of France is [MASK]."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position, then take the highest-scoring vocabulary id there.
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = outputs.logits[0, mask_token_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"Input: {sentence}")
print(f"Prediction: {predicted_token}")
```
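The index-and-argmax step at the end of the example can be sketched in isolation with toy tensors. The ids and logit values below are hypothetical stand-ins chosen for illustration, not real MobileBERT vocabulary ids:

```python
import torch

# Toy stand-ins: a batch of one sequence with 4 positions and a vocabulary of 5 ids.
mask_token_id = 4
input_ids = torch.tensor([[0, 2, 4, 1]])  # position 2 holds the mask id
logits = torch.zeros(1, 4, 5)
logits[0, 2, 3] = 9.0                     # make vocabulary id 3 the top score at the mask

# Same pattern as the README example: find the mask position(s),
# then take the argmax over the vocabulary dimension at those positions.
mask_token_index = (input_ids == mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = logits[0, mask_token_index].argmax(dim=-1)

print(mask_token_index.tolist())    # → [2]
print(predicted_token_id.tolist())  # → [3]
```

The boolean comparison plus `nonzero(as_tuple=True)` generalizes to sentences with more than one `[MASK]`: each mask position gets its own row in `logits[0, mask_token_index]`, and `argmax(dim=-1)` returns one predicted id per mask.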