## MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices (Safetensors Checkpoint)
MobileBERT is a thin version of BERT_LARGE, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks.
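These bottleneck hyperparameters are exposed on the model config. A quick way to inspect them (a minimal sketch, assuming the standard `MobileBertConfig` attribute names in `transformers`):

```python
from transformers import MobileBertConfig

# Load the config shipped with the original checkpoint and print the
# bottleneck-related hyperparameters (attribute names per MobileBertConfig).
config = MobileBertConfig.from_pretrained("google/mobilebert-uncased")
print(config.use_bottleneck)            # bottleneck structure enabled
print(config.intra_bottleneck_size)     # narrow width inside each block
print(config.num_feedforward_networks)  # stacked FFNs balancing self-attention
```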
See [here](https://huggingface.co/google/mobilebert-uncased) for the original model checkpoint in TensorFlow. This repository is simply that checkpoint converted to the safetensors format.
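A conversion along these lines can be reproduced with `transformers` alone; a minimal sketch (the exact procedure used for this repository is an assumption, and `./mobilebert-safetensors` is a placeholder output path):

```python
from transformers import MobileBertForMaskedLM

# Load the original weights, then re-save them with safetensors serialization.
model = MobileBertForMaskedLM.from_pretrained("google/mobilebert-uncased")
model.save_pretrained("./mobilebert-safetensors", safe_serialization=True)
```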
## Example usage in `transformers`
```python
from transformers import MobileBertTokenizer, MobileBertForMaskedLM
import torch

# The tokenizer is unchanged from the original Google checkpoint.
tokenizer = MobileBertTokenizer.from_pretrained("google/mobilebert-uncased")
model = MobileBertForMaskedLM.from_pretrained("vysri/mobilebert-uncased-pytorch")
model.eval()

sentence = "The capital of France is [MASK]."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and take the highest-scoring token there.
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
predicted_token_id = outputs.logits[0, mask_token_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)

print(f"Input: {sentence}")
print(f"Prediction: {predicted_token}")
```
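For quick experiments, the `fill-mask` pipeline wraps the same steps (mask lookup, argmax, decoding) in one call; a shorter equivalent sketch, assuming the same checkpoint ids:

```python
from transformers import pipeline

# Pipeline with this repository's weights and the original tokenizer.
unmasker = pipeline(
    "fill-mask",
    model="vysri/mobilebert-uncased-pytorch",
    tokenizer="google/mobilebert-uncased",
)
# Each candidate is a dict with "token_str" and "score" keys.
for candidate in unmasker("The capital of France is [MASK]."):
    print(candidate["token_str"], candidate["score"])
```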