---
language: en
tags:
- exbert
- multiberts
- multiberts-seed-1
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# MultiBERTs Seed 1 Checkpoint 2000k (uncased)

This is the intermediate checkpoint at step 2000k of the MultiBERTs Seed 1 model, a BERT model pretrained on the English language using a masked language modeling (MLM) objective. MultiBERTs was introduced in
[this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in
[this repository](https://github.com/google-research/language/tree/master/language/multiberts).
The final checkpoint of this seed can be found at [multiberts-seed-1](https://hf.co/multiberts-seed-1). This model is uncased: it does not make a difference
between english and English.

Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani).

## Model description
MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they
were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they
were pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words (see the example after this list). This is different from traditional
recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like
GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict whether the two sentences followed each other or not.

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the MultiBERTs model as inputs.
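As an illustration of the MLM objective, here is a minimal sketch using the `fill-mask` pipeline. It reuses the checkpoint name from the usage example further down and assumes the checkpoint ships with its MLM head, as the original BERT releases do:

```python
from transformers import pipeline

# Ask the MLM head to predict the token hidden behind [MASK].
unmasker = pipeline("fill-mask", model="multiberts-seed-1-2000k")
print(unmasker("The capital of France is [MASK]."))
```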
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it is mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for
fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at models like GPT-2.
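For instance, a minimal fine-tuning sketch might start from this checkpoint with a freshly initialized classification head (the example sentence and label count below are placeholders, not part of the original release):

```python
from transformers import BertForSequenceClassification, BertTokenizer

# Start a sequence classifier from this checkpoint; the classification head is
# newly initialized and only becomes useful after fine-tuning on labeled data.
model = BertForSequenceClassification.from_pretrained("multiberts-seed-1-2000k", num_labels=2)
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-2000k")

inputs = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
logits = model(**inputs).logits  # meaningless until the head has been trained
```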
### How to use

Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and the encoder weights for this checkpoint.
tokenizer = BertTokenizer.from_pretrained("multiberts-seed-1-2000k")
model = BertModel.from_pretrained("multiberts-seed-1-2000k")

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")

# output.last_hidden_state holds one feature vector per input token.
output = model(**encoded_input)
```
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. This bias will also affect all fine-tuned versions of this model. To get a sense of the bias of this particular
checkpoint, please try it out with the snippet from the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint, as sketched below.
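A minimal adaptation of that snippet to this checkpoint (the checkpoint name follows the usage example above and is assumed to resolve on the Hub with its MLM head) could look like:

```python
from transformers import pipeline

# Probe the MLM head for gendered occupation predictions, as in the bert-base-uncased card.
unmasker = pipeline("fill-mask", model="multiberts-seed-1-2000k")

print(unmasker("The man worked as a [MASK]."))
print(unmasker("The woman worked as a [MASK]."))
```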
## Training data

The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038
unpublished books, and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and
headers).

## Training procedure
### Preprocessing

The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are
then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```

With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus, and in
the other cases, sentence B is another random sentence from the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the result with the two
"sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (see the sketch after this list):
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
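A minimal sketch of this 80/10/10 rule (illustrative only, not the original pretraining code):

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Apply the BERT-style 80/10/10 masking rule to a list of tokens."""
    inputs, labels = [], []
    for token in tokens:
        if random.random() < mask_prob:
            labels.append(token)  # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")              # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(vocab))  # 10%: replace with a random token
            else:
                inputs.append(token)                 # 10%: keep the token unchanged
        else:
            inputs.append(token)
            labels.append(None)  # not selected: nothing to predict
    return inputs, labels
```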
### Pretraining

The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size
of 256. The sequence length was set to 512 throughout. The optimizer
used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01,
learning rate warmup for 10,000 steps and linear decay of the learning rate after.
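A minimal sketch of that learning-rate schedule, using the step counts and peak rate stated above (illustrative only):

```python
def learning_rate(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=2_000_000):
    """Linear warmup to peak_lr, then linear decay back to zero at total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

# Example: the learning rate a quarter of the way through training.
print(learning_rate(500_000))  # ~7.54e-05
```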
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-16163,
  author     = {Thibault Sellam and
                Steve Yadlowsky and
                Jason Wei and
                Naomi Saphra and
                Alexander D'Amour and
                Tal Linzen and
                Jasmijn Bastings and
                Iulia Turc and
                Jacob Eisenstein and
                Dipanjan Das and
                Ian Tenney and
                Ellie Pavlick},
  title      = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis},
  journal    = {CoRR},
  volume     = {abs/2106.16163},
  year       = {2021},
  url        = {https://arxiv.org/abs/2106.16163},
  eprinttype = {arXiv},
  eprint     = {2106.16163},
  timestamp  = {Mon, 05 Jul 2021 15:15:50 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=multiberts">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>