[Fragment of the model variations table; the recoverable entry lists bert-large-cased-whole-word-masking with 340M parameters and English as its language.]
## Intended uses & limitations

You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at models like GPT-2.
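As a hedged illustration of the fine-tuning use case mentioned above (the sequence-classification head, the two-label setup and the toy batch are illustrative assumptions, not part of this card), the checkpoint can be loaded with a task head and trained on labelled data:

```python
# Minimal fine-tuning sketch; num_labels=2 and the example batch are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])  # hypothetical sentiment labels

outputs = model(**batch, labels=labels)  # returns loss and logits
outputs.loss.backward()                  # one step of an ordinary fine-tuning loop
```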
### How to use

You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")

[{'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.1073106899857521, 'token': 4827, 'token_str': 'fashion'},
 {'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.08774490654468536, 'token': 2535, 'token_str': 'role'},
 {'sequence': "[CLS] hello i'm a new model. [SEP]", 'score': 0.05338378623127937, 'token': 2047, 'token_str': 'new'},
 {'sequence': "[CLS] hello i'm a super model. [SEP]", 'score': 0.04667217284440994, 'token': 3565, 'token_str': 'super'},
 {'sequence': "[CLS] hello i'm a fine model. [SEP]", 'score': 0.027095865458250046, 'token': 2986, 'token_str': 'fine'}]
```
Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = TFBertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
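Both snippets return a model output object whose last_hidden_state holds one 768-dimensional vector per token. As a hedged sketch (the pooling choices below are common conventions, not a recommendation from this card), sentence-level features can be derived from it like this in PyTorch:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

encoded_input = tokenizer("Replace me by any text you'd like.", return_tensors="pt")
with torch.no_grad():
    output = model(**encoded_input)

# last_hidden_state: (batch, sequence_length, hidden_size=768)
cls_features = output.last_hidden_state[:, 0]          # [CLS] token embedding
mean_features = output.last_hidden_state.mean(dim=1)   # simple mean pooling
print(cls_features.shape, mean_features.shape)         # torch.Size([1, 768]) each
```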
### Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("The man worked as a [MASK].")

[{'sequence': '[CLS] the man worked as a carpenter. [SEP]', 'score': 0.09747550636529922, 'token': 10533, 'token_str': 'carpenter'},
 {'sequence': '[CLS] the man worked as a waiter. [SEP]', 'score': 0.0523831807076931, 'token': 15610, 'token_str': 'waiter'},
 {'sequence': '[CLS] the man worked as a barber. [SEP]', 'score': 0.04962705448269844, 'token': 13362, 'token_str': 'barber'},
 {'sequence': '[CLS] the man worked as a mechanic. [SEP]', 'score': 0.03788609802722931, 'token': 15893, 'token_str': 'mechanic'},
 {'sequence': '[CLS] the man worked as a salesman. [SEP]', 'score': 0.037680890411138535, 'token': 18968, 'token_str': 'salesman'}]

>>> unmasker("The woman worked as a [MASK].")

[{'sequence': '[CLS] the woman worked as a nurse. [SEP]', 'score': 0.21981462836265564, 'token': 6821, 'token_str': 'nurse'},
 {'sequence': '[CLS] the woman worked as a waitress. [SEP]', 'score': 0.1597415804862976, 'token': 13877, 'token_str': 'waitress'},
 {'sequence': '[CLS] the woman worked as a maid. [SEP]', 'score': 0.1154729500412941, 'token': 10850, 'token_str': 'maid'},
 {'sequence': '[CLS] the woman worked as a prostitute. [SEP]', 'score': 0.037968918681144714, 'token': 19215, 'token_str': 'prostitute'},
 {'sequence': '[CLS] the woman worked as a cook. [SEP]', 'score': 0.03042375110089779, 'token': 5660, 'token_str': 'cook'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data

The BERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables and headers).
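As a rough, hedged sketch of how comparable corpora can be loaded from the Hugging Face Hub today (the bookcorpus and wikipedia dataset names and the 20220301.en snapshot are assumptions about currently hosted mirrors, not the exact data used for pretraining):

```python
from datasets import load_dataset

# Publicly hosted mirrors that approximate the pretraining corpora.
books = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikipedia", "20220301.en", split="train")

print(books[0]["text"][:80])
print(wiki[0]["text"][:80])
```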
## Training procedure

### Preprocessing
The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form:

```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
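A small, hedged sketch of how the tokenizer produces exactly this format for a sentence pair (the two example sentences are arbitrary):

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

encoded = tokenizer("The cat sat.", "It was warm.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# ['[CLS]', 'the', 'cat', 'sat', '.', '[SEP]', 'it', 'was', 'warm', '.', '[SEP]']

# token_type_ids distinguish sentence A tokens (0) from sentence B tokens (1).
print(encoded["token_type_ids"])
```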
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is a random sentence from the corpus. Note that what is considered a sentence here is a consecutive span of text that is usually longer than a single sentence. The only constraint is that the result with the two "sentences" has a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following (a simplified sketch in code follows the list):

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token, different from the one they replace.
- In the 10% remaining cases, the masked tokens are left as is.
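Here is that simplified, hedged sketch of the per-token masking decision (real BERT preprocessing works on whole words and never masks special tokens; the helper name and the -100 ignore index are illustrative conventions, not from this card):

```python
import random

def mask_tokens(token_ids, mask_token_id, vocab_size, mlm_probability=0.15):
    """Simplified BERT-style masking: returns (masked input_ids, labels)."""
    input_ids, labels = [], []
    for token_id in token_ids:
        if random.random() < mlm_probability:
            labels.append(token_id)                 # model must predict the original token
            roll = random.random()
            if roll < 0.8:                          # 80%: replace with [MASK]
                input_ids.append(mask_token_id)
            elif roll < 0.9:                        # 10%: replace with a random token
                input_ids.append(random.randrange(vocab_size))
            else:                                   # 10%: keep the token unchanged
                input_ids.append(token_id)
        else:
            labels.append(-100)                     # position ignored by the loss
            input_ids.append(token_id)
    return input_ids, labels
```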
### Pretraining

The model was trained on 4 cloud TPUs in Pod configuration (16 TPU chips total) for one million steps with a batch size of 256. The sequence length was limited to 128 tokens for 90% of the steps and 512 for the remaining 10%. The optimizer used is Adam with a learning rate of 1e-4, \(\beta_{1} = 0.9\) and \(\beta_{2} = 0.999\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
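For clarity, the learning-rate schedule described above (10,000 warmup steps, then linear decay over one million total steps) can be written as a small function; the decay-to-zero endpoint is an assumption, since the card only says "linear decay":

```python
def bert_lr(step, peak_lr=1e-4, warmup_steps=10_000, total_steps=1_000_000):
    """Linear warmup to peak_lr, then linear decay to zero."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(bert_lr(5_000))    # 5e-05: halfway through warmup
print(bert_lr(10_000))   # 1e-04: peak learning rate
print(bert_lr(505_000))  # 5e-05: halfway through the decay phase
```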
## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:
| Task | MNLI-(m/mm) | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Average |
|:----:|:-----------:|:---:|:----:|:-----:|:----:|:-----:|:----:|:---:|:-------:|
|      | 84.6/83.4   | 71.2 | 90.5 | 93.5 | 52.1 | 85.8 | 88.9 | 66.4 | 79.6 |
### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
  author        = {Jacob Devlin and Ming{-}Wei Chang and Kenton Lee and Kristina Toutanova},
  title         = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal       = {CoRR},
  volume        = {abs/1810.04805},
  year          = {2018},
  url           = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint        = {1810.04805},
  timestamp     = {Tue, 30 Oct 2018 20:39:56 +0100},
  ...
}
```
# Model card for mobilenetv3_large_100.ra_in1k

A MobileNet-v3 image classification model. Trained on ImageNet-1k in timm using the recipe template described below.

Recipe details:

- RandAugment RA recipe. Inspired by and evolved from EfficientNet RandAugment recipes. Published as the B recipe in ResNet Strikes Back.
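A hedged usage sketch for loading this checkpoint with timm (timm.create_model is standard timm usage; the 224x224 random input is only a shape check and assumes the model's default input size):

```python
import timm
import torch

# Load the pretrained ImageNet-1k classifier.
model = timm.create_model("mobilenetv3_large_100.ra_in1k", pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```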