Instructions for using google-bert/bert-base-uncased with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google-bert/bert-base-uncased with Transformers (a brief usage sketch follows this list):

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="google-bert/bert-base-uncased")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("google-bert/bert-base-uncased")
```

- Inference
- Notebooks
- Google Colab
- Kaggle
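As a usage sketch for the pipeline shown above, the fill-mask task can be queried directly; the example sentence below is illustrative, not from the model card:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="google-bert/bert-base-uncased")

# fill-mask returns the top candidates for the [MASK] token,
# each with a score, token id, and the completed sequence.
for prediction in pipe("Paris is the [MASK] of France."):
    print(prediction["token_str"], round(prediction["score"], 3))
```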
Update README.md
README.md (changed):

```diff
@@ -40,7 +40,7 @@ classifier using the features produced by the BERT model as inputs.
 
 ## Model variations
 
-BERT has originally been released in base and large variations, for cased and uncased input text. The uncased models also strips out
+BERT has originally been released in base and large variations, for cased and uncased input text. The uncased models also strips out any accent markers.
 Chinese and multilingual uncased and cased versions followed shortly after.
 Modified preprocessing with whole word masking has replaced subpiece masking in a following work, with the release of two models.
 Other 24 smaller models are released afterward.
```
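The behavior this commit documents, that the uncased models strip accent markers in addition to lowercasing, can be checked directly with the tokenizer. A minimal sketch, assuming the same checkpoint as above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Uncased preprocessing lowercases text and removes accent markers,
# so accented and plain forms normalize to the same WordPiece tokens.
print(tokenizer.tokenize("Héllo"))  # expected: ['hello']
print(tokenizer.tokenize("hello"))  # expected: ['hello']
```

Exact WordPiece splits depend on the vocabulary; the point is only that accented and unaccented spellings normalize to identical tokens under the uncased model.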