Instructions for using Twitter/twhin-bert-large with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
  - Transformers

How to use Twitter/twhin-bert-large with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="Twitter/twhin-bert-large")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Twitter/twhin-bert-large")
model = AutoModelForMaskedLM.from_pretrained("Twitter/twhin-bert-large")
```

- Notebooks
  - Google Colab
  - Kaggle
Make mask token explicit #1
opened by codesue
Make the mask token explicit to prevent the error `No mask_token (<mask>) found on the input` in the Hosted Inference API. See the related issue for more context on this fix.
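A minimal sketch of what this fix addresses: the fill-mask example input must literally contain the model's mask token, which for this model is `<mask>` (per the error message above). The helper name and `{mask}` placeholder below are illustrative, not from the model repo:

```python
# Illustrative helper (not part of the model repo): build a fill-mask
# input that explicitly contains the mask token the tokenizer expects.
# For Twitter/twhin-bert-large that token is "<mask>", as the Hosted
# Inference API error message indicates.
def build_masked_input(template: str, mask_token: str = "<mask>") -> str:
    """Substitute the {mask} placeholder with the actual mask token."""
    return template.format(mask=mask_token)

example = build_masked_input("I love {mask} music!")
print(example)  # I love <mask> music!
```

In a real script, prefer reading the token from the loaded tokenizer (`tokenizer.mask_token`) instead of hard-coding it, so the example keeps working if the tokenizer's special tokens differ.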
codesue changed pull request title from Make mast token explicit to Make mask token explicit
ahelk changed pull request status to merged