Instructions for using ArnavL/twteval-pretrained with libraries, inference providers, notebooks, and local apps.
How to use ArnavL/twteval-pretrained with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="ArnavL/twteval-pretrained")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ArnavL/twteval-pretrained")
model = AutoModelForMaskedLM.from_pretrained("ArnavL/twteval-pretrained")
```
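The pipeline can then be called on a masked sentence. A minimal sketch, assuming the model's fill-mask head works as usual; the prompt text is invented for illustration, and the model weights are downloaded from the Hugging Face Hub on first run:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="ArnavL/twteval-pretrained")

# Use the tokenizer's own mask token so the prompt is valid whether the
# model expects [MASK] (BERT-style) or <mask> (RoBERTa-style).
prompt = f"The weather today is {pipe.tokenizer.mask_token}."

# Each prediction is a dict with the filled token and its score.
preds = pipe(prompt, top_k=3)
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```

Using `pipe.tokenizer.mask_token` instead of a hard-coded mask string keeps the snippet portable across tokenizer families.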
Pull request #1: "Librarian Bot: Update dataset YAML metadata for model", opened by librarian-bot.
This is a pull request to add a dataset, ArnavL/TWTEval-Pretraining-Processed, to the metadata for your model (defined in the YAML block of your model's README.md).
The pull request was generated by librarian-bot, which used a combination of rules and machine learning to suggest this additional metadata.
If this suggestion is incorrect, feel free to close this pull request.
Librarian Bot was made by @davanstrien; feel free to get in touch with feedback.