Instructions for using pere/roberta-base-exp-8 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use pere/roberta-base-exp-8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="pere/roberta-base-exp-8")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("pere/roberta-base-exp-8")
model = AutoModelForMaskedLM.from_pretrained("pere/roberta-base-exp-8")
```

- Notebooks
- Google Colab
- Kaggle
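The fill-mask pipeline above returns a list of candidate completions, each a dict with `token_str`, `score`, `sequence`, and `token` keys. A minimal sketch of how to pick out the top-k predictions from that structure; the sample output below is illustrative placeholder data, not actual output from pere/roberta-base-exp-8:

```python
def top_predictions(outputs, k=3):
    """Return (token_str, score) pairs for the k highest-scoring fills."""
    ranked = sorted(outputs, key=lambda d: d["score"], reverse=True)
    return [(d["token_str"], d["score"]) for d in ranked[:k]]

# Illustrative shape of a pipeline result, e.g. from
# pipe("The capital of France is <mask>.") — scores here are made up.
sample = [
    {"token_str": "Paris", "score": 0.91, "token": 1470,
     "sequence": "The capital of France is Paris."},
    {"token_str": "Lyon", "score": 0.04, "token": 9031,
     "sequence": "The capital of France is Lyon."},
    {"token_str": "Nice", "score": 0.02, "token": 3504,
     "sequence": "The capital of France is Nice."},
]

print(top_predictions(sample, k=2))  # [('Paris', 0.91), ('Lyon', 0.04)]
```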
- Xet hash: 9b59c36ea117076fc7e6f34f06232f4d578404b88d5c21f5ca5cf5462cd02bdd
- Size of remote file: 1.11 GB
- SHA256: 9042b69e0b1c94c7114eb1053998800b02dd4de191ce68ecaa7aaf5ae4c648ff
Xet efficiently stores large files inside Git by splitting them into unique, deduplicated chunks, which accelerates uploads and downloads.
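The chunk-and-deduplicate idea can be sketched with content-defined chunking: boundaries are chosen from the bytes themselves (via a crude rolling hash here), so identical runs of data produce identical chunks regardless of where they sit in the file, and each unique chunk is stored once under its hash. This is a generic illustration of the technique, not Xet's actual chunking algorithm:

```python
import hashlib

def chunk_boundaries(data, mask=0x3F, min_size=16):
    """Declare a boundary wherever a crude rolling hash of recent bytes
    matches a bit pattern; old bytes age out of the 32-bit state as it shifts."""
    boundaries, h, last = [], 0, 0
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF
        if i - last >= min_size and (h & mask) == 0:
            boundaries.append(i + 1)
            last, h = i + 1, 0
    return boundaries

def dedup_chunks(data):
    """Split data at content-defined boundaries; store each chunk once,
    keyed by SHA-256, and record the hash sequence for reconstruction."""
    store, order, prev = {}, [], 0
    for end in chunk_boundaries(data) + [len(data)]:
        chunk = data[prev:end]
        if chunk:
            digest = hashlib.sha256(chunk).hexdigest()
            store[digest] = chunk
            order.append(digest)
        prev = end
    return order, store
```

Reconstructing the file is then just concatenating `store[h]` for each hash in `order`; repeated chunks add an entry to `order` but not to `store`, which is where the storage savings come from.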