Fill-Mask · Transformers · PyTorch · English · roberta
Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="omarmomen/roberta_base_32k_final")
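
The pipeline can then fill a masked position directly. A minimal sketch, assuming the tokenizer uses RoBERTa's standard <mask> token (check pipe.tokenizer.mask_token to confirm); the sentence is illustrative:

# Fill a masked position and print the top predictions
preds = pipe("The children <mask> in the garden.", top_k=3)
for p in preds:
    # each prediction carries a confidence score and the predicted token
    print(f"{p['token_str']!r}: {p['score']:.3f}")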
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("omarmomen/roberta_base_32k_final")
model = AutoModelForMaskedLM.from_pretrained("omarmomen/roberta_base_32k_final")
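
With the model loaded directly, predictions for a masked position come from the masked-LM logits. A minimal sketch, assuming PyTorch is installed; the sentence is illustrative:

import torch

# Build an input containing the tokenizer's mask token
text = f"The children {tokenizer.mask_token} in the garden."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take its top-3 predicted tokens
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(3, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))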

Model Card for omarmomen/roberta_base_32k_final

This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" (https://aclanthology.org/2023.conll-babylm.29/), published at the BabyLM workshop at CoNLL 2023.

omarmomen/roberta_base_32k_final is a baseline RobertaModel.

The model is pretrained on the BabyLM 10M dataset with a custom, separately trained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
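
The tokenizer can also be loaded on its own to inspect its vocabulary. A minimal sketch, where the roughly 32k vocabulary size is inferred from the tokenizer's name rather than stated here:

from transformers import AutoTokenizer

# Load the custom BabyLM tokenizer published alongside the model
tok = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")
print(tok.vocab_size)  # expected to be on the order of 32k, per the name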

Preprint: https://arxiv.org/abs/2310.20589
