Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building
Paper: arXiv:2310.20589 (https://arxiv.org/abs/2310.20589)
This model is part of the experiments in the paper "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building", published at the BabyLM workshop at CoNLL 2023 (https://aclanthology.org/2023.conll-babylm.29/).

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("omarmomen/structroberta_s2_final", trust_remote_code=True)
model = AutoModelForMaskedLM.from_pretrained("omarmomen/structroberta_s2_final", trust_remote_code=True)
```
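Once loaded, the model can be queried directly for masked-token predictions. The sketch below is illustrative: the example sentence is arbitrary, and it assumes the tokenizer exposes the standard RoBERTa `<mask>` token and that the model returns standard masked-LM logits.

```python
import torch

# Illustrative input; the mask token is assumed to be the standard RoBERTa "<mask>"
text = f"The children played in the {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the masked position and print the top 5 candidate tokens
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print([tokenizer.decode(tok_id).strip() for tok_id in top_ids.tolist()])
```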
omarmomen/structroberta_s2_final is a modification of the RoBERTa model that incorporates syntactic inductive bias through an unsupervised parsing mechanism.
This model variant places the parser network after 4 attention blocks.
The model is pretrained on the BabyLM 10M dataset using a custom pretrained RobertaTokenizer (https://huggingface.co/omarmomen/babylm_tokenizer_32k).
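The tokenizer linked above can also be loaded on its own. This is a minimal sketch using the repository id from that link; the example sentence is illustrative.

```python
from transformers import AutoTokenizer

# Load the custom 32k BabyLM tokenizer used for pretraining
babylm_tokenizer = AutoTokenizer.from_pretrained("omarmomen/babylm_tokenizer_32k")
print(babylm_tokenizer("Children play in the park."))
```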
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="omarmomen/structroberta_s2_final", trust_remote_code=True)
```
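A usage example for the pipeline, assuming the standard RoBERTa `<mask>` token; the input sentence is illustrative.

```python
# Fill in the masked token and print the top predictions with their scores
predictions = pipe("The cat sat on the <mask>.")
for p in predictions:
    print(p["token_str"], p["score"])
```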