---
datasets:
- phonemetransformers/IPA-CHILDES
language:
- en
---
# IPA CHILDES Models: Large
A phoneme-based GPT-2 model trained on the largest section of the IPA-CHILDES dataset for the paper *BabyLM's First Words: Word Segmentation as a Phonological Probing Task*.

The model has 19M non-embedding parameters and was trained on 18M tokens. Its phonological knowledge was evaluated using the word segmentation task; see the paper for details. Training and analysis scripts can be found here.
To load a model:

```python
from transformers import AutoModel

# Load the EnglishNA model from its subfolder within the repository
english_na_model = AutoModel.from_pretrained(
    'phonemetransformers/ipa-childes-models-large',
    subfolder='EnglishNA',
)
```