Update README.md
README.md CHANGED

@@ -7,7 +7,7 @@ language:

# IPA CHILDES Models: Large

- A phoneme-based GPT-2 model trained on the largest section of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task]().
+ A phoneme-based GPT-2 model trained on the largest section of the [IPA-CHILDES](https://huggingface.co/datasets/phonemetransformers/IPA-CHILDES) dataset for the paper [BabyLM's First Words: Word Segmentation as a Phonological Probing Task](https://arxiv.org/abs/2504.03338).

The model has 19M non-embedding parameters and was trained on 18M tokens. It was evaluated for phonological knowledge using the *word segmentation* task. Check out the paper for more details. Training and analysis scripts can be found [here](https://github.com/codebyzeb/PhonemeTransformers).
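For readers who want to try the model, below is a minimal sketch of loading it with the Hugging Face `transformers` library and computing a per-phoneme loss, the kind of predictability signal that word segmentation probes typically build on. The model ID, the phoneme input format, and the use of mean per-token loss are assumptions not stated in this README, so treat the snippet as illustrative only.

```python
# Minimal sketch, assuming a standard GPT-2 checkpoint on the Hugging Face Hub.
# The repo ID below is a placeholder -- substitute the actual model ID from
# this model card's page (it is not stated in the diff above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "phonemetransformers/YOUR-MODEL-ID"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()

# Illustrative IPA phoneme sequence; the real input format depends on how the
# IPA-CHILDES data was tokenized during training.
phonemes = "h ɛ l oʊ w ɜ˞ l d"
inputs = tokenizer(phonemes, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])

# Mean per-token negative log-likelihood over the sequence.
print(f"Mean per-token loss: {outputs.loss.item():.3f}")
```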