Update README.md
README.md CHANGED
@@ -13,7 +13,7 @@ library_name: transformers
 This model is part of the experiments in the published paper at the BabyLM workshop in CoNLL 2023.
 The paper titled "Increasing The Performance of Cognitively Inspired Data-Efficient Language Models via Implicit Structure Building" (https://aclanthology.org/2023.conll-babylm.29/)
 
-<strong>omarmomen/
+<strong>omarmomen/structformer_s1_final_with_pos</strong> is a modification of the vanilla transformer encoder to incorporate syntactic inductive bias using an unsupervised parsing mechanism.
 
 This model variant places the parser network ahead of all the attention blocks.
 
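For context, here is a minimal sketch of loading the checkpoint this README describes. It is an assumption, not part of the commit: since StructFormer is a custom architecture rather than a built-in `transformers` class, the sketch assumes the modeling code ships with the checkpoint (hence `trust_remote_code=True`), and the masked-LM head is assumed from the BabyLM pretraining setup.

```python
# Minimal loading sketch. Assumptions: the checkpoint bundles custom
# StructFormer modeling code (trust_remote_code=True), and the model
# exposes a masked-LM head.
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "omarmomen/structformer_s1_final_with_pos"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id, trust_remote_code=True)

# Quick smoke test: score a masked token. tokenizer.mask_token is used
# instead of a hard-coded string, since the mask symbol is tokenizer-specific.
text = f"The parser network runs before the {tokenizer.mask_token} blocks."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```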