davda54 committed
Commit 199dfb6 · 1 Parent(s): ee89185

Update README.md

Files changed (1): README.md (+2 -13)
README.md CHANGED

````diff
@@ -9,25 +9,14 @@ tags:
 license: cc-by-4.0
 ---
 
-# BNC-BERT
+# LTG-BERT for the BabyLM challenge
 
 - Paper: [Trained on 100 million words and still in shape: BERT meets British National Corpus](https://arxiv.org/abs/2303.09859)
 - GitHub: [ltgoslo/ltg-bert](https://github.com/ltgoslo/ltg-bert)
 
-## Example usage
-
-This model currently needs a custom wrapper from `modeling_ltgbert.py`. Then you can use it like this:
-
-```python
-import torch
-from transformers import AutoTokenizer
-from modeling_ltgbert import LtgBertForMaskedLM
-
-tokenizer = AutoTokenizer.from_pretrained("path/to/folder")
-bert = LtgBertForMaskedLM.from_pretrained("path/to/folder")
-```
 
 ## Please cite the following publication (just arXiv for now)
+
 ```bibtex
 @inproceedings{samuel-etal-2023-trained,
 title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
````