AnthonyKamau and rdiehlmartinez committed
Commit 41207fa · 0 parent(s)

Duplicate from cambridge-climb/BabyLM


Co-authored-by: Richard Diehl Martinez <rdiehlmartinez@users.noreply.huggingface.co>

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full list.
Files changed (50)
  1. .gitattributes +255 -0
  2. BabyLM.py +157 -0
  3. README.md +68 -0
  4. clean/100M/aochildes.txt +3 -0
  5. clean/100M/bnc_spoken.txt +3 -0
  6. clean/100M/cbt.txt +3 -0
  7. clean/100M/children_stories.txt +3 -0
  8. clean/100M/gutenberg.txt +3 -0
  9. clean/100M/open_subtitles.txt +3 -0
  10. clean/100M/qed.txt +3 -0
  11. clean/100M/simple_wikipedia.txt +3 -0
  12. clean/100M/switchboard.txt +0 -0
  13. clean/100M/wikipedia.txt +3 -0
  14. clean/10M/aochildes.txt +0 -0
  15. clean/10M/bnc_spoken.txt +0 -0
  16. clean/10M/cbt.txt +0 -0
  17. clean/10M/children_stories.txt +0 -0
  18. clean/10M/gutenberg.txt +0 -0
  19. clean/10M/open_subtitles.txt +3 -0
  20. clean/10M/qed.txt +0 -0
  21. clean/10M/simple_wikipedia.txt +0 -0
  22. clean/10M/switchboard.txt +0 -0
  23. clean/10M/wikipedia.txt +0 -0
  24. clean/dev/aochildes.txt +0 -0
  25. clean/dev/bnc_spoken.txt +0 -0
  26. clean/dev/cbt.txt +0 -0
  27. clean/dev/children_stories.txt +0 -0
  28. clean/dev/gutenberg.txt +0 -0
  29. clean/dev/open_subtitles.txt +3 -0
  30. clean/dev/qed.txt +0 -0
  31. clean/dev/simple_wikipedia.txt +0 -0
  32. clean/dev/switchboard.txt +0 -0
  33. clean/dev/wikipedia.txt +0 -0
  34. clean/test/aochildes.txt +0 -0
  35. clean/test/bnc_spoken.txt +0 -0
  36. clean/test/cbt.txt +0 -0
  37. clean/test/children_stories.txt +0 -0
  38. clean/test/gutenberg.txt +0 -0
  39. clean/test/open_subtitles.txt +3 -0
  40. clean/test/qed.txt +0 -0
  41. clean/test/simple_wikipedia.txt +3 -0
  42. clean/test/switchboard.txt +0 -0
  43. clean/test/wikipedia.txt +0 -0
  44. clean_data.py +269 -0
  45. clean_tagged/100M/aochildes.txt +3 -0
  46. clean_tagged/100M/bnc_spoken.txt +3 -0
  47. clean_tagged/100M/cbt.txt +3 -0
  48. clean_tagged/100M/children_stories.txt +3 -0
  49. clean_tagged/100M/gutenberg.txt +3 -0
  50. clean_tagged/100M/open_subtitles.txt +3 -0
.gitattributes ADDED
@@ -0,0 +1,255 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ 10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ 100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged_gold/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ tagged/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged_gold/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ clean_tagged/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged_gold/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/aochildes.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/children_stories.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/switchboard.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/100M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/cbt.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/10M/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/dev/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/bnc_spoken.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/gutenberg.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/open_subtitles.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/qed.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/simple_wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ original_tagged/test/wikipedia.txt filter=lfs diff=lfs merge=lfs -text
+ *.txt filter=lfs diff=lfs merge=lfs -text
BabyLM.py ADDED
@@ -0,0 +1,157 @@
+ import datasets
+
+ from typing import List
+
+ _DESCRIPTION = """\
+ Dataset for the shared baby language modeling task.
+ The goal is to train a language model from scratch on this data which represents
+ roughly the amount of text and speech data a young child observes.
+ """
+
+ _HOMEPAGE = "https://babylm.github.io"
+
+ filenames = [
+     "aochildes.txt",
+     "bnc_spoken.txt",
+     "cbt.txt",
+     "children_stories.txt",
+     "gutenberg.txt",
+     "open_subtitles.txt",
+     "qed.txt",
+     "simple_wikipedia.txt",
+     "switchboard.txt",
+     "wikipedia.txt"
+ ]
+ class BabyLM(datasets.GeneratorBasedBuilder):
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="original_strict_small",
+             description="Original dataset, 10M words, no POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="strict_small",
+             description="Cleaned version of the dataset, 10M words, unsupervised POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="original_strict",
+             description="Original dataset, 100M words, no POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="strict",
+             description="Cleaned version of the dataset, 100M words, unsupervised POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="original_strict_small_gold",
+             description="Original dataset, 10M words, gold POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="strict_small_gold",
+             description="Cleaned version of the dataset, 10M words, gold POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="original_strict_gold",
+             description="Original dataset, 100M words, gold POS tags",
+             version="1.0.0",
+         ),
+         datasets.BuilderConfig(
+             name="strict_gold",
+             description="Cleaned version of the dataset, 100M words, gold POS tags",
+             version="1.0.0",
+         ),
+     ]
+
+     DEFAULT_CONFIG_NAME = "strict_small"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "text": datasets.Value("string"),
+                 "tagged_text": datasets.Value("string"),
+                 "filename": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             features=features,  # Defined above because they differ between configurations
+             homepage=_HOMEPAGE,
+         )
+
+
+     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
+         """
+         Returns data for the different splits.
+         """
+
+         if "strict_small" in self.config.name:
+             train_data_dir = "10M"
+         else:
+             train_data_dir = "100M"
+
+         folder = 'original_tagged' if 'original' in self.config.name else 'clean_tagged'
+         folder = folder + '_gold' if 'gold' in self.config.name else folder
+
+         urls_to_download = {
+             "train": [f"{folder}/{train_data_dir}/{fn}" for fn in filenames],
+             "dev": [f"{folder}/dev/{fn}" for fn in filenames],
+             "test": [f"{folder}/test/{fn}" for fn in filenames]
+         }
+
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "split": "train",
+                     "filepaths": downloaded_files["train"]}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={
+                     "split": "dev",
+                     "filepaths": downloaded_files["dev"]}
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "split": "test",
+                     "filepaths": downloaded_files["test"]
+                 }
+             ),
+         ]
+
+     # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
+     def _generate_examples(self, split, filepaths):
+         # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.
+
+         # the filepaths should be a list of filepaths
+         if isinstance(filepaths, str):
+             filepaths = [filepaths]
+
+         global_idx = 0
+
+         for filepath in filepaths:
+             with open(filepath, encoding="utf-8") as f:
+                 is_tags = False
+                 text = ""
+                 filename = ""
+                 # Every other row contains POS tags. First row is the filename (we can't use filepath since the file path changes upon caching)
+                 for row in f:
+                     if filename == "":
+                         filename = row.strip()
+                         continue
+                     if is_tags:
+                         yield global_idx, {"text": text.strip(), "tagged_text": row.strip(), "filename": filename}
+                         global_idx += 1
+                         is_tags = False
+                     else:
+                         text = row
+                         is_tags = True
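The on-disk format consumed by `_generate_examples` above is a simple three-part cycle: the first row of each file is its source filename, and subsequent rows alternate between raw text and its POS-tagged counterpart. A minimal standalone sketch of that parsing loop (a hypothetical helper, not part of this repo; the example rows and tags are invented):

```python
def parse_babylm_file(lines):
    """Parse the BabyLM tagged-file layout: first row is the source
    filename, then rows alternate between raw text and POS-tagged text."""
    examples = []
    filename = None
    text = ""
    is_tags = False
    for row in lines:
        if filename is None:
            # The very first row names the source file.
            filename = row.strip()
            continue
        if is_tags:
            # This row carries the tags for the previously seen text row.
            examples.append(
                {"text": text.strip(), "tagged_text": row.strip(), "filename": filename}
            )
            is_tags = False
        else:
            text = row
            is_tags = True
    return examples

rows = [
    "aochildes.txt\n",
    "you want some juice ?\n",
    "you_PRON want_VERB some_DET juice_NOUN ?_PUNCT\n",
]
print(parse_babylm_file(rows))
```

Each yielded example therefore pairs one raw line with one tagged line, which is why the loader emits the `text`, `tagged_text`, and `filename` features declared in `_info`.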
README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ language:
+ - en
+ tags:
+ - language modeling
+ - cognitive modeling
+ pretty_name: Baby Language Modeling Dataset
+ size_categories:
+ - 10M<n<100M
+ ---
+
+ # BabyLM Dataset
+ This download includes LM pretraining data for the 2023 CoNLL/CMCL shared task, [The BabyLM Challenge](https://babylm.github.io/). The (unzipped) data is not large, only ~700MB.
+
+
+ Note that there is also a multilingual version of this dataset, which is available under the `multi-lingual` branch of the dataset repository.
+
+ ## Contents of this download
+ - `10M`: 10M-word training set for the *strict-small* track.
+ - `dev`: Development set for both tracks (10M words)
+ - `test`: Test set for both tracks (10M words)
+
+ Each directory above contains a single `.txt` file from each of the 10 domains listed below.
+
+ ## Composition of the data
+ All datasets are sampled from a mixture of 10 data domains, shown below, along with their respective weights in the distributed dataset.
+
+ | Source | Weight | Domain | Citation | Website | License |
+ | --- | --- | --- | --- | --- | --- |
+ | OpenSubtitles | 30% | Dialogue, Scripted | Lison & Tiedemann (2016) | [link](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | Open source |
+ | Simple English Wikipedia | 15% | Nonfiction | -- | [link](https://dumps.wikimedia.org/simplewiki/20221201/) | [link](https://dumps.wikimedia.org/legal.html) |
+ | BNC | 10% | Dialogue | BNC Consortium (2007) | [link](http://www.natcorp.ox.ac.uk/) | [link](http://www.natcorp.ox.ac.uk/docs/licence.html) <sup>1</sup> |
+ | Project Gutenberg | 10% | Fiction, Nonfiction | Gerlach & Font-Clos (2020) | [link](https://github.com/pgcorpus/gutenberg) | [link](https://www.gutenberg.org/policy/license.html) |
+ | QED | 10% | Dialogue, Education | Abdelali et al. (2014) | [link](https://opus.nlpl.eu/QED.php) | [link](https://opus.nlpl.eu/QED.php) |
+ | Wikipedia | 10% | Nonfiction | -- | [link](https://dumps.wikimedia.org/enwiki/20221220/) | [link](https://dumps.wikimedia.org/legal.html) |
+ | Children's Book Test | 6% | Fiction, Child-Directed | Hill et al. (2016) | [link](https://research.facebook.com/downloads/babi/) | Public domain |
+ | CHILDES | 4% | Dialogue, Child-Directed | MacWhinney (2000) | | [link](https://talkbank.org/share/rules.html) |
+ | Children's Stories | 4% | Fiction, Child-Directed | -- | [link](https://www.kaggle.com/datasets/edenbd/children-stories-text-corpus) | Public domain |
+ | Switchboard | 1% | Dialogue | Godfrey et al. (1992), Stolcke et al. (2000) | [link](http://compprag.christopherpotts.net/swda.html) | [link](http://compprag.christopherpotts.net/swda.html) |
+
+ <sup>1</sup> Our distribution of part of the BNC Texts is permitted under the fair dealings provision of copyright law (see term (2g) in the BNC license).
+
+
+ ## Data preprocessing
+
+ Data was minimally preprocessed to conform to a plain-text format. We did not tokenize the data. Documents are not necessarily complete and are newline-separated.
+
+ For documentation of the preprocessing pipeline, consult the following repo: https://github.com/babylm/babylm_data_preprocessing
+
+
+ ## References
+ Abdelali, A., Guzman, F., Sajjad, H., & Vogel, S. (2014). The AMARA Corpus: Building parallel language resources for the educational domain. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014). 1856-1862.
+
+ BNC Consortium. (2007). The British National Corpus, XML Edition. Oxford Text Archive, http://hdl.handle.net/20.500.12024/2554.
+
+ Gerlach, M., & Font-Clos, F. (2020). A standardized Project Gutenberg corpus for statistical analysis of natural language and quantitative linguistics. Entropy, 22(1), 126.
+
+ Godfrey, J. J., Holliman, E. C., & McDaniel, J. (1992). SWITCHBOARD: Telephone speech corpus for research and development. In Acoustics, Speech, and Signal Processing, IEEE International Conference on (Vol. 1, pp. 517-520). IEEE Computer Society.
+
+ Hill, F., Bordes, A., Chopra, S., & Weston, J. (2016). The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016).
+
+ Lison, P., & Tiedemann, J. (2016). OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).
+
+ MacWhinney, B. (2000). The CHILDES Project: Tools for analyzing talk. Third Edition. Mahwah, NJ: Lawrence Erlbaum Associates.
+
+ Stolcke, A., Ries, K., Coccaro, N., Shriberg, E., Bates, R., Jurafsky, D., Taylor, P., Martin, R., Van Ess-Dykema, C., & Meteer, M. (2000). Dialogue act modeling for automatic tagging and recognition of conversational speech. Computational Linguistics, 26(3), 339-373.
+
+ Tiedemann, J. (2012). Parallel data, tools and interfaces in OPUS. In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012).
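The builder config names documented above select both the training-set size and the data folder via the substring checks in `_split_generators` of `BabyLM.py`. A small sketch mirroring that resolution logic (a hypothetical helper written for illustration, not part of the repo):

```python
def resolve_paths(config_name):
    """Mirror the config-name checks in BabyLM._split_generators:
    pick the training-set size and the tagged-data folder."""
    # "strict_small" configs train on 10M words, the rest on 100M.
    train_data_dir = "10M" if "strict_small" in config_name else "100M"
    # "original_*" configs read the original tagged text, others the cleaned text.
    folder = "original_tagged" if "original" in config_name else "clean_tagged"
    # "*_gold" configs read the gold-POS-tagged variants.
    if "gold" in config_name:
        folder += "_gold"
    return folder, train_data_dir

print(resolve_paths("strict_small"))
print(resolve_paths("original_strict_gold"))
```

So, for example, `strict_small` trains from `clean_tagged/10M/` while `original_strict_gold` trains from `original_tagged_gold/100M/`; `dev` and `test` always come from the same folder family as the chosen config.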
clean/100M/aochildes.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b627623c5422fae5b2e3b6614184ff4066ea4c2a033983dcdfbb91a3be87a4a
+ size 17447936
clean/100M/bnc_spoken.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:753cfd7932d277dc3736bf41ca5b76629b935e944b01a4b6e6aae77733a5d0ad
+ size 42266089
clean/100M/cbt.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43abeae5c4a81ed43f8259dac4a8d5360b9c9140d7ca434e0609a987979ed074
+ size 24794311
clean/100M/children_stories.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a639f960bc0504734a99b6381ebd4bc5855d0cdb581d025c90afe243f1962a4c
+ size 17389237
clean/100M/gutenberg.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f4409be98ca86a78e7f883b0f19e5e04ae402818ab1ca7fb096093912872204a
+ size 45634534
clean/100M/open_subtitles.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a44b1b3a9893597b4bc0693bec307573783b9c35a01718502e5b565e2595fce6
+ size 162139182
clean/100M/qed.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d632fb2b1c6422333465da35606eb026c2c7e48ea8960aa4ed160cd3a06a6d29
+ size 54976346
clean/100M/simple_wikipedia.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:291b40800fee70c2ef0d3772bf1528a459b6ddf3a72865e3394260794c7e3049
+ size 84743739
clean/100M/switchboard.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/100M/wikipedia.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bf9ee4ae4b088928960d3a8279f2d0252f601ccf48137db67ea4b4f237c39a88
+ size 60504343
clean/10M/aochildes.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/bnc_spoken.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/cbt.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/children_stories.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/gutenberg.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/open_subtitles.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c8dad8af34ae522cbb48a6016b3b0500591c7a6dafc9875dfa84c895aa2829b3
+ size 16049913
clean/10M/qed.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/simple_wikipedia.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/switchboard.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/10M/wikipedia.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/aochildes.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/bnc_spoken.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/cbt.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/children_stories.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/gutenberg.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/open_subtitles.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:667b875c46c907d6ae36769e78d251f71b0e3fa4ee1eb0982db640fea045fdfa
+ size 15296290
clean/dev/qed.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/simple_wikipedia.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/switchboard.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/dev/wikipedia.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/aochildes.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/bnc_spoken.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/cbt.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/children_stories.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/gutenberg.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/open_subtitles.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d014575c19afbfcf5d157fdb056e51a6dd9172300bafac13edd462c8a4422dbe
+ size 14445856
clean/test/qed.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/simple_wikipedia.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8e85f4d2bcc06c41820cc1a2e00983657724b3b35ad4f70faadb9ade8ec3617
+ size 10574563
clean/test/switchboard.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean/test/wikipedia.txt ADDED
The diff for this file is too large to render. See raw diff
 
clean_data.py ADDED
@@ -0,0 +1,269 @@
+ """ Script used to clean the data. """
+
+ import os
+ import re
+ from nltk import tokenize
+
+ def clean_aochildes(lines):
+     """ For aochildes, we remove the space between the punctuation mark and the final word and join together every 5 lines """
+     new_lines = []
+     joined = []
+     for i, line in enumerate(lines):
+         new_line = line[:-3] + line[-2:]
+         joined.append(new_line.strip())
+         if i % 5 == 0:
+             new_lines.append(" ".join(joined) + "\n")
+             joined = []
+     return new_lines
+
+ def clean_bnc_spoken(lines):
+     """ For bnc_spoken, we lowercase """
+     new_lines = []
+     for line in lines:
+         new_line = line.lower()
+         if new_line != '\n':
+             new_lines.append(new_line)
+     return new_lines
+
+ def clean_cbt(lines):
+     """ For cbt, we lowercase and normalise punctuation """
+     punctuation = ['.', ',', '?', '!', ':', ';', '(', ')', '[', ']', '{', '}', '"', "'", '“', '”', '—', '–']
+     new_lines = []
+     for line in lines:
+         new_line = line.lower()
+         new_line = new_line.replace(": ' ", ": \"")
+         new_line = new_line.replace("''", "\"")
+         new_line = new_line.replace(" '\n", "\"\n")
+         new_line = new_line.replace(" ' ", "\" ")
+         new_line = new_line.replace(" `` ", " \"")
+         new_line = new_line.replace("` ", " \"")
+         new_line = new_line.replace("`", "\"")
+         new_line = new_line.replace("’", "\"")
+         for punct in punctuation:
+             new_line = new_line.replace(f" {punct}", punct)
+         new_lines.append(new_line)
+     return new_lines
+
+ def clean_children_stories(lines):
+     """ For children_stories, we lowercase """
+     new_lines = []
+     for line in lines:
+         new_line = line.lower().strip()
+         if new_line != '':
+             new_lines.append(new_line + "\n")
+     return new_lines
+
+ def clean_gutenberg(lines):
+     """ For gutenberg, we lowercase, remove italics and group lines into paragraphs. We also remove any lines containing '*' or 'p.' """
+     # Get paragraphs
+     paragraphs = []
+     paragraph = ""
+     for line in lines:
+         # Remove italics
+         tmp_line = line.lower().strip().replace('_', '')
+         if tmp_line == "" and paragraph != "":
+             # Remove paragraphs with fewer than 3 words and those that end in a number (probably part of a bibliography)
+             if len(paragraph.split()) > 2 and not paragraph.split()[-1][-1].isnumeric():
+                 paragraphs.append(paragraph[:-1] + '\n')
+             paragraph = ""
+         else:
+             paragraph += tmp_line + " "
+
+     # Bad characters - gutenberg has a lot of figures, footnotes, chapter names etc. that we want to remove
+     bad_chars = ['*', 'p.', '=', '|', '[', ']', ' ', ' ', 'v.']
+     new_lines = [p.strip() + '\n' for p in paragraphs if not any([c in p for c in bad_chars]) and p != '' and p != '\n' and p[0] != '(']
+     return new_lines
+
+ def clean_open_subtitles(lines):
+     """ For open_subtitles, we lowercase, remove subtitle dashes and fix the lowercase 'l' problem. We also join every 5 lines. """
+     punctuation = ['.', ',', '?', '!', ':', ';', '(', ')', '[', ']', '{', '}', '"', "'", '“', '”', '—', '–', ' ', '\n']
+     new_lines = []
+     joined = []
+     count = 0
+     for line in lines:
+         new_line = line.lower()
+         # Skip music lines
+         if '♪' in new_line or '[' in new_line or ']' in new_line or '‎' in new_line:
+             continue
+         if new_line[0:2] in ["- ", "– ", "— "]:
+             new_line = new_line[2:]
+         if new_line[0] in ["-", "–", "—"]:
+             new_line = new_line[1:]
+         new_line = ' ' + new_line
+         for punct in punctuation:
+             new_line = new_line.replace(f" l{punct}", f" i{punct}")
+             new_line = new_line.replace(f" lm{punct}", f" im{punct}")
+             new_line = new_line.replace(f" lf{punct}", f" if{punct}")
+         new_line = new_line.replace(' lc', ' ic')
+         new_line = new_line.replace(' ld', ' id')
+         new_line = new_line.replace(' lj', ' i j')
+         new_line = new_line.replace(' ln', ' in')
+         new_line = new_line.replace(' lp', ' ip')
+         new_line = new_line.replace(' lr', ' ir')
+         new_line = new_line.replace(' ls', ' is')
+         new_line = new_line.replace(' isd', ' lsd')
+         new_line = new_line.replace(' lt', ' it')
+         new_line = new_line.replace(' lv', ' iv')
+         if new_line.strip() != '':
+             joined.append(new_line.strip())
+             count += 1
+             if count % 5 == 0:
+                 new_lines.append(" ".join(joined) + '\n')
+                 joined = []
+     return new_lines
+
+ def clean_qed(lines):
+     """ For qed, we lowercase and normalise punctuation, remove words contained in parentheses,
+     remove lines that are just characters' names and fix the lowercase 'l' problem. We also join every 5 lines. """
+
+     new_lines = []
+     count = 0
+     joined = []
+     for line in lines:
+         # Before lowercasing, check if the words in the line are uppercase containing lowercase 'l' instead of 'I' and fix accordingly
+         words = line.split()
+         for i, word in enumerate(words):
+             if word.replace('l', 'I').isupper() and 'l' in word and word != 'I\'ll':
+                 words[i] = word.replace('l', 'I')
+         new_line = ' '.join(words).lower()
+         new_line = new_line.replace(' lc', ' ic')
+         new_line = new_line.replace(' ld', ' id')
+         new_line = new_line.replace(' lj', ' i j')
+         new_line = new_line.replace(' ln', ' in')
+         new_line = new_line.replace(' lp', ' ip')
+         new_line = new_line.replace(' lr', ' ir')
+         new_line = new_line.replace(' ls', ' is')
+         new_line = new_line.replace(' isd', ' lsd')
+         new_line = new_line.replace(' lt', ' it')
+         new_line = new_line.replace(' lv', ' iv')
+         new_line = new_line.replace('&gt;', '')
+         new_line = new_line.replace('&lt;i', '')
+         new_line = new_line.replace('&lt;/i', '')
+         new_line = new_line.replace('&gt;i', '')
+         new_line = new_line.replace('&gt;/i', '')
+         new_line = new_line.replace('&gt', '')
+         new_line = new_line.replace('&lt', '')
+         new_line = new_line.replace('&amp;', '')
+
+         # Skip lines that are just character names, e.g. "AMY GOODMAN:"
+         if len(new_line.strip()) < 1 or (len(words) <= 3 and new_line.strip()[-1] == ':'):
+             continue
+
+         # Remove subtitle dashes
+         if new_line[0:2] == "- ":
+             new_line = new_line[2:]
+         if new_line[0] == "-":
+             new_line = new_line[1:]
+
+         # Remove substrings contained within round or square parentheses (screen descriptions)
+         new_line = re.sub(r'\([^)]*\)', '', new_line)
+         new_line = re.sub(r'\[[^\]]*\]', '', new_line)
+         new_line = new_line.replace('"', '\'')
+
+         # Remove strange characters
+         new_line = new_line.replace('#', '')
+         new_line = new_line.replace('*', '')
+
+         new_line = new_line.strip()
+         if new_line != "":
+             joined.append(new_line)
+             count += 1
+             if count % 5 == 0:
+                 new_lines.append(" ".join(joined) + '\n')
+                 joined = []
+     return new_lines
+
+ def clean_simple_wikipedia(lines):
+     """ For simple_wikipedia, we lowercase, remove empty lines and article names. """
+     new_lines = []
+     next_line_is_article_name = False
+     for line in lines:
+         if next_line_is_article_name:
+             next_line_is_article_name = False
+             continue
+         if line.strip() == "":
+             next_line_is_article_name = True
+             continue
+         if len(line.split()) > 2:
+             new_lines.append(line.lower())
+     return new_lines
+
+ def clean_switchboard(lines):
+     """ For switchboard, we lowercase and join every 5 lines. """
+     new_lines = []
+     count = 0
+     joined = []
+     for line in lines:
+         new_line = line.lower().strip()
+         joined.append(new_line)
+         count += 1
+         if count % 5 == 0:
+             new_lines.append(" ".join(joined) + '\n')
+             joined = []
+     return new_lines
+
+ def clean_wikipedia(lines):
+     """ For wikipedia, we lowercase and remove empty lines and article names.
+     We also remove lines that seem to be figure names or table entries. """
+     new_lines = []
+     for line in lines:
+         new_line = line.strip()
+         words = new_line.split()
+
+         # Remove empty lines and article names
+         if new_line == "":
+             continue
+         if new_line[0] == "=" and new_line[-1] == "=":
+             continue
+
+         # Filter out lines that seem to be figure names or table entries
+         all_numeric = True
+         all_uppercase = True
+         for word in words:
+             if not word.isnumeric():
+                 all_numeric = False
+             if not word[0].isupper():
+                 all_uppercase = False
+         if all_numeric or all_uppercase:
+             continue
+
+         new_lines.append(new_line.lower().strip() + '\n')
+     return new_lines
+
+ CLEAN_FUNCTIONS = {'aochildes': clean_aochildes, 'bnc_spoken': clean_bnc_spoken, 'cbt': clean_cbt, 'children_stories': clean_children_stories, 'gutenberg': clean_gutenberg, 'open_subtitles': clean_open_subtitles, 'qed': clean_qed, 'simple_wikipedia': clean_simple_wikipedia, 'switchboard': clean_switchboard, 'wikipedia': clean_wikipedia}
+ FOLDERS = ['10M', '100M', 'dev', 'test']
+
+ if __name__ == "__main__":
+
+     # Read all text files from the "original" data folders
+     all_files = []
+     for folder in FOLDERS:
+         for root, dirs, files in os.walk(f"original/{folder}"):
+             for file in files:
+                 if file.endswith(".txt"):
+                     all_files.append(os.path.join(root, file))
+
+     for file in all_files:
+         print(file)
+         with open(file, 'r') as f:
+             lines = f.readlines()
+
+         # Get the corpus name
+         corpus_name = os.path.basename(file).split('.')[0]
+
+         # Clean the data
+         if CLEAN_FUNCTIONS[corpus_name] is not None:
+             lines = CLEAN_FUNCTIONS[corpus_name](lines)
+         # Replace multiple spaces with a single space
+         lines = [re.sub(' +', ' ', line) for line in lines if line.strip() != '']
+
+         # Write the new file
+         new_file = file.replace('original', 'clean')
+         os.makedirs(os.path.dirname(new_file), exist_ok=True)
+         with open(new_file, 'w') as f:
+             # Save the file name as the first line, so we can later recover the original file names
+             f.write(new_file.split('/')[-1] + '\n')
+             f.writelines(lines)
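The screen-description removal and whitespace collapsing used in `clean_data.py` can be exercised in isolation. A small self-contained sketch (re-implementing the two `re.sub` patterns from `clean_qed` plus the script's final space-collapsing step, rather than importing the script; `strip_screen_directions` is a hypothetical helper):

```python
import re

def strip_screen_directions(line: str) -> str:
    # Drop substrings in round or square brackets (screen descriptions),
    # then collapse runs of spaces, mirroring the cleaning script.
    line = re.sub(r'\([^)]*\)', '', line)
    line = re.sub(r'\[[^\]]*\]', '', line)
    return re.sub(' +', ' ', line).strip()

print(strip_screen_directions("well (laughs) i suppose [music] so"))  # well i suppose so
```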
clean_tagged/100M/aochildes.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d1d48a72960a6945a413fea75fec073eff6233a32bd34a0a1cde4529e73e05f
+ size 99052359
clean_tagged/100M/bnc_spoken.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b63a2d1d8758d120894c4c0ec76f639cfbf4a91e28ac3a3331d53fd74e044865
+ size 224736302
clean_tagged/100M/cbt.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:832fc2f825df0e55379ba8babe311c1ca8fce4f4af7ae58abe62a6523ae20017
+ size 130181150
clean_tagged/100M/children_stories.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5db87d6bb0b412bc8047dc89d18a2eb8cbd8e5091440ba716d32cb20275dd2de
+ size 90790537
clean_tagged/100M/gutenberg.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:397a1df618df43ea230a9067e85f859312a0a60814e63ac6499cd51584bf00a1
+ size 231906973
clean_tagged/100M/open_subtitles.txt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46879e28ae8aafa5add491b7bf727e792fb323dc1b6bb0f5e61a231b86934b94
+ size 896907196