fortvivlan committed on
Commit 0c72cc2 · verified · 1 Parent(s): bb23d9a

Model save

Files changed (10)
  1. README.md +72 -0
  2. config.json +1349 -0
  3. configuration.py +48 -0
  4. dependency_classifier.py +299 -0
  5. encoder.py +109 -0
  6. mlp_classifier.py +46 -0
  7. model.safetensors +3 -0
  8. modeling_parser.py +190 -0
  9. training_args.bin +3 -0
  10. utils.py +66 -0
README.md ADDED
@@ -0,0 +1,72 @@
1
+ ---
2
+ base_model: xlm-roberta-base
3
+ datasets: CoBaLD/enhanced-cobald
4
+ language: en
5
+ library_name: transformers
6
+ license: gpl-3.0
7
+ metrics:
8
+ - accuracy
9
+ - f1
10
+ pipeline_tag: cobald-parsing
11
+ tags:
12
+ - pytorch
13
+ model-index:
14
+ - name: CoBaLD/xlm-roberta-base-cobald-parser
15
+ results:
16
+ - task:
17
+ type: token-classification
18
+ dataset:
19
+ name: enhanced-cobald
20
+ type: CoBaLD/enhanced-cobald
21
+ split: validation
22
+ metrics:
23
+ - type: f1
24
+ value: 0.9407420102135771
25
+ name: Null F1
26
+ - type: f1
27
+ value: 0.8121551520477537
28
+ name: Lemma F1
29
+ - type: f1
30
+ value: 0.796868228149055
31
+ name: Morphology F1
32
+ - type: accuracy
33
+ value: 0.7526259569165035
34
+ name: Ud Jaccard
35
+ - type: accuracy
36
+ value: 0.7973622150406149
37
+ name: Eud Jaccard
38
+ - type: f1
39
+ value: 0.5622097111852032
40
+ name: Miscs F1
41
+ - type: f1
42
+ value: 0.6366999708137279
43
+ name: Deepslot F1
44
+ - type: f1
45
+ value: 0.6301756147279635
46
+ name: Semclass F1
47
+ ---
48
+
49
+ # Model Card for xlm-roberta-base-cobald-parser
50
+
51
+ A transformer-based multihead parser for CoBaLD annotation.
52
+
53
+ This model parses pre-tokenized CoNLL-U text and jointly labels each token with three tiers of tags:
54
+ * Grammatical tags (lemma, UPOS, XPOS, morphological features),
55
+ * Syntactic tags (basic and enhanced Universal Dependencies),
56
+ * Semantic tags (deep slot and semantic class).
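Since the parser consumes pre-tokenized CoNLL-U input, a minimal sketch of preparing such input may help. This only follows the generic CoNLL-U layout (10 tab-separated fields per token, unset fields as `_`); it is independent of this repository's actual preprocessing code:

```python
def to_conllu(tokens):
    """Render a pre-tokenized sentence as CoNLL-U lines.

    Each token becomes one line of 10 tab-separated fields:
    ID, FORM, and eight placeholder fields left as "_" for the
    parser to fill in (lemma, UPOS, feats, heads, deprels, ...).
    """
    lines = []
    for i, form in enumerate(tokens, start=1):
        fields = [str(i), form] + ["_"] * 8
        lines.append("\t".join(fields))
    # CoNLL-U sentences end with a blank line; a trailing newline suffices here.
    return "\n".join(lines) + "\n"

print(to_conllu(["The", "cat", "sleeps", "."]))
```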
57
+
58
+ ## Model Sources
59
+
60
+ - **Repository:** https://github.com/CobaldAnnotation/CobaldParser
61
+ - **Paper:** https://dialogue-conf.org/wp-content/uploads/2025/04/BaiukIBaiukAPetrovaM.009.pdf
62
+ - **Demo:** [coming soon]
63
+
64
+ ## Citation
65
+
66
+ @inproceedings{baiuk2025cobald,
67
+ title={CoBaLD Parser: Joint Morphosyntactic and Semantic Annotation},
68
+ author={Baiuk, Ilia and Baiuk, Alexandra and Petrova, Maria},
69
+ booktitle={Proceedings of the International Conference "Dialogue"},
70
+ volume={I},
71
+ year={2025}
72
+ }
config.json ADDED
@@ -0,0 +1,1349 @@
1
+ {
2
+ "activation": "relu",
3
+ "architectures": [
4
+ "CobaldParser"
5
+ ],
6
+ "auto_map": {
7
+ "AutoConfig": "configuration.CobaldParserConfig",
8
+ "AutoModel": "modeling_parser.CobaldParser"
9
+ },
10
+ "consecutive_null_limit": 3,
11
+ "custom_pipelines": {
12
+ "cobald-parsing": {
13
+ "impl": "pipeline.ConlluTokenClassificationPipeline",
14
+ "pt": "CobaldParser"
15
+ }
16
+ },
17
+ "deepslot_classifier_hidden_size": 256,
18
+ "dependency_classifier_hidden_size": 128,
19
+ "dropout": 0.1,
20
+ "encoder_model_name": "xlm-roberta-base",
21
+ "lemma_classifier_hidden_size": 512,
22
+ "misc_classifier_hidden_size": 512,
23
+ "model_type": "cobald_parser",
24
+ "morphology_classifier_hidden_size": 512,
25
+ "null_classifier_hidden_size": 512,
26
+ "semclass_classifier_hidden_size": 512,
27
+ "torch_dtype": "float32",
28
+ "transformers_version": "4.51.3",
29
+ "vocabulary": {
30
+ "deepslot": {
31
+ "0": "$Dislocation",
32
+ "1": "Addition",
33
+ "2": "AdditionalParticipant",
34
+ "3": "Addressee",
35
+ "4": "Addressee_Metaphoric",
36
+ "5": "Agent",
37
+ "6": "Agent_Metaphoric",
38
+ "7": "AttachedProperty",
39
+ "8": "BehalfOfEntity",
40
+ "9": "BeneMalefactive",
41
+ "10": "Causator",
42
+ "11": "Cause",
43
+ "12": "Ch_Parameter",
44
+ "13": "Ch_Reference",
45
+ "14": "Characteristic",
46
+ "15": "Chemical_Composite",
47
+ "16": "ClassifiedEntity",
48
+ "17": "Comparison",
49
+ "18": "ComparisonBase",
50
+ "19": "Comparison_Symmetrical",
51
+ "20": "Composition",
52
+ "21": "Concession",
53
+ "22": "ConcessiveCondition",
54
+ "23": "Concurrent",
55
+ "24": "Concurrent_Complement",
56
+ "25": "Condition",
57
+ "26": "Consequence",
58
+ "27": "ContentOfContainer",
59
+ "28": "ContrAgent",
60
+ "29": "ContrAgent_Metaphoric",
61
+ "30": "ContrObject",
62
+ "31": "Core_Hyphen_Component",
63
+ "32": "Correlative",
64
+ "33": "Criterion",
65
+ "34": "Degree",
66
+ "35": "DegreeNumerative",
67
+ "36": "Dependent_Hyphen_Component",
68
+ "37": "Elective",
69
+ "38": "Empty_Subject_It",
70
+ "39": "Experiencer",
71
+ "40": "Experiencer_Metaphoric",
72
+ "41": "Explication",
73
+ "42": "Fabricative",
74
+ "43": "FormOfRepresentation",
75
+ "44": "Function",
76
+ "45": "GappingRemnant",
77
+ "46": "Instrument",
78
+ "47": "Instrument_Situation",
79
+ "48": "Interval_Beginning",
80
+ "49": "Interval_End",
81
+ "50": "Landmark",
82
+ "51": "Limitation",
83
+ "52": "Locative",
84
+ "53": "Locative_Distance",
85
+ "54": "Locative_FinalPoint",
86
+ "55": "Locative_InitialPoint",
87
+ "56": "Locative_Route",
88
+ "57": "Manner",
89
+ "58": "MannerOfPositionAndMotion",
90
+ "59": "Manner_Configuration",
91
+ "60": "Manner_Reduplication",
92
+ "61": "MathCharacteristic",
93
+ "62": "MeasureSpecification",
94
+ "63": "Member",
95
+ "64": "MetaphoricLocative",
96
+ "65": "Metaphoric_FinalPoint",
97
+ "66": "Metaphoric_InitialPoint",
98
+ "67": "Metaphoric_Route",
99
+ "68": "Motive",
100
+ "69": "Motive_Warranty",
101
+ "70": "MovingLandmark",
102
+ "71": "Name_Title",
103
+ "72": "Object",
104
+ "73": "Object_Relation",
105
+ "74": "Object_Situation",
106
+ "75": "OneAnother",
107
+ "76": "Opposition",
108
+ "77": "OrderInTimeAndSpace",
109
+ "78": "Original_Object",
110
+ "79": "Original_Situation",
111
+ "80": "Parenthetical",
112
+ "81": "Part",
113
+ "82": "PartAsOrientation",
114
+ "83": "Part_Situation",
115
+ "84": "ParticipleRelativeClause",
116
+ "85": "Particles_Accentuation",
117
+ "86": "PaymentBy_NonMonetaryUnits",
118
+ "87": "PersonImplicit",
119
+ "88": "PlaceOfContact",
120
+ "89": "Possessor",
121
+ "90": "Possessor_Locative",
122
+ "91": "Possessor_Metaphoric",
123
+ "92": "Possessor_Situational",
124
+ "93": "PragmaticEvaluation",
125
+ "94": "Predicate",
126
+ "95": "Predicate_Adverb",
127
+ "96": "Predicate_DiscoursiveUnits",
128
+ "97": "Predicate_Noun",
129
+ "98": "PrincipleOfOrganization",
130
+ "99": "Proportion_FirstComponent",
131
+ "100": "Proportion_To",
132
+ "101": "Purpose",
133
+ "102": "Purpose_Distributive",
134
+ "103": "QuantifiedEntity",
135
+ "104": "Quantity",
136
+ "105": "Quantity_Pragmatic",
137
+ "106": "Raising_Target",
138
+ "107": "Relative",
139
+ "108": "Resultative",
140
+ "109": "Route_Situation",
141
+ "110": "SetEnvironment",
142
+ "111": "Set_Classification",
143
+ "112": "Set_General",
144
+ "113": "Source",
145
+ "114": "Specification",
146
+ "115": "Specifier_Number",
147
+ "116": "Spectator",
148
+ "117": "SpeechEtiquette",
149
+ "118": "Sphere",
150
+ "119": "StaffOfPossessors",
151
+ "120": "Standpoint",
152
+ "121": "State",
153
+ "122": "Stimulus",
154
+ "123": "SupportedEntity",
155
+ "124": "TagQuestion",
156
+ "125": "TagSubject",
157
+ "126": "Theme",
158
+ "127": "ThemeRhematic",
159
+ "128": "Time",
160
+ "129": "Vocative",
161
+ "130": "Vocative_Metaphoric",
162
+ "131": "Whole",
163
+ "132": "Whole_Complement"
164
+ },
165
+ "eud_deprel": {
166
+ "0": "acl",
167
+ "1": "acl:after",
168
+ "2": "acl:as",
169
+ "3": "acl:as_to",
170
+ "4": "acl:before",
171
+ "5": "acl:cleft",
172
+ "6": "acl:relcl",
173
+ "7": "acl:that",
174
+ "8": "acl:to",
175
+ "9": "acl:when",
176
+ "10": "advcl",
177
+ "11": "advcl:after",
178
+ "12": "advcl:although",
179
+ "13": "advcl:as",
180
+ "14": "advcl:as_long_as",
181
+ "15": "advcl:as_soon_as",
182
+ "16": "advcl:because",
183
+ "17": "advcl:before",
184
+ "18": "advcl:for",
185
+ "19": "advcl:if",
186
+ "20": "advcl:in_that",
187
+ "21": "advcl:like",
188
+ "22": "advcl:once",
189
+ "23": "advcl:relcl",
190
+ "24": "advcl:so",
191
+ "25": "advcl:so_that",
192
+ "26": "advcl:than",
193
+ "27": "advcl:that",
194
+ "28": "advcl:though",
195
+ "29": "advcl:to",
196
+ "30": "advcl:unless",
197
+ "31": "advcl:when",
198
+ "32": "advcl:whereas",
199
+ "33": "advcl:whether",
200
+ "34": "advcl:while",
201
+ "35": "advcl:whilst",
202
+ "36": "advmod",
203
+ "37": "amod",
204
+ "38": "appos",
205
+ "39": "aux",
206
+ "40": "aux:pass",
207
+ "41": "case",
208
+ "42": "cc",
209
+ "43": "ccomp",
210
+ "44": "ccomp:whether",
211
+ "45": "compound",
212
+ "46": "compound:prt",
213
+ "47": "conj",
214
+ "48": "conj:and",
215
+ "49": "conj:but",
216
+ "50": "conj:or",
217
+ "51": "cop",
218
+ "52": "csubj",
219
+ "53": "dep",
220
+ "54": "det",
221
+ "55": "det:predet",
222
+ "56": "discourse",
223
+ "57": "fixed",
224
+ "58": "flat",
225
+ "59": "flat:foreign",
226
+ "60": "flat:name",
227
+ "61": "flatname",
228
+ "62": "goeswith",
229
+ "63": "iobj",
230
+ "64": "list",
231
+ "65": "mark",
232
+ "66": "nmod",
233
+ "67": "nmod:about",
234
+ "68": "nmod:across",
235
+ "69": "nmod:after",
236
+ "70": "nmod:against",
237
+ "71": "nmod:among",
238
+ "72": "nmod:around",
239
+ "73": "nmod:as",
240
+ "74": "nmod:at",
241
+ "75": "nmod:because_of",
242
+ "76": "nmod:before",
243
+ "77": "nmod:behind",
244
+ "78": "nmod:between",
245
+ "79": "nmod:by",
246
+ "80": "nmod:during",
247
+ "81": "nmod:for",
248
+ "82": "nmod:from",
249
+ "83": "nmod:in",
250
+ "84": "nmod:in_front_of",
251
+ "85": "nmod:include",
252
+ "86": "nmod:including",
253
+ "87": "nmod:instead_of",
254
+ "88": "nmod:into",
255
+ "89": "nmod:like",
256
+ "90": "nmod:near",
257
+ "91": "nmod:npmod",
258
+ "92": "nmod:of",
259
+ "93": "nmod:off",
260
+ "94": "nmod:on",
261
+ "95": "nmod:onto",
262
+ "96": "nmod:other_than",
263
+ "97": "nmod:out_of",
264
+ "98": "nmod:over",
265
+ "99": "nmod:per",
266
+ "100": "nmod:plus",
267
+ "101": "nmod:poss",
268
+ "102": "nmod:rather_than",
269
+ "103": "nmod:since",
270
+ "104": "nmod:such_as",
271
+ "105": "nmod:than",
272
+ "106": "nmod:through",
273
+ "107": "nmod:throughout",
274
+ "108": "nmod:tmod",
275
+ "109": "nmod:to",
276
+ "110": "nmod:towards",
277
+ "111": "nmod:under",
278
+ "112": "nmod:until",
279
+ "113": "nmod:up_to",
280
+ "114": "nmod:up_until",
281
+ "115": "nmod:via",
282
+ "116": "nmod:whether",
283
+ "117": "nmod:with",
284
+ "118": "nmod:without",
285
+ "119": "nsubj",
286
+ "120": "nsubj:outer",
287
+ "121": "nsubj:pass",
288
+ "122": "nsubj:xsubj",
289
+ "123": "nummod",
290
+ "124": "nummod:gov",
291
+ "125": "obj",
292
+ "126": "obl",
293
+ "127": "obl:about",
294
+ "128": "obl:across",
295
+ "129": "obl:after",
296
+ "130": "obl:against",
297
+ "131": "obl:along",
298
+ "132": "obl:amid",
299
+ "133": "obl:amidst",
300
+ "134": "obl:among",
301
+ "135": "obl:amongst",
302
+ "136": "obl:apart_from",
303
+ "137": "obl:around",
304
+ "138": "obl:as",
305
+ "139": "obl:as_for",
306
+ "140": "obl:aside_from",
307
+ "141": "obl:at",
308
+ "142": "obl:because_of",
309
+ "143": "obl:before",
310
+ "144": "obl:behind",
311
+ "145": "obl:between",
312
+ "146": "obl:beyond",
313
+ "147": "obl:by",
314
+ "148": "obl:during",
315
+ "149": "obl:except",
316
+ "150": "obl:excluding",
317
+ "151": "obl:for",
318
+ "152": "obl:from",
319
+ "153": "obl:in",
320
+ "154": "obl:in_front_of",
321
+ "155": "obl:in_lieu_of",
322
+ "156": "obl:in_to",
323
+ "157": "obl:including",
324
+ "158": "obl:instead_of",
325
+ "159": "obl:into",
326
+ "160": "obl:like",
327
+ "161": "obl:near",
328
+ "162": "obl:npmod",
329
+ "163": "obl:of",
330
+ "164": "obl:off",
331
+ "165": "obl:on",
332
+ "166": "obl:on_to",
333
+ "167": "obl:onto",
334
+ "168": "obl:other_than",
335
+ "169": "obl:out",
336
+ "170": "obl:out_of",
337
+ "171": "obl:over",
338
+ "172": "obl:past",
339
+ "173": "obl:per",
340
+ "174": "obl:plus",
341
+ "175": "obl:rather_than",
342
+ "176": "obl:round",
343
+ "177": "obl:since",
344
+ "178": "obl:such_as",
345
+ "179": "obl:than",
346
+ "180": "obl:through",
347
+ "181": "obl:throughout",
348
+ "182": "obl:tmod",
349
+ "183": "obl:to",
350
+ "184": "obl:toward",
351
+ "185": "obl:towards",
352
+ "186": "obl:under",
353
+ "187": "obl:until",
354
+ "188": "obl:up_on",
355
+ "189": "obl:up_to",
356
+ "190": "obl:up_until",
357
+ "191": "obl:upon",
358
+ "192": "obl:via",
359
+ "193": "obl:with",
360
+ "194": "obl:without",
361
+ "195": "parataxis",
362
+ "196": "punct",
363
+ "197": "ref",
364
+ "198": "root",
365
+ "199": "vocative",
366
+ "200": "xcomp"
367
+ },
368
+ "joint_feats": {
369
+ "0": "ADJ#Adjective#Abbr=Yes|Degree=Pos",
370
+ "1": "ADJ#Adjective#Degree=Cmp",
371
+ "2": "ADJ#Adjective#Degree=Pos",
372
+ "3": "ADJ#Adjective#Degree=Sup",
373
+ "4": "ADJ#None#Degree=Cmp",
374
+ "5": "ADJ#None#Degree=Pos",
375
+ "6": "ADJ#None#Degree=Pos|NumType=Ord",
376
+ "7": "ADJ#None#Degree=Sup",
377
+ "8": "ADJ#None#None",
378
+ "9": "ADJ#Numeral#Degree=Pos|NumForm=Digit|NumType=Ord",
379
+ "10": "ADJ#Numeral#Degree=Pos|NumForm=Word|NumType=Ord",
380
+ "11": "ADJ#Prefixoid#None",
381
+ "12": "ADP#Adverb#None",
382
+ "13": "ADP#None#None",
383
+ "14": "ADP#Preposition#None",
384
+ "15": "ADV#Adjective#Degree=Pos",
385
+ "16": "ADV#Adverb#Degree=Cmp",
386
+ "17": "ADV#Adverb#Degree=Pos",
387
+ "18": "ADV#Adverb#Degree=Pos|NumType=Mult",
388
+ "19": "ADV#Adverb#Degree=Sup",
389
+ "20": "ADV#Adverb#None",
390
+ "21": "ADV#Adverb#NumType=Mult",
391
+ "22": "ADV#Adverb#Polarity=Neg",
392
+ "23": "ADV#Adverb#PronType=Dem",
393
+ "24": "ADV#Invariable#Degree=Cmp",
394
+ "25": "ADV#Invariable#None",
395
+ "26": "ADV#None#Degree=Cmp",
396
+ "27": "ADV#None#Degree=Pos",
397
+ "28": "ADV#None#Degree=Sup",
398
+ "29": "ADV#None#None",
399
+ "30": "ADV#None#NumType=Mult",
400
+ "31": "ADV#None#PronType=Dem",
401
+ "32": "ADV#None#PronType=Int",
402
+ "33": "ADV#Prefixoid#None",
403
+ "34": "AUX#None#Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
404
+ "35": "AUX#Verb#Mood=Ind|Number=Plur|Person=1|Tense=Past|VerbForm=Fin",
405
+ "36": "AUX#Verb#Mood=Ind|Number=Plur|Person=1|Tense=Pres|VerbForm=Fin",
406
+ "37": "AUX#Verb#Mood=Ind|Number=Plur|Person=2|Tense=Pres|VerbForm=Fin",
407
+ "38": "AUX#Verb#Mood=Ind|Number=Plur|Person=3|Tense=Past|VerbForm=Fin",
408
+ "39": "AUX#Verb#Mood=Ind|Number=Plur|Person=3|Tense=Pres|VerbForm=Fin",
409
+ "40": "AUX#Verb#Mood=Ind|Number=Sing|Person=1|Tense=Past|VerbForm=Fin",
410
+ "41": "AUX#Verb#Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin",
411
+ "42": "AUX#Verb#Mood=Ind|Number=Sing|Person=2|Tense=Past|VerbForm=Fin",
412
+ "43": "AUX#Verb#Mood=Ind|Number=Sing|Person=2|Tense=Pres|VerbForm=Fin",
413
+ "44": "AUX#Verb#Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin",
414
+ "45": "AUX#Verb#Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
415
+ "46": "AUX#Verb#Mood=Sub|Number=Plur|Person=1|Tense=Past|VerbForm=Fin",
416
+ "47": "AUX#Verb#Mood=Sub|Number=Plur|Tense=Past|VerbForm=Part",
417
+ "48": "AUX#Verb#Number=Plur|Tense=Past|VerbForm=Part",
418
+ "49": "AUX#Verb#Number=Plur|Tense=Pres|VerbForm=Part",
419
+ "50": "AUX#Verb#VerbForm=Fin",
420
+ "51": "AUX#Verb#VerbForm=Ger",
421
+ "52": "AUX#Verb#VerbForm=Inf",
422
+ "53": "CCONJ#Conjunction#None",
423
+ "54": "CCONJ#None#None",
424
+ "55": "DET#Adjective#PronType=Tot",
425
+ "56": "DET#Article#Definite=Def|PronType=Art",
426
+ "57": "DET#Article#Definite=Ind|PronType=Art",
427
+ "58": "DET#Conjunction#Definite=Def|PronType=Art",
428
+ "59": "DET#None#Definite=Def|PronType=Art",
429
+ "60": "DET#None#Definite=EMPTY",
430
+ "61": "DET#None#Definite=Ind|PronType=Art",
431
+ "62": "DET#None#None",
432
+ "63": "DET#None#Number=Sing|PronType=Dem",
433
+ "64": "DET#None#PronType=Int",
434
+ "65": "DET#None#PronType=Neg",
435
+ "66": "DET#None#PronType=Rcp",
436
+ "67": "DET#None#PronType=Tot",
437
+ "68": "DET#Prefixoid#None",
438
+ "69": "DET#Pronoun#None",
439
+ "70": "DET#Pronoun#Number=Plur|PronType=Dem",
440
+ "71": "DET#Pronoun#Number=Sing|PronType=Dem",
441
+ "72": "DET#Pronoun#Polarity=Neg",
442
+ "73": "DET#Pronoun#PronType=Ind",
443
+ "74": "DET#Pronoun#PronType=Int",
444
+ "75": "DET#Pronoun#PronType=Rel",
445
+ "76": "DET#Pronoun#PronType=Tot",
446
+ "77": "INTJ#Interjection#None",
447
+ "78": "NOUN#Adverb#Number=Sing",
448
+ "79": "NOUN#None#Number=Plur",
449
+ "80": "NOUN#None#Number=Sing",
450
+ "81": "NOUN#Noun#Abbr=Yes|Number=Plur",
451
+ "82": "NOUN#Noun#Abbr=Yes|Number=Sing",
452
+ "83": "NOUN#Noun#NumType=Frac|Number=Sing",
453
+ "84": "NOUN#Noun#Number=Plur",
454
+ "85": "NOUN#Noun#Number=Sing",
455
+ "86": "NOUN#Noun#Number=Sing|Polarity=Neg",
456
+ "87": "NOUN#Noun#VerbForm=Fin",
457
+ "88": "NOUN#Prefixoid#None",
458
+ "89": "NOUN#Prefixoid#Number=Sing",
459
+ "90": "NUM#None#Degree=Pos|NumType=Ord",
460
+ "91": "NUM#None#NumType=Card",
461
+ "92": "NUM#Noun#NumForm=Word|NumType=Card",
462
+ "93": "NUM#Numeral#None",
463
+ "94": "NUM#Numeral#NumForm=Digit|NumType=Card",
464
+ "95": "NUM#Numeral#NumForm=Digit|NumType=Frac",
465
+ "96": "NUM#Numeral#NumForm=Roman|NumType=Card",
466
+ "97": "NUM#Numeral#NumForm=Word|NumType=Card",
467
+ "98": "NUM#Numeral#NumType=Card",
468
+ "99": "PART#None#None",
469
+ "100": "PART#None#Polarity=Neg",
470
+ "101": "PART#Particle#None",
471
+ "102": "PART#Particle#Polarity=Neg",
472
+ "103": "PPROPN#None#Number=Plur",
473
+ "104": "PRON#None#Gender=Neut|Number=Sing|Person=3|Poss=Yes|PronType=Prs",
474
+ "105": "PRON#None#Number=Sing",
475
+ "106": "PRON#None#Number=Sing|PronType=Dem",
476
+ "107": "PRON#None#Number=Sing|PronType=Ind",
477
+ "108": "PRON#None#PronType=Int",
478
+ "109": "PRON#None#PronType=Rel",
479
+ "110": "PRON#Pronoun#Case=Acc|Gender=Fem|Number=Sing|Person=3|PronType=Prs",
480
+ "111": "PRON#Pronoun#Case=Acc|Gender=Fem|Number=Sing|Person=3|PronType=Prs|Reflex=Yes",
481
+ "112": "PRON#Pronoun#Case=Acc|Gender=Masc|Number=Sing|Person=3|PronType=Prs",
482
+ "113": "PRON#Pronoun#Case=Acc|Gender=Masc|Number=Sing|Person=3|PronType=Prs|Reflex=Yes",
483
+ "114": "PRON#Pronoun#Case=Acc|Gender=Neut|Number=Sing|Person=3|PronType=Prs",
484
+ "115": "PRON#Pronoun#Case=Acc|Gender=Neut|Number=Sing|Person=3|PronType=Prs|Reflex=Yes",
485
+ "116": "PRON#Pronoun#Case=Acc|Number=Plur|Person=1|PronType=Prs",
486
+ "117": "PRON#Pronoun#Case=Acc|Number=Plur|Person=1|PronType=Prs|Reflex=Yes",
487
+ "118": "PRON#Pronoun#Case=Acc|Number=Plur|Person=2|PronType=Prs",
488
+ "119": "PRON#Pronoun#Case=Acc|Number=Plur|Person=3|PronType=Prs",
489
+ "120": "PRON#Pronoun#Case=Acc|Number=Plur|Person=3|PronType=Prs|Reflex=Yes",
490
+ "121": "PRON#Pronoun#Case=Acc|Number=Sing|Person=1|PronType=Prs",
491
+ "122": "PRON#Pronoun#Case=Acc|Number=Sing|Person=2|PronType=Prs",
492
+ "123": "PRON#Pronoun#Case=Acc|Number=Sing|Person=2|PronType=Prs|Reflex=Yes",
493
+ "124": "PRON#Pronoun#Case=Gen|Gender=Fem|Number=Sing|Person=3|Poss=Yes|PronType=Prs",
494
+ "125": "PRON#Pronoun#Case=Gen|Gender=Masc|Number=Sing|Person=3|Poss=Yes|PronType=Prs",
495
+ "126": "PRON#Pronoun#Case=Gen|Gender=Neut|Number=Sing|Person=3|Poss=Yes|PronType=Prs",
496
+ "127": "PRON#Pronoun#Case=Gen|Number=Plur|Person=1|Poss=Yes|PronType=Prs",
497
+ "128": "PRON#Pronoun#Case=Gen|Number=Plur|Person=3|Poss=Yes|PronType=Prs",
498
+ "129": "PRON#Pronoun#Case=Gen|Number=Sing|Person=1|Poss=Yes|PronType=Prs",
499
+ "130": "PRON#Pronoun#Case=Gen|Number=Sing|Person=2|Poss=Yes|PronType=Prs",
500
+ "131": "PRON#Pronoun#Case=Nom|Gender=Fem|Number=Sing|Person=3|PronType=Prs",
501
+ "132": "PRON#Pronoun#Case=Nom|Gender=Masc|Number=Sing|Person=3|PronType=Prs",
502
+ "133": "PRON#Pronoun#Case=Nom|Gender=Masc|Number=Sing|Person=3|PronType=Prs|Reflex=Yes",
503
+ "134": "PRON#Pronoun#Case=Nom|Gender=Neut|Number=Sing|Person=3|PronType=Prs",
504
+ "135": "PRON#Pronoun#Case=Nom|Gender=Neut|Number=Sing|Person=3|PronType=Prs|Reflex=Yes",
505
+ "136": "PRON#Pronoun#Case=Nom|Number=Plur|Person=1|PronType=Prs",
506
+ "137": "PRON#Pronoun#Case=Nom|Number=Plur|Person=2|PronType=Prs",
507
+ "138": "PRON#Pronoun#Case=Nom|Number=Plur|Person=3|PronType=Prs",
508
+ "139": "PRON#Pronoun#Case=Nom|Number=Plur|Person=3|PronType=Prs|Reflex=Yes",
509
+ "140": "PRON#Pronoun#Case=Nom|Number=Sing|Person=1|PronType=Prs",
510
+ "141": "PRON#Pronoun#Case=Nom|Number=Sing|Person=2|PronType=Prs",
511
+ "142": "PRON#Pronoun#None",
512
+ "143": "PRON#Pronoun#Number=Plur",
513
+ "144": "PRON#Pronoun#Number=Plur|PronType=Dem",
514
+ "145": "PRON#Pronoun#Number=Plur|PronType=Tot",
515
+ "146": "PRON#Pronoun#Number=Sing",
516
+ "147": "PRON#Pronoun#Number=Sing|Polarity=Neg|PronType=Neg",
517
+ "148": "PRON#Pronoun#Number=Sing|PronType=Dem",
518
+ "149": "PRON#Pronoun#Number=Sing|PronType=Ind",
519
+ "150": "PRON#Pronoun#Number=Sing|PronType=Neg",
520
+ "151": "PRON#Pronoun#Number=Sing|Reflex=Yes",
521
+ "152": "PRON#Pronoun#PronType=Ind",
522
+ "153": "PRON#Pronoun#PronType=Int",
523
+ "154": "PRON#Pronoun#PronType=Rel",
524
+ "155": "PROPN#None#Abbr=Yes",
525
+ "156": "PROPN#None#Number=Plur",
526
+ "157": "PROPN#None#Number=Sing",
527
+ "158": "PROPN#Noun#Abbr=Yes|Number=Plur",
528
+ "159": "PROPN#Noun#Abbr=Yes|Number=Sing",
529
+ "160": "PROPN#Noun#Number=Plur",
530
+ "161": "PROPN#Noun#Number=Sing",
531
+ "162": "PROPN#Noun#Number=Sing|Polarity=Neg",
532
+ "163": "PROPN#Noun#PronType=Dem",
533
+ "164": "PROPN#Noun#VerbForm=Fin",
534
+ "165": "PROPN#Prefixoid#Number=Sing",
535
+ "166": "PUNCT#None#None",
536
+ "167": "PUNCT#PUNCT#None",
537
+ "168": "Prefixoid#Prefixoid#None",
538
+ "169": "SCONJ#Conjunction#None",
539
+ "170": "SCONJ#None#None",
540
+ "171": "SYM#Conjunction#None",
541
+ "172": "SYM#Noun#None",
542
+ "173": "SYM#Noun#Number=Sing",
543
+ "174": "VERB#None#Mood=Ind|Tense=Past|VerbForm=Fin",
544
+ "175": "VERB#None#Tense=Past|VerbForm=Part",
545
+ "176": "VERB#None#VerbForm=Ger",
546
+ "177": "VERB#None#VerbForm=Inf",
547
+ "178": "VERB#Verb#Mood=Imp|VerbForm=Inf",
548
+ "179": "VERB#Verb#Mood=Ind|Number=Plur|Person=1|Tense=Past|VerbForm=Fin",
549
+ "180": "VERB#Verb#Mood=Ind|Number=Plur|Person=1|Tense=Pres|VerbForm=Fin",
550
+ "181": "VERB#Verb#Mood=Ind|Number=Plur|Person=2|Tense=Pres|VerbForm=Fin",
551
+ "182": "VERB#Verb#Mood=Ind|Number=Plur|Person=3|Tense=Past|VerbForm=Fin",
552
+ "183": "VERB#Verb#Mood=Ind|Number=Plur|Person=3|Tense=Pres|VerbForm=Fin",
553
+ "184": "VERB#Verb#Mood=Ind|Number=Sing|Person=1|Tense=Past|VerbForm=Fin",
554
+ "185": "VERB#Verb#Mood=Ind|Number=Sing|Person=1|Tense=Pres|VerbForm=Fin",
555
+ "186": "VERB#Verb#Mood=Ind|Number=Sing|Person=2|Tense=Past|VerbForm=Fin",
556
+ "187": "VERB#Verb#Mood=Ind|Number=Sing|Person=2|Tense=Pres|VerbForm=Fin",
557
+ "188": "VERB#Verb#Mood=Ind|Number=Sing|Person=3|Polarity=Neg|Tense=Pres|VerbForm=Fin",
558
+ "189": "VERB#Verb#Mood=Ind|Number=Sing|Person=3|Tense=Past|VerbForm=Fin",
559
+ "190": "VERB#Verb#Mood=Ind|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin",
560
+ "191": "VERB#Verb#Mood=Sub|Number=Plur|Person=1|Tense=Past|VerbForm=Fin",
561
+ "192": "VERB#Verb#Mood=Sub|Tense=Past|VerbForm=Part",
562
+ "193": "VERB#Verb#Mood=Sub|Tense=Past|VerbForm=Part|Voice=Pass",
563
+ "194": "VERB#Verb#Mood=Sub|VerbForm=Inf",
564
+ "195": "VERB#Verb#Person=1|Tense=Past|VerbForm=Part",
565
+ "196": "VERB#Verb#Person=1|Tense=Past|VerbForm=Part|Voice=Pass",
566
+ "197": "VERB#Verb#Person=1|Tense=Pres|VerbForm=Ger",
567
+ "198": "VERB#Verb#Person=1|Tense=Pres|VerbForm=Inf",
568
+ "199": "VERB#Verb#Person=1|Tense=Pres|VerbForm=Part",
569
+ "200": "VERB#Verb#Person=2|Tense=Pres|VerbForm=Inf",
570
+ "201": "VERB#Verb#Tense=Past|VerbForm=Part",
571
+ "202": "VERB#Verb#Tense=Past|VerbForm=Part|Voice=Pass",
572
+ "203": "VERB#Verb#Tense=Pres|VerbForm=Part",
573
+ "204": "VERB#Verb#VerbForm=Fin",
574
+ "205": "VERB#Verb#VerbForm=Ger",
575
+ "206": "VERB#Verb#VerbForm=Inf",
576
+ "207": "X#None#Foreign=Yes",
577
+ "208": "X#None#None",
578
+ "209": "X#None#Typo=Yes",
579
+ "210": "X#None#foreign=Yes"
580
+ },
581
+ "lemma_rule": {
582
+ "0": "cut_prefix=0|cut_suffix=0|append_suffix=",
583
+ "1": "cut_prefix=0|cut_suffix=0|append_suffix='",
584
+ "2": "cut_prefix=0|cut_suffix=0|append_suffix=.",
585
+ "3": "cut_prefix=0|cut_suffix=0|append_suffix=d",
586
+ "4": "cut_prefix=0|cut_suffix=0|append_suffix=e",
587
+ "5": "cut_prefix=0|cut_suffix=0|append_suffix=n",
588
+ "6": "cut_prefix=0|cut_suffix=0|append_suffix=o",
589
+ "7": "cut_prefix=0|cut_suffix=0|append_suffix=s",
590
+ "8": "cut_prefix=0|cut_suffix=0|append_suffix=t",
591
+ "9": "cut_prefix=0|cut_suffix=0|append_suffix=y",
592
+ "10": "cut_prefix=0|cut_suffix=11|append_suffix=#url",
593
+ "11": "cut_prefix=0|cut_suffix=12|append_suffix=#url",
594
+ "12": "cut_prefix=0|cut_suffix=14|append_suffix=#url",
595
+ "13": "cut_prefix=0|cut_suffix=1|append_suffix=",
596
+ "14": "cut_prefix=0|cut_suffix=1|append_suffix=ad",
597
+ "15": "cut_prefix=0|cut_suffix=1|append_suffix=be",
598
+ "16": "cut_prefix=0|cut_suffix=1|append_suffix=d",
599
+ "17": "cut_prefix=0|cut_suffix=1|append_suffix=e",
600
+ "18": "cut_prefix=0|cut_suffix=1|append_suffix=ed",
601
+ "19": "cut_prefix=0|cut_suffix=1|append_suffix=et",
602
+ "20": "cut_prefix=0|cut_suffix=1|append_suffix=ght",
603
+ "21": "cut_prefix=0|cut_suffix=1|append_suffix=have",
604
+ "22": "cut_prefix=0|cut_suffix=1|append_suffix=ill",
605
+ "23": "cut_prefix=0|cut_suffix=1|append_suffix=o",
606
+ "24": "cut_prefix=0|cut_suffix=1|append_suffix=on",
607
+ "25": "cut_prefix=0|cut_suffix=1|append_suffix=ot",
608
+ "26": "cut_prefix=0|cut_suffix=1|append_suffix=um",
609
+ "27": "cut_prefix=0|cut_suffix=1|append_suffix=ve",
610
+ "28": "cut_prefix=0|cut_suffix=1|append_suffix=y",
611
+ "29": "cut_prefix=0|cut_suffix=1|append_suffix=\u00e9",
612
+ "30": "cut_prefix=0|cut_suffix=1|append_suffix=\u014d",
613
+ "31": "cut_prefix=0|cut_suffix=20|append_suffix=",
614
+ "32": "cut_prefix=0|cut_suffix=2|append_suffix=",
615
+ "33": "cut_prefix=0|cut_suffix=2|append_suffix=$",
616
+ "34": "cut_prefix=0|cut_suffix=2|append_suffix=a",
617
+ "35": "cut_prefix=0|cut_suffix=2|append_suffix=an",
618
+ "36": "cut_prefix=0|cut_suffix=2|append_suffix=ave",
619
+ "37": "cut_prefix=0|cut_suffix=2|append_suffix=aw",
620
+ "38": "cut_prefix=0|cut_suffix=2|append_suffix=be",
621
+ "39": "cut_prefix=0|cut_suffix=2|append_suffix=e",
622
+ "40": "cut_prefix=0|cut_suffix=2|append_suffix=ee",
623
+ "41": "cut_prefix=0|cut_suffix=2|append_suffix=el",
624
+ "42": "cut_prefix=0|cut_suffix=2|append_suffix=ep",
625
+ "43": "cut_prefix=0|cut_suffix=2|append_suffix=er",
626
+ "44": "cut_prefix=0|cut_suffix=2|append_suffix=et",
627
+ "45": "cut_prefix=0|cut_suffix=2|append_suffix=have",
628
+ "46": "cut_prefix=0|cut_suffix=2|append_suffix=i",
629
+ "47": "cut_prefix=0|cut_suffix=2|append_suffix=ig",
630
+ "48": "cut_prefix=0|cut_suffix=2|append_suffix=in",
631
+ "49": "cut_prefix=0|cut_suffix=2|append_suffix=is",
632
+ "50": "cut_prefix=0|cut_suffix=2|append_suffix=it",
633
+ "51": "cut_prefix=0|cut_suffix=2|append_suffix=ke",
634
+ "52": "cut_prefix=0|cut_suffix=2|append_suffix=l",
635
+ "53": "cut_prefix=0|cut_suffix=2|append_suffix=ny",
636
+ "54": "cut_prefix=0|cut_suffix=2|append_suffix=o",
637
+ "55": "cut_prefix=0|cut_suffix=2|append_suffix=ose",
638
+ "56": "cut_prefix=0|cut_suffix=2|append_suffix=ot",
639
+ "57": "cut_prefix=0|cut_suffix=2|append_suffix=ow",
640
+ "58": "cut_prefix=0|cut_suffix=2|append_suffix=un",
641
+ "59": "cut_prefix=0|cut_suffix=2|append_suffix=we",
642
+ "60": "cut_prefix=0|cut_suffix=2|append_suffix=y",
643
+ "61": "cut_prefix=0|cut_suffix=2|append_suffix=\u00e8s",
644
+ "62": "cut_prefix=0|cut_suffix=2|append_suffix=\u00e9o",
645
+ "63": "cut_prefix=0|cut_suffix=3|append_suffix=",
646
+ "64": "cut_prefix=0|cut_suffix=3|append_suffix=-up",
647
+ "65": "cut_prefix=0|cut_suffix=3|append_suffix=ake",
648
+ "66": "cut_prefix=0|cut_suffix=3|append_suffix=and",
649
+ "67": "cut_prefix=0|cut_suffix=3|append_suffix=any",
650
+ "68": "cut_prefix=0|cut_suffix=3|append_suffix=at",
651
+ "69": "cut_prefix=0|cut_suffix=3|append_suffix=be",
652
+ "70": "cut_prefix=0|cut_suffix=3|append_suffix=e",
653
+ "71": "cut_prefix=0|cut_suffix=3|append_suffix=eak",
654
+ "72": "cut_prefix=0|cut_suffix=3|append_suffix=eal",
655
+ "73": "cut_prefix=0|cut_suffix=3|append_suffix=ear",
656
+ "74": "cut_prefix=0|cut_suffix=3|append_suffix=ell",
657
+ "75": "cut_prefix=0|cut_suffix=3|append_suffix=f",
658
+ "76": "cut_prefix=0|cut_suffix=3|append_suffix=fe",
659
+ "77": "cut_prefix=0|cut_suffix=3|append_suffix=ick",
660
+ "78": "cut_prefix=0|cut_suffix=3|append_suffix=ike",
661
+ "79": "cut_prefix=0|cut_suffix=3|append_suffix=ine",
662
+ "80": "cut_prefix=0|cut_suffix=3|append_suffix=ink",
663
+ "81": "cut_prefix=0|cut_suffix=3|append_suffix=is",
664
+ "82": "cut_prefix=0|cut_suffix=3|append_suffix=ite",
665
+ "83": "cut_prefix=0|cut_suffix=3|append_suffix=ive",
666
+ "84": "cut_prefix=0|cut_suffix=3|append_suffix=m",
667
+ "85": "cut_prefix=0|cut_suffix=3|append_suffix=ome",
668
+ "86": "cut_prefix=0|cut_suffix=3|append_suffix=oot",
669
+ "87": "cut_prefix=0|cut_suffix=3|append_suffix=ose",
670
+ "88": "cut_prefix=0|cut_suffix=3|append_suffix=sia",
671
+ "89": "cut_prefix=0|cut_suffix=3|append_suffix=uch",
672
+ "90": "cut_prefix=0|cut_suffix=3|append_suffix=y",
673
+ "91": "cut_prefix=0|cut_suffix=3|append_suffix=ze",
674
+ "92": "cut_prefix=0|cut_suffix=3|append_suffix=\u00e8ne",
675
+ "93": "cut_prefix=0|cut_suffix=3|append_suffix=\u00e8re",
676
+ "94": "cut_prefix=0|cut_suffix=4|append_suffix=",
677
+ "95": "cut_prefix=0|cut_suffix=4|append_suffix=#url",
678
+ "96": "cut_prefix=0|cut_suffix=4|append_suffix=-up",
679
+ "97": "cut_prefix=0|cut_suffix=4|append_suffix=all",
680
+ "98": "cut_prefix=0|cut_suffix=4|append_suffix=an",
681
+ "99": "cut_prefix=0|cut_suffix=4|append_suffix=ay",
682
+ "100": "cut_prefix=0|cut_suffix=4|append_suffix=eak",
683
+ "101": "cut_prefix=0|cut_suffix=4|append_suffix=eal",
684
+ "102": "cut_prefix=0|cut_suffix=4|append_suffix=eeze",
685
+ "103": "cut_prefix=0|cut_suffix=4|append_suffix=go",
686
+ "104": "cut_prefix=0|cut_suffix=4|append_suffix=good",
687
+ "105": "cut_prefix=0|cut_suffix=4|append_suffix=ie",
688
+ "106": "cut_prefix=0|cut_suffix=4|append_suffix=ill",
689
+ "107": "cut_prefix=0|cut_suffix=4|append_suffix=ind",
690
+ "108": "cut_prefix=0|cut_suffix=4|append_suffix=ingly",
691
+ "109": "cut_prefix=0|cut_suffix=4|append_suffix=ke",
692
+ "110": "cut_prefix=0|cut_suffix=4|append_suffix=nment",
693
+ "111": "cut_prefix=0|cut_suffix=4|append_suffix=t",
694
+ "112": "cut_prefix=0|cut_suffix=4|append_suffix=tch",
695
+ "113": "cut_prefix=0|cut_suffix=4|append_suffix=y",
696
+ "114": "cut_prefix=0|cut_suffix=4|append_suffix=\u00edtez",
697
+ "115": "cut_prefix=0|cut_suffix=5|append_suffix=-chat",
698
+ "116": "cut_prefix=0|cut_suffix=5|append_suffix=bad",
699
+ "117": "cut_prefix=0|cut_suffix=5|append_suffix=badly",
700
+ "118": "cut_prefix=0|cut_suffix=5|append_suffix=be",
701
+ "119": "cut_prefix=0|cut_suffix=5|append_suffix=each",
702
+ "120": "cut_prefix=0|cut_suffix=5|append_suffix=ead",
703
+ "121": "cut_prefix=0|cut_suffix=5|append_suffix=eek",
704
+ "122": "cut_prefix=0|cut_suffix=5|append_suffix=esto",
705
+ "123": "cut_prefix=0|cut_suffix=5|append_suffix=et",
706
+ "124": "cut_prefix=0|cut_suffix=5|append_suffix=etts",
707
+ "125": "cut_prefix=0|cut_suffix=5|append_suffix=he",
708
+ "126": "cut_prefix=0|cut_suffix=5|append_suffix=ician",
709
+ "127": "cut_prefix=0|cut_suffix=5|append_suffix=ill",
710
+ "128": "cut_prefix=0|cut_suffix=5|append_suffix=ing",
711
+ "129": "cut_prefix=0|cut_suffix=5|append_suffix=ink",
712
+ "130": "cut_prefix=0|cut_suffix=5|append_suffix=kick",
713
+ "131": "cut_prefix=0|cut_suffix=5|append_suffix=lation",
714
+ "132": "cut_prefix=0|cut_suffix=5|append_suffix=ry",
715
+ "133": "cut_prefix=0|cut_suffix=5|append_suffix=seek",
716
+ "134": "cut_prefix=0|cut_suffix=5|append_suffix=uy",
717
+ "135": "cut_prefix=0|cut_suffix=5|append_suffix=\u00e9r\u00e8se",
718
+ "136": "cut_prefix=0|cut_suffix=6|append_suffix=ar",
719
+ "137": "cut_prefix=0|cut_suffix=6|append_suffix=good",
720
+ "138": "cut_prefix=0|cut_suffix=6|append_suffix=pany",
721
+ "139": "cut_prefix=0|cut_suffix=6|append_suffix=rule",
722
+ "140": "cut_prefix=0|cut_suffix=6|append_suffix=zation",
723
+ "141": "cut_prefix=0|cut_suffix=7|append_suffix=efine",
724
+ "142": "cut_prefix=1|cut_suffix=0|append_suffix=",
725
+ "143": "cut_prefix=1|cut_suffix=2|append_suffix=",
726
+ "144": "cut_prefix=1|cut_suffix=2|append_suffix=ll",
727
+ "145": "cut_prefix=1|cut_suffix=4|append_suffix=ll",
728
+ "146": "cut_prefix=1|cut_suffix=6|append_suffix=url",
729
+ "147": "cut_prefix=2|cut_suffix=0|append_suffix=",
730
+ "148": "cut_prefix=2|cut_suffix=1|append_suffix=",
731
+ "149": "cut_prefix=3|cut_suffix=0|append_suffix=",
732
+ "150": "cut_prefix=3|cut_suffix=1|append_suffix=",
733
+ "151": "cut_prefix=3|cut_suffix=1|append_suffix=e",
734
+ "152": "cut_prefix=3|cut_suffix=2|append_suffix=",
735
+ "153": "cut_prefix=4|cut_suffix=0|append_suffix=",
736
+ "154": "cut_prefix=4|cut_suffix=1|append_suffix=g",
737
+ "155": "cut_prefix=4|cut_suffix=20|append_suffix=rl",
738
+ "156": "cut_prefix=5|cut_suffix=0|append_suffix=",
739
+ "157": "cut_prefix=5|cut_suffix=4|append_suffix=",
740
+ "158": "cut_prefix=6|cut_suffix=0|append_suffix=",
741
+ "159": "cut_prefix=7|cut_suffix=0|append_suffix="
742
+ },
743
+ "misc": {
744
+ "0": "NoSpaceBefore=Yes",
745
+ "1": "SpaceAfter=No",
746
+ "2": "acl",
747
+ "3": "amod",
748
+ "4": "appos",
749
+ "5": "ccomp",
750
+ "6": "compound",
751
+ "7": "conj",
752
+ "8": "ellipsis",
753
+ "9": "flat:name",
754
+ "10": "flatname",
755
+ "11": "nmod",
756
+ "12": "nsubj",
757
+ "13": "obj",
758
+ "14": "obl",
759
+ "15": "root",
760
+ "16": "xcomp"
761
+ },
762
+ "semclass": {
763
+ "0": "ABILITY_OF_BEING",
764
+ "1": "ACCESSORY",
765
+ "2": "ACT",
766
+ "3": "ACTIVITY",
767
+ "4": "ACTIVITY_BY_INTEREST",
768
+ "5": "ADMINISTRATIVE_REGION",
769
+ "6": "ADVENTURE",
770
+ "7": "AGGREGATE",
771
+ "8": "AGGREGATE_OF_LIVING_OBJECTS",
772
+ "9": "AGGREGATE_OF_MACHINERY_OR_TRANSPORT",
773
+ "10": "AGGRESSIVE_ACTIONS",
774
+ "11": "AGREEMENT_VERBS",
775
+ "12": "AGRICULTURAL_PROCESSING",
776
+ "13": "AMBIENCE_ENVIRONMENT",
777
+ "14": "APPARATUS",
778
+ "15": "AREA_OF_HUMAN_ACTIVITY",
779
+ "16": "ARRANGEMENTS",
780
+ "17": "ARTEFACT",
781
+ "18": "ARTICLES",
782
+ "19": "ATTRIBUTIVE",
783
+ "20": "AUXILIARY_VERBS",
784
+ "21": "BAD_DANGEROUS_EVENT",
785
+ "22": "BE",
786
+ "23": "BEGIN_TO_TAKE_PLACE",
787
+ "24": "BEHAVIOUR",
788
+ "25": "BEING",
789
+ "26": "BEVERAGE",
790
+ "27": "BE_STATE",
791
+ "28": "BIJOUTERIE_AND_JEWELLERY",
792
+ "29": "BODY",
793
+ "30": "BOOM",
794
+ "31": "BUSINESS",
795
+ "32": "BUSY_FREE_OCCUPIED",
796
+ "33": "CARGO",
797
+ "34": "CHANGE_OF_MATTER_PHYSICAL_STATE",
798
+ "35": "CHANGE_OF_ORGANIC_OBJECTS",
799
+ "36": "CHANGE_OF_POST_AND_JOB",
800
+ "37": "CHARACTERISTIC_GENERAL",
801
+ "38": "CHEMICAL_CHANGES",
802
+ "39": "CHOOSING_SORTING",
803
+ "40": "CH_ABSTRACT_GENERALIZED",
804
+ "41": "CH_APPEARANCE",
805
+ "42": "CH_ASPECT",
806
+ "43": "CH_BENEFIT",
807
+ "44": "CH_BY_RESIDENCE",
808
+ "45": "CH_BY_SENSORY_PERCEPTION",
809
+ "46": "CH_BY_WORLD_OUTLOOK_EDUCATION_AESTHETIC",
810
+ "47": "CH_CLASSIFICATION",
811
+ "48": "CH_COMPOSITION",
812
+ "49": "CH_CONFIGURATION_AND_FORM",
813
+ "50": "CH_COVERING",
814
+ "51": "CH_CRIMINAL_ACTIVITY",
815
+ "52": "CH_DEGREE",
816
+ "53": "CH_DEGREE_AND_INTENSITY",
817
+ "54": "CH_DISPOSITION_AND_MOTION",
818
+ "55": "CH_DISTRIBUTION",
819
+ "56": "CH_EVALUATION",
820
+ "57": "CH_EVALUATION_OF_HUMAN_TEMPER_AND_ACTIVITY",
821
+ "58": "CH_FULLNESS",
822
+ "59": "CH_FUNCTIONING_OF_ENTITY",
823
+ "60": "CH_INFORMATION",
824
+ "61": "CH_INTENTION_CONCENTRATION",
825
+ "62": "CH_LANGUAGE",
826
+ "63": "CH_MAGNITUDE",
827
+ "64": "CH_MEASURE",
828
+ "65": "CH_OF_CONNECTIONS",
829
+ "66": "CH_OF_INTENSITY",
830
+ "67": "CH_OF_LOCATION",
831
+ "68": "CH_OF_VISUAL_AUDIBLE_REPRESENTATION",
832
+ "69": "CH_PARAMETER_OF_MATTER",
833
+ "70": "CH_PARAMETER_OF_OBJECT_AND_SUBSTANCE",
834
+ "71": "CH_PARAMETER_SPEED",
835
+ "72": "CH_PERCEPTIBILITY",
836
+ "73": "CH_PERSON_IDENTITY",
837
+ "74": "CH_PHYSICAL_STATE",
838
+ "75": "CH_POWER_AND_EFFECT",
839
+ "76": "CH_PRICE_AND_SUMS",
840
+ "77": "CH_REFERENCE_AND_QUANTIFICATION",
841
+ "78": "CH_RENOWN",
842
+ "79": "CH_RESISTANCE_TO_IMPACT",
843
+ "80": "CH_RHYTHM",
844
+ "81": "CH_SALIENCE",
845
+ "82": "CH_SCALE",
846
+ "83": "CH_SOCIAL_CHARACTERISTIC",
847
+ "84": "CH_SPHERE_OF_COVERAGE",
848
+ "85": "CH_STYLE",
849
+ "86": "CH_SURFACE_EDGE",
850
+ "87": "CH_SYSTEM_STRUCTURE",
851
+ "88": "CH_TYPE_OF_POSSESSION_AND_PARTICIPATION",
852
+ "89": "CIRCUMSTANCE",
853
+ "90": "CLASSIFICATION_TYPES",
854
+ "91": "CLASSIFICATION_UNIT",
855
+ "92": "CLOTHES",
856
+ "93": "COGNITIVE_OBJECT",
857
+ "94": "COMMUNICATIONS",
858
+ "95": "COMPOSITE_PARTICLES",
859
+ "96": "COMPOSITE_SUFFIXES",
860
+ "97": "CONDITIONS_IN_NATURE",
861
+ "98": "CONDITION_IN_ECONOMICS",
862
+ "99": "CONDITION_OF_EXPERIENCER_AND_NATURE",
863
+ "100": "CONDITION_SITUATION",
864
+ "101": "CONDITION_STATE",
865
+ "102": "CONFLICT_INTERACTION",
866
+ "103": "CONJUNCTIONS",
867
+ "104": "CONSTRUCTION_AS_WHOLE",
868
+ "105": "CONTACT_VERBS",
869
+ "106": "CONTACT_WITH_CONTRAGENT",
870
+ "107": "CONTAINER",
871
+ "108": "CONTAIN_INCLUDE_FORM",
872
+ "109": "CONTINUE_TO_HAVE",
873
+ "110": "CONTINUE_TO_TAKE_PLACE",
874
+ "111": "COORDINATING_CONJUNCTIONS",
875
+ "112": "CORRELATIVES",
876
+ "113": "COSMOS_AND_COSMIC_OBJECTS",
877
+ "114": "COST",
878
+ "115": "COUNTRY_AS_ADMINISTRATIVE_UNIT",
879
+ "116": "CREATION_VERBS",
880
+ "117": "CREATIVE_WORK",
881
+ "118": "CREATIVE_WORK_BY_GENRE",
882
+ "119": "CRISIS",
883
+ "120": "CULTURE",
884
+ "121": "DECLINE",
885
+ "122": "DECORATING_AND_FINISHING",
886
+ "123": "DEFEND_SAVE",
887
+ "124": "DEGREE_OF_FIT",
888
+ "125": "DEGREE_OF_SIZE_OR_SCALE",
889
+ "126": "DESTRUCTION_VERBS",
890
+ "127": "DEVICE",
891
+ "128": "DEVICE_FOR_ANIMALS",
892
+ "129": "DEVICE_FOR_CLOSING_AND_LOCKING",
893
+ "130": "DEVICE_FOR_HEATING",
894
+ "131": "DEVICE_FOR_LIFTING_OBJECTS",
895
+ "132": "DEVICE_FOR_MEASURING_AND_COUNTING",
896
+ "133": "DIFFICULTIES",
897
+ "134": "DIFFICULT_AND_EASY",
898
+ "135": "DIMENSION",
899
+ "136": "DIMENSIONS_CHAR",
900
+ "137": "DISCOURSIVE_UNITS",
901
+ "138": "DISTANT_CONTACT",
902
+ "139": "DOCUMENT",
903
+ "140": "DYNAMIC_ARTS",
904
+ "141": "ECONOMIC_CHANGES",
905
+ "142": "ECONOMY",
906
+ "143": "EFFICIENCY_PRODUCTIVITY",
907
+ "144": "ELECTIONS",
908
+ "145": "EMBARGO",
909
+ "146": "EMOTIONS_AND_THEIR_EXPRESSION",
910
+ "147": "EMPTY_SUBJECT",
911
+ "148": "ENDINGS",
912
+ "149": "END_TO_TAKE_PLACE",
913
+ "150": "ENGINEERING_COMMUNICATIONS",
914
+ "151": "ENTITY_AS_RESULT_OF_ACTIVITY",
915
+ "152": "ENTITY_BY_FUNCTION_AND_PROPERTY",
916
+ "153": "ENTITY_BY_RELATION_TO_MAIN_PART",
917
+ "154": "ENTITY_BY_VALUE",
918
+ "155": "ENTITY_GENERAL",
919
+ "156": "ENTITY_OR_SITUATION_PRONOUN",
920
+ "157": "ETIQUETTE_COMMUNICATION",
921
+ "158": "EVENT",
922
+ "159": "EVERYDAY_PROCESSING",
923
+ "160": "EXISTENCE_AND_POSSESSION",
924
+ "161": "FACT_INCIDENT",
925
+ "162": "FATE",
926
+ "163": "FEELING_AS_CONDITION",
927
+ "164": "FINE_ARTS_OBJECTS",
928
+ "165": "FOOD",
929
+ "166": "FORCE_IN_PHYSICS",
930
+ "167": "FREQUENCY_CHAR",
931
+ "168": "FURNISHINGS_AND_DECORATION",
932
+ "169": "GENERAL_ACTION",
933
+ "170": "GOOD_BAD_CONDITION",
934
+ "171": "GRAMMATICAL_ELEMENTS",
935
+ "172": "GROUP",
936
+ "173": "HAVE_CLOTHING_ON",
937
+ "174": "HERITAGE",
938
+ "175": "HIERARCHICAL_VERBS",
939
+ "176": "HISTORICAL_LOCALITY_BY_NAME",
940
+ "177": "IDENTIFYING_ATTRIBUTE",
941
+ "178": "IDIOMATICAL_ELEMENTS",
942
+ "179": "IHNABITED_LOCALITY",
943
+ "180": "INFORMATION",
944
+ "181": "INFORMATION_BEARER",
945
+ "182": "INFORMATION_COMMUNICATIONS",
946
+ "183": "INHABITED_LOCALITY",
947
+ "184": "INNOVATION",
948
+ "185": "INSTRUMENT",
949
+ "186": "INTELLECTUAL_ACTIVITY",
950
+ "187": "INTERPERSONAL_RELATIONS",
951
+ "188": "KIND",
952
+ "189": "KITCHENWARE_AND_TABLEWARE",
953
+ "190": "KNOWLEDGE",
954
+ "191": "KNOWLEDGE_FROM_EXPERIENCE",
955
+ "192": "KNOWLEDGE_FROM_EXPERIENCE_AND_DEDUCTION",
956
+ "193": "LACK_AND_PLENTY",
957
+ "194": "LAWS_AND_STANDARDS",
958
+ "195": "LINES",
959
+ "196": "LINE_FOR_COMMUNICATION",
960
+ "197": "LINGUISTIC_OBJECTS",
961
+ "198": "MAKE_EFFORTS",
962
+ "199": "MANAGE_FAIL_CONDITION",
963
+ "200": "MARKET_AS_AREA_OF_ACTIVITY",
964
+ "201": "MATERIALITY_CHAR",
965
+ "202": "MATHEMATICAL_OBJECTS",
966
+ "203": "MEANING_SENSE",
967
+ "204": "MEDICAL_OPERATIONS",
968
+ "205": "MENTAL_OBJECT",
969
+ "206": "METHOD_APPROACH_TECHNIQUE",
970
+ "207": "MIX_AS_AGGREGATE",
971
+ "208": "MODALITY",
972
+ "209": "MODE_OF_EXPRESSIVENESS",
973
+ "210": "MONEY",
974
+ "211": "MOTION",
975
+ "212": "MOTION_ACTIVITY",
976
+ "213": "MOTIVATE",
977
+ "214": "MOVEMENT_AS_ACTIVITY",
978
+ "215": "MULTIMEDIA",
979
+ "216": "MUSICAL_INSTRUMENT",
980
+ "217": "MYSTERY_SECRET",
981
+ "218": "NATURALNESS_GENUINENESS_CHAR",
982
+ "219": "NETWORK",
983
+ "220": "NONPRODUCTIVE_AREA",
984
+ "221": "NORMATIVE_LEGAL_ACTIVITY",
985
+ "222": "OBJECTS_BY_FORM_OF_MANIFESTATION",
986
+ "223": "OBJECTS_BY_FUNCTION",
987
+ "224": "OBJECT_BY_FUNCTION_AND_PROPERTY",
988
+ "225": "OBJECT_BY_SHAPE",
989
+ "226": "OBJECT_IN_NATURE",
990
+ "227": "OCCUPATIONS",
991
+ "228": "OPERATING_STATE",
992
+ "229": "OPTICAL_DEVICE_AND_ITS_PARTS",
993
+ "230": "ORDER_DISORDER",
994
+ "231": "ORGANIC_NON_ORGANIC",
995
+ "232": "ORGANIC_OBJECTS",
996
+ "233": "ORGANIZATION",
997
+ "234": "ORGANIZED_AGGREGATE",
998
+ "235": "ORIENTATION_IN_SPACE",
999
+ "236": "OUTFIT",
1000
+ "237": "PARTICLES",
1001
+ "238": "PART_OF_ARTEFACT",
1002
+ "239": "PART_OF_CLOTHES",
1003
+ "240": "PART_OF_CONSTRUCTION",
1004
+ "241": "PART_OF_CREATIVE_WORK",
1005
+ "242": "PART_OF_FOOTWEAR",
1006
+ "243": "PART_OF_ORGANISM",
1007
+ "244": "PART_OF_WORLD",
1008
+ "245": "PART_OR_PORTION_OF_ENTITY",
1009
+ "246": "PATH_AS_DIRECTION_OF_ACTIVITY",
1010
+ "247": "PEACE",
1011
+ "248": "PERCEPTION_ACTIVITY",
1012
+ "249": "PHENOMENON",
1013
+ "250": "PHRASAL_PARTICLES",
1014
+ "251": "PHYSICAL_AND_BIOLOGICAL_PROPERTIES",
1015
+ "252": "PHYSICAL_CHEMICAL_DAMAGE",
1016
+ "253": "PHYSICAL_OBJECT",
1017
+ "254": "PHYSICAL_OBJECT_AND_SUBSTANCE_CHAR",
1018
+ "255": "PHYSICAL_PSYCHIC_CONDITION",
1019
+ "256": "PHYSIOLOGICAL_PROCESSES",
1020
+ "257": "PLACE",
1021
+ "258": "PLANT",
1022
+ "259": "POINTS_AS_PLACE",
1023
+ "260": "POSITION_AS_STATUS",
1024
+ "261": "POSITION_IN_HIERARCHY",
1025
+ "262": "POSITION_IN_SPACE",
1026
+ "263": "POWER_CHAR",
1027
+ "264": "POWER_RIGHT",
1028
+ "265": "PREMISES",
1029
+ "266": "PREPOSITION",
1030
+ "267": "PRESSURE_CHAR",
1031
+ "268": "PROBLEMS_TO_SOLVE",
1032
+ "269": "PROCESSING",
1033
+ "270": "PROCESS_AND_ITS_STAGES",
1034
+ "271": "PROCESS_PARAMETER",
1035
+ "272": "PRODUCT",
1036
+ "273": "PRODUCTION_AS_TIME_ART",
1037
+ "274": "PRODUCTIVE_AREA",
1038
+ "275": "PUBLIC_ACTIVITY",
1039
+ "276": "PUBLIC_AND_POLITICAL_ACTIVITY",
1040
+ "277": "QUIETNESS",
1041
+ "278": "READINESS",
1042
+ "279": "REALITY",
1043
+ "280": "RELATIVE_ENTITY",
1044
+ "281": "RELATIVE_PART_OF_INHABITED_LOCALITY",
1045
+ "282": "RELATIVE_SPACE",
1046
+ "283": "RELIGIOUS_OBJECT",
1047
+ "284": "REMOVING_DESTRUCTION",
1048
+ "285": "RESERVE",
1049
+ "286": "RESULTS_OF_GIVING_INFORMATION_AND_SPEECH_ACTIVITY",
1050
+ "287": "RESULTS_OF_MAKING_DECISIONS",
1051
+ "288": "RESULTS_OF_MENTAL_ACTIVITY",
1052
+ "289": "RESULT_CONSEQUENCE",
1053
+ "290": "REVEAL_CONCEAL_INFORMATION",
1054
+ "291": "REWARD_AS_ENTITY",
1055
+ "292": "RISK_DANGER",
1056
+ "293": "SAMPLE_AS_AGGREGATE",
1057
+ "294": "SCALE_DIVISION",
1058
+ "295": "SCHEDULE_FOR_ACTIVITY",
1059
+ "296": "SCIENCE",
1060
+ "297": "SCIENTIFIC_AND_LITERARY_WORK",
1061
+ "298": "SEPARATION_PROCESSING",
1062
+ "299": "SERIES_IN_SCIENCE",
1063
+ "300": "SEXUAL_ACTIVITIES",
1064
+ "301": "SILENCE_AS_SOUNDLESSNESS",
1065
+ "302": "SITUATION",
1066
+ "303": "SOCIAL_CONDITIONS_OF_BEING",
1067
+ "304": "SOCIAL_INSTITUTION",
1068
+ "305": "SPACE_AND_SPATIAL_OBJECTS",
1069
+ "306": "SPACE_BY_PARTICULAR_PROPERTIES",
1070
+ "307": "SPACE_BY_RELIGIOUS_BELIEFS",
1071
+ "308": "SPACE_TIME_ART",
1072
+ "309": "SPHERE_OF_ACTIVITY_GENERAL",
1073
+ "310": "SPORT",
1074
+ "311": "SPORT_DEVICE",
1075
+ "312": "STAGNATION",
1076
+ "313": "STATE_AREA",
1077
+ "314": "STATE_OF_MIND",
1078
+ "315": "STEADINESS_OF_FORM_OR_POSITION",
1079
+ "316": "STREET_OR_TOWN_SUFFIXES",
1080
+ "317": "SUBSTANCE",
1081
+ "318": "SURFACE_AND_ITS_SPECIALITIES",
1082
+ "319": "SYMBOLS_FOR_INFORMATION_TRANSFER",
1083
+ "320": "SYSTEM_AS_AGGREGATE",
1084
+ "321": "TEETH_AND_TONGUE_CONTACT",
1085
+ "322": "TEMPERATURE_CHAR",
1086
+ "323": "TENDENCY_AND_DISPOSITION",
1087
+ "324": "TERRITORY_AREA",
1088
+ "325": "TEST_FOR_EXPERIENCER",
1089
+ "326": "TEXTS_OF_PROGRAMS",
1090
+ "327": "TEXT_OBJECTS_AND_DOCUMENTS",
1091
+ "328": "TEXT_WITH_ADDRESSEE",
1092
+ "329": "THE_EARTH_AND_ITS_SPATIAL_PARTS",
1093
+ "330": "THE_GOOD_BAD",
1094
+ "331": "THE_MAGIC",
1095
+ "332": "TIME",
1096
+ "333": "TOPIC_SUBJECT",
1097
+ "334": "TOTALITY_OF_DEGREE",
1098
+ "335": "TO_ACCOMPANY_WITH",
1099
+ "336": "TO_ACCUSE_AND_VINDICATE",
1100
+ "337": "TO_ADAPT",
1101
+ "338": "TO_ADD",
1102
+ "339": "TO_ADJUST_AND_REPAIR",
1103
+ "340": "TO_AIM",
1104
+ "341": "TO_ANALYSE_AND_RESEARCH",
1105
+ "342": "TO_ANIMATE_PICTURE",
1106
+ "343": "TO_APPLAUD",
1107
+ "344": "TO_APPLY_COAT",
1108
+ "345": "TO_APPROACH_COME_TO_SOME_POINT_OR_STATE",
1109
+ "346": "TO_ARREST",
1110
+ "347": "TO_ASSEMBLE",
1111
+ "348": "TO_ATTRIBUTE_AS_TO_ADD",
1112
+ "349": "TO_AVOID",
1113
+ "350": "TO_BEAT_AND_PRICK",
1114
+ "351": "TO_BETRAY_AND_LEAVE",
1115
+ "352": "TO_BE_ABOUT_TO_HAPPEN",
1116
+ "353": "TO_BE_A_SIGN_OF",
1117
+ "354": "TO_BE_BASED",
1118
+ "355": "TO_BE_DESCENDED",
1119
+ "356": "TO_BE_GUIDED",
1120
+ "357": "TO_BE_SEEN_IN_FIELD_OF_VIEW",
1121
+ "358": "TO_BLOW_UP",
1122
+ "359": "TO_BREAK",
1123
+ "360": "TO_BUILD",
1124
+ "361": "TO_CALL_AND_DESIGNATE",
1125
+ "362": "TO_CANCEL",
1126
+ "363": "TO_CARE_AND_BRING_UP",
1127
+ "364": "TO_CAUSE_OR_STOP_MOVEMENT",
1128
+ "365": "TO_CAUSE_SUCCESS",
1129
+ "366": "TO_CELEBRATE",
1130
+ "367": "TO_CERTIFY",
1131
+ "368": "TO_CHALLENGE_TO_INVITE",
1132
+ "369": "TO_CHANGE",
1133
+ "370": "TO_CHANGE_FORM",
1134
+ "371": "TO_CHARACTERIZE",
1135
+ "372": "TO_CITE",
1136
+ "373": "TO_CLOSE",
1137
+ "374": "TO_COME_OR_TO_LEAVE_SPHERE_OF_ACTIVITY",
1138
+ "375": "TO_COMMENT",
1139
+ "376": "TO_COMMIT",
1140
+ "377": "TO_COMMUNICATE",
1141
+ "378": "TO_COMPEL_AND_EVOKE",
1142
+ "379": "TO_COMPEL_TO_ACCEPT",
1143
+ "380": "TO_COMPOSE_SYMBOLS",
1144
+ "381": "TO_CONCLUDE",
1145
+ "382": "TO_CONNIVE",
1146
+ "383": "TO_CONTRIBUTE_AND_HINDER",
1147
+ "384": "TO_CORRECT",
1148
+ "385": "TO_COUNT",
1149
+ "386": "TO_COURT_AND_FLIRT",
1150
+ "387": "TO_CREATE_HOLE",
1151
+ "388": "TO_DECIDE",
1152
+ "389": "TO_DESTINE",
1153
+ "390": "TO_DEVELOP",
1154
+ "391": "TO_DIG_PROCESS",
1155
+ "392": "TO_DIRECT_CREATIVE_WORK",
1156
+ "393": "TO_DISAPPEAR_LOSE_GET_RID_OF",
1157
+ "394": "TO_DISTRACT_DEFLECT",
1158
+ "395": "TO_DIVIDE",
1159
+ "396": "TO_ECONOMIZE",
1160
+ "397": "TO_EMIT",
1161
+ "398": "TO_EXIST",
1162
+ "399": "TO_FABRICATE",
1163
+ "400": "TO_FEEL_AND_EXPRESS_MENTAL_ATTITUDE_TO",
1164
+ "401": "TO_FLOW_IN_TIME",
1165
+ "402": "TO_FORGIVE",
1166
+ "403": "TO_FORM",
1167
+ "404": "TO_FORMULATE",
1168
+ "405": "TO_GENERATE",
1169
+ "406": "TO_GESTURE",
1170
+ "407": "TO_GET",
1171
+ "408": "TO_GET_INFORMATION",
1172
+ "409": "TO_GIVE",
1173
+ "410": "TO_GIVE_SIGNALS",
1174
+ "411": "TO_GO_ON_STRIKE",
1175
+ "412": "TO_GUESS",
1176
+ "413": "TO_HIDE",
1177
+ "414": "TO_HURRY_TO_TARRY",
1178
+ "415": "TO_INDEX",
1179
+ "416": "TO_INDUCE_PHYSICAL_PROPERTIES",
1180
+ "417": "TO_INTERACT",
1181
+ "418": "TO_INTERCHANGE",
1182
+ "419": "TO_INTERPRET",
1183
+ "420": "TO_INVENT",
1184
+ "421": "TO_INVOLVE",
1185
+ "422": "TO_JOIN",
1186
+ "423": "TO_JOIN_PHYSICAL_OBJECTS",
1187
+ "424": "TO_KEEP_VIOLATE_NORMS",
1188
+ "425": "TO_LEARN_AND_RESEARCH",
1189
+ "426": "TO_LET_DOWN",
1190
+ "427": "TO_LIQUIDATE",
1191
+ "428": "TO_MAKE",
1192
+ "429": "TO_MARRY_DIVORCE_ENGAGE",
1193
+ "430": "TO_MEAN",
1194
+ "431": "TO_MEASURE",
1195
+ "432": "TO_MIX",
1196
+ "433": "TO_MOVE_IN_GAMES",
1197
+ "434": "TO_OPEN",
1198
+ "435": "TO_ORGANIZE_EVENT",
1199
+ "436": "TO_OVERTHROW",
1200
+ "437": "TO_PARTICIPATE",
1201
+ "438": "TO_PERCEIVE",
1202
+ "439": "TO_PERFORM",
1203
+ "440": "TO_PERFORM_MATHS_OPERATIONS",
1204
+ "441": "TO_PERSUADE_SMB_TO_DO_SMTH",
1205
+ "442": "TO_PICKET",
1206
+ "443": "TO_PICTURE_DRAW",
1207
+ "444": "TO_PLAN_CREATIVE_AND_PHYSICAL_OBJECTS",
1208
+ "445": "TO_PLAY_GAMES",
1209
+ "446": "TO_POSSESS",
1210
+ "447": "TO_PRESS",
1211
+ "448": "TO_PRESS_AS_TOUCH",
1212
+ "449": "TO_PREVENT_SMTH",
1213
+ "450": "TO_PRINT_TEXT_PHOTO",
1214
+ "451": "TO_PROCESS_INFORMATION",
1215
+ "452": "TO_PROCESS_PHYSICAL_OBJECT",
1216
+ "453": "TO_PRODUCE_CERTAIN_SOUNDS",
1217
+ "454": "TO_PROGRAM",
1218
+ "455": "TO_PRONOUNCE",
1219
+ "456": "TO_PROPOSE",
1220
+ "457": "TO_PUNISH",
1221
+ "458": "TO_RATIFY",
1222
+ "459": "TO_REACT",
1223
+ "460": "TO_READ_READABLE",
1224
+ "461": "TO_REBEL",
1225
+ "462": "TO_RECEIVE_CALLERS",
1226
+ "463": "TO_REFLECT",
1227
+ "464": "TO_REGISTER",
1228
+ "465": "TO_REIGN_AS_TO_TAKE_PLACE",
1229
+ "466": "TO_RELEASE",
1230
+ "467": "TO_RESTORE",
1231
+ "468": "TO_REVENGE",
1232
+ "469": "TO_RUB_AND_SCRATCH",
1233
+ "470": "TO_SABOTAGE",
1234
+ "471": "TO_SCREEN",
1235
+ "472": "TO_SEDUCE",
1236
+ "473": "TO_SEEK_FIND",
1237
+ "474": "TO_SEND_TO_DELIVER",
1238
+ "475": "TO_SET",
1239
+ "476": "TO_SHARE",
1240
+ "477": "TO_SHINE",
1241
+ "478": "TO_SHOOT_PHOTO_OR_FILM",
1242
+ "479": "TO_SHOW",
1243
+ "480": "TO_SMOKE",
1244
+ "481": "TO_SOUND",
1245
+ "482": "TO_SPEND",
1246
+ "483": "TO_SPEND_INEFFECTIVELY",
1247
+ "484": "TO_SPEND_TIME",
1248
+ "485": "TO_SPOIL",
1249
+ "486": "TO_STOP_SPEAKING",
1250
+ "487": "TO_SUBSCRIBE",
1251
+ "488": "TO_SUBSTITUTE_AND_EXCHANGE",
1252
+ "489": "TO_SUMMARIZE",
1253
+ "490": "TO_SUPPORT_AND_OPPOSE",
1254
+ "491": "TO_SYMBOLIZE",
1255
+ "492": "TO_TAKE",
1256
+ "493": "TO_TAKE_FOOD_OR_MEDICINE",
1257
+ "494": "TO_TAKE_INTO_CONSIDERATION",
1258
+ "495": "TO_TAKE_PLACE_IN_NATURE",
1259
+ "496": "TO_TEASE_AND_JOKE",
1260
+ "497": "TO_TELEPHONE",
1261
+ "498": "TO_TERRORIZE",
1262
+ "499": "TO_THINK_ABOUT",
1263
+ "500": "TO_THINK_OUT",
1264
+ "501": "TO_TORTURE",
1265
+ "502": "TO_TOUCH",
1266
+ "503": "TO_TRADE",
1267
+ "504": "TO_TURN_INTO",
1268
+ "505": "TO_UNDERSTATE_TO_EXAGGERATE",
1269
+ "506": "TO_USE",
1270
+ "507": "TO_UTTER_ANIMAL_SOUNDS",
1271
+ "508": "TO_VISUALIZE",
1272
+ "509": "TO_WAIT",
1273
+ "510": "TO_WORK",
1274
+ "511": "TO_WRITE",
1275
+ "512": "TRANSPORT",
1276
+ "513": "TRANSPORT_COMMUNICATIONS",
1277
+ "514": "TRIAL",
1278
+ "515": "TRICK_MACHINATION",
1279
+ "516": "UNCERTAINTY",
1280
+ "517": "UNDERTAKING",
1281
+ "518": "UNIT_OF_INFORMATION_QUANTITY",
1282
+ "519": "UNKNOWN_SUBSTANTIVE_CLASS",
1283
+ "520": "URBAN_SPACE_AND_ROADS",
1284
+ "521": "VALUABLE",
1285
+ "522": "VERBAL_COMMUNICATION",
1286
+ "523": "VIOLENCE",
1287
+ "524": "VIRTUAL_OBJECT",
1288
+ "525": "VIRTUAL_TRANSFERENCE",
1289
+ "526": "VISUAL_CHARACTERISTICS",
1290
+ "527": "VISUAL_REPRESENTATION",
1291
+ "528": "WEAPON_AND_ITS_PART",
1292
+ "529": "WEIGHT_CHAR",
1293
+ "530": "WORLD_OUTLOOK",
1294
+ "531": "YES_NO_VERBS"
1295
+ },
1296
+ "ud_deprel": {
1297
+ "0": "acl",
1298
+ "1": "acl:cleft",
1299
+ "2": "acl:relcl",
1300
+ "3": "advcl",
1301
+ "4": "advcl:relcl",
1302
+ "5": "advmod",
1303
+ "6": "amod",
1304
+ "7": "appos",
1305
+ "8": "aux",
1306
+ "9": "aux:pass",
1307
+ "10": "case",
1308
+ "11": "cc",
1309
+ "12": "ccomp",
1310
+ "13": "compound",
1311
+ "14": "compound:prt",
1312
+ "15": "conj",
1313
+ "16": "cop",
1314
+ "17": "csubj",
1315
+ "18": "dep",
1316
+ "19": "det",
1317
+ "20": "det:predet",
1318
+ "21": "discourse",
1319
+ "22": "fixed",
1320
+ "23": "flat",
1321
+ "24": "flat:foreign",
1322
+ "25": "flat:name",
1323
+ "26": "flatname",
1324
+ "27": "goeswith",
1325
+ "28": "iobj",
1326
+ "29": "list",
1327
+ "30": "mark",
1328
+ "31": "nmod",
1329
+ "32": "nmod:npmod",
1330
+ "33": "nmod:poss",
1331
+ "34": "nmod:tmod",
1332
+ "35": "nsubj",
1333
+ "36": "nsubj:outer",
1334
+ "37": "nsubj:pass",
1335
+ "38": "nummod",
1336
+ "39": "nummod:gov",
1337
+ "40": "obj",
1338
+ "41": "obl",
1339
+ "42": "obl:npmod",
1340
+ "43": "obl:tmod",
1341
+ "44": "orphan",
1342
+ "45": "parataxis",
1343
+ "46": "punct",
1344
+ "47": "root",
1345
+ "48": "vocative",
1346
+ "49": "xcomp"
1347
+ }
1348
+ }
1349
+ }
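The `lemma` vocabulary above encodes lemmatization as string-edit rules of the form `cut_prefix=P|cut_suffix=S|append_suffix=A`: strip `P` characters from the front of the word form, `S` from the end, then append `A`. A minimal sketch of how such a rule could be applied — the model's actual decoding lives in the repo's `utils.py`, so `apply_lemma_rule` below is a hypothetical illustration:

```python
def apply_lemma_rule(form: str, rule: str) -> str:
    """Apply a 'cut_prefix=P|cut_suffix=S|append_suffix=A' lemmatization rule."""
    parts = dict(item.split("=", 1) for item in rule.split("|"))
    cut_prefix = int(parts["cut_prefix"])
    cut_suffix = int(parts["cut_suffix"])
    stem = form[cut_prefix:]          # drop P leading characters
    if cut_suffix:
        stem = stem[:-cut_suffix]     # drop S trailing characters
    return stem + parts["append_suffix"]

apply_lemma_rule("taking", "cut_prefix=0|cut_suffix=3|append_suffix=e")  # "take"
```

For example, rule 70 (`cut_prefix=0|cut_suffix=3|append_suffix=e`) maps "taking" to "take", which is consistent with the inflection patterns the rule inventory covers.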
configuration.py ADDED
@@ -0,0 +1,48 @@
+ from transformers import PretrainedConfig
+
+
+ class CobaldParserConfig(PretrainedConfig):
+     model_type = "cobald_parser"
+
+     def __init__(
+         self,
+         encoder_model_name: str = None,
+         null_classifier_hidden_size: int = 0,
+         lemma_classifier_hidden_size: int = 0,
+         morphology_classifier_hidden_size: int = 0,
+         dependency_classifier_hidden_size: int = 0,
+         misc_classifier_hidden_size: int = 0,
+         deepslot_classifier_hidden_size: int = 0,
+         semclass_classifier_hidden_size: int = 0,
+         activation: str = 'relu',
+         dropout: float = 0.1,
+         consecutive_null_limit: int = 0,
+         vocabulary: dict[str, dict[int, str]] = {},
+         **kwargs
+     ):
+         self.encoder_model_name = encoder_model_name
+         self.null_classifier_hidden_size = null_classifier_hidden_size
+         self.consecutive_null_limit = consecutive_null_limit
+         self.lemma_classifier_hidden_size = lemma_classifier_hidden_size
+         self.morphology_classifier_hidden_size = morphology_classifier_hidden_size
+         self.dependency_classifier_hidden_size = dependency_classifier_hidden_size
+         self.misc_classifier_hidden_size = misc_classifier_hidden_size
+         self.deepslot_classifier_hidden_size = deepslot_classifier_hidden_size
+         self.semclass_classifier_hidden_size = semclass_classifier_hidden_size
+         self.activation = activation
+         self.dropout = dropout
+         # The serialized config stores mappings with string keys,
+         # e.g. {"0": "acl", "1": "conj"}, so we have to convert them to int.
+         self.vocabulary = {
+             column: {int(k): v for k, v in labels.items()}
+             for column, labels in vocabulary.items()
+         }
+         # HACK: Tell the HF hub about the custom pipeline.
+         # It should not be hardcoded like this, but other workarounds are worse.
+         self.custom_pipelines = {
+             "cobald-parsing": {
+                 "impl": "pipeline.ConlluTokenClassificationPipeline",
+                 "pt": "CobaldParser",
+             }
+         }
+         super().__init__(**kwargs)
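JSON object keys are always strings, so the id-to-label maps round-trip through `config.json` as `{"0": "acl", ...}`, and the constructor restores integer keys before use. The conversion can be sketched standalone (`decode_vocabulary` is a hypothetical helper mirroring the dict comprehension in `__init__`):

```python
def decode_vocabulary(raw: dict) -> dict:
    """Convert JSON-serialized label maps ({"0": "acl", ...}) to int-keyed dicts."""
    return {
        column: {int(k): v for k, v in labels.items()}
        for column, labels in raw.items()
    }

vocab = decode_vocabulary({"ud_deprel": {"0": "acl", "47": "root"}})
vocab["ud_deprel"][47]  # "root"
```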
dependency_classifier.py ADDED
@@ -0,0 +1,299 @@
+ from typing import override
+ from copy import deepcopy
+
+ import numpy as np
+
+ import torch
+ from torch import nn
+ from torch import Tensor, FloatTensor, BoolTensor, LongTensor
+ import torch.nn.functional as F
+
+ from transformers.activations import ACT2FN
+
+ from cobald_parser.bilinear_matrix_attention import BilinearMatrixAttention
+ from cobald_parser.chu_liu_edmonds import decode_mst
+ from cobald_parser.utils import pairwise_mask, replace_masked_values
+
+
+ class DependencyHeadBase(nn.Module):
+     """
+     Base class for scoring arcs and relations between tokens in a dependency tree/graph.
+     """
+
+     def __init__(self, hidden_size: int, n_rels: int):
+         super().__init__()
+
+         self.arc_attention = BilinearMatrixAttention(
+             hidden_size,
+             hidden_size,
+             use_input_biases=True,
+             n_labels=1
+         )
+         self.rel_attention = BilinearMatrixAttention(
+             hidden_size,
+             hidden_size,
+             use_input_biases=True,
+             n_labels=n_rels
+         )
+
+     def forward(
+         self,
+         h_arc_head: Tensor,    # [batch_size, seq_len, hidden_size]
+         h_arc_dep: Tensor,     # ...
+         h_rel_head: Tensor,    # ...
+         h_rel_dep: Tensor,     # ...
+         gold_arcs: LongTensor, # [n_arcs, 4]
+         mask: BoolTensor       # [batch_size, seq_len]
+     ) -> dict[str, Tensor]:
+
+         # Score arcs.
+         # s_arc[:, i, j] = score of edge j -> i.
+         s_arc = self.arc_attention(h_arc_head, h_arc_dep)
+         # Mask undesirable positions (padding, nulls, etc.) with a large negative value.
+         replace_masked_values(s_arc, pairwise_mask(mask), replace_with=-1e8)
+         # Score arcs' relations.
+         # [batch_size, seq_len, seq_len, num_labels]
+         s_rel = self.rel_attention(h_rel_head, h_rel_dep).permute(0, 2, 3, 1)
+
+         # Calculate loss.
+         loss = 0.0
+         if gold_arcs is not None:
+             loss += self.calc_arc_loss(s_arc, gold_arcs)
+             loss += self.calc_rel_loss(s_rel, gold_arcs)
+
+         # Predict arcs based on the scores.
+         # [batch_size, seq_len, seq_len]
+         pred_arcs_3d = self.predict_arcs(s_arc, mask)
+         # [batch_size, seq_len, seq_len]
+         pred_rels_3d = self.predict_rels(s_rel)
+         # [n_pred_arcs, 4]
+         preds_combined = self.combine_arcs_rels(pred_arcs_3d, pred_rels_3d)
+         return {
+             'preds': preds_combined,
+             'loss': loss
+         }
+
+     @staticmethod
+     def calc_arc_loss(
+         s_arc: Tensor,        # [batch_size, seq_len, seq_len]
+         gold_arcs: LongTensor # [n_arcs, 4]
+     ) -> Tensor:
+         """Calculate arc loss."""
+         raise NotImplementedError
+
+     @staticmethod
+     def calc_rel_loss(
+         s_rel: Tensor,        # [batch_size, seq_len, seq_len, num_labels]
+         gold_arcs: LongTensor # [n_arcs, 4]
+     ) -> Tensor:
+         batch_idxs, arcs_from, arcs_to, rels = gold_arcs.T
+         return F.cross_entropy(s_rel[batch_idxs, arcs_from, arcs_to], rels)
+
+     def predict_arcs(
+         self,
+         s_arc: Tensor,   # [batch_size, seq_len, seq_len]
+         mask: BoolTensor # [batch_size, seq_len]
+     ) -> LongTensor:
+         """Predict arcs from scores."""
+         raise NotImplementedError
+
+     def predict_rels(
+         self,
+         s_rel: FloatTensor
+     ) -> LongTensor:
+         return s_rel.argmax(dim=-1).long()
+
+     @staticmethod
+     def combine_arcs_rels(
+         pred_arcs: LongTensor,
+         pred_rels: LongTensor
+     ) -> LongTensor:
+         """Select relations towards predicted arcs."""
+         assert pred_arcs.shape == pred_rels.shape
+         # Get indices where arcs exist.
+         batch_idxs, from_idxs, to_idxs = pred_arcs.nonzero(as_tuple=True)
+         # Get the corresponding relation types.
+         rel_types = pred_rels[batch_idxs, from_idxs, to_idxs]
+         # Stack as [batch_idx, from_idx, to_idx, rel_type].
+         return torch.stack([batch_idxs, from_idxs, to_idxs, rel_types], dim=1)
+
+
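`combine_arcs_rels` converts the dense `[batch, seq, seq]` arc and relation predictions into a sparse list of `(batch_idx, from_idx, to_idx, rel_type)` rows. A torch-free sketch of the same idea on nested lists (hypothetical helper, for illustration only):

```python
def combine_arcs_rels(pred_arcs, pred_rels):
    """pred_arcs[b][i][j] == 1 marks an arc; pred_rels[b][i][j] is its relation id.
    Returns rows of (batch_idx, from_idx, to_idx, rel_id), like the torch version."""
    rows = []
    for b, arc_matrix in enumerate(pred_arcs):
        for i, row in enumerate(arc_matrix):
            for j, has_arc in enumerate(row):
                if has_arc:
                    rows.append((b, i, j, pred_rels[b][i][j]))
    return rows

arcs = [[[0, 1], [1, 0]]]      # one sentence: token 0 headed by 1, token 1 by 0
rels = [[[0, 5], [15, 0]]]     # relation ids for each (from, to) pair
combine_arcs_rels(arcs, rels)  # [(0, 0, 1, 5), (0, 1, 0, 15)]
```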
122
+ class DependencyHead(DependencyHeadBase):
123
+ """
124
+ Basic UD syntax specialization that predicts single edge for each token.
125
+ """
126
+
127
+ @override
128
+ def predict_arcs(
129
+ self,
130
+ s_arc: Tensor, # [batch_size, seq_len, seq_len]
131
+ mask: BoolTensor # [batch_size, seq_len]
132
+ ) -> Tensor:
133
+
134
+ if self.training:
135
+ # During training, use fast greedy decoding.
136
+ # - [batch_size, seq_len]
137
+ pred_arcs_seq = s_arc.argmax(dim=-1)
138
+ else:
139
+ # During inference, diligently decode Maximum Spanning Tree.
140
+ pred_arcs_seq = self._mst_decode(s_arc, mask)
141
+ # FIXME
142
+ # pred_arcs_seq = s_arc.argmax(dim=-1)
143
+
144
+ # Upscale arcs sequence of shape [batch_size, seq_len]
145
+ # to matrix of shape [batch_size, seq_len, seq_len].
146
+ pred_arcs = F.one_hot(pred_arcs_seq, num_classes=pred_arcs_seq.size(1)).long()
147
+ return pred_arcs
+
+     def _mst_decode(
+         self,
+         s_arc: Tensor,  # [batch_size, seq_len, seq_len]
+         mask: Tensor    # [batch_size, seq_len]
+     ) -> Tensor:
+         batch_size = s_arc.size(0)
+         device = s_arc.device
+         s_arc = s_arc.cpu()
+
+         # Convert scores to probabilities, as `decode_mst` expects non-negative values.
+         arc_probs = nn.functional.softmax(s_arc, dim=-1)
+         # Transpose arcs, because decode_mst defines the 'energy' matrix as
+         #   energy[i,j] = "score that `i` is the head of `j`",
+         # whereas
+         #   arc_probs[i,j] = "probability that `j` is the head of `i`".
+         arc_probs = arc_probs.transpose(1, 2)
+
+         # `decode_mst` knows nothing about UD and ROOT, so we have to manually
+         # zero out the probabilities of arcs leading to ROOT to make sure ROOT
+         # is a source node of the graph.
+
+         # Decode ROOT positions from the diagonals.
+         # shape: [batch_size]
+         root_idxs = arc_probs.diagonal(dim1=1, dim2=2).argmax(dim=-1)
+         # Zero out arcs leading to ROOTs.
+         arc_probs[torch.arange(batch_size), :, root_idxs] = 0.0
+
+         pred_arcs = []
+         for sample_idx in range(batch_size):
+             energy = arc_probs[sample_idx]
+             lengths = mask[sample_idx].sum()
+             # has_labels=False because relation labels are decoded separately later.
+             heads, _ = decode_mst(energy, lengths, has_labels=False)
+             # Some nodes may be isolated. Pick heads greedily in this case.
+             heads[heads <= 0] = s_arc[sample_idx].argmax(dim=-1)[heads <= 0]
+             pred_arcs.append(heads)
+
+         # shape: [batch_size, seq_len]
+         pred_arcs = torch.from_numpy(np.stack(pred_arcs)).long().to(device)
+         return pred_arcs
+
+     @staticmethod
+     @override
+     def calc_arc_loss(
+         s_arc: Tensor,         # [batch_size, seq_len, seq_len]
+         gold_arcs: LongTensor  # [n_arcs, 4]
+     ) -> Tensor:
+         batch_idxs, from_idxs, to_idxs, _ = gold_arcs.T
+         return F.cross_entropy(s_arc[batch_idxs, from_idxs], to_idxs)
+
+
+ class MultiDependencyHead(DependencyHeadBase):
+     """
+     Enhanced UD syntax specialization that predicts multiple edges for each token.
+     """
+
+     @override
+     def predict_arcs(
+         self,
+         s_arc: Tensor,    # [batch_size, seq_len, seq_len]
+         mask: BoolTensor  # [batch_size, seq_len]
+     ) -> Tensor:
+         # Convert scores to probabilities.
+         arc_probs = torch.sigmoid(s_arc)
+         # Keep confident arcs (with prob > 0.5).
+         return arc_probs.round().long()
+
+     @staticmethod
+     @override
+     def calc_arc_loss(
+         s_arc: Tensor,         # [batch_size, seq_len, seq_len]
+         gold_arcs: LongTensor  # [n_arcs, 4]
+     ) -> Tensor:
+         batch_idxs, from_idxs, to_idxs, _ = gold_arcs.T
+         # Gold arcs as a matrix, where matrix[i, arc_from, arc_to] = 1.0 if the arc is present.
+         gold_arcs_matrix = torch.zeros_like(s_arc)
+         gold_arcs_matrix[batch_idxs, from_idxs, to_idxs] = 1.0
+         # Padded arcs' logits are large negative values that do not contribute to the loss.
+         return F.binary_cross_entropy_with_logits(s_arc, gold_arcs_matrix)
+
+
+ class DependencyClassifier(nn.Module):
+     """
+     Dozat and Manning's biaffine dependency classifier.
+     """
+
+     def __init__(
+         self,
+         input_size: int,
+         hidden_size: int,
+         n_rels_ud: int,
+         n_rels_eud: int,
+         activation: str,
+         dropout: float,
+     ):
+         super().__init__()
+
+         self.arc_dep_mlp = nn.Sequential(
+             nn.Dropout(dropout),
+             nn.Linear(input_size, hidden_size),
+             ACT2FN[activation],
+             nn.Dropout(dropout)
+         )
+         # All four MLPs share the same architecture.
+         self.arc_head_mlp = deepcopy(self.arc_dep_mlp)
+         self.rel_dep_mlp = deepcopy(self.arc_dep_mlp)
+         self.rel_head_mlp = deepcopy(self.arc_dep_mlp)
+
+         self.dependency_head_ud = DependencyHead(hidden_size, n_rels_ud)
+         self.dependency_head_eud = MultiDependencyHead(hidden_size, n_rels_eud)
+
+     def forward(
+         self,
+         embeddings: Tensor,  # [batch_size, seq_len, embedding_size]
+         gold_ud: Tensor,     # [n_ud_arcs, 4]
+         gold_eud: Tensor,    # [n_eud_arcs, 4]
+         mask_ud: Tensor,     # [batch_size, seq_len]
+         mask_eud: Tensor     # [batch_size, seq_len]
+     ) -> dict[str, Tensor]:
+         # - [batch_size, seq_len, hidden_size]
+         h_arc_head = self.arc_head_mlp(embeddings)
+         h_arc_dep = self.arc_dep_mlp(embeddings)
+         h_rel_head = self.rel_head_mlp(embeddings)
+         h_rel_dep = self.rel_dep_mlp(embeddings)
+
+         # Share the h vectors between the dependency and multi-dependency heads.
+         output_ud = self.dependency_head_ud(
+             h_arc_head,
+             h_arc_dep,
+             h_rel_head,
+             h_rel_dep,
+             gold_arcs=gold_ud,
+             mask=mask_ud
+         )
+         output_eud = self.dependency_head_eud(
+             h_arc_head,
+             h_arc_dep,
+             h_rel_head,
+             h_rel_dep,
+             gold_arcs=gold_eud,
+             mask=mask_eud
+         )
+
+         return {
+             'preds_ud': output_ud["preds"],
+             'preds_eud': output_eud["preds"],
+             'loss_ud': output_ud["loss"],
+             'loss_eud': output_eud["loss"]
+         }
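
The two heads differ only in how the score matrix is binarized: `DependencyHead` assigns exactly one head per token and upscales the head sequence via `F.one_hot`, while `MultiDependencyHead` thresholds sigmoid probabilities. A minimal dependency-free sketch of both steps, with toy scores standing in for real model output:

```python
def heads_to_adjacency(head_seq):
    """Upscale a head sequence [seq_len] into an adjacency matrix
    [seq_len, seq_len], mirroring the F.one_hot call in DependencyHead."""
    n = len(head_seq)
    matrix = [[0] * n for _ in range(n)]
    for dependent, head in enumerate(head_seq):
        matrix[dependent][head] = 1
    return matrix


def multi_heads_to_adjacency(probs, threshold=0.5):
    """Enhanced-UD style decoding: keep every arc whose probability clears
    the threshold, so a token may receive several heads."""
    return [[1 if p > threshold else 0 for p in row] for row in probs]


# Basic UD: token 0 is headed by token 1; token 1 points to itself (ROOT convention).
print(heads_to_adjacency([1, 1]))                          # [[0, 1], [0, 1]]
# Enhanced UD: token 0 keeps two heads.
print(multi_heads_to_adjacency([[0.9, 0.7], [0.2, 0.8]]))  # [[1, 1], [0, 1]]
```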
encoder.py ADDED
@@ -0,0 +1,109 @@
+ import torch
+ from torch import nn
+ from torch import Tensor, LongTensor
+
+ from transformers import AutoTokenizer, AutoModel
+
+
+ class WordTransformerEncoder(nn.Module):
+     """
+     Encodes sentences into word-level embeddings using a pretrained MLM transformer.
+     """
+     def __init__(self, model_name: str):
+         super().__init__()
+         self.tokenizer = AutoTokenizer.from_pretrained(model_name)
+         # A model like BERT, RoBERTa, etc.
+         self.model = AutoModel.from_pretrained(model_name)
+
+     def forward(self, words: list[list[str]]) -> Tensor:
+         """
+         Build word embeddings.
+
+         - Tokenizes input sentences into subtokens.
+         - Passes the subtokens through the pretrained transformer model.
+         - Aggregates subtoken embeddings into word embeddings using mean pooling.
+         """
+         batch_size = len(words)
+
+         # BPE tokenization: split words into subtokens, e.g. ['kidding'] -> ['▁ki', 'dding'].
+         subtokens = self.tokenizer(
+             words,
+             padding=True,
+             truncation=True,
+             is_split_into_words=True,
+             return_tensors='pt'
+         )
+         subtokens = subtokens.to(self.model.device)
+         # Index words from 1 and reserve 0 for special subtokens (e.g. <s>, </s>, padding, etc.).
+         # This numbering makes the subsequent aggregation easier.
+         words_ids = torch.stack([
+             torch.tensor(
+                 [word_id + 1 if word_id is not None else 0 for word_id in subtokens.word_ids(batch_idx)],
+                 dtype=torch.long,
+                 device=self.model.device
+             )
+             for batch_idx in range(batch_size)
+         ])
+
+         # Run the model and extract subtoken embeddings from the last layer.
+         subtokens_embeddings = self.model(**subtokens).last_hidden_state
+
+         # Aggregate subtoken embeddings into word embeddings.
+         # [batch_size, n_words, embedding_size]
+         words_embeddings = self._aggregate_subtokens_embeddings(subtokens_embeddings, words_ids)
+         return words_embeddings
+
+     def _aggregate_subtokens_embeddings(
+         self,
+         subtokens_embeddings: Tensor,  # [batch_size, n_subtokens, embedding_size]
+         words_ids: LongTensor          # [batch_size, n_subtokens]
+     ) -> Tensor:
+         """
+         Aggregate subtoken embeddings into word embeddings by averaging.
+
+         This method ensures that multiple subtokens corresponding to a single word are combined
+         into a single embedding.
+         """
+         batch_size, n_subtokens, embedding_size = subtokens_embeddings.shape
+         # The number of words in a sentence plus an "auxiliary" word at the beginning.
+         n_words = torch.max(words_ids) + 1
+
+         words_embeddings = torch.zeros(
+             size=(batch_size, n_words, embedding_size),
+             dtype=subtokens_embeddings.dtype,
+             device=self.model.device
+         )
+         words_ids_expanded = words_ids.unsqueeze(-1).expand(batch_size, n_subtokens, embedding_size)
+
+         # Use scatter_reduce_ to average the embeddings of subtokens corresponding to the same word.
+         # All padding and special subtokens are aggregated into the "auxiliary" first embedding,
+         # namely words_embeddings[:, 0, :].
+         words_embeddings.scatter_reduce_(
+             dim=1,
+             index=words_ids_expanded,
+             src=subtokens_embeddings,
+             reduce="mean",
+             include_self=False
+         )
+         # Now remove the auxiliary word at the beginning.
+         words_embeddings = words_embeddings[:, 1:, :]
+         return words_embeddings
+
+     def get_embedding_size(self) -> int:
+         """Return the embedding size of the transformer model, e.g. 768 for BERT."""
+         return self.model.config.hidden_size
+
+     def get_embeddings_layer(self):
+         """Return the embeddings layer of the model."""
+         return self.model.embeddings
+
+     def get_transformer_layers(self) -> list[nn.Module]:
+         """
+         Return a flat list of all transformer *block* layers, excluding embeddings, poolers, etc.
+         """
+         layers = []
+         for sub in self.model.modules():
+             # Find all ModuleLists (these always hold the actual block layers).
+             if isinstance(sub, nn.ModuleList):
+                 layers.extend(list(sub))
+         return layers
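
The `scatter_reduce_` call averages all subtoken vectors that share a word id, with id 0 acting as a sink for special tokens. A dependency-free sketch of the same aggregation logic (toy 1-dimensional "embeddings" stand in for real transformer output):

```python
def mean_pool_subtokens(subtoken_embeddings, word_ids):
    """Average subtoken vectors that share a word id. Word ids start at 1;
    id 0 marks special tokens (<s>, </s>, padding) and is dropped from the
    result, just like words_embeddings[:, 1:, :] above."""
    n_words = max(word_ids)
    dim = len(subtoken_embeddings[0])
    sums = [[0.0] * dim for _ in range(n_words + 1)]
    counts = [0] * (n_words + 1)
    for vec, wid in zip(subtoken_embeddings, word_ids):
        counts[wid] += 1
        for d in range(dim):
            sums[wid][d] += vec[d]
    # Skip the auxiliary slot 0 when emitting word embeddings.
    return [[s / counts[w] for s in sums[w]] for w in range(1, n_words + 1) if counts[w]]


# 'kidding' split into two subtokens, surrounded by special tokens:
embs = [[0.0], [1.0], [3.0], [0.0]]   # <s>, '▁ki', 'dding', </s>
print(mean_pool_subtokens(embs, [0, 1, 1, 0]))  # [[2.0]]
```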
mlp_classifier.py ADDED
@@ -0,0 +1,46 @@
+ import torch
+ from torch import nn
+ from torch import Tensor, LongTensor
+
+ from transformers.activations import ACT2FN
+
+
+ class MlpClassifier(nn.Module):
+     """Simple feed-forward multilayer perceptron classifier."""
+
+     def __init__(
+         self,
+         input_size: int,
+         hidden_size: int,
+         n_classes: int,
+         activation: str,
+         dropout: float,
+         class_weights: list[float] = None,
+     ):
+         super().__init__()
+
+         self.n_classes = n_classes
+         self.classifier = nn.Sequential(
+             nn.Dropout(dropout),
+             nn.Linear(input_size, hidden_size),
+             ACT2FN[activation],
+             nn.Dropout(dropout),
+             nn.Linear(hidden_size, n_classes)
+         )
+         if class_weights is not None:
+             # Class weights must be floating-point for CrossEntropyLoss.
+             class_weights = torch.tensor(class_weights, dtype=torch.float)
+         self.cross_entropy = nn.CrossEntropyLoss(weight=class_weights)
+
+     def forward(self, embeddings: Tensor, labels: LongTensor = None) -> dict:
+         logits = self.classifier(embeddings)
+         # Calculate loss.
+         loss = 0.0
+         if labels is not None:
+             # Reshape tensors to match the expected dimensions.
+             loss = self.cross_entropy(
+                 logits.view(-1, self.n_classes),
+                 labels.view(-1)
+             )
+         # Predictions.
+         preds = logits.argmax(dim=-1)
+         return {'preds': preds, 'loss': loss}
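
Per-class weights rescale each example's cross-entropy by the weight of its gold class (which is why they must be floats, not integers). A plain-Python sketch of the per-example quantity that `nn.CrossEntropyLoss(weight=...)` averages over a batch:

```python
import math

def weighted_cross_entropy(logits, label, class_weights):
    """Single-example cross-entropy with per-class weights:
    -w[label] * log_softmax(logits)[label]."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    log_prob = logits[label] - log_z
    return -class_weights[label] * log_prob


# Upweighting a rare class scales its loss contribution proportionally.
logits = [2.0, 0.5, 0.1]
base = weighted_cross_entropy(logits, 1, [1.0, 1.0, 1.0])
boosted = weighted_cross_entropy(logits, 1, [1.0, 3.0, 1.0])
print(round(boosted / base, 6))  # 3.0
```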
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3be34b3952a70dcaae940ac8770a5efe432e93deef5120277513647d5a0cbfd5
+ size 1141314800
modeling_parser.py ADDED
@@ -0,0 +1,190 @@
+ from torch import nn
+ from torch import LongTensor
+ from transformers import PreTrainedModel
+ from transformers.modeling_outputs import ModelOutput
+ from dataclasses import dataclass
+
+ from .configuration import CobaldParserConfig
+ from .encoder import WordTransformerEncoder
+ from .mlp_classifier import MlpClassifier
+ from .dependency_classifier import DependencyClassifier
+ from .utils import (
+     build_padding_mask,
+     build_null_mask,
+     prepend_cls,
+     remove_nulls,
+     add_nulls
+ )
+
+
+ @dataclass
+ class CobaldParserOutput(ModelOutput):
+     """
+     Output type for CobaldParser.
+     """
+     loss: float = None
+     words: list = None
+     counting_mask: LongTensor = None
+     lemma_rules: LongTensor = None
+     joint_feats: LongTensor = None
+     deps_ud: LongTensor = None
+     deps_eud: LongTensor = None
+     miscs: LongTensor = None
+     deepslots: LongTensor = None
+     semclasses: LongTensor = None
+
+
+ class CobaldParser(PreTrainedModel):
+     """Morpho-Syntax-Semantic Parser."""
+
+     config_class = CobaldParserConfig
+
+     def __init__(self, config: CobaldParserConfig):
+         super().__init__(config)
+
+         self.encoder = WordTransformerEncoder(
+             model_name=config.encoder_model_name
+         )
+         embedding_size = self.encoder.get_embedding_size()
+
+         self.classifiers = nn.ModuleDict()
+         self.classifiers["null"] = MlpClassifier(
+             input_size=embedding_size,
+             hidden_size=config.null_classifier_hidden_size,
+             n_classes=config.consecutive_null_limit + 1,
+             activation=config.activation,
+             dropout=config.dropout
+         )
+         if "lemma_rule" in config.vocabulary:
+             self.classifiers["lemma_rule"] = MlpClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.lemma_classifier_hidden_size,
+                 n_classes=len(config.vocabulary["lemma_rule"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+         if "joint_feats" in config.vocabulary:
+             self.classifiers["joint_feats"] = MlpClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.morphology_classifier_hidden_size,
+                 n_classes=len(config.vocabulary["joint_feats"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+         if "ud_deprel" in config.vocabulary or "eud_deprel" in config.vocabulary:
+             self.classifiers["syntax"] = DependencyClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.dependency_classifier_hidden_size,
+                 n_rels_ud=len(config.vocabulary["ud_deprel"]),
+                 n_rels_eud=len(config.vocabulary["eud_deprel"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+         if "misc" in config.vocabulary:
+             self.classifiers["misc"] = MlpClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.misc_classifier_hidden_size,
+                 n_classes=len(config.vocabulary["misc"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+         if "deepslot" in config.vocabulary:
+             self.classifiers["deepslot"] = MlpClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.deepslot_classifier_hidden_size,
+                 n_classes=len(config.vocabulary["deepslot"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+         if "semclass" in config.vocabulary:
+             self.classifiers["semclass"] = MlpClassifier(
+                 input_size=embedding_size,
+                 hidden_size=config.semclass_classifier_hidden_size,
+                 n_classes=len(config.vocabulary["semclass"]),
+                 activation=config.activation,
+                 dropout=config.dropout
+             )
+
+     def forward(
+         self,
+         words: list[list[str]],
+         counting_masks: LongTensor = None,
+         lemma_rules: LongTensor = None,
+         joint_feats: LongTensor = None,
+         deps_ud: LongTensor = None,
+         deps_eud: LongTensor = None,
+         miscs: LongTensor = None,
+         deepslots: LongTensor = None,
+         semclasses: LongTensor = None,
+         sent_ids: list[str] = None,
+         texts: list[str] = None,
+         inference_mode: bool = False
+     ) -> CobaldParserOutput:
+         result = {}
+
+         # The extra [CLS] token accounts for the case when #NULL is the first token in a sentence.
+         words_with_cls = prepend_cls(words)
+         words_without_nulls = remove_nulls(words_with_cls)
+         # Embeddings of words without nulls.
+         embeddings_without_nulls = self.encoder(words_without_nulls)
+         # Predict nulls.
+         null_output = self.classifiers["null"](embeddings_without_nulls, counting_masks)
+         result["counting_mask"] = null_output['preds']
+         result["loss"] = null_output["loss"]
+
+         # "Teacher forcing": during training, pass the original words (with gold nulls)
+         # to the classification heads, so that they are trained on correct sentences.
+         if inference_mode:
+             # Restore the predicted nulls in the original sentences.
+             result["words"] = add_nulls(words, null_output["preds"])
+         else:
+             result["words"] = words
+
+         # Encode words with nulls.
+         # [batch_size, seq_len, embedding_size]
+         embeddings = self.encoder(result["words"])
+
+         # Predict lemmas and morphological features.
+         if "lemma_rule" in self.classifiers:
+             lemma_output = self.classifiers["lemma_rule"](embeddings, lemma_rules)
+             result["lemma_rules"] = lemma_output['preds']
+             result["loss"] += lemma_output['loss']
+
+         if "joint_feats" in self.classifiers:
+             joint_feats_output = self.classifiers["joint_feats"](embeddings, joint_feats)
+             result["joint_feats"] = joint_feats_output['preds']
+             result["loss"] += joint_feats_output['loss']
+
+         # Predict syntax.
+         if "syntax" in self.classifiers:
+             padding_mask = build_padding_mask(result["words"], self.device)
+             null_mask = build_null_mask(result["words"], self.device)
+             deps_output = self.classifiers["syntax"](
+                 embeddings,
+                 deps_ud,
+                 deps_eud,
+                 mask_ud=(padding_mask & ~null_mask),
+                 mask_eud=padding_mask
+             )
+             result["deps_ud"] = deps_output['preds_ud']
+             result["deps_eud"] = deps_output['preds_eud']
+             result["loss"] += deps_output['loss_ud'] + deps_output['loss_eud']
+
+         # Predict miscellaneous features.
+         if "misc" in self.classifiers:
+             misc_output = self.classifiers["misc"](embeddings, miscs)
+             result["miscs"] = misc_output['preds']
+             result["loss"] += misc_output['loss']
+
+         # Predict semantics.
+         if "deepslot" in self.classifiers:
+             deepslot_output = self.classifiers["deepslot"](embeddings, deepslots)
+             result["deepslots"] = deepslot_output['preds']
+             result["loss"] += deepslot_output['loss']
+
+         if "semclass" in self.classifiers:
+             semclass_output = self.classifiers["semclass"](embeddings, semclasses)
+             result["semclasses"] = semclass_output['preds']
+             result["loss"] += semclass_output['loss']
+
+         return CobaldParserOutput(**result)
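
The forward pass is a two-stage pipeline: strip `#NULL` tokens, predict where they belong via the counting mask, reinsert them, then run the remaining heads on the restored sentence. A simplified dependency-free sketch of that control flow (it omits the `[CLS]` prepending and batching; `predict_counting_mask` and `tag` are hypothetical stand-in callables, not part of the model):

```python
def parse_pipeline(words, predict_counting_mask, tag):
    """Sketch of the inference path above: remove nulls, predict a counting
    mask over the null-free sentence, restore nulls, then tag every token."""
    stripped = [w for w in words if w != "#NULL"]
    counting_mask = predict_counting_mask(stripped)
    restored = []
    for word, n_nulls in zip(stripped, counting_mask):
        restored.append(word)
        restored.extend(["#NULL"] * n_nulls)
    return restored, [tag(w) for w in restored]


# Stub predictor that inserts one null after 'going', and a trivial tagger.
predict = lambda ws: [1 if w == "going" else 0 for w in ws]
tag = lambda w: "NULL" if w == "#NULL" else "TOKEN"
print(parse_pipeline(["I", "am", "going", "home"], predict, tag))
```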
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f1e97ee035fe84c43a07fe6a5a6d4ed2f7622633ddf66e7aa4bd0e51055e660
+ size 5496
utils.py ADDED
@@ -0,0 +1,66 @@
+ import torch
+ from torch import Tensor
+
+
+ def pad_sequences(sequences: list[Tensor], padding_value: int) -> Tensor:
+     """
+     Stack 1d tensors (sequences) into a single 2d tensor so that each sequence is padded on the
+     right.
+     """
+     return torch.nn.utils.rnn.pad_sequence(sequences, padding_value=padding_value, batch_first=True)
+
+
+ def _build_condition_mask(sentences: list[list[str]], condition_fn: callable, device) -> Tensor:
+     masks = [
+         torch.tensor([condition_fn(word) for word in sentence], dtype=torch.bool, device=device)
+         for sentence in sentences
+     ]
+     return pad_sequences(masks, padding_value=False)
+
+ def build_padding_mask(sentences: list[list[str]], device) -> Tensor:
+     return _build_condition_mask(sentences, condition_fn=lambda word: True, device=device)
+
+ def build_null_mask(sentences: list[list[str]], device) -> Tensor:
+     return _build_condition_mask(sentences, condition_fn=lambda word: word == "#NULL", device=device)
+
+
+ def pairwise_mask(masks1d: Tensor) -> Tensor:
+     """
+     Calculate the outer product of a mask, i.e. masks2d[:, i, j] = masks1d[:, i] & masks1d[:, j].
+     """
+     return masks1d[:, None, :] & masks1d[:, :, None]
+
+
+ # Credits: https://docs.allennlp.org/main/api/nn/util/#replace_masked_values
+ def replace_masked_values(tensor: Tensor, mask: Tensor, replace_with: float):
+     """
+     Replace all masked values in the tensor with `replace_with` (in place).
+     """
+     assert tensor.dim() == mask.dim(), f"tensor.dim() of {tensor.dim()} != mask.dim() of {mask.dim()}"
+     tensor.masked_fill_(~mask, replace_with)
+
+
+ def prepend_cls(sentences: list[list[str]]) -> list[list[str]]:
+     """
+     Return a copy of sentences with a [CLS] token prepended.
+     """
+     return [["[CLS]", *sentence] for sentence in sentences]
+
+ def remove_nulls(sentences: list[list[str]]) -> list[list[str]]:
+     """
+     Return a copy of sentences with nulls removed.
+     """
+     return [[word for word in sentence if word != "#NULL"] for sentence in sentences]
+
+ def add_nulls(sentences: list[list[str]], counting_masks) -> list[list[str]]:
+     """
+     Return a copy of sentences with nulls restored according to the counting masks.
+     """
+     sentences_with_nulls = []
+     for sentence, counting_mask in zip(sentences, counting_masks):
+         sentence_with_nulls = []
+         for word, n_nulls_to_insert in zip(sentence, counting_mask):
+             sentence_with_nulls.append(word)
+             sentence_with_nulls.extend(["#NULL"] * n_nulls_to_insert)
+         sentences_with_nulls.append(sentence_with_nulls)
+     return sentences_with_nulls
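
`pairwise_mask` turns a per-token validity mask into a per-arc one: an arc (i, j) is admissible only when both endpoints are real tokens. A minimal dependency-free sketch of that outer product:

```python
def pairwise_mask_py(mask_1d):
    """Outer product of a boolean mask: cell [i][j] is True only when both
    token i and token j are valid (e.g. non-padding) positions."""
    return [[a and b for b in mask_1d] for a in mask_1d]


# Two real tokens followed by one padding position: every arc touching the
# padded position is masked out.
print(pairwise_mask_py([True, True, False]))
# [[True, True, False], [True, True, False], [False, False, False]]
```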