mrqorib committed (verified) · Commit c705f71 · 1 Parent(s): 54debb1

Upload data
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+data/nucle3.2.sgml filter=lfs diff=lfs merge=lfs -text
CoNLL-preproc-README ADDED
@@ -0,0 +1,224 @@
====================================================

CoNLL-2014 Shared Task: Grammatical Error Correction

Description of Preprocessed NUCLE Data Set

7 April 2014 Version 3.2
====================================================


1. General
==========

This README file describes a preprocessed version of the NUS Corpus of
Learner English (NUCLE). For information about NUCLE, please refer to
the NUCLE README file. For information about the CoNLL-2014 shared
task, please refer to the shared task website.

The preprocessed data set, following earlier CoNLL shared tasks,
provides syntactic information for the raw texts in NUCLE. For each
sentence, the part-of-speech tags, dependency parse tree, and
constituent parse tree are encoded in a column format.

In NUCLE, annotations are made at the character level, which means
both the start offset and the end offset of an error annotation are
character positions in the corresponding paragraph. In this
preprocessed version, annotations are made at the token level, which
means the start offset and the end offset are indexes of tokens in the
corresponding sentence.

This README was last updated on 4 August 2014.


2. Files
========

Two files are to be generated:

conll14st-preprocessed.conll
conll14st-preprocessed.conll.ann

conll14st-preprocessed.conll contains the preprocessed data in
CoNLL-style column format. This file is not included in this
distribution due to its size.

conll14st-preprocessed.conll.ann contains token-level annotations.


3. Preprocessing systems
========================

The NUCLE corpus is preprocessed with the following steps to generate
this preprocessed data set:

a). sentence splitting, using nltk punkt [1].
    Note: the version used to generate the files predates the fix
    for issue 514.
b). word tokenization, using nltk word_tokenize [1].
c). POS tags, dependency parse trees, and constituent parse trees,
    using the Stanford parser [2].
d). projecting character-level annotations to token-level annotations.

Results from (a-c) are in conll14st-preprocessed.conll. The projected
annotations (d) are included in conll14st-preprocessed.conll.ann.
A short sketch of steps (a) and (b) follows.
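
As a rough illustration of steps (a) and (b), the following minimal
sketch splits a paragraph into sentences with the punkt tokenizer and
then tokenizes each sentence. It assumes nltk and its punkt model are
installed; the sample paragraph is invented.

import nltk
from nltk import word_tokenize

splitter = nltk.data.load('tokenizers/punkt/english.pickle')
paragraph = 'First sentence of an essay. Second sentence of an essay.'
for sent in splitter.tokenize(paragraph):
    print(word_tokenize(sent))
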
4. Data format
==============

Here is an example sentence in conll14st-preprocessed.conll:

NID PID SID TOKENID TOKEN POS DPHEAD DPREL SYNT

829 1 2 0 This DT 1 det (ROOT(S(NP*
829 1 2 1 will NN 7 nsubj *)
829 1 2 2 , , - - *
829 1 2 3 if IN 4 mark (SBAR*
829 1 2 4 not RB 7 dep (FRAG*
829 1 2 5 already RB 4 dep (ADVP*)))
829 1 2 6 , , - - *
829 1 2 7 caused VBD -1 root (VP*
829 1 2 8 problems NNS 7 dobj (NP*)
829 1 2 9 as IN 11 mark (SBAR*
829 1 2 10 there EX 11 expl (S(NP*)
829 1 2 11 are VBP 7 advcl (VP*
829 1 2 12 very RB 13 advmod (NP(NP(ADJP*
829 1 2 13 limited VBN 14 amod *)
829 1 2 14 spaces NNS 11 nsubj *)
829 1 2 15 for IN 14 prep (PP*
829 1 2 16 us PRP 15 pobj (NP*)))))))
829 1 2 17 . . - - *))


The columns represent the following:

Column  Type     Description

0       NID      Document id of the sentence, equal to "nid" in NUCLE.
1       PID      Paragraph index of the sentence, according to the paragraphing in NUCLE (<p></p>).
2       SID      Sentence index in the paragraph; each sentence has its own index starting from 0.
3       TOKENID  Token index in the sentence, starting from 0.
4       TOKEN    Word/token.
5       POS      Part-of-speech tag.
6       DPHEAD   Index of the parent in the dependency tree.
7       DPREL    Dependency relation with the parent.
8       SYNT     Constituent tree. The constituent tree can be recovered as follows:
                 (a) Replacing "*" in this column with the string "(pos word)",
                     where pos is the value of column 5 and word is the value of column 4.
                 (b) Concatenating all the strings from (a) gives
                     the bracketing structure of the constituent parse tree.
                 A short sketch of this recovery appears below.
+ ------------------------------------------------------------------------
113
+
114
+ Here is the corresponding token-level annotation for the above
115
+ sentence (in conll14st-preprocessed.conll.ann):
116
+
117
+ <ANNOTATION>
118
+ <MISTAKE nid="829" pid="1" sid="2" start_token="7" end_token="8">
119
+ <TYPE>Vform</TYPE>
120
+ <CORRECTION>cause</CORRECTION>
121
+ </MISTAKE>
122
+ <MISTAKE nid="829" pid="1" sid="2" start_token="14" end_token="15">
123
+ <TYPE>Nn</TYPE>
124
+ <CORRECTION>space</CORRECTION>
125
+ </MISTAKE>
126
+ <MISTAKE nid="829" pid="1" sid="2" start_token="11" end_token="12">
127
+ <TYPE>SVA</TYPE>
128
+ <CORRECTION>is</CORRECTION>
129
+ </MISTAKE>
130
+ </ANNOTATION>
131
+
132
+ The tags represent the following:
133
+
134
+ Tag Description
135
+
136
+ <ANNOTATION> Each <ANNOTATION></ANNOTATION> section identifies annotations for one sentence.
137
+
138
+ <MISTAKE> Identifies an error annotation, with the following attributes:
139
+ nid: Document id of the sentence, equals to the NID column (column 0) in .conll file
140
+ pid: Paragraph index of the sentence, equals to the PID column in .conll file
141
+ sid: Sentence index in paragraph, equals to the SID column in .conll file
142
+ start_token: The token index (TOKENID column in .conll file) which is the start of annotation.
143
+ end_token: The token index which is the end of the annotation.
144
+
145
+ <TYPE> Error tag (refer to the NUCLE corpus README file for the complete list of error tags).
146
+
147
+ <CORRECTION> Correction, replacing tokens in the interval [start_token, end_token) with the
148
+ correction string will result in a corrected sentence.
149
+
150
+ ------------------------------------------------------------------------
151
+
152
+ How to map a sentence to its annotation?
153
+
154
+ In conll14st-preprocessed.conll, different sentences are separated by
155
+ empty lines, and <ANNOTATION></ANNOTATION> sections are also separated
156
+ by empty lines in conll14st-preprocessed.conll.ann. A sentence maps
157
+ to one <ANNOTATION></ANNOTATION> section, with the same nid, pid, and
158
+ sid. If a sentence has no annotation, there is no
159
+ <ANNOTATION></ANNOTATION> section for it. The order of the
160
+ <ANNOTATION></ANNOTATION> sections is the same as the order of
161
+ sentences in the preprocesed file.
162
+
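
A minimal sketch of this alignment, assuming the two files sit in the
current directory under the names above (the .conll columns are
tab-separated, and blocks in both files are separated by empty lines):

import re

def conll_sentence_keys(path):
    for block in open(path).read().strip().split('\n\n'):
        # nid, pid, sid are the first three columns of any row.
        yield tuple(block.split('\n')[0].split('\t')[:3])

def ann_section_keys(path):
    pattern = re.compile(r'<MISTAKE nid="(\d+)" pid="(\d+)" sid="(\d+)"')
    for section in open(path).read().strip().split('\n\n'):
        yield pattern.search(section).groups()

annotated = set(ann_section_keys('conll14st-preprocessed.conll.ann'))
for key in conll_sentence_keys('conll14st-preprocessed.conll'):
    if key in annotated:
        print('sentence %s has annotations' % (key,))
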
5. Updates included in version 2.1
==================================

The error categories Wcip and Rloc have been mapped to Prep, Wci,
ArtOrDet, and Rloc-, to facilitate the detection and correction of
preposition errors and article/determiner errors. See the NUCLE README
file for details on how the mapping was done.

50 overlapping error annotations involving the 5 error tags ArtOrDet,
Prep, Nn, Vform, and SVA have been modified such that they do not
overlap in the revised error annotations.

Some minor mistakes in error annotations have been corrected.

This was done for the CoNLL-2013 Shared Task.


6. Updates included in version 3.0
==================================

The overlapping error annotations of the remaining error tags have
been modified such that they do not overlap in the revised error
annotations.

Some minor mistakes in error annotations have been corrected.


7. Updates included in version 3.1
==================================

Duplicate error annotations, each having the same span and correction
string but a different error type, have been removed so that only one
of them is kept.

Previously, annotations spanning to the end of the paragraph were
detected as cross-sentence and therefore not included in the
preprocessed format. This has been fixed such that they are included
in the preprocessed format.

Some minor mistakes in error annotations have been corrected.


8. Updates included in version 3.2
==================================

The preprocessing script has been fixed such that gold edits that
insert an empty string are not included in the token-level gold edit
and scorer answer files.


9. References
=============

[1] Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language
Processing with Python. O'Reilly Media Inc. http://nltk.org/

[2] Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized
Parsing. Proceedings of the 41st Annual Meeting of the Association for
Computational Linguistics, pp. 423-430.
Stanford parser version 2.0.1.
http://nlp.stanford.edu/software/stanford-parser-2012-03-09.tgz
README ADDED
@@ -0,0 +1,261 @@
NUCLE Release 3.3
24 Jan 2019

This README file describes the NUS Corpus of Learner English (NUCLE).
It was collected in a collaboration project between the National
University of Singapore (NUS) Natural Language Processing (NLP) Group
led by Prof. Hwee Tou Ng and the NUS Centre for English Language
Communication (CELC) led by Prof. Siew Mei Wu. The work was carried
out as part of the PhD thesis research of Daniel Dahlmeier at the NUS
NLP Group.

The corpus is distributed under the standard NUS licensing agreement
available when downloading the corpus. Any questions regarding NUCLE
should be directed to Hwee Tou Ng at: nght@comp.nus.edu.sg

If you are using the NUCLE corpus in your work, please include a
citation of the following paper:

Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu (2013). Building a
Large Annotated Corpus of Learner English: The NUS Corpus of Learner
English. Proceedings of the Eighth Workshop on Innovative Use of NLP
for Building Educational Applications (BEA 2013). (pp. 22 --
31). Atlanta, Georgia, USA.


1. About
========

NUCLE is a corpus of learner English. It consists of about 1,400
essays written by university students at the National University of
Singapore on a wide range of topics, such as environmental pollution,
healthcare, etc. It contains over one million words, which are
completely annotated with error categories and corrections. All
annotations have been performed by professional English instructors at
the NUS CELC.


2. Data Format
==============

The corpus is distributed in a simple SGML format. All annotations
come in a "stand-off" format. The start position and end position of
an annotation are given by paragraph and character offsets.
Paragraphs are enclosed in <P>...</P> tags. Paragraphs and characters
are counted starting from zero. Each annotation includes the following
fields: the error category, the correction, and optionally a
comment. If the correction replaces the original text at the given
location, it should fix the grammatical error. A sketch of resolving
these offsets appears after the example.

Example:

<DOC nid="840">
<TEXT>
<P>
Engineering design process can be defined as a process ...
</P>
<P>
Firstly, engineering design ...
</P>
...
</TEXT>
<ANNOTATION teacher_id="173">
<MISTAKE start_par="0" start_off="0" end_par="0" end_off="26">
<TYPE>ArtOrDet</TYPE>
<CORRECTION>The engineering design process</CORRECTION>
</MISTAKE>
...
</ANNOTATION>
</DOC>
<DOC nid="862">
...
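
A minimal sketch of resolving one stand-off annotation, assuming the
paragraphs of a <DOC> have been collected into a list of strings.
Judging from the example above, the error span is [start_off, end_off)
in characters, counted from zero within the start paragraph:

paragraphs = ['Engineering design process can be defined as a process ...']
mistake = {'start_par': 0, 'start_off': 0, 'end_par': 0, 'end_off': 26,
           'type': 'ArtOrDet', 'correction': 'The engineering design process'}

span = paragraphs[mistake['start_par']][mistake['start_off']:mistake['end_off']]
print(span)                    # Engineering design process
print(mistake['correction'])   # The engineering design process
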
Below is a complete list of the error categories in NUCLE release 2.0:

ERROR TAG   ERROR CATEGORY
---------------------------
Vt          Verb tense
Vm          Verb modal
V0          Missing verb
Vform       Verb form
SVA         Subject-verb agreement
ArtOrDet    Article or determiner
Nn          Noun number
Npos        Noun possessive
Pform       Pronoun form
Pref        Pronoun reference
Wcip        Wrong collocation/idiom/preposition
Wa          Acronyms
Wform       Word form
Wtone       Tone
Srun        Run-ons, comma splice
Smod        Dangling modifier
Spar        Parallelism
Sfrag       Fragment
Ssub        Subordinate clause
WOinc       Incorrect sentence form
WOadv       Adverb/adjective position
Trans       Link word/phrases
Mec         Punctuation, capitalization, spelling, typos
Rloc        Local redundancy
Cit         Citation
Others      Other errors
Um          Unclear meaning (cannot be corrected)


3. Updates included in version 2.1
==================================

The major change made in version 2.1 is to map the error categories
Wcip and Rloc to Prep, Wci, ArtOrDet, and Rloc-.

In the original NUCLE corpus, there is no explicit preposition
error category. Instead, preposition errors are part of the Wcip
(wrong collocation/idiom/preposition) and Rloc (local redundancy)
error categories. In addition, redundant article or determiner errors
are part of the Rloc error category.

To facilitate the detection and correction of preposition errors and
article/determiner errors, we map error categories in the original
NUCLE corpus. The mapping relies on POS tags, constituent parse
trees, and error annotations at the token level. A sketch of condition
(a) appears after the preposition list below.

(a) Conditions to change from the error category Wcip or Rloc to Prep:

This applies to replacing a preposition with another preposition, or
deleting a preposition. The string to be replaced is one word w with
POS tag IN or TO, the parent of w is a PP in the constituent parse
tree, and the replacement is either a preposition or the empty string.

(b) Conditions to change from the error category Wcip to Prep:

This applies to inserting a preposition. The replacement is a
preposition (one word only) and the immediately following word is
tagged as VBG or is the first word of a noun phrase (NP).

(c) Conditions to change from the error category Rloc to ArtOrDet:

The single word has POS tag DT and the replacement is the empty
string.

The remaining unaffected "Wcip" errors are assigned the new error
category "Wci" and the remaining unaffected "Rloc" errors are assigned
the new error category "Rloc-".

List of 36 prepositions:

about along among around as at beside besides between by down during
except for from in inside into of off on onto outside over through to
toward towards under underneath until up upon with within without
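
A minimal sketch of condition (a) as a predicate, assuming the
annotated span has been reduced to a single token with its POS tag and
the label of its parent node in the constituent parse; PREPOSITIONS is
the 36-word list above:

PREPOSITIONS = set('''about along among around as at beside besides
between by down during except for from in inside into of off on onto
outside over through to toward towards under underneath until up upon
with within without'''.split())

def maps_to_prep(pos, parent_label, replacement):
    # The replaced word is tagged IN or TO under a PP node, and the
    # replacement is a preposition or the empty string.
    replaced_ok = pos in ('IN', 'TO') and parent_label == 'PP'
    replacement_ok = replacement == '' or replacement in PREPOSITIONS
    return replaced_ok and replacement_ok

print(maps_to_prep('IN', 'PP', 'on'))  # True: maps to Prep
print(maps_to_prep('DT', 'NP', ''))    # False: condition (c) territory
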
Below is a complete list of the 28 error categories in NUCLE release 2.1:

ERROR TAG   ERROR CATEGORY
---------------------------
Vt          Verb tense
Vm          Verb modal
V0          Missing verb
Vform       Verb form
SVA         Subject-verb agreement
ArtOrDet    Article or determiner
Nn          Noun number
Npos        Noun possessive
Pform       Pronoun form
Pref        Pronoun reference
Prep        Preposition
Wci         Wrong collocation/idiom
Wa          Acronyms
Wform       Word form
Wtone       Tone
Srun        Run-ons, comma splice
Smod        Dangling modifier
Spar        Parallelism
Sfrag       Fragment
Ssub        Subordinate clause
WOinc       Incorrect sentence form
WOadv       Adverb/adjective position
Trans       Link word/phrases
Mec         Punctuation, capitalization, spelling, typos
Rloc-       Local redundancy
Cit         Citation
Others      Other errors
Um          Unclear meaning (cannot be corrected)


4. Updates included in version 2.2
==================================

- Fixed the bug on expanding an error annotation involving part of a
token to the full token.

- Other miscellaneous corrections were made.


5. Updates included in version 2.3
==================================

- Fixed the bug involving tokenization of punctuation symbols in the
correction string.

- Fixed the tokenization example in the README file of the M^2 scorer
to reflect the real tokenization to be used, and removed irrelevant
code from the scorer package.


6. Updates included in version 3.0
==================================

- Resolved overlapping annotations in the NUCLE corpus to make them
non-overlapping.

- Corrected some minor mistakes in error annotations.


7. Updates included in version 3.1
==================================

- Removed duplicate annotations in the NUCLE corpus with the same span
and correction string but a different error type, so as to keep only
one of those annotations. This fix affects only 0.1% of all annotations.

- Fixed end-of-paragraph annotations so that the end offset of such
annotations is the last character position in the paragraph. This fix
affects only 0.7% of all annotations.

- Corrected some minor mistakes in error annotations.

- Included the CoNLL-2013 test data, with all the known problems
described above fixed. Participating teams in the CoNLL-2014 shared
task can make use of the CoNLL-2013 test data in training and
developing their systems if they wish to do so.

- Fixed a minor bug in the M2 scorer that caused duplicate insertion
edits to receive high scores.


8. Updates included in version 3.2
==================================

- Fixed the preprocessing script such that a gold edit that inserts an
empty string is not included in the token-level gold edit and scorer
answer files.

- Removed one edit that inserted an empty string from the CoNLL-2014
test data. Also removed such instances from the NUCLE training data.

- Fixed a bug in the M2 scorer arising from scoring against gold edits
from multiple annotators. Specifically, the bug sometimes caused
incorrect scores to be reported when scoring against the gold edits
of subsequent annotators (other than the first annotator).

- Fixed a bug in the M2 scorer that caused erroneous scores to be
reported when dealing with insertion edits followed by deletion edits
(or vice versa).


9. Updates included in version 3.3
==================================

- Added a subdirectory "bea2019" containing the ERRANT-typed NUCLE M2
file for the BEA 2019 shared task on grammatical error correction.
README.md ADDED
@@ -0,0 +1,193 @@
# NUCLE Release 3.3
**24 Jan 2019**

This page describes the **NUS Corpus of Learner English (NUCLE)**.
It was collected in a collaboration project between the National University of Singapore (NUS) Natural Language Processing (NLP) Group led by Prof. Hwee Tou Ng and the NUS Centre for English Language Communication (CELC) led by Prof. Siew Mei Wu. The work was carried out as part of the PhD thesis research of Daniel Dahlmeier at the NUS NLP Group.

The corpus is distributed under the standard NUS licensing agreement available when downloading the corpus.

Any questions regarding NUCLE should be directed to Hwee Tou Ng at: `nght@comp.nus.edu.sg`.

If you use the NUCLE corpus in your work, please cite the following paper:

> Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu (2013). *Building a Large Annotated Corpus of Learner English: The NUS Corpus of Learner English.*
> Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2013), pp. 22–31. Atlanta, Georgia, USA.

---

## 1. About

NUCLE is a corpus of learner English. It consists of about 1,400 essays written by university students at the National University of Singapore on a wide range of topics, such as environmental pollution, healthcare, etc. It contains over one million words, which are completely annotated with error categories and corrections. All annotations have been performed by professional English instructors at the NUS CELC.

---

## 2. Data Format

The corpus is distributed in a simple SGML format. All annotations come in a "stand-off" format. The start position and end position of an annotation are given by paragraph and character offsets. Paragraphs are enclosed in `<P>...</P>` tags. Paragraphs and characters are counted starting from zero. Each annotation includes the following fields: the error category, the correction, and optionally a comment. If the correction replaces the original text at the given location, it should fix the grammatical error.

### Example

```sgml
<DOC nid="840">
<TEXT>
<P>
Engineering design process can be defined as a process ...
</P>
<P>
Firstly, engineering design ...
</P>
...
</TEXT>
<ANNOTATION teacher_id="173">
<MISTAKE start_par="0" start_off="0" end_par="0" end_off="26">
<TYPE>ArtOrDet</TYPE>
<CORRECTION>The engineering design process</CORRECTION>
</MISTAKE>
...
</ANNOTATION>
</DOC>
<DOC nid="862">
...
```

### Error categories in NUCLE release 2.0:

| ERROR TAG | ERROR CATEGORY |
|-----------|----------------------------------------------|
| Vt | Verb tense |
| Vm | Verb modal |
| V0 | Missing verb |
| Vform | Verb form |
| SVA | Subject–verb agreement |
| ArtOrDet | Article or Determiner |
| Nn | Noun number |
| Npos | Noun possessive |
| Pform | Pronoun form |
| Pref | Pronoun reference |
| Wcip | Wrong collocation/idiom/preposition |
| Wa | Acronyms |
| Wform | Word form |
| Wtone | Tone |
| Srun | Run-ons, comma splice |
| Smod | Dangling modifier |
| Spar | Parallelism |
| Sfrag | Fragment |
| Ssub | Subordinate clause |
| WOinc | Incorrect sentence form |
| WOadv | Adverb/adjective position |
| Trans | Link word/phrases |
| Mec | Punctuation, capitalization, spelling, typos |
| Rloc | Local redundancy |
| Cit | Citation |
| Others | Other errors |
| Um | Unclear meaning (cannot be corrected) |

---

## 3. Updates included in version 2.1

The major change made in version 2.1 is to map the error categories `Wcip` and `Rloc` to `Prep`, `Wci`, `ArtOrDet`, and `Rloc-`.

In the original NUCLE corpus, there is no explicit preposition error category. Instead, preposition errors are part of the Wcip (wrong collocation/idiom/preposition) and Rloc (local redundancy) error categories. In addition, redundant article or determiner errors are part of the Rloc error category.

To facilitate the detection and correction of preposition errors and article/determiner errors, we map error categories in the original NUCLE corpus. The mapping relies on POS tags, constituent parse trees, and error annotations at the token level.

### (a) Conditions to change from the error category Wcip or Rloc to Prep:

This applies to replacing a preposition with another preposition, or deleting a preposition. The string to be replaced is one word w with POS tag IN or TO, the parent of w is a PP in the constituent parse tree, and the replacement is either a preposition or the empty string.

### (b) Conditions to change from the error category Wcip to Prep:

This applies to inserting a preposition. The replacement is a preposition (one word only) and the immediately following word is tagged as VBG or is the first word of a noun phrase (NP).

### (c) Conditions to change from the error category Rloc to ArtOrDet:

The single word has POS tag DT and the replacement is the empty string.

The remaining unaffected `Wcip` errors are assigned the new error category `Wci` and the remaining unaffected `Rloc` errors are assigned the new error category `Rloc-`.

### List of 36 Prepositions:

```
about along among around as at beside besides between by down during except for from in inside into of off on onto outside over through to toward towards under underneath until up upon with within without
```

### Error categories in NUCLE release 2.1:

| ERROR TAG | ERROR CATEGORY |
|-----------|----------------------------------------------|
| Vt | Verb tense |
| Vm | Verb modal |
| V0 | Missing verb |
| Vform | Verb form |
| SVA | Subject–verb agreement |
| ArtOrDet | Article or Determiner |
| Nn | Noun number |
| Npos | Noun possessive |
| Pform | Pronoun form |
| Pref | Pronoun reference |
| Prep | Preposition |
| Wci | Wrong collocation/idiom |
| Wa | Acronyms |
| Wform | Word form |
| Wtone | Tone |
| Srun | Run-ons, comma splice |
| Smod | Dangling modifier |
| Spar | Parallelism |
| Sfrag | Fragment |
| Ssub | Subordinate clause |
| WOinc | Incorrect sentence form |
| WOadv | Adverb/adjective position |
| Trans | Link word/phrases |
| Mec | Punctuation, capitalization, spelling, typos |
| Rloc- | Local redundancy |
| Cit | Citation |
| Others | Other errors |
| Um | Unclear meaning (cannot be corrected) |

---

## 4. Updates included in version 2.2

- Fixed the bug on expanding an error annotation involving part of a token to the full token.
- Other miscellaneous corrections were made.

---

## 5. Updates included in version 2.3

- Fixed the bug involving tokenization of punctuation symbols in the correction string.
- Fixed the tokenization example in the README file of the M^2 scorer to reflect the real tokenization to be used, and removed irrelevant code from the scorer package.

---

## 6. Updates included in version 3.0

- Resolved overlapping annotations in the NUCLE corpus to make them non-overlapping.
- Corrected some minor mistakes in error annotations.

---

## 7. Updates included in version 3.1

- Removed duplicate annotations in the NUCLE corpus with the same span and correction string but a different error type, so as to keep only one of those annotations. This fix affects only 0.1% of all annotations.
- Fixed end-of-paragraph annotations so that the end offset of such annotations is the last character position in the paragraph. This fix affects only 0.7% of all annotations.
- Corrected some minor mistakes in error annotations.
- Included the CoNLL-2013 test data, with all the known problems described above fixed. Participating teams in the CoNLL-2014 shared task can make use of the CoNLL-2013 test data in training and developing their systems if they wish to do so.
- Fixed a minor bug in the M2 scorer that caused duplicate insertion edits to receive high scores.

---

## 8. Updates included in version 3.2

- Fixed the preprocessing script such that a gold edit that inserts an empty string is not included in the token-level gold edit and scorer answer files.
- Removed one edit that inserted an empty string from the CoNLL-2014 test data. Also removed such instances from the NUCLE training data.
- Fixed a bug in the M2 scorer arising from scoring against gold edits from multiple annotators. Specifically, the bug sometimes caused incorrect scores to be reported when scoring against the gold edits of subsequent annotators (other than the first annotator).
- Fixed a bug in the M2 scorer that caused erroneous scores to be reported when dealing with insertion edits followed by deletion edits (or vice versa).

---

## 9. Updates included in version 3.3

- Added a subdirectory "bea2019" containing the ERRANT-typed NUCLE M2 file for the BEA 2019 shared task on grammatical error correction.
bea2019/nucle.train.gold.bea19.m2 ADDED
The diff for this file is too large to render. See raw diff
 
bea2019/readme.txt ADDED
@@ -0,0 +1,9 @@
This directory contains the official NUCLE training file used in the BEA 2019 shared task.

Specifically, nucle.train.gold.bea19.m2 is the same as the NUCLE M2 file used in the CoNLL-2014 shared task, except that the error types have been automatically standardised using the ERRANT framework: https://github.com/chrisjbryant/errant

The official BEA 2019 file was generated using the following command in Python 3.5:

python3 errant/m2_to_m2.py -gold <nucle.conll2014.m2> -out nucle.train.gold.bea19.m2

This used spacy v1.9.0 and the en_core_web_sm-1.2.0 model.
data/conll14st-preprocessed.conll.ann ADDED
The diff for this file is too large to render. See raw diff
 
data/conll14st-preprocessed.m2 ADDED
The diff for this file is too large to render. See raw diff
 
data/nucle3.2.sgml ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8441a60060730bdc6af1e79a064c68441ecb24a66da91ea2aab0061a64a130ac
size 12636154
scripts/README ADDED
@@ -0,0 +1,63 @@
====================================================

CoNLL-2014 Shared Task: Grammatical Error Correction

Description of Data Preprocessing Scripts

4 Aug 2014 Version 3.2
====================================================


Table of Contents
=================

1. General
2. Pre-requisites
3. Usage

1. General
==========

This README file describes the usage of the scripts for preprocessing the NUCLE version 3.2 corpus.

Quickstart:

a. Regenerate the preprocessed files with full syntactic information:
   % python preprocess.py -o nucle.sgml conllFileName annFileName m2FileName

b. Get tokenized annotations without syntactic information:
   % python preprocess.py -l nucle.sgml conllFileName annFileName m2FileName

where
   nucle.sgml    - input SGML file
   conllFileName - output file that contains preprocessed sentences in CoNLL format.
   annFileName   - output file that contains stand-off error annotations.
   m2FileName    - output file that contains error annotations in the M2 scorer format.

A sketch of reading the generated M2 file appears at the end of this README.

2. Pre-requisites
=================

+ Python (2.6.4; other versions >= 2.6.4 and < 3.0 might work but are not tested)
+ nltk (http://www.nltk.org, version 2.0b7, needed for sentence splitting and word tokenization)
+ Stanford parser (version 2.0.1, http://nlp.stanford.edu/software/stanford-parser-2012-03-09.tgz)

If you only use the scripts to generate the error annotations needed by the M2 scorer, the Stanford parser is not required.
Otherwise, "stanford-parser-2012-03-09" needs to be in the same directory as "scripts".

3. Usage
========

Preprocessing the data from a single annotation

Usage: python preprocess.py OPTIONS sgmlFileName conllFileName annotationFileName m2FileName

where
   sgmlFileName       - NUCLE SGML file
   conllFileName      - output file name for preprocessed sentences in CoNLL format (e.g., conll14st-preprocessed.conll).
   annotationFileName - output file name for error annotations (e.g., conll14st-preprocessed.conll.ann).
   m2FileName         - output file name in the M2 scorer format (e.g., conll14st-preprocessed.conll.m2).

OPTIONS
   -o - output will contain POS tags and parse tree info (i.e., the same as the released preprocessed file; runs slowly).
   -l - output will NOT contain POS tags and parse tree info (runs quickly).
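
A minimal sketch of reading the generated M2 file, based on the format
written by preprocess.py: an "S " line holds the tokenized sentence,
and each following "A " line holds one edit as
"A start end|||type|||correction|||REQUIRED|||-NONE-|||annotator":

def read_m2(path):
    for block in open(path).read().strip().split('\n\n'):
        lines = block.split('\n')
        tokens = lines[0][2:].split(' ')
        edits = []
        for line in lines[1:]:
            span, etype, correction = line[2:].split('|||')[:3]
            start, end = (int(x) for x in span.split(' '))
            edits.append((start, end, etype, correction))
        yield tokens, edits

for tokens, edits in read_m2('conll14st-preprocessed.m2'):
    print('%d tokens, %d edits' % (len(tokens), len(edits)))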
scripts/iparser.py ADDED
@@ -0,0 +1,36 @@
# iparser.py
#
# Author: Yuanbin Wu
# National University of Singapore (NUS)
# Date: 12 Mar 2013
# Version: 1.0
#
# Contact: wuyb@comp.nus.edu.sg
#
# This script is distributed to support the CoNLL-2013 Shared Task.
# It is free for research and educational purposes.

import os
import sys

class stanfordparser:

    def __init__(self):
        pass

    def parse_batch(self, sentenceDumpedFileName, parsingDumpedFileName):

        if not os.path.exists('../stanford-parser-2012-03-09'):
            print >> sys.stderr, 'cannot find the Stanford parser directory'
            sys.exit(1)

        # Parse the already-tokenized sentences, one per line.
        cmd = r'java -server -mx4096m -cp "../stanford-parser-2012-03-09/*:" edu.stanford.nlp.parser.lexparser.LexicalizedParser -retainTMPSubcategories -sentences newline -tokenized -escaper edu.stanford.nlp.process.PTBEscapingProcessor -outputFormat "wordsAndTags, penn, typedDependencies" -outputFormatOptions "basicDependencies" edu/stanford/nlp/models/lexparser/englishPCFG.ser.gz ' + sentenceDumpedFileName

        r = os.popen(cmd).read().strip().decode('utf-8')
        f = open(parsingDumpedFileName, 'w')
        f.write(r.encode('utf-8'))
        f.close()

        # Each sentence yields three blocks (tags, penn tree, dependencies).
        rlist = r.replace('\n\n\n', '\n\n\n\n').split('\n\n')
        return rlist
scripts/nucle_doc.py ADDED
@@ -0,0 +1,188 @@
# nucle_doc.py
#
# Author: Yuanbin Wu
# National University of Singapore (NUS)
# Date: 12 Mar 2013
# Version: 1.0
#
# Contact: wuyb@comp.nus.edu.sg
#
# This script is distributed to support the CoNLL-2013 Shared Task.
# It is free for research and educational purposes.

import os
import sys
from nltk import word_tokenize

class nucle_doc:
    def __init__(self):
        self.docattrs = None

        self.matric = ''
        self.email = ''
        self.nationality = ''
        self.firstLanguage = ''
        self.schoolLanguage = ''
        self.englishTests = ''

        self.paragraphs = []
        self.annotation = []
        self.mistakes = []

        self.sentences = []

    def buildSentence(self, sentstr, dpnode, constituentstr, poslist, chunklist):
        self.sentences[-1].append(nucle_sent(sentstr, dpnode, constituentstr, poslist, chunklist))

    def addSentence(self, sent):
        self.sentences[-1].append(sent)

    def findMistake(self, par, pos):
        for m in self.mistakes:
            if par == m['start_par'] and pos >= m['start_off'] and pos < m['end_off']:
                return m
        return None


class nucle_sent:
    def __init__(self, sentstr, dpnode, constituentstr, poslist, chunklist):
        self.sentstr = sentstr
        self.words = word_tokenize(sentstr)
        self.dpnodes = dpnode
        self.constituentstr = constituentstr
        self.constituentlist = []
        self.poslist = poslist
        self.chunklist = chunklist

    def buildConstituentList(self):

        s = self.constituentstr.strip().replace('\n', '').replace(' ', '')
        r = []
        i = 0
        while i < len(s):
            j = i
            while j < len(s) and s[j] != ')':
                j += 1
            k = j
            while k < len(s) and s[k] == ')':
                k += 1

            nodeWholeStr = s[i:k]
            lastLRBIndex = nodeWholeStr.rfind('(')
            nodeStr = nodeWholeStr[:lastLRBIndex] + '*' + s[j+1:k]

            r.append(nodeStr)
            i = k

        if len(r) != len(self.words):
            print >> sys.stderr, 'Error in building constituent tree bits: different length with words.'
            print >> sys.stderr, len(r), len(self.words)
            print >> sys.stderr, ' '.join(r).encode('utf-8')
            print >> sys.stderr, self.words
            sys.exit(1)

        self.constituentlist = r

    def setDpNode(self, dpnode):
        self.dpnodes = dpnode

    def setPOSList(self, poslist):
        self.poslist = poslist

    def setConstituentStr(self, constituentstr):
        self.constituentstr = constituentstr

    def setConstituentList(self, constituentlist):
        self.constituentlist = constituentlist

    def setWords(self, words):
        self.words = words

    def setChunkList(self, chunklist):
        self.chunklist = chunklist

    def getDpNode(self):
        return self.dpnodes

    def getPOSList(self):
        return self.poslist

    def getConstituentStr(self):
        return self.constituentstr

    def getConstituentList(self):
        return self.constituentlist

    def getWords(self):
        return self.words

    def getChunkList(self):
        return self.chunklist

    def getConllFormat(self, doc, paragraphIndex, sentIndex):

        table = []

        dpnodes = self.getDpNode()
        poslist = self.getPOSList()
        #chunklist = self.getChunkList()
        words = self.getWords()
        constituentlist = self.getConstituentList()

        if len(poslist) == 0:
            hasParseInfo = 0
        else:
            hasParseInfo = 1

        if len(words) != len(poslist) and len(poslist) != 0:
            print >> sys.stderr, 'Error in building CoNLL format: different length of Stanford parser POS tags and words.'
            print >> sys.stderr, 'len words:', len(words), words
            print >> sys.stderr, 'len poslist:', len(poslist), poslist
            sys.exit(1)

        for wdindex in xrange(len(words)):

            word = words[wdindex]

            row = []
            row.append(doc.docattrs[0][1])  # doc info (nid)
            row.append(paragraphIndex)      # paragraph index
            row.append(sentIndex)           # sentence index
            row.append(wdindex)             # word index
            row.append(word)                # word

            #row.append(chunknode.label)    # chunk
            if hasParseInfo == 1:

                posword = poslist[wdindex]
                splitp = posword.rfind('/')
                pos = posword[splitp+1 : ].strip()

                #chunknode = chunklist[wdindex]

                constituentnode = constituentlist[wdindex]

                dpnode = None
                for d in dpnodes:
                    if d.index == wdindex:
                        dpnode = d
                        break

                row.append(pos)  # POS
                if dpnode is None:
                    row.append('-')
                    row.append('-')
                else:
                    row.append(dpnode.parent_index)  # dependency parent
                    row.append(dpnode.grammarrole)   # dependency label
                row.append(constituentnode)  # constituent

            table.append(row)

        return table
scripts/nuclesgmlparser.py ADDED
@@ -0,0 +1,168 @@
# nuclesgmlparser.py
#
# Author: Yuanbin Wu
# National University of Singapore (NUS)
# Date: 12 Mar 2013
# Version: 1.0
#
# Contact: wuyb@comp.nus.edu.sg
#
# This script is distributed to support the CoNLL-2013 Shared Task.
# It is free for research and educational purposes.

from sgmllib import SGMLParser
from nucle_doc import nucle_doc


class nuclesgmlparser(SGMLParser):
    def __init__(self):
        SGMLParser.__init__(self)
        self.docs = []

    def reset(self):
        self.docs = []
        self.data = []
        SGMLParser.reset(self)

    def unknown_starttag(self, tag, attrs):
        pass

    def unknown_endtag(self, tag):
        pass

    def start_doc(self, attrs):
        self.docs.append(nucle_doc())
        self.docs[-1].docattrs = attrs

    def end_doc(self):
        pass

    def start_matric(self, attrs):
        pass

    def end_matric(self):
        self.docs[-1].matric = ''.join(self.data)
        self.data = []

    def start_email(self, attrs):
        pass

    def end_email(self):
        self.docs[-1].email = ''.join(self.data)
        self.data = []

    def start_nationality(self, attrs):
        pass

    def end_nationality(self):
        self.docs[-1].nationality = ''.join(self.data)
        self.data = []

    def start_first_language(self, attrs):
        pass

    def end_first_language(self):
        self.docs[-1].firstLanguage = ''.join(self.data)
        self.data = []

    def start_school_language(self, attrs):
        pass

    def end_school_language(self):
        self.docs[-1].schoolLanguage = ''.join(self.data)
        self.data = []

    def start_english_tests(self, attrs):
        pass

    def end_english_tests(self):
        self.docs[-1].englishTests = ''.join(self.data)
        self.data = []

    def start_text(self, attrs):
        pass

    def end_text(self):
        pass

    def start_title(self, attrs):
        pass

    def end_title(self):
        self.docs[-1].paragraphs.append(''.join(self.data))
        self.data = []

    def start_p(self, attrs):
        pass

    def end_p(self):
        self.docs[-1].paragraphs.append(''.join(self.data))
        self.data = []

    def start_annotation(self, attrs):
        self.docs[-1].annotation.append(attrs)

    def end_annotation(self):
        pass

    def start_mistake(self, attrs):
        d = {}
        for t in attrs:
            d[t[0]] = int(t[1])
        self.docs[-1].mistakes.append(d)

    def end_mistake(self):
        pass

    def start_type(self, attrs):
        pass

    def end_type(self):
        self.docs[-1].mistakes[-1]['type'] = ''.join(self.data)
        self.data = []

    def start_correction(self, attrs):
        pass

    def end_correction(self):
        self.docs[-1].mistakes[-1]['correction'] = ''.join(self.data)
        self.data = []

    def start_comment(self, attrs):
        pass

    def end_comment(self):
        self.docs[-1].mistakes[-1]['comment'] = ''.join(self.data)
        self.data = []

    def handle_charref(self, ref):
        self.data.append('&' + ref)

    def handle_entityref(self, ref):
        self.data.append('&' + ref)

    def handle_data(self, text):
        if text.strip() == '':
            self.data.append('')
            return
        else:
            if text.startswith('\n'):
                text = text[1:]
            if text.endswith('\n'):
                text = text[:-1]
            self.data.append(text)
scripts/parser_feature.py ADDED
@@ -0,0 +1,80 @@
# parser_feature.py
#
# Author: Yuanbin Wu
# National University of Singapore (NUS)
# Date: 12 Mar 2013
# Version: 1.0
#
# Contact: wuyb@comp.nus.edu.sg
#
# This script is distributed to support the CoNLL-2013 Shared Task.
# It is free for research and educational purposes.


import iparser

class stanpartreenode:
    def __init__(self, strnode):

        if strnode == '':
            self.grammarrole = ''
            self.parent_index = -1
            self.index = -1
            self.parent_word = ''
            self.word = ''
            self.POS = ''
            return

        # A typed-dependency node looks like "nsubj(caused-8, This-1)".
        groleend = strnode.find('(')
        self.grammarrole = strnode[ : groleend]
        content = strnode[groleend + 1: len(strnode)-1]
        dadAndme = content.partition(', ')
        dad = dadAndme[0]
        me = dadAndme[2]
        dadsep = dad.rfind('-')
        mesep = me.rfind('-')
        self.parent_index = int(dad[dadsep + 1 : ]) - 1
        self.parent_word = dad[0 : dadsep]
        self.index = int(me[mesep + 1 : ]) - 1
        self.word = me[0 : mesep]
        self.POS = ''


def DependTree_Batch(sentenceDumpedFileName, parsingDumpedFileName):

    sparser = iparser.stanfordparser()
    results = sparser.parse_batch(sentenceDumpedFileName, parsingDumpedFileName)
    nodeslist = []

    k = 0
    while k < len(results):
        PoSlist = results[k].split(' ')
        constituentstr = results[k+1]
        table = results[k+2].split('\n')
        nodes = []
        for i in range(0, len(table)):
            nodes.append(stanpartreenode(table[i]))
        nodeslist.append((nodes, constituentstr, PoSlist))
        k += 3
    return nodeslist

def DependTree_Batch_Parsefile(parsingDumpedFileName):

    f = open(parsingDumpedFileName, 'r')
    results = f.read().decode('utf-8').replace('\n\n\n', '\n\n\n\n').split('\n\n')
    f.close()
    nodeslist = []

    k = 0
    while k < len(results):
        PoSlist = results[k].split(' ')
        constituentstr = results[k+1]
        table = results[k+2].split('\n')

        nodes = []
        for i in range(0, len(table)):
            nodes.append(stanpartreenode(table[i]))
        nodeslist.append((nodes, constituentstr, PoSlist))
        k += 3
    return nodeslist
scripts/preprocess.py ADDED
@@ -0,0 +1,509 @@
1
+ #!/usr/bin/python
2
+
3
+ # preprocess.py
4
+ #
5
+ # Author: Yuanbin Wu
6
+ # National University of Singapore (NUS)
7
+ # Date: 12 Mar 2013
8
+ # Version: 1.0
9
+ #
10
+ # Contact: wuyb@comp.nus.edu.sg
11
+ #
12
+ # This script is distributed to support the CoNLL-2013 Shared Task.
13
+ # It is free for research and educational purposes.
14
+ #
15
+ # Usage: python preprocess.py OPTIONS sgmlFileName conllFileName annotationFileName m2FileName
16
+ # Options:
17
+ # -o generate conllFile, annotationFile, m2File from sgmlFile, with parser info.
18
+ # -l generate conllFile, annotationFile, m2File from sgmlFile, without parser info.
19
+
20
+
21
+ import parser_feature
22
+ from nuclesgmlparser import nuclesgmlparser
23
+ from nucle_doc import *
24
+ import nltk.data
25
+ from nltk import word_tokenize
26
+ import cPickle as pickle
27
+ import re
28
+ import sys
29
+ import os
30
+ import getopt
31
+
32
+ class PreProcessor:
33
+
34
+ def __init__(self):
35
+
36
+ self.sentenceTokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
37
+ self.sentenceDumpedFile = 'sentence_file'
38
+ self.docsDumpedFileName = 'docs'
39
+ self.parsingDumpedFileName = 'parse_file'
40
+
41
+ def readNUCLE(self, fn):
42
+
43
+ f = open(fn, 'r')
44
+ parser = nuclesgmlparser()
45
+ filestr = f.read()
46
+ filestr = filestr.decode('utf-8')
47
+
48
+ #Fix Reference tag
49
+ p = re.compile(r'(<REFERENCE>\n<P>\n.*\n)<P>')
50
+ filestr = p.sub(r'\1</P>', filestr)
51
+
52
+ parser.feed(filestr)
53
+ f.close()
54
+ parser.close()
55
+
56
+ return parser.docs
57
+
58
+
59
+ def sentenceSplit(self, docs):
60
+
61
+ for doc in docs:
62
+ for par in doc.paragraphs:
63
+ doc.sentences.append([])
64
+ for s in self.sentenceTokenizer.tokenize(par):
65
+ doc.buildSentence(s, [], '', [], [])
66
+ return docs
67
+
68
+
69
+ def featureGeneration(self, docs, option):
70
+
71
+ # build parsing feature
72
+ # the sentence for parsing is dump to self.sentenceDumpedFile
73
+ f = open(self.sentenceDumpedFile, 'w')
74
+
75
+ for doc in docs:
76
+ for par in doc.paragraphs:
77
+ doc.sentences.append([])
78
+ for s in self.sentenceTokenizer.tokenize(par):
79
+ sent = nucle_sent(s, [], '', [], [])
80
+ doc.addSentence(sent)
81
+ tokenizedSentStr = ' '.join(sent.getWords()) + '\n'
82
+
83
+ f.write(tokenizedSentStr.encode('utf-8'))
84
+ f.close()
85
+
86
+ if option == 0:
87
+ nodelist = parser_feature.DependTree_Batch(self.sentenceDumpedFile, self.parsingDumpedFileName)
88
+ elif option == 1 :
89
+ nodelist = parser_feature.DependTree_Batch_Parsefile(self.parsingDumpedFileName)
90
+ else:
91
+ return
92
+
93
+ i = 0
94
+ for doc in docs:
95
+ for slist in doc.sentences:
96
+ for s in slist:
97
+ if s.sentstr.strip() == '':
98
+ continue
99
+
100
+ s.setDpNode(nodelist[i][0])
101
+ s.setConstituentStr(nodelist[i][1])
102
+ s.setPOSList(nodelist[i][2])
103
+ s.buildConstituentList()
104
+
105
+ i += 1
106
+
107
+ f = file(self.docsDumpedFileName,'w')
108
+ pickle.dump(docs, f)
109
+ f.close()
110
+ return docs
111
+
112
+
113
+ def conllFileGeneration(self, docs, conllFileName, annotationFileName, m2FileName):
114
+
115
+ fcolumn = open(conllFileName, 'w')
116
+ fannotation = open(annotationFileName, 'w')
117
+ fm2 = open(m2FileName, 'w')
118
+
119
+ for doc in docs:
120
+ for slistIndex in xrange(len(doc.sentences)):
121
+ slist = doc.sentences[slistIndex]
122
+ for sentid in xrange(len(slist)):
123
+
124
+ sent = slist[sentid]
125
+
126
+ # annotation string list
127
+ annotationList = []
128
+
129
+ # m2 format annotation string list
130
+ m2AnnotationList = []
131
+
132
+ # build colums
133
+ table = sent.getConllFormat(doc, slistIndex, sentid)
134
+ tokenizedSentStr = ' '.join(sent.getWords())
135
+
136
+ #Add annotation info
137
+ sentoffset = doc.paragraphs[slistIndex].index(sent.sentstr)
138
+ for m in doc.mistakes:
139
+
140
+ if m['start_par'] != slistIndex or \
141
+ m['start_par'] != m['end_par'] or \
142
+ m['start_off'] < sentoffset or \
143
+ m['start_off'] >= sentoffset + len(sent.sentstr) or \
144
+ m['end_off'] <sentoffset or \
145
+ m['end_off'] > sentoffset + len(sent.sentstr):
146
+ continue
147
+
148
+ wordsoffset = 0
149
+ wdstart = 0
150
+
151
+ startInWord = 0
152
+ headText = ''
153
+ endInWord = 0
154
+ tailText = ''
155
+
156
+ words = sent.getWords()
157
+ while wdstart < len(words):
158
+
159
+ word = words[wdstart]
160
+ nextstart = sent.sentstr.find(word, wordsoffset)
161
+
162
+ if nextstart == -1:
163
+ # may not find word, due to relpacement
164
+ print >> sys.stderr, "Warning in building conll format: can not find word"
165
+ print >> sys.stderr, word.encode('utf-8')
166
+ wordsoffset += 1
167
+ else:
168
+ wordsoffset = nextstart
169
+
170
+ if wordsoffset >= m['start_off']-sentoffset:
171
+ break
172
+ elif wordsoffset + len(word) > m['start_off']-sentoffset:
173
+ # annotation starts at the middle of a word
174
+ startInWord = 1
175
+ headText = sent.sentstr[wordsoffset: m['start_off']-sentoffset]
176
+ break
177
+
178
+ wordsoffset += len(word)
179
+ wdstart += 1
180
+
181
+ if wdstart == len(words):
182
+ print >> sys.stderr, 'Warning in building conll format: start_off overflow'
183
+ print >> sys.stderr, m, sent.sentstr.encode('utf-8')
184
+ continue
185
+
186
+
187
+ wdend = wdstart
188
+ while wdend < len(words):
189
+
190
+ word = words[wdend]
191
+
192
+ nextstart = sent.sentstr.find(word, wordsoffset)
193
+
194
+ if nextstart == -1:
195
+ print >> sys.stderr, "Warning in building conll format: can not find word"
196
+ print >> sys.stderr, word.encode('utf-8')
197
+ wordsoffset += 1
198
+ else:
199
+ wordsoffset = nextstart
200
+
201
+ if wordsoffset >= m['end_off']-sentoffset:
202
+ # annotation ends at the middle of a word
203
+ if wordsoffset - len(words[wdend-1]) - 1 < m['end_off']-sentoffset:
204
+ endInWord = 1
205
+ tailText = sent.sentstr[m['end_off']-sentoffset : wordsoffset].strip()
206
+ break
207
+
208
+ wordsoffset += len(word)
209
+ wdend += 1
210
+
211
+
212
+ correctionTokenizedStr = self.tokenizeCorrectionStr(headText + m['correction'] + tailText, wdstart, wdend, words)
213
+
214
+ #Shrink the correction string, wdstart, wdend
215
+ correctionTokenizedStr, wdstart, wdend = self.shrinkCorrectionStr(correctionTokenizedStr, wdstart, wdend, words)
216
+ if wdstart == wdend and len(correctionTokenizedStr) == 0:
217
+ continue
218
+
219
+ # build annotation string for .conll.ann file
220
+ annotationStr = '<MISTAKE '
221
+ annotationStr += 'nid="' + table[0][0] + '" ' #nid
222
+ annotationStr += 'pid="' + str(table[0][1]) + '" ' #start_par
223
+ annotationStr += 'sid="' + str(sentid) + '" ' #sentence id
224
+ annotationStr += 'start_token="' + str(wdstart) + '" ' #start_token
225
+ annotationStr += 'end_token="' + str(wdend) + '">\n' #end_token
226
+ annotationStr += '<TYPE>' + m['type'] + '</TYPE>\n'
227
+ annotationStr += '<CORRECTION>' + correctionTokenizedStr + '</CORRECTION>\n'
228
+ annotationStr += '</MISTAKE>\n'
229
+
230
+ annotationList.append(annotationStr)
231
+
232
+ # build annotation string for .conll.m2 file
233
+ m2AnnotationStr = 'A '
234
+ m2AnnotationStr += str(wdstart) + ' '
235
+ m2AnnotationStr += str(wdend) + '|||'
236
+ m2AnnotationStr += m['type'] + '|||'
237
+ m2AnnotationStr += correctionTokenizedStr.replace('\n', '') + '|||'
238
+ m2AnnotationStr += 'REQUIRED|||-NONE-|||0\n'
239
+
240
+ m2AnnotationList.append(m2AnnotationStr)
241
+
242
+
243
+
244
+ # write .conll file
245
+ for row in table:
246
+ output = ''
247
+ for record in row:
248
+ if type(record) == type(1):
249
+ output = output + str(record) + '\t'
250
+ else:
251
+ output = output + record + '\t'
252
+ fcolumn.write((output.strip() + '\n').encode('utf-8'))
253
+ fcolumn.write(('\n').encode('utf-8'))
254
+
255
+ # write .conll.ann file
256
+ if len(annotationList) != 0:
257
+ annotationSent = '<ANNOTATION>\n' + ''.join(annotationList) + '</ANNOTATION>\n'
258
+ fannotation.write((annotationSent + '\n').encode('utf-8'))
259
+
260
+ # write .conll.m2 file
261
+ m2AnnotationSent = 'S ' + tokenizedSentStr + '\n'
262
+ m2AnnotationSent += ''.join(m2AnnotationList) + '\n'
263
+ fm2.write(m2AnnotationSent.encode('utf-8'))
264
+
265
+ fcolumn.close()
266
+ fannotation.close()
267
+ fm2.close()
268
+
269
+
270
+     def tokenizeCorrectionStr(self, correctionStr, wdstart, wdend, words):
+
+         correctionTokenizedStr = ''
+         pseudoSent = correctionStr
+
+         # glue the neighbouring original tokens around the correction so that
+         # the tokenizer sees it in context
+         if wdstart != 0:
+             pseudoSent = words[wdstart-1] + ' ' + pseudoSent
+
+         if wdend < len(words) - 1:
+             pseudoSent = pseudoSent + ' ' + words[wdend]
+         elif wdend == len(words) - 1:
+             pseudoSent = pseudoSent + words[wdend]
+
+         pseudoWordsList = []
+         sentList = self.sentenceTokenizer.tokenize(pseudoSent)
+         for sent in sentList:
+             pseudoWordsList += word_tokenize(sent)
+
+         # locate and drop the left context token from the tokenized result
+         start = 0
+         if wdstart != 0:
+             s = ''
+             for i in xrange(len(pseudoWordsList)):
+                 s += pseudoWordsList[i]
+                 if s == words[wdstart-1]:
+                     start = i + 1
+                     break
+             if start == 0:
+                 print >> sys.stderr, 'Cannot find words[wdstart-1]'
+         else:
+             start = 0
+
+         # locate and drop the right context token from the tokenized result
+         end = len(pseudoWordsList)
+         if wdend != len(words):
+             s = ''
+             for i in xrange(len(pseudoWordsList)):
+                 s = pseudoWordsList[len(pseudoWordsList) - i - 1] + s
+                 if s == words[wdend]:
+                     end = len(pseudoWordsList) - i - 1
+                     break
+             if end == len(pseudoWordsList):
+                 print >> sys.stderr, 'Cannot find words[wdend]'
+         else:
+             end = len(pseudoWordsList)
+
+         correctionTokenizedStr = ' '.join(pseudoWordsList[start:end])
+
+         return correctionTokenizedStr
+
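+     # Editorial sketch of tokenizeCorrectionStr (illustrative values only,
+     # not part of the original script): for words = ['He', 'go', 'to',
+     # 'school', '.'], wdstart=1, wdend=2 and correction 'goes', the
+     # pseudo-sentence is 'He goes to'; tokenizing it and stripping the
+     # context tokens 'He' and 'to' yields the tokenized correction 'goes'.
+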
+     def shrinkCorrectionStr(self, correctionTokenizedStr, wdstart, wdend, words):
+
+         correctionWords = correctionTokenizedStr.split(' ')
+         originalWords = words[wdstart: wdend]
+         wdstartNew = wdstart
+         wdendNew = wdend
+         cstart = 0
+         cend = len(correctionWords)
+
+         # trim tokens shared at the beginning of the correction and the original
+         i = 0
+         while i < len(originalWords) and i < len(correctionWords):
+             if correctionWords[i] == originalWords[i]:
+                 i += 1
+                 wdstartNew = i + wdstart
+                 cstart = i
+             else:
+                 break
+
+         # trim tokens shared at the end of the correction and the original
+         i = 1
+         while i <= len(originalWords) - cstart and i <= len(correctionWords) - cstart:
+             if correctionWords[len(correctionWords)-i] == originalWords[len(originalWords)-i]:
+                 wdendNew = wdend - i
+                 cend = len(correctionWords) - i
+                 i += 1
+             else:
+                 break
+
+         return ' '.join(correctionWords[cstart:cend]), wdstartNew, wdendNew
+
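+     # Editorial sketch of shrinkCorrectionStr (hypothetical values): with
+     # original tokens ['in', 'the', 'morning'] at wdstart=3, wdend=6 and
+     # tokenized correction 'in the evening', the shared prefix 'in the' is
+     # trimmed, giving the minimal edit 'evening' with wdstart=5, wdend=6.
+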
+ def usage_debug():
+
+     u = '\nUsage: python preprocess.py options \n\n'
+     u += '-g sgmlFileName -d useDumpedFile\n'
+     u += '   generate sentence features and dump the results \n'
+     u += '   sgmlFileName: the nucle sgml files \n'
+     u += '   useDumpedFile = 0, don\'t use dumped files, will parse nucle sgml file (Default) \n'
+     u += '   useDumpedFile = 1, reuse previous dumped parse files \n\n'
+     u += '-c conllFileName annotationFileName m2FileName \n'
+     u += '   generate conllFile, annotationFile, m2File \n\n'
+     u += '-l sgmlFileName conllFileName annotationFileName m2FileName \n'
+     u += '   generate conllFile, annotationFile, m2File from sgmlFile, without parser info.\n'
+     print u
+
+ def usage_release():
+     u = '\nUsage: python preprocess.py OPTIONS sgmlFileName conllFileName annotationFileName m2FileName \n\n'
+     u += '-o  generate conllFile, annotationFile, m2File from sgmlFile, with parser info.\n'
+     u += '-l  generate conllFile, annotationFile, m2File from sgmlFile, without parser info.\n'
+     print u
+
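+ # Editorial sketch of a typical release-mode invocation (file names are
+ # examples only; the .m2 output name is an assumption):
+ #
+ #   python preprocess.py -o data/nucle3.2.sgml \
+ #       conll14st-preprocessed.conll \
+ #       conll14st-preprocessed.conll.ann \
+ #       conll14st-preprocessed.conll.m2
+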
+ if __name__ == '__main__':
+
+     ppr = PreProcessor()
+     debug = False
+     try:
+         if debug:
+             opts, args = getopt.getopt(sys.argv[1:], 'g:d:c:l:h')
+         else:
+             opts, args = getopt.getopt(sys.argv[1:], 'l:o:h')
+     except getopt.GetoptError:
+         if debug:
+             usage_debug()
+         else:
+             usage_release()
+         sys.exit(2)
+
+     option = {}
+     option['-g'] = 0
+     option['-c'] = 0
+     option['-l'] = 0
+     option['-o'] = 0
+     option['useDumpedFile'] = 0
+     option['sgmlFileName'] = None
+     option['conllFileName'] = None
+     option['annotationFileName'] = None
+     option['m2FileName'] = None
+
+     for opt, arg in opts:
+         if opt == '-g':
+             if not os.path.isfile(arg):
+                 print >> sys.stderr, 'cannot find sgml file'
+                 sys.exit(2)
+             else:
+                 option['sgmlFileName'] = arg
+                 option['-g'] = 1
+
+         elif opt == '-d':
+             if arg not in ('1', '0'):
+                 print >> sys.stderr, '-d option should be 0 or 1'
+                 sys.exit(2)
+             else:
+                 option['useDumpedFile'] = int(arg)
+
+         elif opt == '-c':
+             if len(args) != 2:
+                 print >> sys.stderr, '-c option needs 3 args'
+                 sys.exit(2)
+             else:
+                 option['conllFileName'] = arg
+                 option['annotationFileName'] = args[0]
+                 option['m2FileName'] = args[1]
+                 option['-c'] = 1
+
+         elif opt == '-l':
+             if len(args) != 3:
+                 print >> sys.stderr, '-l option needs 4 args'
+                 sys.exit(2)
+             else:
+                 if not os.path.isfile(arg):
+                     print >> sys.stderr, 'cannot find sgml file'
+                     sys.exit(2)
+                 else:
+                     option['sgmlFileName'] = arg
+
+                 option['conllFileName'] = args[0]
+                 option['annotationFileName'] = args[1]
+                 option['m2FileName'] = args[2]
+                 option['-l'] = 1
+
+         elif opt == '-o':
+             if len(args) != 3:
+                 print >> sys.stderr, '-o option needs 4 args'
+                 sys.exit(2)
+             else:
+                 if not os.path.isfile(arg):
+                     print >> sys.stderr, 'cannot find sgml file'
+                     sys.exit(2)
+                 else:
+                     option['sgmlFileName'] = arg
+
+                 option['conllFileName'] = args[0]
+                 option['annotationFileName'] = args[1]
+                 option['m2FileName'] = args[2]
+                 option['useDumpedFile'] = 0
+                 option['-o'] = 1
+
+         elif opt == '-h':
+             if debug:
+                 usage_debug()
+             else:
+                 usage_release()
+             sys.exit()
+
+     # exactly one of -g, -c, -l, -o must be given
+     if option['-g'] + option['-c'] + option['-l'] + option['-o'] > 1:
+         print >> sys.stderr, 'only one option among -g, -c, -l, -o is allowed'
+         sys.exit(2)
+     elif option['-g'] + option['-c'] + option['-l'] + option['-o'] == 0:
+         print >> sys.stderr, 'no option given'
+         sys.exit(2)
+
+     if option['-g'] == 1:
+         # parse the sgml file and generate/dump sentence features
+         docs = ppr.readNUCLE(option['sgmlFileName'])
+         ppr.featureGeneration(docs, option['useDumpedFile'])
+
+     elif option['-c'] == 1:
+         # build the output files from previously dumped features
+         if not os.path.isfile(ppr.docsDumpedFileName):
+             print >> sys.stderr, '-c option needs dumped \'docs\' file. Please use -g option first.'
+             sys.exit(2)
+         f = open(ppr.docsDumpedFileName, 'r')
+         docs = pickle.load(f)
+         f.close()
+
+         ppr.conllFileGeneration(docs, option['conllFileName'], option['annotationFileName'], option['m2FileName'])
+
+     elif option['-l'] == 1:
+         # build the output files directly from the sgml file, without parser info
+         docs = ppr.sentenceSplit(ppr.readNUCLE(option['sgmlFileName']))
+         ppr.conllFileGeneration(docs, option['conllFileName'], option['annotationFileName'], option['m2FileName'])
+
+     elif option['-o'] == 1:
+         # full pipeline: parse, generate features, build the output files,
+         # then remove the intermediate dump files
+         docs = ppr.readNUCLE(option['sgmlFileName'])
+         docs = ppr.featureGeneration(docs, 0)
+         ppr.conllFileGeneration(docs, option['conllFileName'], option['annotationFileName'], option['m2FileName'])
+
+         os.remove(ppr.sentenceDumpedFile)
+         os.remove(ppr.docsDumpedFileName)
+         os.remove(ppr.parsingDumpedFileName)