NUCLE Release 3.3
24 Jan 2019
This README file describes the NUS Corpus of Learner English (NUCLE).
It was collected in a collaboration project between the National
University of Singapore (NUS) Natural Language Processing (NLP) Group
led by Prof. Hwee Tou Ng and the NUS Centre for English Language
Communication (CELC) led by Prof. Siew Mei Wu. The work was carried
out as part of the PhD thesis research of Daniel Dahlmeier at the NUS
NLP Group.
The corpus is distributed under the standard NUS licensing agreement
available when downloading the corpus. Any questions regarding NUCLE
should be directed to Hwee Tou Ng at: nght@comp.nus.edu.sg
If you are using the NUCLE corpus in your work, please include a
citation of the following paper:
Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu (2013). Building a
Large Annotated Corpus of Learner English: The NUS Corpus of Learner
English. Proceedings of the Eighth Workshop on Innovative Use of NLP
for Building Educational Applications (BEA 2013). (pp. 22 --
31). Atlanta, Georgia, USA.
1. About
========
NUCLE is a corpus of learner English. It consists of about 1,400
essays written by university students at the National University of
Singapore on a wide range of topics, such as environmental pollution
and healthcare. It contains over one million words, all of which are
annotated with error categories and corrections. All annotations were
performed by professional English instructors at the NUS CELC.
2. Data Format
==============
The corpus is distributed in a simple SGML format. All annotations
come in a "stand-off" format. The start position and end position of
an annotation are given by paragraph and character offsets.
Paragraphs are enclosed in <P>...</P> tags. Paragraphs and characters
are counted starting from zero. Each annotation includes the following
fields: the error category, the correction, and optionally a
comment. Replacing the original text at the given location with the
correction should fix the grammatical error.
Example:
<DOC nid="840">
<TEXT>
<P>
Engineering design process can be defined as a process ...
</P>
<P>
Firstly, engineering design ...
</P>
...
</TEXT>
<ANNOTATION teacher_id="173">
<MISTAKE start_par="0" start_off="0" end_par="0" end_off="26">
<TYPE>ArtOrDet</TYPE>
<CORRECTION>The engineering design process</CORRECTION>
</MISTAKE>
...
</ANNOTATION>
</DOC>
<DOC nid="862">
...
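The stand-off offsets above can be resolved with a few lines of code. Below is a minimal sketch, assuming a well-formed single-document snippet and an annotation whose span falls within one paragraph (start_par equals end_par); the document string and variable names are our own, not part of the corpus tools:

```python
# A minimal sketch of reading one stand-off annotation and applying its
# correction. Tag and attribute names follow the example above; the
# snippet assumes start_par == end_par for simplicity.
import xml.etree.ElementTree as ET

doc_sgml = """<DOC nid="840">
<TEXT>
<P>Engineering design process can be defined as a process ...</P>
<P>Firstly, engineering design ...</P>
</TEXT>
<ANNOTATION teacher_id="173">
<MISTAKE start_par="0" start_off="0" end_par="0" end_off="26">
<TYPE>ArtOrDet</TYPE>
<CORRECTION>The engineering design process</CORRECTION>
</MISTAKE>
</ANNOTATION>
</DOC>"""

doc = ET.fromstring(doc_sgml)
paragraphs = [p.text for p in doc.find("TEXT").findall("P")]

for mistake in doc.find("ANNOTATION").findall("MISTAKE"):
    par = int(mistake.get("start_par"))    # paragraphs are counted from 0
    start = int(mistake.get("start_off"))  # character offsets from 0
    end = int(mistake.get("end_off"))
    correction = mistake.find("CORRECTION").text or ""
    original = paragraphs[par][start:end]
    corrected = paragraphs[par][:start] + correction + paragraphs[par][end:]
    print(original)   # the erroneous span: "Engineering design process"
    print(corrected)  # paragraph with the correction applied
```

Note that real NUCLE files are SGML rather than strict XML, so a production reader may need a more tolerant parser.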
Below is a complete list of the error categories in NUCLE release 2.0:
ERROR TAG ERROR CATEGORY
---------------------------
Vt Verb tense
Vm Verb modal
V0 Missing verb
Vform Verb form
SVA Subject-verb-agreement
ArtOrDet Article or Determiner
Nn Noun number
Npos Noun possessive
Pform Pronoun form
Pref Pronoun reference
Wcip Wrong collocation/idiom/preposition
Wa Acronyms
Wform Word form
Wtone Tone
Srun Run-ons, comma splice
Smod Dangling modifier
Spar Parallelism
Sfrag Fragment
Ssub Subordinate clause
WOinc Incorrect sentence form
WOadv Adverb/adjective position
Trans Link word/phrases
Mec Punctuation, capitalization, spelling, typos
Rloc Local redundancy
Cit Citation
Others Other errors
Um Unclear meaning (cannot be corrected)
3. Updates included in version 2.1
==================================
The major change made in version 2.1 is to map the error categories
Wcip and Rloc to Prep, Wci, ArtOrDet, and Rloc-.
In the original NUCLE corpus, there is no explicit preposition
error category. Instead, preposition errors are part of the Wcip
(Wrong collocation/idiom/preposition) and Rloc (local redundancy)
error categories. In addition, redundant article or determiner errors
are part of the Rloc error category.
To facilitate the detection and correction of preposition errors and
article/determiner errors, we perform mapping of error categories in
the original NUCLE corpus. The mapping relies on POS tags, constituent
parse trees, and error annotations at the token level.
(a) Conditions to change from the error category Wcip or Rloc to Prep:
This applies to replacing a preposition with another preposition, or
deleting a preposition. The string to be replaced is one word w with
POS tag IN or TO, the parent of w is a PP in the constituent parse
tree, and the replacement is either a preposition or the empty string.
(b) Conditions to change from the error category Wcip to Prep:
This applies to inserting a preposition. The replacement is a
preposition (one word only) and the immediately following word is
tagged as VBG or is the first word of a noun phrase (NP).
(c) Conditions to change from the error category Rloc to ArtOrDet:
The single word has POS tag DT and the replacement is the empty
string.
The remaining unaffected "Wcip" errors are assigned the new error
category "Wci" and the remaining unaffected "Rloc" errors are assigned
the new error category "Rloc-".
List of 36 Prepositions:
about along among around as at beside besides between by down during
except for from in inside into of off on onto outside over through to
toward towards under underneath until up upon with within without
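The remapping rules (a)-(c) can be sketched as follows. This is an illustrative reimplementation under assumptions, not the original conversion script: the function signature, its inputs (token-level POS tags, the label of the parent node in the parse tree, and flags describing the following word), and all helper names are hypothetical.

```python
# Illustrative sketch of the Wcip/Rloc remapping rules (a)-(c).
# POS tags and parse information would come from an external parser.

PREPOSITIONS = set("""about along among around as at beside besides between by
down during except for from in inside into of off on onto outside over through
to toward towards under underneath until up upon with within without""".split())

def remap(tag, source_tokens, source_pos, parent_label,
          replacement, next_pos=None, next_starts_np=False):
    """Return the new error category for one annotated edit."""
    repl_tokens = replacement.split()
    repl_is_prep = (len(repl_tokens) == 1
                    and repl_tokens[0].lower() in PREPOSITIONS)

    # (a) replace or delete a preposition -> Prep
    if (tag in ("Wcip", "Rloc")
            and len(source_tokens) == 1
            and source_pos[0] in ("IN", "TO")
            and parent_label == "PP"
            and (repl_is_prep or replacement == "")):
        return "Prep"

    # (b) insert a preposition -> Prep
    if (tag == "Wcip" and source_tokens == []
            and repl_is_prep
            and (next_pos == "VBG" or next_starts_np)):
        return "Prep"

    # (c) delete a lone determiner -> ArtOrDet
    if (tag == "Rloc" and len(source_tokens) == 1
            and source_pos[0] == "DT" and replacement == ""):
        return "ArtOrDet"

    # remaining Wcip -> Wci, remaining Rloc -> Rloc-
    return {"Wcip": "Wci", "Rloc": "Rloc-"}.get(tag, tag)

# e.g. deleting a redundant "the" annotated as Rloc becomes ArtOrDet:
print(remap("Rloc", ["the"], ["DT"], "NP", ""))  # -> ArtOrDet
```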
Below is a complete list of 28 error categories in NUCLE release 2.1:
ERROR TAG ERROR CATEGORY
---------------------------
Vt Verb tense
Vm Verb modal
V0 Missing verb
Vform Verb form
SVA Subject-verb-agreement
ArtOrDet Article or Determiner
Nn Noun number
Npos Noun possessive
Pform Pronoun form
Pref Pronoun reference
Prep Preposition
Wci Wrong collocation/idiom
Wa Acronyms
Wform Word form
Wtone Tone
Srun Run-ons, comma splice
Smod Dangling modifier
Spar Parallelism
Sfrag Fragment
Ssub Subordinate clause
WOinc Incorrect sentence form
WOadv Adverb/adjective position
Trans Link word/phrases
Mec Punctuation, capitalization, spelling, typos
Rloc- Local redundancy
Cit Citation
Others Other errors
Um Unclear meaning (cannot be corrected)
4. Updates included in version 2.2
==================================
- Fixed a bug in expanding an error annotation that covers part of a
token to the full token.
- Other miscellaneous corrections were made.
5. Updates included in version 2.3
==================================
- Fixed a bug involving tokenization of punctuation symbols in the
correction string.
- Fixed the tokenization example in the README file of the M^2 scorer
to reflect the actual tokenization used, and removed irrelevant code
from the scorer package.
6. Updates included in version 3.0
==================================
- Resolved overlapping annotations in the NUCLE corpus to make them
non-overlapping.
- Corrected some minor mistakes in error annotations.
7. Updates included in version 3.1
==================================
- Removed duplicate annotations in the NUCLE corpus that have the same
span and correction string but different error types, keeping only one
of those annotations. This fix affects only 0.1% of all annotations.
- Fixed end-of-paragraph annotations so that the end offset of such
annotations is the last character position in the paragraph. This fix
affects only 0.7% of all annotations.
- Corrected some minor mistakes in error annotations.
- Included the CoNLL-2013 test data, with all the known problems
described above fixed. Participating teams in the CoNLL-2014 shared
task can make use of the CoNLL-2013 test data in training and
developing their systems if they wish to do so.
- Fixed a minor bug in the M^2 scorer that caused duplicate insertion
edits to receive high scores.
8. Updates included in version 3.2
==================================
- Fixed the preprocessing script so that a gold edit that inserts an
empty string is not included in the token-level gold edit and scorer
answer files.
- Removed one edit that inserted an empty string from the CoNLL-2014
test data. Also removed such instances from the NUCLE training data.
- Fixed a bug in the M^2 scorer arising from scoring against gold edits
from multiple annotators. Specifically, the bug sometimes caused
incorrect scores to be reported when scoring against the gold edits
of subsequent annotators (other than the first annotator).
- Fixed a bug in the M^2 scorer that caused erroneous scores to be
reported when dealing with insertion edits followed by deletion edits
(or vice versa).
9. Updates included in version 3.3
==================================
- Added a subdirectory "bea2019" to contain the ERRANT typed NUCLE M2
file for the BEA 2019 shared task on grammatical error correction.