{
"paper_id": "P07-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:51:31.486513Z"
},
"title": "Adding Noun Phrase Structure to the Penn Treebank",
"authors": [
{
"first": "David",
"middle": [],
"last": "Vadas",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": "dvadas1@it.usyd.edu.au"
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sydney NSW 2006",
"location": {
"country": "Australia"
}
},
"email": "james@it.usyd.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications.",
"pdf_parse": {
"paper_id": "P07-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "The Penn Treebank does not annotate within base noun phrases (NPs), committing only to flat structures that ignore the complexity of English NPs. This means that tools trained on Treebank data cannot learn the correct internal structure of NPs. This paper details the process of adding gold-standard bracketing within each noun phrase in the Penn Treebank. We then examine the consistency and reliability of our annotations. Finally, we use this resource to determine NP structure using several statistical approaches, thus demonstrating the utility of the corpus. This adds detail to the Penn Treebank that is necessary for many NLP applications.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The Penn Treebank (Marcus et al., 1993) is perhaps the most influential resource in Natural Language Processing (NLP). It is used as a standard training and evaluation corpus in many syntactic analysis tasks, ranging from part of speech (POS) tagging and chunking, to full parsing.",
"cite_spans": [
{
"start": 18,
"end": 39,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unfortunately, the Penn Treebank does not annotate the internal structure of base noun phrases, instead leaving them flat. This significantly simplified and sped up the manual annotation process. Therefore, any system trained on Penn Treebank data will be unable to model the syntactic and semantic structure inside base-NPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The following NP is an example of the flat structure of base-NPs within the Penn Treebank:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(NP (NNP Air) (NNP Force) (NN contract))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Air Force is a specific entity and should form a separate constituent underneath the NP, as in our new annotation scheme:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(NP (NML (NNP Air) (NNP Force)) (NN contract))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use NML to specify that Air Force together is a nominal modifier of contract. Adding this annotation better represents the true syntactic and semantic structure, which will improve the performance of downstream NLP systems. Our main contribution is a gold-standard labelled bracketing for every ambiguous noun phrase in the Penn Treebank. We describe the annotation guidelines and process, including the use of named entity data to improve annotation quality. We check the correctness of the corpus by measuring interannotator agreement, by reannotating the first section, and by comparing against the sub-NP structure in DepBank (King et al., 2003) .",
"cite_spans": [
{
"start": 633,
"end": 652,
"text": "(King et al., 2003)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also give an analysis of our extended Treebank, quantifying how much structure we have added, and how it is distributed across NPs. Finally, we test the utility of the extended Treebank for training statistical models on two tasks: NP bracketing (Lauer, 1995; Nakov and Hearst, 2005) and full parsing (Collins, 1999) .",
"cite_spans": [
{
"start": 249,
"end": 262,
"text": "(Lauer, 1995;",
"ref_id": "BIBREF11"
},
{
"start": 263,
"end": 286,
"text": "Nakov and Hearst, 2005)",
"ref_id": "BIBREF14"
},
{
"start": 304,
"end": 319,
"text": "(Collins, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This new resource will allow any system or annotated corpus developed from the Penn Treebank to represent noun phrase structure more accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many approaches to identifying base noun phrases have been explored as part of chunking (Ramshaw and Marcus, 1995) , but determining sub-NP structure is rarely addressed. We could use multi-word expressions (MWEs) to identify some structure. For example, knowing stock market is a MWE may help bracket stock market prices correctly, and Named Entities (NEs) can be used the same way. However, this only resolves NPs dominating MWEs or NEs.",
"cite_spans": [
{
"start": 101,
"end": 114,
"text": "Marcus, 1995)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Understanding base-NP structure is important, since otherwise parsers will propose nonsensical noun phrases like Force contract by default and pass them onto downstream components. For example, Question Answering (QA) systems need to supply an NP as the answer to a factoid question, often using a parser to identify candidate NPs to return to the user. If the parser never generates the correct sub-NP structure, then the system may return a nonsensical answer even though the correct dominating noun phrase has been found.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Base-NP structure is also important for annotated data derived from the Penn Treebank. For instance, CCGbank (Hockenmaier, 2003) was created by semi-automatically converting the Treebank phrase structure to Combinatory Categorial Grammar (CCG) (Steedman, 2000) derivations. Since CCG derivations are binary branching, they cannot directly represent the flat structure of the Penn Treebank base-NPs.",
"cite_spans": [
{
"start": 109,
"end": 128,
"text": "(Hockenmaier, 2003)",
"ref_id": "BIBREF7"
},
{
"start": 244,
"end": 260,
"text": "(Steedman, 2000)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "Without the correct bracketing in the Treebank, strictly right-branching trees were created for all base-NPs. This has an unwelcome effect when conjunctions occur within an NP (Figure 1). An additional grammar rule is needed just to get a parse, but it is still not correct (Hockenmaier, 2003, p. 64). The awkward conversion results in bracketing (a), which should be (b): (a) (consumer ((electronics) and (appliances (retailing chain)))) (b) ((((consumer electronics) and appliances) retailing) chain)",
"cite_spans": [
{
"start": 275,
"end": 301,
"text": "(Hockenmaier, 2003, p. 64)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 176,
"end": 185,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "We have previously experimented with using NEs to improve parsing performance on CCGbank. Due to the mis-alignment of NEs and right-branching NPs, the increase in performance was negligible.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "2"
},
{
"text": "The NP bracketing task has often been posed in terms of choosing between the left or right branching structure of three word noun compounds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "(a) (world (oil prices)) -Right-branching (b) ((crude oil) prices) -Left-branching",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "Most approaches to the problem use unsupervised methods, based on competing association strength between two of the words in the compound (Marcus, 1980, p. 253). There are two possible models to choose from: dependency or adjacency. The dependency model compares the association between words 1-2 to words 1-3, while the adjacency model compares words 1-2 to words 2-3. Lauer (1995) has demonstrated superior performance of the dependency model using a test set of 244 (216 unique) noun compounds drawn from Grolier's encyclopedia. This data has been used to evaluate most research since. He uses Roget's thesaurus to smooth words into semantic classes, and then calculates association between classes based on their counts in a \"training set\" also drawn from Grolier's. He achieves 80.7% accuracy using POS tags to identify bigrams in the training set. Lapata and Keller (2004) derive estimates from web counts, and only compare at a lexical level, achieving 78.7% accuracy. Nakov and Hearst (2005) also use web counts, but incorporate additional counts from several variations on simple bigram queries, including queries for the pairs of words concatenated or joined by a hyphen. This results in an impressive 89.3% accuracy.",
"cite_spans": [
{
"start": 138,
"end": 160,
"text": "(Marcus, 1980, p. 253)",
"ref_id": null
},
{
"start": 371,
"end": 383,
"text": "Lauer (1995)",
"ref_id": "BIBREF11"
},
{
"start": 856,
"end": 880,
"text": "Lapata and Keller (2004)",
"ref_id": "BIBREF10"
},
{
"start": 978,
"end": 1001,
"text": "Nakov and Hearst (2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "There have also been attempts to solve this task using supervised methods, even though the lack of gold-standard data makes this difficult. Girju et al. (2005) draw a training set from raw WSJ text and use it to train a decision tree classifier, achieving 73.1% accuracy. When they shuffled their data with Lauer's to create a new test and training split, their accuracy increased to 83.1%, which may be a result of the ~10% duplication in Lauer's test set. We have created a new NP bracketing data set from our extended Treebank by extracting all rightmost three noun sequences from base-NPs. Our initial experiments are presented in Section 6.1.",
"cite_spans": [
{
"start": 140,
"end": 159,
"text": "Girju et al. (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Background",
"sec_num": "3"
},
{
"text": "According to Marcus et al. (1993), asking annotators to mark up base-NP structure significantly reduced annotation speed, and for this reason base-NPs were left flat. The bracketing guidelines (Bies et al., 1995) also mention the considerable difficulty of identifying the correct scope for nominal modifiers. We found, however, that while there are certainly difficult cases, the vast majority are quite simple and can be annotated reliably. Our annotation philosophy can be summarised as:",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Marcus et al. (1993)",
"ref_id": "BIBREF13"
},
{
"start": 193,
"end": 212,
"text": "(Bies et al., 1995)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "1. most cases are easy and fit a common pattern; 2. prefer the implicit right-branching structure for difficult decisions (finance jargon was a common source of these); 3. mark very difficult-to-bracket NPs and discuss them with other annotators later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "During this process we identified numerous cases that require a more sophisticated annotation scheme. There are genuine flat cases, primarily names like John A. Smith, that we would like to distinguish from implicitly right-branching NPs in the next version of the corpus. Although our scheme is still developing, we believe that the current annotation is already useful for statistical modelling, and we demonstrate this empirically in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Creation",
"sec_num": "4"
},
{
"text": "Our annotation guidelines 1 are based on those developed for annotating full sub-NP structure in the biomedical domain (Kulick et al., 2004) . The annotation guidelines for this biomedical corpus (an addendum to the Penn Treebank guidelines) introduce the use of NML nodes to mark internal NP structure.",
"cite_spans": [
{
"start": 119,
"end": 140,
"text": "(Kulick et al., 2004)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "1 The guidelines and corpus are available on our webpages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "In summary, our guidelines leave right-branching structures untouched, and insert labelled brackets around left-branching structures. The label of the newly created constituent is NML or JJP, depending on whether its head is a noun or an adjective. We also chose not to alter the existing Penn Treebank annotation, even though the annotators found many errors during the annotation process. We wanted to keep our extended Treebank as similar to the original as possible, so that they remain comparable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "We developed a bracketing tool, which identifies ambiguous NPs and presents them to the user for disambiguation. An ambiguous NP is any (possibly non-base) NP with three or more contiguous children that are either single words or another NP. Certain common patterns, such as three words beginning with a determiner, are unambiguous, and were filtered out. The annotator is also shown the entire sentence surrounding the ambiguous NP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "The bracketing tool often suggests a bracketing using rules based mostly on named entity tags, which are drawn from the BBN corpus (Weischedel and Brunstein, 2005) . For example, since Air Force is given ORG tags, the tool suggests that they be bracketed together first. Other suggestions come from previous bracketings of the same words, which helps to keep the annotator consistent.",
"cite_spans": [
{
"start": 131,
"end": 163,
"text": "(Weischedel and Brunstein, 2005)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "Two post processes were carried out to increase annotation consistency and correctness. 915 difficult NPs were marked by the annotator and were then discussed with two other experts. Secondly, certain phrases that occurred numerous times and were non-trivial to bracket, e.g. London Interbank Offered Rate, were identified. An extra pass was made through the corpus, ensuring that every instance of these phrases was bracketed consistently.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Process",
"sec_num": "4.1"
},
{
"text": "Annotation initially took over 9 hours per section of the Treebank. However, with practice this was reduced to about 3 hours per section. Each section contains around 2500 ambiguous NPs, i.e. annotating took approximately 5 seconds per NP. Most NPs require no bracketing, or fit into a standard pattern which the annotator soon becomes accustomed to, hence the task can be performed quite quickly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Annotation Time",
"sec_num": "4.2"
},
{
"text": "For the original bracketing of the Treebank, annotators performed at 375-475 words per hour after a few weeks, and increased to about 1000 words per hour after gaining more experience (Marcus et al., 1993). For our annotation process, counting each word in every NP shown, our speed was around 800 words per hour. This figure is not unexpected, as the task was not large enough to get more than a month's experience, and there is less structure to annotate.",
"cite_spans": [
{
"start": 223,
"end": 244,
"text": "(Marcus et al., 1993)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Annotation Time",
"sec_num": "4.2"
},
{
"text": "The annotation was performed by the first author.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "A second Computational Linguistics PhD student also annotated Section 23, allowing inter-annotator agreement, and the reliability of the annotations, to be measured. This also maximised the quality of the section used for parser testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "We measured the proportion of matching brackets and dependencies between annotators, shown in Table 1 , both before and after they discussed cases of disagreement and revised their annotations. The number of dependencies is fixed by the length of the NP, so the dependency precision and recall are the same. Counting matched brackets is a harsher evaluation, as there are many NPs that both annotators agree should have no additional bracketing, which are not taken into account by the metric.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "The disagreements occurred for a small number of repeated instances, such as this case:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "(NP (NNP Goldman) (, ,) (NNP Sachs) (CC &) (NNP Co)) vs. (NP (NML (NNP Goldman) (, ,) (NNP Sachs)) (CC &) (NNP Co))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "The first annotator felt that Goldman , Sachs should form their own NML constituent, while the second annotator did not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "We can also look at exact matching on NPs, where the annotators originally agreed in 2667 of 2908 cases (91.71%), and after revision, in 2864 of 2907 cases (98.52%). These results demonstrate that high agreement rates are achievable for these annotations. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-annotator Agreement",
"sec_num": "5.1"
},
{
"text": "Another approach to measuring annotator reliability is to compare with an independently annotated corpus on the same text. We used the PARC700 Dependency Bank (King et al., 2003) which consists of 700 Section 23 sentences annotated with labelled dependencies. We use the Briscoe and Carroll (2006) version of DepBank, a 560 sentence subset used to evaluate the RASP parser. Some translation is required to compare our brackets to DepBank dependencies. We map the brackets to dependencies by finding the head of the NP, using the Collins (1999) head finding rules, and then creating a dependency between each other child's head and this head. This does not work perfectly, and mismatches occur because of which dependencies DepBank marks explicitly, and how it chooses heads. The errors are investigated manually to determine their cause.",
"cite_spans": [
{
"start": 159,
"end": 178,
"text": "(King et al., 2003)",
"ref_id": "BIBREF8"
},
{
"start": 271,
"end": 297,
"text": "Briscoe and Carroll (2006)",
"ref_id": "BIBREF3"
},
{
"start": 529,
"end": 543,
"text": "Collins (1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "DepBank Agreement",
"sec_num": "5.2"
},
{
"text": "The results are shown in Table 2 , with the number of agreements before manual checking shown in parentheses. Once again the dependency numbers are higher than those at the NP level. Similarly, when we only look at cases where we have inserted some annotations, we are looking at more difficult cases and the score is not as high.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "DepBank Agreement",
"sec_num": "5.2"
},
{
"text": "The results of this analysis are quite positive. Over half of the disagreements that occur (in either measure) are caused by how company names are bracketed. While we have always separated the company name from post-modifiers such as Corp and Inc, DepBank does not in most cases. These results show that consistently and correctly bracketing noun phrase structure is possible, and that interannotator agreement is at an acceptable level.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "DepBank Agreement",
"sec_num": "5.2"
},
{
"text": "Looking at the entire Penn Treebank corpus, the annotation tool finds 60959 ambiguous NPs out of the 432639 NPs in the corpus (14.09%). To compare, we can count the number of existing NP and ADJP nodes found in the NPs that the bracketing tool presents. We find there are 32772 NP children, and 579 ADJP, which are quite similar to the numbers of nodes we have added. From this, we can say that our annotation process has introduced almost as much structural information into NPs as there was in the original Penn Treebank. Table 3 shows the most common POS tag sequences for NP, NML and JJP nodes. An example is given showing typical words that match the POS tags. For NML and JJP, we also show the words bracketed, as they would appear under an NP node.",
"cite_spans": [],
"ref_spans": [
{
"start": 529,
"end": 536,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "We checked the consistency of the annotations by identifying NPs with the same word sequence and checking whether they were always bracketed identically. After the first pass through, there were 360 word sequences with multiple bracketings, which occurred in 1923 NP instances. 489 of these instances differed from the majority case for that sequence, and were probably errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "The annotator had marked certain difficult and commonly repeating NPs. From this we generated a list of phrases, and then made another pass through the corpus, synchronising all instances that contained one of these phrases. After this, only 150 instances differed from the majority case. Inspecting these remaining inconsistencies showed cases like:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "(NP-TMP (NML (NNP Nov.) (CD 15)) (, ,) (CD 1999))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "where we were inconsistent in inserting the NML node because the Penn Treebank sometimes already has the structure annotated under an NP node. Since we do not make changes to existing brackets, we cannot fix these cases. Other inconsistencies are rare, but will be examined and corrected in a future release. The annotator made a second pass over Section 00 to correct changes made after the beginning of the annotation process. Comparing the two passes can give us some idea of how the annotator changed as he grew more practiced at the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "We find that the old and new versions are identical in 88.65% of NPs, with labelled precision, recall and F-score being 97.17%, 76.69% and 85.72% respectively. This tells us that there were many brackets originally missed that were added in the second pass. This is not surprising since the main problem with how Section 00 was annotated originally was that company names were not separated from their post-modifier (such as Corp). Besides this, it suggests that there is not a great deal of difference between an annotator just learning the task, and one who has had a great deal of experience with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Composition and Consistency",
"sec_num": "5.3"
},
{
"text": "We have also evaluated how well the suggestion feature of the annotation tool performs. In particular, we want to determine how useful named entities are in determining the correct bracketing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Suggestions",
"sec_num": "5.4"
},
{
"text": "We ran the tool over the original corpus, following NE-based suggestions where possible. We find that when evaluated against our annotations, the F-score is 50.71%. We need to look more closely at the precision and recall though, as they are quite different. The precision of 93.84% is quite high. However, there are many brackets where the entities do not help at all, and so the recall of this method was only 34.74%. This suggests that an NE feature may help to identify the correct bracketing in one third of cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Named Entity Suggestions",
"sec_num": "5.4"
},
{
"text": "Having bracketed NPs in the Penn Treebank, we now describe our initial experiments on how this additional level of annotation can be exploited.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "The obvious first task to consider is noun phrase bracketing itself. We implement a similar system to Lauer (1995), described in Section 3, and report on results from our own data and Lauer's original set. First, we extracted three word noun sequences from all the ambiguous NPs. If the last three children are nouns, then they become an example in our data set. If there is an NML node containing the first two nouns then it is left-branching, otherwise it is right-branching. Because we are only looking at the right-most part of the NP, we know that we are not extracting any nonsensical items. We also remove all items where the nouns are all part of a named entity to eliminate flat structure cases.",
"cite_spans": [
{
"start": 128,
"end": 140,
"text": "Lauer (1995)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 102,
"end": 109,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP Bracketing Data",
"sec_num": "6.1"
},
{
"text": "Statistics about the new data set and Lauer's data set are given in Table 4 . As can be seen, the Penn Treebank based corpus is significantly larger, and has a more even mix of left and right-branching noun phrases. We also measured the amount of lexical overlap between the two corpora, shown in Table 5 . This displays the percentage of n-grams in Lauer's corpus that are also in our corpus. We can clearly see that the two corpora are quite dissimilar, as even on unigrams barely half are shared.",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 75,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 297,
"end": 304,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP Bracketing Data",
"sec_num": "6.1"
},
{
"text": "With our new data set, we began running experiments similar to those carried out in the literature (Nakov and Hearst, 2005) . We implemented both an adjacency and dependency model, and three different association measures: raw counts, bigram probability, and",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "(Nakov and Hearst, 2005)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": "6.2"
},
{
"text": "\u03c7\u00b2. We draw our counts from a corpus of n-gram counts calculated over 1 trillion words from the web (Brants and Franz, 2006).",
"cite_spans": [
{
"start": 98,
"end": 122,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": null
},
{
"text": "The results from the experiments, on both our and Lauer's data set, are shown in ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": null
},
{
"text": "The \u03c7\u00b2 score performed the worst. The results on the new corpus are even more surprising, with the adjacency model outperforming the dependency model by a wide margin. The \u03c7\u00b2 measure gives the highest accuracy, but still only just outperforms the raw counts. Our analysis shows that the good performance of the adjacency model comes from the large number of named entities in the corpus. When we remove all items that have any word as an entity, the results change, and the dependency model is superior. We also suspect that another cause of the unusual results is the different proportions of left and right-branching NPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": null
},
{
"text": "With a large annotated corpus, we can now run supervised NP bracketing experiments. We present two configurations in Table 7 : training on our corpus and testing on Lauer's set; and performing 10-fold cross validation using our corpus alone.",
"cite_spans": [],
"ref_spans": [
{
"start": 117,
"end": 124,
"text": "Table 7",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": null
},
{
"text": "The feature set we explore encodes the information we used in the unsupervised experiments. Table 7 shows the performance with: all features, followed by the individual features, and finally, after removing individual features. The feature set includes: lexical features for each n-gram in the noun compound; n-gram counts for unigrams, bigrams and trigrams; raw probability and \u03c7\u00b2 association scores for all three bigrams in the compound; and the adjacency and dependency results for all three association measures. We discretised the non-binary features using an implementation of Fayyad and Irani's (1993) algorithm, and classify using MegaM 2.",
"cite_spans": [
{
"start": 617,
"end": 642,
"text": "Fayyad and Irani's (1993)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 92,
"end": 94,
"text": "Ta",
"ref_id": null
},
{
"start": 95,
"end": 102,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "NP Bracketing Results",
"sec_num": null
},
{
"text": "The results on Lauer's set demonstrate that the dependency model performs well by itself but not with the other features. In fact, a better result comes from using every feature except those from the dependency and adjacency models. It is also impressive how good the performance is, considering the large differences between our data set and Lauer's.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4",
"sec_num": null
},
{
"text": "These differences also account for the disparate cross-validation figures. On this data, the lexical features perform the best, which is to be expected given the nature of the corpus. The best model in this case comes from using all the features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a3 \u00a4",
"sec_num": null
},
{
"text": "We can also look at the impact of our new annotations upon full statistical parsing. We use Bikel's implementation (Bikel, 2004) of Collins' parser (Collins, 1999) in order to carry out these experiments, using the non-deficient Collins settings. The numbers we give are labelled bracket precision, recall and F-scores for all sentences. Bikel mentions that base-NPs are treated very differently in Collins' parser, and so it will be interesting to observe the results using our new annotations.",
"cite_spans": [
{
"start": 115,
"end": 128,
"text": "(Bikel, 2004)",
"ref_id": "BIBREF1"
},
{
"start": 148,
"end": 163,
"text": "(Collins, 1999)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Collins Parsing",
"sec_num": "6.3"
},
{
"text": "Firstly, we compare the parser's performance on the original Penn Treebank and the new NML and JJP bracketed version. Table 8 shows that the new brackets make parsing marginally more difficult overall 2 Available at http://www.cs.utah.edu/ hal/megam/ (by about 0.5% in F-score).",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Collins Parsing",
"sec_num": "6.3"
},
{
"text": "The performance on only the new NML and JJP brackets is not very high. This shows the difficulty of correctly bracketing NPs. Conversely, the figures for all brackets except NML and JJP are only a tiny amount less in our extended corpus. This means that performance for other phrases is hardly changed by the new NP brackets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collins Parsing",
"sec_num": "6.3"
},
{
"text": "We also ran an experiment where the new NML and JJP labels were relabelled as NP and ADJP. These are the labels that would be given if NPs were originally bracketed with the rest of the Penn Treebank. This meant the model would not have to discriminate between two different types of noun and adjective structure. The performance, as shown in Table 8, was even lower with this approach, suggesting that the distinction is larger than we anticipated. On the other hand, the precision on NML and JJP constituents was quite high, so the parser is able to identify at least some of the structure very well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Collins Parsing",
"sec_num": "6.3"
},
{
"text": "The work presented in this paper is a first step towards accurate representation of noun phrase structure in NLP corpora. There are several distinctions that our annotation currently ignores that we would like to identify correctly in the future. Firstly, NPs with genuine flat structure are currently treated as implicitly right branching. Secondly, there is still ambiguity in determining the head of a noun phrase. Although Collins' head finding rules work in most NPs, there are cases such as IBM Australia where the head is not the right-most noun. Similarly, apposition is very common in the Penn Treebank, in NPs such as John Smith , IBM president. We would like to be able to identify these multi-head constructs properly, rather than simply treating them as a single entity (or even worse, as two different entities).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Having the correct NP structure also means that we can now represent the true structure in CCGbank, one of the problems we described earlier. Transfer-ring our annotations should be fairly simple, requiring just a few changes to how NPs are treated in the current translation process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "The addition of consistent, gold-standard, noun phrase structure to a large corpus is a significant achievement. We have shown that the these annotations can be created in a feasible time frame with high inter-annotator agreement of 98.52% (measuring exact NP matches). The new brackets cause only a small drop in parsing performance, and no significant decrease on the existing structure. As NEs were useful for suggesting brackets automatically, we intend to incorporate NE information into statistical parsing models in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our annotated corpus can improve the performance of any system that relies on NPs from parsers trained on the Penn Treebank. A Collins' parser trained on our corpus is now able to identify sub-NP brackets, making it of use in other NLP systems. QA systems, for example, will be able to exploit internal NP structure. In the future, we will improve the parser's performance on NML and JJP brackets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "We have provided a significantly larger corpus for analysing NP structure than has ever been made available before. This is integrated within perhaps the most influential corpus in NLP. The large number of systems trained on Penn Treebank data can all benefit from the extended resource we have created.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "We would like to thank Matthew Honnibal, our second annotator, who also helped design the guidelines; Toby Hawker, for implementing the discretiser; Mark Lauer for releasing his data; and the anonymous reviewers for their helpful feedback. This work has been supported by the Australian Research Council under Discovery Projects DP0453131 and DP0665973.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bracketing guidelines for Treebank II style Penn Treebank project",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Ferguson",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Katz",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Macintyre",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Bies, Mark Ferguson, Karen Katz, and Robert MacIntyre. 1995. Bracketing guidelines for Treebank II style Penn Tree- bank project. Technical report, University of Pennsylvania.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "On the Parameter Space of Generative Lexicalized Statistical Parsing Models",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Bikel",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Bikel. 2004. On the Parameter Space of Generative Lexi- calized Statistical Parsing Models. Ph.D. thesis, University of Pennsylvania.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Web 1T 5-gram version 1. Linguistic Data Consortium",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. Web 1T 5-gram version 1. Linguistic Data Consortium.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Poster Session of COLING/ACL-06",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In Proceedings of the Poster Session of COLING/ACL-06. Sydney, Australia.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Nat- ural Language Parsing. Ph.D. thesis, University of Pennsyl- vania.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Multi-interval discretization of continuous-valued attributes for classification learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Usama",
"suffix": ""
},
{
"first": "Keki",
"middle": [
"B"
],
"last": "Fayyad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Irani",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 13th International Joint Conference on Artifical Intelligence (IJCAI-93)",
"volume": "",
"issue": "",
"pages": "1022--1029",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Usama M. Fayyad and Keki B. Irani. 1993. Multi-interval dis- cretization of continuous-valued attributes for classification learning. In Proceedings of the 13th International Joint Con- ference on Artifical Intelligence (IJCAI-93), pages 1022- 1029. Chambery, France.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "On the semantics of noun compounds",
"authors": [
{
"first": "Roxana",
"middle": [],
"last": "Girju",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Marta",
"middle": [],
"last": "Tatu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Antohe",
"suffix": ""
}
],
"year": 2005,
"venue": "Journal of Computer Speech and Language -Special Issue on Multiword Expressions",
"volume": "19",
"issue": "4",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roxana Girju, Dan Moldovan, Marta Tatu, and Daniel Antohe. 2005. On the semantics of noun compounds. Journal of Computer Speech and Language -Special Issue on Multi- word Expressions, 19(4):313-330.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Data and Models for Statistical Parsing with Combinatory Categorial Grammar",
"authors": [
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julia Hockenmaier. 2003. Data and Models for Statistical Pars- ing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The PARC700 dependency bank",
"authors": [
{
"first": "Tracy Holloway",
"middle": [],
"last": "King",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Crouch",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Dalrymple",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"M"
],
"last": "Kaplan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tracy Holloway King, Richard Crouch, Stefan Riezler, Mary Dalrymple, and Ronald M. Kaplan. 2003. The PARC700 dependency bank. In Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03). Budapest, Hungary.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Integrated annotation for biomedical information extraction",
"authors": [
{
"first": "Seth",
"middle": [],
"last": "Kulick",
"suffix": ""
},
{
"first": "Ann",
"middle": [],
"last": "Bies",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Libeman",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Mandel",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Schein",
"suffix": ""
},
{
"first": "Lyle",
"middle": [],
"last": "Ungar",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Seth Kulick, Ann Bies, Mark Libeman, Mark Mandel, Ryan McDonald, Martha Palmer, Andrew Schein, and Lyle Ungar. 2004. Integrated annotation for biomedical information ex- traction. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Associa- tion for Computational Linguistics. Boston.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of NLP tasks",
"authors": [
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "121--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mirella Lapata and Frank Keller. 2004. The web as a base- line: Evaluating the performance of unsupervised web-based models for a range of NLP tasks. In Proceedings of the Hu- man Language Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguis- tics, pages 121-128. Boston.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Corpus statistics meet the compound noun: Some empirical results",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Lauer",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Lauer. 1995. Corpus statistics meet the compound noun: Some empirical results. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Theory of Syntactic Recognition for Natural Language",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus. 1980. A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Beatrice",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Marcus, Beatrice Santorini, and Mary Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Search engine statistics beyond the n-gram: Application to noun compound bracketing",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of CoNLL-2005, Ninth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov and Marti Hearst. 2005. Search engine statistics beyond the n-gram: Application to noun compound brack- eting. In Proceedings of CoNLL-2005, Ninth Conference on Computational Natural Language Learning. Ann Arbor, MI.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Lance",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [
"P"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the Third ACL Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunk- ing using transformation-based learning. In Proceedings of the Third ACL Workshop on Very Large Corpora. Cambridge MA, USA.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cam- bridge, MA.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "BBN pronoun coreference and entity type corpus",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "Ada",
"middle": [],
"last": "Brunstein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ralph Weischedel and Ada Brunstein. 2005. BBN pronoun coreference and entity type corpus. Technical report, Lin- guistic Data Consortium.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "CCG derivation fromHockenmaier (2003)",
"num": null
},
"TABREF2": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "",
"html": null
},
"TABREF4": {
"num": null,
"type_str": "table",
"content": "<table/>",
"text": "Common POS tag sequences these (37.49%) had brackets inserted by the annotator. This is as we expect, as the majority of NPs are right-branching. Of the brackets added, 22368 were",
"html": null
},
"TABREF6": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>N-GRAM</td><td>MATCH</td></tr><tr><td>Unigrams</td><td>51.20%</td></tr><tr><td>Adjacency bigrams</td><td>6.35%</td></tr><tr><td colspan=\"2\">Dependency bigrams 3.85%</td></tr><tr><td>All bigrams</td><td>5.83%</td></tr><tr><td>Trigrams</td><td>1.40%</td></tr></table>",
"text": "Comparison of NP bracketing corpora",
"html": null
},
"TABREF7": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>ASSOC. MEASURE</td><td>LAUER</td><td>PTB</td></tr><tr><td>Raw counts, adj.</td><td colspan=\"2\">75.41% 77.46%</td></tr><tr><td>Raw counts, dep.</td><td colspan=\"2\">77.05% 68.85%</td></tr><tr><td>Probability, adj.</td><td colspan=\"2\">71.31% 76.42%</td></tr><tr><td>Probability, dep. \u00a3 \u00a6 \u00a4 , adj. \u00a3 \u00a4 , dep.</td><td colspan=\"2\">80.33% 69.56% 71.31% 77.93% 74.59% 68.92%</td></tr></table>",
"text": "Our results",
"html": null
},
"TABREF8": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>FEATURES</td><td>LAUER</td><td>10-FOLD CROSS</td></tr><tr><td colspan=\"3\">All features Lexical n-gram counts Probability \u00a7 \u00a9 Adjacency model Dependency model Both models -Lexical -n-gram counts -Probability - \u00a7 \u00a9 -Adjacency model -Dependency model 81.15% 80.74% 89.91% (1.04%) 71.31% 84.52% (1.77%) 75.41% 82.50% (1.49%) 72.54% 78.34% (2.11%) 75.41% 80.10% (1.71%) 72.95% 79.52% (1.32%) 78.69% 72.86% (1.48%) 76.23% 79.67% (1.42%) 79.92% 85.72% (0.77%) 80.74% 89.11% (1.39%) 79.10% 89.79% (1.22%) 80.74% 89.79% (0.98%) 81.56% 89.63% (0.96%) 89.72% (0.86%) -Both models 81.97% 89.63% (0.95%)</td></tr></table>",
"text": "Bracketing task, unsupervised results",
"html": null
},
"TABREF9": {
"num": null,
"type_str": "table",
"content": "<table><tr><td>: Bracketing task, supervised results</td></tr><tr><td>on Lauer's corpus are similar to those reported pre-</td></tr><tr><td>viously, with the dependency model outperforming</td></tr><tr><td>the adjacency model on all measures. The bigram</td></tr><tr><td>probability scores highest out of all the measures,</td></tr><tr><td>while the</td></tr></table>",
"text": "",
"html": null
}
}
}
}