{
"paper_id": "W14-0125",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:55:55.013193Z"
},
"title": "Parse Ranking with Semantic Dependencies and WordNet",
"authors": [
{
"first": "Xiaocheng",
"middle": [],
"last": "Yin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": ""
},
{
"first": "Jungjae",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "jungjae.kim@ntu.edu.sg"
},
{
"first": "Zinaida",
"middle": [],
"last": "Pozen",
"suffix": "",
"affiliation": {},
"email": "zpozen@gmail.com"
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Nanyang Technological University",
"location": {
"country": "Singapore"
}
},
"email": "bond@ieee.org"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we investigate which features are useful for ranking semantic representations of text. We show that two methods of generalization improved results: extended grand-parenting and supertypes. The models are tested on a subset of SemCor that has been annotated with both Dependency Minimal Recursion Semantic representations and WordNet senses. Using both types of features gives a significant improvement in whole sentence parse selection accuracy over the baseline model.",
"pdf_parse": {
"paper_id": "W14-0125",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we investigate which features are useful for ranking semantic representations of text. We show that two methods of generalization improved results: extended grand-parenting and supertypes. The models are tested on a subset of SemCor that has been annotated with both Dependency Minimal Recursion Semantic representations and WordNet senses. Using both types of features gives a significant improvement in whole sentence parse selection accuracy over the baseline model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "In this paper we investigate various features to improve the accuracy of semantic parse ranking. There has been considerable successful work on syntactic parse ranking and reranking (Toutanova et al., 2005; Collins and Koo, 2006; McClosky et al., 2006) , but very little that uses pure semantic representations. With recent work on building semantic representations (from deep grammars such as LFG (Butt et al., 1999) and HPSG (Sag et al., 1999) , directly through lambda calculus, or as an intermediate step in machine translation) the question of ranking them has become more important.",
"cite_spans": [
{
"start": 182,
"end": 206,
"text": "(Toutanova et al., 2005;",
"ref_id": "BIBREF21"
},
{
"start": 207,
"end": 229,
"text": "Collins and Koo, 2006;",
"ref_id": "BIBREF7"
},
{
"start": 230,
"end": 252,
"text": "McClosky et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 398,
"end": 417,
"text": "(Butt et al., 1999)",
"ref_id": "BIBREF6"
},
{
"start": 422,
"end": 445,
"text": "HPSG (Sag et al., 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The closest related work is Fujita et al. (2010) who ranked parses using semantic features from Minimal Recursion Semantics (MRS) and syntactic trees, using a Maximum Entropy Ranker. They experimented with Japanese data, using the Hinoki Treebank (Bond et al., 2008) , using primarily elementary dependencies: single arcs between predicates and their arguments. These can miss some important connections between predicates.",
"cite_spans": [
{
"start": 28,
"end": 48,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 247,
"end": 266,
"text": "(Bond et al., 2008)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An example parse tree for I treat dogs and cats with worms is shown in Figure 1 . 1 It is for the interpretation \"I treat both dogs and cats that have worms\" (not \"I treat, using worms, dogs and cats\" or any of the other possibilities).",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 79,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The semantic representation we use is Dependency Minimal Recursion Semantics (DMRS: Copestake, 2009) . The Minimal Recursion Semantics (MRS: Copestake et al., 2005 ) is a computationally tractable flat semantics that underspecifies quantifier scope. The Dependency MRS is an MRS representation format that keeps all the information from the MRS but is simpler to manipulate. DMRSs differ from syntactic dependency graphs in that the relations are defined between slightly abstract predicates, not between surface forms. Some semantically empty surface tokens (such as infinitive to) are not included, while some predicates are inserted that are not in the original text (such as the null article).",
"cite_spans": [
{
"start": 84,
"end": 100,
"text": "Copestake, 2009)",
"ref_id": "BIBREF8"
},
{
"start": 141,
"end": 163,
"text": "Copestake et al., 2005",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A simplified MRS representation of our example sentence and its DMRS equivalent are shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 101,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the DMRS, the basic links between the nodes are present. However, potentially interesting relations such as that between the verb treat and its conjoined arguments dogs and cats are not linked directly. Similarly, the relation between dogs and cats and worms is conveyed by the preposition with, which links them through its external argument (ARG1: and) and internal argument (ARG2: worms). There is no direct link. We investigate new features that make these links more direct (Section 3.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also explore the effectiveness of links between words that are connected arbitrarily far apart in the semantic graph (Section 3.2.3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Finally, we experimented with generalizing over semantic classes. We used WordNet semantic files as supertypes to reduce data sparseness (Section 3.2.4). This will generalize the lexical semantics of the predicates, resulting in a reduction of feature size and ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper follows up on the work of Fujita et al. (2010) in ranking MRS semantic representations, which was carried out for Japanese. We are conducting a similar investigation for English, and add new features and approaches. Fujita et al. (2010) worked with the Japanese Hinoki Corpus (Bond et al., 2008) data and used hypernym chains from the Goi-Taikei Japanese ontology (Ikehara et al., 1997) for variable-level semantic backoff. This is in contrast to the uniform WordNet semantic file backoff performed here. In addition, this work only focuses on MRS ranking, whereas Fujita et al. (2010) combined MRS features with syntactic features to improve syntactic parse ranking accuracy.",
"cite_spans": [
{
"start": 37,
"end": 57,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 227,
"end": 247,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
},
{
"start": 287,
"end": 306,
"text": "(Bond et al., 2008)",
"ref_id": "BIBREF4"
},
{
"start": 375,
"end": 397,
"text": "(Ikehara et al., 1997)",
"ref_id": "BIBREF12"
},
{
"start": 576,
"end": 596,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "Our use of WordNet Semantic Files (SF) to reduce lexical feature sparseness is inspired by several recent papers. Agirre et al. (2008, 2011) have experimented with replacing open-class words with their SFs. Agirre et al. (2008) have shown an improvement in full parse and PP attachment scores with statistical constituency parsers using SFs. Agirre et al. (2011) have followed up on those results and re-trained a dependency parser on the data where words were replaced with their SFs. This resulted in a very modest labeled attachment score improvement, but with a significantly reduced feature set. In a recent HPSG work, MacKinlay et al. (2012) attempted to integrate lexical semantic features, including SF backoff, into a discriminative parse ranking model. However, this was not shown to help, presumably because the lexical semantic features were built from syntactic constituents rather than MRS predicates.",
"cite_spans": [
{
"start": 114,
"end": 133,
"text": "Agirre et al. (2008",
"ref_id": "BIBREF0"
},
{
"start": 135,
"end": 139,
"text": "2011",
"ref_id": "BIBREF1"
},
{
"start": 207,
"end": 227,
"text": "Agirre et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 342,
"end": 362,
"text": "Agirre et al. (2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "The ancestor features found to be helpful here are inspired by the use of grand-parenting in syntactic parse ranking (Toutanova et al., 2005) and chains in dependency parsing ranking (Le Roux et al., 2012) .",
"cite_spans": [
{
"start": 117,
"end": 141,
"text": "(Toutanova et al., 2005)",
"ref_id": "BIBREF21"
},
{
"start": 183,
"end": 205,
"text": "(Le Roux et al., 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": "2"
},
{
"text": "In this section we introduce the corpus we work on, and the features we extract from it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resources and Methodology",
"sec_num": "3"
},
{
"text": "To evaluate our ranking methods, we are using the Redwoods Treebank (Oepen et al., 2004) of manually disambiguated HPSG parses, storing full signs for each analysis and supporting export into a variety of formats, including the Dependency MRS (DMRS) format used in this work.",
"cite_spans": [
{
"start": 68,
"end": 88,
"text": "(Oepen et al., 2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus: SemCor",
"sec_num": "3.1"
},
{
"text": "The HPSG parses in Redwoods are based on the English Resource Grammar (ERG; Flickinger, 2000), a hand-crafted broad-coverage HPSG grammar of English.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus: SemCor",
"sec_num": "3.1"
},
{
"text": "For our experiments, we used a subset of the Redwoods Treebank, consisting of 2,590 sentences drawn from SemCor (Landes et al., 1998) . In the SemCor corpus each of the sentences is tagged with WordNet senses created at Princeton University by the WordNet Project research team. The average length of the Redwoods SemCor sentences is 15.4 words, and the average number of parses is 247.",
"cite_spans": [
{
"start": 112,
"end": 133,
"text": "(Landes et al., 1998)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus: SemCor",
"sec_num": "3.1"
},
{
"text": "From the treebank we can export the DMRS. The choice of which words become predicates is slightly different in the SemCor/WordNet and the ERG. The ERG lexicon groups together all senses that have the same syntactic properties, making them underspecified for many sense differences. Thus elementary predicate cat n:1 could be any of the WordNet senses cat n:1 \"feline mammal usually having thick soft fur and no ability to roar\", cat n:2 \"an informal term for a youth or man\" and six more. 2 In some cases, DMRS decomposes a single predicate into multiple predicates (e.g. here into in p this q place n ). The ERG and WordNet also often make different decisions about what constitutes a multiword expression. For these reasons the mapping between the two annotations is not always straightforward. In this paper we use the mapping between the DMRS and WordNet annotations produced by Pozen (2013). Using the mapping, we exploited the sense tagging of the SemCor in several ways. We experimented both with replacing elementary predicates with their synsets, their hypernyms at various levels and with their semantic files (Landes et al., 1998) , which generalize the meanings of words that belong to the same broad semantic categories. 3 These dozens of generalized semantic tags help to address the issue of feature sparseness, compared to thousands of synsets. We show the semantic files for nouns and verbs in Tables 1 and 2 . In this paper, we only report on the parse selection accuracy using semantic files to reduce ambiguity, as it gave the best results. [Table 1: WordNet semantic files for nouns: Tops, act, animal, artifact, attribute, body, cognition, communication, event, feeling, food, group, location, motive, object, person, phenomenon, plant, possession, process, quantity, relation, shape, state, substance, time]",
"cite_spans": [
{
"start": 1120,
"end": 1141,
"text": "(Landes et al., 1998)",
"ref_id": "BIBREF13"
},
{
"start": 1234,
"end": 1235,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1411,
"end": 1426,
"text": "Tables 1 and 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Corpus: SemCor",
"sec_num": "3.1"
},
{
"text": "[Figure 2: simplified MRS for the example sentence. LTOP h1, INDEX e3. RELS: pron 0:1 (LBL h4, ARG0 x5); treat v:1 2:6 (LBL h2, ARG0 e3, ARG1 x5, ARG2 x9); dog n:1 7:11 (LBL h17, ARG0 x15); and c 12:15 (LBL h22, ARG0 x9, L-INDEX x15, R-INDEX x19); cat n:1 16:20 (LBL h23, ARG0 x19); with p 21:25 (LBL h22, ARG0 e24, ARG1 x9, ARG2 x25); worm n:1 26:31 (LBL h29, ARG0 x25)]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus: SemCor",
"sec_num": "3.1"
},
{
"text": "In this section we introduce the baseline features for parse ranking. Table 3 shows example features extracted from the DMRS depicted in Figure 2 . Features 1-16 are Baseline features: those that directly reflect the dependencies of the DMRS. In Table 3 , feature type 0 (0-2) shows predicates with all their arguments. Feature type 1 (3-8) shows each argument individually. Feature type 2 shows all arguments without the argument types. Feature type 3 is the least specified, showing individual arguments without the labels. These types are the same as the MRS features of Toutanova et al. (2005) and the SEM-DEP features of Fujita et al. (2010) .",
"cite_spans": [
{
"start": 574,
"end": 597,
"text": "Toutanova et al. (2005)",
"ref_id": "BIBREF21"
},
{
"start": 626,
"end": 646,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 137,
"end": 145,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 246,
"end": 253,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semantic Dependency Features",
"sec_num": "3.2"
},
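{
"text": "As an illustrative sketch (ours, assuming the DMRS is represented as a set of (head, role, dependent) arcs): given a predicate h with outgoing arcs (r1, d1), ..., (rk, dk), type 0 emits the single feature (h, r1, d1, ..., rk, dk); type 1 emits one feature (h, r, d) per arc; type 2 emits (h, d1, ..., dk) with the role labels dropped; type 3 emits one feature (h, d) per arc with the role label dropped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Dependency Features",
"sec_num": "3.2"
},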
{
"text": "[Table 2: WordNet semantic files for verbs: body, change, cognition, communication, competition, consumption, contact, creation, emotion, motion, perception, possession, social, stative, weather]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Dependency Features",
"sec_num": "3.2"
},
{
"text": "We further create two more features, called Left/Right Handle Features (LR), to link directly the two arguments of conjunctive relations with their parent, independently from the other argument. In Table 1 , for example, the feature treat v:1 ARG2 and c , although valid, does not convey the meaning of the sentence. Instead, we add the two LR features treat v:1 ARG2 dog n:1 (feature 17) and treat v:1 ARG2 cat n:1 (feature 18), which better model the conjunction relation.",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Conjunctive Features",
"sec_num": "3.2.1"
},
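{
"text": "As a sketch (ours, assuming the same (head, role, dependent) arc representation): for every arc (p, role, c) whose dependent c is a conjunction with children l (L-INDEX) and r (R-INDEX), the LR method additionally emits the two features (p, role, l) and (p, role, r), bypassing the conjunction node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conjunctive Features",
"sec_num": "3.2.1"
},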
{
"text": "As shown in Figure 2 , the node with p has two links: to and c (ARG1) and to worm n:1 (ARG2). The two relations together indicate a noun-prepositionnoun relationship. Instead of breaking the relationship into the two separate features, we introduce it, as a whole, as a new type of feature, where the two arguments of the preposition (e.g. and c , worm n:1 ) will have a direct relation via the preposition (e.g. with p ). We name these Preposition Role features (PR), as they are similar in spirit to semantic roles. Some sample PR features are given in Table 3 , features 19-22. The new features explicitly convey, for example, noun-preposition-noun relations. Parses containing features like something at somewhere can be further distinguished from parses containing at somewhere and something at separately. When the features become more representative, active parses are more likely to be selected, though with the cost of a larger feature set size.",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 555,
"end": 562,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Preposition Role Features",
"sec_num": "3.2.2"
},
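{
"text": "As a sketch (ours, under the same arc representation): for every preposition node w with arcs (w, ARG1, a) and (w, ARG2, b), the PR method emits a single combined feature (a, w, b), so that the two arguments of the preposition are linked directly through it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Role Features",
"sec_num": "3.2.2"
},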
{
"text": "As 4 types of features can be developed based on one relationship, a Preposition Role link would have 4 separate features. While the Conjunctive features mentioned in the previous section give 2 to 4 additional features, Baseline-PR features normally give 4 more. Thus, the feature size of the Baseline-PR model is larger than that of the Baseline-LR model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preposition Role Features",
"sec_num": "3.2.2"
},
{
"text": "While the semantic dependency features correspond to direct dependencies, we introduce a new type of feature that represents indirect dependencies between ancestors and their descendants in the DMRS. For each predicate, we collect all its descendants linked through more than one dependency and create features to represent these indirect dependencies. We name these features Ancestor Features (AF). Table 4 shows some sample AF features, such as the one linking treat v:1 to dog n:1 and cat n:1 (feature 2). This is a one-level ancestor, involving two predicates, while multi-level ancestors deal with more than two predicates linked in a sequence. Note that these are different from the LR features (features 15 and 16 in Table 1), in that AF features include both arguments of a conjunction, for example connecting the predicate treat v:1 to its grandchildren dog n:1 and cat n:1 via the argument role of and c (feature 2 in Table 4).",
"cite_spans": [],
"ref_spans": [
{
"start": 400,
"end": 407,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 724,
"end": 731,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 928,
"end": 936,
"text": "Table 4)",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Ancestor Features",
"sec_num": "3.2.3"
},
{
"text": "[Table of sample features (# | type | feature): 0 | 0 | treat v:1 ARG1 pron ARG2 and c; 1 | 0 | and c L-IND dog n:1 R-IND cat n:1; 2 | 0 | treat v:1 ARG2 dog n:1 ARG2 cat n:1; 3 | 1 | treat v:1 ARG1 pron; 4 | 1 | treat v:1 ARG2 and c; 5 | 1 | treat v:1 ARG2 dog n:1; 6 | 1 | treat v:1 ARG2 cat n:1; 7 | 1 | and c L-IND dog n:1; 8 | 1 | and c R-IND cat n:1; 9 | 2 | treat v:1 pron and c; 10 | 2 | treat v:1 dog n:1 cat n:1; 11 | 3 | treat v:1 pron; 12 | 3 | treat v:1 and c; 13 | 3 | treat v:1 dog n:1; 14 | 3 | treat v:1 cat n:1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ancestor Features",
"sec_num": "3.2.3"
},
{
"text": "When a sentence has n dependencies, our method generates O(n(n-1)/2) = O(n^2) AF features. In the corpus we use, the dependency structure of a sentence typically has 4 levels. In practice the number of AF features is roughly triple the number of Baseline features. In the evaluation experiments, we investigated all eight combinations of the three types of LR, PR, and AF features, where each combination is combined with the baseline features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ancestor Features",
"sec_num": "3.2.3"
},
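{
"text": "As a sketch (ours, assuming the DMRS is a directed acyclic graph from heads to dependents): for each predicate p, traverse the graph depth-first from p; for every node d first reached through a path of length two or more, emit the indirect feature (p, d). Since each of the n nodes can pair with up to n-1 descendants, this yields the O(n(n-1)/2) = O(n^2) bound on the number of AF features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ancestor Features",
"sec_num": "3.2.3"
},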
{
"text": "In the features up until now, words have been represented as elementary predicate semantic dependencies (SD). Because SemCor also has WordNet senses, we experiment with replacing open class words with their supertypes, in this case using the WordNet semantic files (SF). If a word is not matched to a WordNet synset we continue to use the elementary predicate. This SF representation is also applied to the eight combinations of feature types. A sample of the features in the SF representation is given in Table 5. [Table 5 residue: and c R-IND animal n; with c ARG1 animal n; with c ARG2 animal n; body v pron and c; with p and c animal n; body v pron; body v and c; and c animal n; with c and c; with c animal n]",
"cite_spans": [],
"ref_spans": [
{
"start": 506,
"end": 513,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Semantic File Features",
"sec_num": "3.2.4"
},
{
"text": "Sometimes two features, such as 13 and 14 in Table 3 , are replaced with the same feature, like 9 in Table 5 , because dog n:1 and cat n:1 are both replaced with animal n . There are about half as many Semantic File features as there are SD features.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 101,
"end": 108,
"text": "Table 5",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Semantic File Features",
"sec_num": "3.2.4"
},
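{
"text": "As a sketch of the backoff (ours; the lookup steps are illustrative): for each elementary predicate occurring in a feature, look up the WordNet synset assigned by the SemCor mapping; if a synset is found, replace the predicate with the synset's semantic file (so dog n:1 and cat n:1 both become animal n); otherwise keep the elementary predicate unchanged. Features that become identical after replacement collapse into one, which is why the SF feature set is roughly half the size of the SD set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic File Features",
"sec_num": "3.2.4"
},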
{
"text": "We set up the evaluation task as reranking of the top 500 Redwoods analyses, previously selected by the syntactic MaxEnt ranker. The subset of SemCor introduced in Section 3.1 is used for training and testing with the features introduced in Section 3.2. We grouped the feature sets into two according to the two word representations of basic Semantic Dependencies (SD) and generalized Semantic Files (SF). Sometimes two or more different parses of a sentence have the same set of features. That is, the features failed to distinguish between two parses: often because of spurious syntactic ambiguity that had no effect on the semantics. In this case we merged duplicate feature sets to reduce the ambiguity in machine learning. If an inactive parse has the same set of features as that of the active one, the resulting merged parse was treated as active. We used TADM (Toolkit for Advanced Discriminative Modeling; Malouf, 2002) for the training and testing of our machine learning model, following Fujita et al. (2010) . We carried out 10-fold cross-validation for evaluation. We measured the parse selection accuracy at the sentence level. A parse was considered correct only when all the dependencies of the parse were correct.",
"cite_spans": [
{
"start": 915,
"end": 928,
"text": "Malouf, 2002)",
"ref_id": "BIBREF16"
},
{
"start": 999,
"end": 1019,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
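{
"text": "As a sketch of the merging and scoring procedure (ours): group the candidate parses of each sentence by their extracted feature sets; keep one representative per group, marking it active if any parse in the group is the gold (active) parse; train the MaxEnt ranker on these representatives; at test time, count a sentence as correct when the top-ranked representative is the active one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},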
{
"text": "The results of parse selection based on SD and SF representations are shown in Tables 6 and 7. The addition of the ancestor features (AF) gives the largest increase in the parse selection accuracy. This result indicates that indirect dependencies as well as direct dependencies in a successful parse frequently appear in other active parses. Second, the SF representation shows better results than the SD representation in most cases. The semantic abstraction of the semantic files reduces the problem of feature sparseness and is enough to effectively rerank parses, whose syntactic properties are already to some extent validated during parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Third, the addition of the PR features also usually increases the parse selection accuracy. We plan to (semi-)automatically find more such multidependency structures whose combination shows better performance than the individual dependencies. Fourth, the LR features do not improve the accuracy significantly in most cases, though the SD+AF+LR combination shows the best results among the feature sets of the SD representation. This is understandable since the number of the LR features in our corpus is much smaller than those of the other features of SD, PR and AF. We need to test it with a bigger corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "These results show the validity of our assumption that long distance features and supertypes are both useful for selecting the correct interpretation of a sentence. Currently the SD+AF+LR model is the best when using the elementary predicates. However the best overall results come from the SF+AF model when we generalize to the semantic files. In future work we will investigate larger and more richly annotated corpora so that we can discover more about the relation between feature size and parse selection accuracy. In addition, we expect that increasing the corpus size will lead directly to higher accuracy. Another avenue we would like to explore is backing off not to the semantic files, but rather to WordNet hypernyms at various levels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "These results show that generalizing to semantic supertypes allows us to build semantic ranking models that are not only smaller, but more accurate. In general, learning time was roughly proportional to the number of features, so a smaller model can be learned faster. We hypothesize that it is the combination of dependencies and supertypes that makes the difference: approaches that used semantic features on phrase structure trees (such as Bikel (2000) and MacKinlay et al. (2012)) have in general failed to get much improvement. The overall accuracy is still quite low, due principally to the lack of training data. We show the learning curves for the SF+AF configuration in Figure 3 (the other configurations are similar). The curve is still clearly rising: the accuracy of parse selection on our corpus is far from saturated. This observation gives us confidence that with a larger corpus the accuracy of parse selection will improve considerably. The learning curve in Fujita et al. (2010) showed similar results for the same amount of data, and increased rapidly with more data (they had a larger corpus for Japanese).",
"cite_spans": [
{
"start": 443,
"end": 455,
"text": "Bikel (2000)",
"ref_id": "BIBREF2"
},
{
"start": 976,
"end": 996,
"text": "Fujita et al. (2010)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 679,
"end": 687,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "As there are so far still very few corpora with both structural and lexical semantic annotation, we are currently investigating the use of automatic word sense disambiguation to create the features, in a similar way to Agirre et al. (2008) . Finally, we would like to investigate even more features, such as the dependency chains of Le Roux et al. (2012) .",
"cite_spans": [
{
"start": 219,
"end": 239,
"text": "Agirre et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 336,
"end": 354,
"text": "Roux et al. (2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "One exciting possibility is projecting ranking features across languages: wordnet semantic files are the supertypes for all wordnets linked to the Princeton Wordnet, of which there are many (Bond and Foster, 2013) . The predicates that are not in the wordnets are generally either named entities or from smallish closed sets of function words such as conjunctions, prepositions and pronouns. We are currently investigating mapping these between Japanese and English using transfer rules from an existing machine translation system (Bond et al., 2011) . In principle, a small set of mappings for closed class words could allow us to quickly bootstrap a semantic ranking model for any language with a wordnet.",
"cite_spans": [
{
"start": 190,
"end": 213,
"text": "(Bond and Foster, 2013)",
"ref_id": "BIBREF3"
},
{
"start": 531,
"end": 550,
"text": "(Bond et al., 2011)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In summary, we showed some features that help parse selection. In the SD group, LR features together with AF features achieved a 1.75% improvement in accuracy over the basic Baseline model (25.36% \u2192 27.12%). However, LR features alone and AF features alone both decrease the accuracy (25.36% \u2192 25.28% and 25.36% \u2192 24.84%). PR features and the combination of PR and AF features both achieved small improvements (0.416% Baseline \u2192 Baseline+PR, 0.410% Baseline \u2192 Baseline-PR+AF). LR combined with PR features did not improve the accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "When features get generalized to supertypes, as shown in the SF group, models with more features achieved higher accuracies, with the best being the model with ancestor features (AF) added. This (SF+AF) achieved an improvement of 3.21% absolute over the baseline model (24.97% \u2192 28.18%). Adding more features to AF only decreases the accuracy. Generalizing to semantic supertypes allows us to build dependency ranking models that are not only smaller, but more accurate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Simplified by omission of non-branching nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Elementary predicates are shown in sans-serif font, WordNet senses in bold italic, and WordNet semantic files in bold typewriter. 3 Semantic Files are also sometimes referred to as Semantic Fields, Lexical Fields or Supersenses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors are grateful to Mathieu Morey and other members of the Deep Linguistic Processing with HPSG Initiative along with other members of their research groups for many extremely helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improving parsing and PP attachment performance with sense information",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Martinez",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL HLT 2008)",
"volume": "",
"issue": "",
"pages": "317--325",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Timothy Baldwin, and David Martinez. 2008. Improving parsing and PP attachment performance with sense information. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL HLT 2008), pages 317-325. Columbus, USA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Improving dependency parsing with semantic classes",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kepa",
"middle": [],
"last": "Bengoetxea",
"suffix": ""
},
{
"first": "Koldo",
"middle": [],
"last": "Gojenola",
"suffix": ""
},
{
"first": "Joakim",
"middle": [],
"last": "Nivre",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "699--703",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Kepa Bengoetxea, Koldo Gojenola, and Joakim Nivre. 2011. Improving dependency parsing with semantic classes. Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 699-703.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A statistical model for parsing and word-sense disambiguation",
"authors": [
{
"first": "Daniel",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora",
"volume": "",
"issue": "",
"pages": "155--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel M. Bikel. 2000. A statistical model for parsing and word-sense disambiguation. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natu- ral Language Processing and Very Large Corpora, pages 155-163. Hong Kong.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Linking and extending an open multilingual wordnet",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2013,
"venue": "51st Annual Meeting of the Association for Computational Linguistics: ACL-2013",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francis Bond and Ryan Foster. 2013. Linking and extending an open multilingual wordnet. In 51st Annual Meeting of the Association for Computational Linguistics: ACL- 2013, pages 1352-1362. Sofia. URL http://aclweb. org/anthology/P13-1133.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Hinoki syntactic and semantic treebank of Japanese. Language Resources and Evaluation",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Sanae",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Tanaka",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "42",
"issue": "",
"pages": "243--251",
"other_ids": {
"DOI": [
"10.1007/s10579-008-9062-z"
]
},
"num": null,
"urls": [],
"raw_text": "Francis Bond, Sanae Fujita, and Takaaki Tanaka. 2008. The Hinoki syntactic and semantic treebank of Japanese. Language Resources and Evaluation, 42(2):243-251. URL http://dx.doi.org/ 10.1007/s10579-008-9062-z, (Re-issue of DOI 10.1007/s10579-007-9036-6 as Springer lost the Japanese text).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Deep open source machine translation",
"authors": [
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Nichols",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Erik",
"middle": [],
"last": "Velldal",
"suffix": ""
},
{
"first": "Petter",
"middle": [],
"last": "Haugereid",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Translation",
"volume": "25",
"issue": "2",
"pages": "87--105",
"other_ids": {
"DOI": [
"10.1007/s10590-011-9099-4"
]
},
"num": null,
"urls": [],
"raw_text": "Francis Bond, Stephan Oepen, Eric Nichols, Dan Flickinger, Erik Velldal, and Petter Haugereid. 2011. Deep open source machine translation. Machine Transla- tion, 25(2):87-105. URL http://dx.doi.org/10. 1007/s10590-011-9099-4, (Special Issue on Open source Machine Translation).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Grammar Writer's Cookbook",
"authors": [
{
"first": "Miriam",
"middle": [],
"last": "Butt",
"suffix": ""
},
{
"first": "Tracy",
"middle": [
"Holloway"
],
"last": "King",
"suffix": ""
},
{
"first": "Mar\u00eda-Eugenia",
"middle": [],
"last": "Ni\u00f1o",
"suffix": ""
},
{
"first": "Fr\u00e9d\u00e9rique",
"middle": [],
"last": "Segond",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miriam Butt, Tracy Holloway King, Mar\u00eda-Eugenia Ni\u00f1o, and Fr\u00e9d\u00e9rique Segond. 1999. A Grammar Writer's Cook- book. CSLI publications.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Terry Koo. 2006. Discriminative rerank- ing for natural language parsing. Computational Linguis- tics, 31(1).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 12th Conference of the European Chapter",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake. 2009. Slacker semantics: Why superficiality, dependency and avoidance of commitment can be the right way to go. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 1-9. Athens.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimal Recursion Semantics. An introduction",
"authors": [
{
"first": "Ann",
"middle": [],
"last": "Copestake",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Pollard",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "4",
"pages": "281--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ann Copestake, Dan Flickinger, Ivan A. Sag, and Carl Pol- lard. 2005. Minimal Recursion Semantics. An introduc- tion. Research on Language and Computation, 3(4):281- 332.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "On building a more efficient grammar by exploiting types",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
}
],
"year": 2000,
"venue": "Natural Language Engineering",
"volume": "6",
"issue": "1",
"pages": "15--28",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Flickinger. 2000. On building a more efficient gram- mar by exploiting types. Natural Language Engineering, 6 (1):15-28.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Exploiting semantic information for HPSG parse selection",
"authors": [
{
"first": "Sanae",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Bond",
"suffix": ""
},
{
"first": "Takaaki",
"middle": [],
"last": "Tanaka",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2010,
"venue": "Research on Language and Computation",
"volume": "8",
"issue": "1",
"pages": "1--22",
"other_ids": {
"DOI": [
"10.1007/s11168-010-9069-7"
]
},
"num": null,
"urls": [],
"raw_text": "Sanae Fujita, Francis Bond, Takaaki Tanaka, and Stephan Oepen. 2010. Exploiting semantic information for HPSG parse selection. Research on Language and Computation, 8(1):1-22. URL http://dx.doi.org/10.1007/ s11168-010-9069-7.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Goi-Taikei -A Japanese Lexicon",
"authors": [
{
"first": "Satoru",
"middle": [],
"last": "Ikehara",
"suffix": ""
},
{
"first": "Masahiro",
"middle": [],
"last": "Miyazaki",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Akio",
"middle": [],
"last": "Yokoo",
"suffix": ""
},
{
"first": "Hiromi",
"middle": [],
"last": "Nakaiwa",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Ogura",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Ooyama",
"suffix": ""
},
{
"first": "Yoshihiko",
"middle": [],
"last": "Hayashi",
"suffix": ""
}
],
"year": 1997,
"venue": "Iwanami Shoten, Tokyo",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Ooyama, and Yoshihiko Hayashi. 1997. Goi-Taikei - A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 vol- umes/CDROM.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Building semantic concordances",
"authors": [
{
"first": "Shari",
"middle": [],
"last": "Landes",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "Christiane",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "WordNet: An Electronic Lexical Database",
"volume": "",
"issue": "8",
"pages": "199--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shari Landes, Claudia Leacock, and Christiane Fellbaum. 1998. Building semantic concordances. In Christine Fell- baum, editor, WordNet: An Electronic Lexical Database, chapter 8, pages 199-216. MIT Press.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Generative constituent parsing and discriminative dependency reranking: Experiments on english and french",
"authors": [
{
"first": "Joseph",
"middle": [],
"last": "Le Roux",
"suffix": ""
},
{
"first": "Benoit",
"middle": [],
"last": "Favre",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Nasr",
"suffix": ""
},
{
"first": "Seyed Abolghasem",
"middle": [],
"last": "Mirroshandel",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Semantic Processing of Morphologically Rich Languages",
"volume": "",
"issue": "",
"pages": "89--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph Le Roux, Benoit Favre, Alexis Nasr, and Seyed Abol- ghasem Mirroshandel. 2012. Generative constituent pars- ing and discriminative dependency reranking: Experi- ments on english and french. In Proceedings of the ACL 2012 Joint Workshop on Statistical Parsing and Seman- tic Processing of Morphologically Rich Languages, pages 89-99. Association for Computational Linguistics, Jeju, Republic of Korea. URL http://www.aclweb.org/ anthology/W12-3412.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The effects of semantic annotations on precision parse ranking",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Mackinlay",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Dridan",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "SEM 2012: The First Joint Conference on Lexical and Computational Semantics",
"volume": "2",
"issue": "",
"pages": "228--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew MacKinlay, Rebecca Dridan, Diana McCarthy, and Timothy Baldwin. 2012. The effects of semantic an- notations on precision parse ranking. In SEM 2012: The First Joint Conference on Lexical and Computational Semantics-, volume 2, pages 228-236. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A comparison of algorithms for maximum entropy parameter estimation",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2002,
"venue": "CONLL-2002",
"volume": "",
"issue": "",
"pages": "49--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert Malouf. 2002. A comparison of algorithms for max- imum entropy parameter estimation. In CONLL-2002, pages 49-55. Taipei, Taiwan.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Reranking and self-training for parser adaptation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mcclosky",
"suffix": ""
},
{
"first": "Eugene",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2006,
"venue": "44th Annual Meeting of the Association for Computational Linguistics and 21st International Conference on Computational Linguistics: COLING/ACL-2006",
"volume": "",
"issue": "",
"pages": "337--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Reranking and self-training for parser adaptation. In 44th Annual Meeting of the Association for Computational Linguistics and 21st International Conference on Compu- tational Linguistics: COLING/ACL-2006, pages 337-344.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "LinGO Redwoods: A rich and dynamic treebank for HPSG",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christoper",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Research on Language and Computation",
"volume": "2",
"issue": "4",
"pages": "575--596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Oepen, Dan Flickinger, Kristina Toutanova, and Christoper D. Manning. 2004. LinGO Redwoods: A rich and dynamic treebank for HPSG. Research on Language and Computation, 2(4):575-596.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Using Lexical and Compositional Semantics to Improve HPSG Parse Selection",
"authors": [
{
"first": "",
"middle": [],
"last": "Zinaida Pozen",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zinaida Pozen. 2013. Using Lexical and Composi- tional Semantics to Improve HPSG Parse Selection. Master's thesis, University of Washington. URL https://digital.lib.washington.edu/ researchworks/handle/1773/%23469.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Syntactic Theory: A Formal Introduction",
"authors": [
{
"first": "Ivan",
"middle": [
"A"
],
"last": "Sag",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Wasow",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Bender",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ivan A. Sag, Tom Wasow, and Emily Bender. 1999. Syntactic Theory: A Formal Introduction. CSLI Publications, Stan- ford, second edition.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Stochastic HPSG parse disambiguation using the Redwoods corpus",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Flickinger",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Oepen",
"suffix": ""
}
],
"year": 2005,
"venue": "Research on Language and Computation",
"volume": "3",
"issue": "",
"pages": "83--105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova, Christopher D. Manning, Dan Flickinger, and Stephan Oepen. 2005. Stochastic HPSG parse disam- biguation using the Redwoods corpus. Research on Lan- guage and Computation, 3(1):83-105.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Currently at PointInside, Inc."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Syntactic view of sentence \"I treat dogs and cats with worms\"."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "MRS and DMRS for I treat cats and dogs with worms."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "v ARG1 pron ARG2 and c 1 0 and c L-IND animal n R-IND animal n 2 0 with p ARG1 and c ARG2 animal n 3 1 body v ARG1 pron 4 1 body v ARG2 and c 5 1 and c L-IND animal n 6"
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Learning curve for SF+AF."
},
"TABREF0": {
"num": null,
"type_str": "table",
"html": null,
"text": "the roles: 1 is ARG1, 2 is ARG2, . . . .",
"content": "<table><tr><td/><td/><td/><td>2</td><td>1</td><td>with</td><td/></tr><tr><td/><td/><td/><td/><td>1</td><td/><td/></tr><tr><td>1</td><td>2</td><td>2</td><td>L</td><td>R</td><td>1</td><td>2</td></tr><tr><td>pron</td><td>treatv:1</td><td>dogn:1</td><td>andc</td><td>catn:1</td><td>withp</td><td>wormn:1</td></tr><tr><td colspan=\"4\">Simplified by omission of quantifiers</td><td/><td/><td/></tr><tr><td colspan=\"5\">Dashed lines show Preposition (P) features</td><td/><td/></tr><tr><td colspan=\"5\">Dotted lines show Conjunction (LR) features</td><td/><td/></tr><tr><td colspan=\"2\">Arc labels show</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF1": {
"num": null,
"type_str": "table",
"html": null,
"text": "WordNet Noun Semantic Files.",
"content": "<table/>"
},
"TABREF2": {
"num": null,
"type_str": "table",
"html": null,
"text": "WordNet Verb Semantic Files. the semantic dependency features (Baseline). 17-18 are the conjunctive features (LR). 19-22 are the preposition role features (PR). with p and c worm n:1 11 3 treat v:1 pron 12 3 treat v:1 and c 13 3 and c dog n:1 14 3 and c cat n:1 15 3 with p and c 16 3 with p worm n:1 17 1 treat v:1 ARG2 dog n:1 18 1 treat v:1 ARG2 cat n:1 19 0 and",
"content": "<table><tr><td>#</td><td>Sample Features</td></tr><tr><td>0</td><td>0 treat :1</td></tr><tr><td>3</td><td>1 treat v:1 ARG1 pron</td></tr><tr><td>4</td><td>1 treat v:1 ARG2 and c</td></tr><tr><td>5</td><td>1 and c L-IND dog n:1</td></tr><tr><td>6</td><td>1 and c R-IND cat n:1</td></tr><tr><td>7</td><td>1 with p ARG1 and c</td></tr><tr><td>8</td><td>1 with p ARG2 worm n:1</td></tr><tr><td>9</td><td>2 treat v:1 pron and c</td></tr><tr><td colspan=\"2\">10 2</td></tr></table>"
},
"TABREF3": {
"num": null,
"type_str": "table",
"html": null,
"text": "Features for the DMRS inFig 2.",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"type_str": "table",
"html": null,
"text": "Ancestor Features (AF).",
"content": "<table/>"
},
"TABREF5": {
"num": null,
"type_str": "table",
"html": null,
"text": "Baseline features with Semantic Files (SF).",
"content": "<table/>"
},
"TABREF7": {
"num": null,
"type_str": "table",
"html": null,
"text": "Parse selection results with SD.",
"content": "<table><tr><td>Features</td><td colspan=\"2\">Accuracy Features</td></tr><tr><td/><td colspan=\"2\">(%) (\u00d71,000)</td></tr><tr><td>SF-Baseline</td><td>25.0</td><td>223</td></tr><tr><td>SF+LR</td><td>25.1</td><td>235</td></tr><tr><td>SF+PR</td><td>26.3</td><td>306</td></tr><tr><td>SF+LR+PR</td><td>26.3</td><td>321</td></tr><tr><td>SF+AF</td><td>28.2</td><td>1,051</td></tr><tr><td>SF+AF+LR</td><td>28.0</td><td>1,101</td></tr><tr><td>SF+AF+PR</td><td>28.1</td><td>1,310</td></tr><tr><td>SF+AF+LR+PR</td><td>27.7</td><td>1,375</td></tr></table>"
},
"TABREF8": {
"num": null,
"type_str": "table",
"html": null,
"text": "Parse selection results with SF.",
"content": "<table/>"
}
}
}
}