{
"paper_id": "N04-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:44:06.737113Z"
},
"title": "Shallow Semantic Parsing of Chinese",
"authors": [
{
"first": "Honglin",
"middle": [],
"last": "Sun",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we address the question of assigning semantic roles to sentences in Chinese. We show that good semantic parsing results for Chinese can be achieved with a small 1100-sentence training set. In order to extract features from Chinese, we describe porting the Collins parser to Chinese, resulting in the best performance currently reported on Chinese syntactic parsing; we include our head rules in the appendix. Finally, we compare English and Chinese semantic-parsing performance. While slight differences in argument labeling make a perfect comparison impossible, our results nonetheless suggest significantly better performance for Chinese. We show that much of this difference is due to grammatical differences between English and Chinese, such as the prevalence of passive in English, and the strict word order constraints on adjuncts in Chinese.",
"pdf_parse": {
"paper_id": "N04-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we address the question of assigning semantic roles to sentences in Chinese. We show that good semantic parsing results for Chinese can be achieved with a small 1100-sentence training set. In order to extract features from Chinese, we describe porting the Collins parser to Chinese, resulting in the best performance currently reported on Chinese syntactic parsing; we include our head rules in the appendix. Finally, we compare English and Chinese semantic-parsing performance. While slight differences in argument labeling make a perfect comparison impossible, our results nonetheless suggest significantly better performance for Chinese. We show that much of this difference is due to grammatical differences between English and Chinese, such as the prevalence of passive in English, and the strict word order constraints on adjuncts in Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Thematic roles (AGENT, THEME, LOCATION, etc.) provide a natural level of shallow semantic representation for a sentence. A number of algorithms have been proposed for automatically assigning such shallow semantic structure to English sentences. But little is understood about how these algorithms perform in other languages, about the role of language-specific idiosyncrasies in the extraction of semantic content, or about how to train these algorithms when large hand-labeled training sets are not available. In this paper we address the question of assigning semantic roles to sentences in Chinese. Our work is based on the SVM-based algorithm proposed for English by Pradhan et al (2003). We first describe our creation of a small 1100-sentence Chinese corpus labeled according to principles from the English and (in-progress) Chinese PropBanks. We then introduce the features used by our SVM classifier, and show their performance on semantic parsing for both seen and unseen verbs, given hand-corrected (Chinese TreeBank) syntactic parses. We then describe our port of the Collins (1999) parser to Chinese. Finally, we apply our SVM semantic parser to a matching English corpus, and discuss the differences between English and Chinese that lead to significantly better performance on Chinese.",
"cite_spans": [
{
"start": 677,
"end": 697,
"text": "Pradhan et al (2003)",
"ref_id": "BIBREF11"
},
{
"start": 1086,
"end": 1100,
"text": "Collins (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Work on semantic parsing in English has generally relied on the PropBank, a portion of the Penn TreeBank in which the arguments of each verb are annotated with semantic roles. Although a project to produce a Chinese PropBank is underway (Xue and Palmer 2003), this data is not expected to be available for another year. For these experiments, we therefore hand-labeled a small corpus following the Penn Chinese PropBank labeling guidelines (Xue, 2002). In this section, we first describe the semantic roles we used in the annotation and then introduce the data for our experiments.",
"cite_spans": [
{
"start": 238,
"end": 259,
"text": "(Xue and Palmer 2003)",
"ref_id": "BIBREF15"
},
{
"start": 442,
"end": 453,
"text": "(Xue, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Annotation and the Corpus",
"sec_num": "2"
},
{
"text": "Semantic roles in the English (Kingsbury et al 2002) and Chinese (Xue 2002) PropBanks are grouped into two major types: (1) arguments, which represent the central participants in an event. A verb may require one, two, or more arguments, and they are represented with a contiguous sequence of numbers prefixed by arg, such as arg0 and arg1.",
"cite_spans": [
{
"start": 30,
"end": 52,
"text": "(Kingsbury et al 2002)",
"ref_id": "BIBREF7"
},
{
"start": 65,
"end": 75,
"text": "(Xue 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic roles",
"sec_num": "2.1"
},
{
"text": "(2) adjuncts, which are optional for an event but supply additional information about it, such as time, location, reason, condition, etc. An adjunct role is represented with argM plus a tag: for example, argM-TMP stands for temporal and argM-LOC for location.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic roles",
"sec_num": "2.1"
},
{
"text": "In our corpus three argument roles and 15 adjunct roles appear. The whole set of roles is given in Table 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic roles",
"sec_num": "2.1"
},
{
"text": "We created our training and test corpora by choosing 10 Chinese verbs and then selecting all sentences containing these 10 verbs from the 250K-word Penn Chinese Treebank 2.0. We chose the 10 verbs by considering frequency, syntactic diversity, and word sense. We chose words that were frequent enough to provide sufficient training data: the frequencies of the 10 verbs range from 41 to 230, with an average of 114. We chose verbs that were representative of the variety of verbal syntactic behavior in Chinese, including verbs with one, two, and three arguments, and verbs with various patterns of argument linking. Finally, we chose verbs that varied in their number of word senses. In total, we selected 1138 sentences. The first author then labeled each verbal argument/adjunct in each sentence with a role label. We created our training and test sets by splitting the data for each verb into two parts: 90% for training and 10% for test. Thus there are 1025 sentences in the training set and 113 sentences in the test set, and each test-set verb has been seen in the training set. The verbs chosen, with their numbers of senses, arguments, and frequencies, are given in Table 2.",
"cite_spans": [],
"ref_spans": [
{
"start": 1186,
"end": 1193,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "The training and test sets",
"sec_num": "2.2"
},
{
"text": "Following the architecture of earlier semantic parsers like Gildea and Jurafsky (2002), we treat the semantic parsing task as a 1-of-N classification problem. For each (non-aux/non-copula) verb in each sentence, our classifier examines each node in the syntactic parse tree for the sentence and assigns it a semantic role label. Most constituents are not arguments of the verb, and so the most common label is NULL. Our architecture is based on a Support Vector Machine classifier, following Pradhan et al. (2003). Since SVMs are binary classifiers, we represent this 1-of-19 classification problem (18 roles plus NULL) by training 19 binary one-versus-all classifiers. Following Pradhan et al. (2003), we used TinySVM along with YamCha (Kudo and Matsumoto 2000, 2001) as the SVM training and test software. The system uses a polynomial kernel with degree 2, a cost per unit violation of the margin C=1, and a termination-criterion tolerance e=0.001.",
"cite_spans": [
{
"start": 60,
"end": 86,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF5"
},
{
"start": 493,
"end": 514,
"text": "Pradhan et al. (2003)",
"ref_id": "BIBREF11"
},
{
"start": 682,
"end": 703,
"text": "Pradhan et al. (2003)",
"ref_id": "BIBREF11"
},
{
"start": 740,
"end": 761,
"text": "Matsumoto 2000, 2001)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Architecture and Classifier",
"sec_num": "3.1"
},
{
"text": "The literature on semantic parsing in English relies on a number of features extracted from the input sentence and its parse. These include the constituent's syntactic phrase type, head word, and governing category, the syntactic path in the parse tree connecting it to the verb, whether the constituent is before or after the verb, the subcategorization bias of the verb, and the voice (active/passive) of the verb. We investigated each of these features in Chinese; some acted quite similarly to English, while others showed interesting differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "Features that acted similarly to English include the target verb, the phrase type (the syntactic category of the constituent: NP, PP, etc.), and the subcategorization of the target verb. The subcategorization feature represents the phrase structure rule for the verb phrase containing the target verb (e.g., VP -> VB NP). Five features (path, position, governing category, head word, and voice) showed interesting patterns that are discussed below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "3.2.1 Path in the syntactic parse tree. The path feature represents the path from a constituent to the target verb in the syntactic parse tree, using \"^\" for ascending the parse tree and \"!\" for descending. This feature manifests the syntactic relationship between the constituent and the target verb. For example, the path \"NP^IP!VP!VP!VV\" indicates that the constituent is an \"NP\" which is the subject of the predicate verb. In general, we found the path feature to be sparse. In our test set, 60% of path types and 39% of path tokens are unseen in the training set. The distribution of paths is very uneven: in the whole corpus, paths for roles have an average frequency of 14.5, while paths for non-roles have an average of 2.7. Within the role paths, a small number of paths account for the majority of the total occurrences; among the 188 role path types, the top 20 paths account for 86% of the tokens. Thus, although the path feature is sparse, its sparsity may not be a major problem in role recognition. Of the 291 role tokens in our test set, only 9 have unseen paths, i.e., most of the unseen paths are due to non-roles. 3.2.2 Position before or after the verb. The position feature indicates whether a constituent is before or after the target verb. In our corpus, 69% of the roles are before the verb while 31% are after the verb. As in English, position is a useful cue for role identity. For example, 88% of arg0s are before the verb, 67% of arg1s are after the verb, and all the arg2s are after the verb. Adjuncts show an even stronger bias: ten of the adjunct types can only occur before the verb, while three are always after the verb. The two most common adjunct roles, argM-LOC and argM-TMP, are almost always before the verb, a sharp difference from English. The details are shown in Table 3.",
"cite_spans": [],
"ref_spans": [
{
"start": 1800,
"end": 1807,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.2"
},
{
"text": "The governing category feature is only applicable for NPs. In the original formulation for English in Gildea and Jurafsky (2002) , it answers the question: Is the NP governed by IP or VP? An NP governed by an IP is likely to be a subject, while an NP governed by a VP is more likely to be an object. For Chinese, we added a third option in which the governing category of an NP is neither IP nor VP, but an NP. This is caused by the \"DE\" construction, in which a clause is used as a modifier of an NP. For instance, in the example indicated in Figure 1 , for the last NP, \" \"(\"international Olympic conference\"), the parent node is an NP, from which the path descends to the target verb \" \"(\"taking place\"). Since the governing category information is encoded in the path feature, it may be redundant; indeed this redundancy might explain why the governing category feature was used in Gildea & Jurafsky(2002) but not in Gildea and Palmer(2002) . Since the \"DE\" construction caused us to modify the feature for Chinese, we conducted several experiments to test whether the governing category feature is useful or whether it is redundant with the path and position features. Using the paradigm to be described in section 3.4, we found a small improvement using governing category, and so we include it in our model.",
"cite_spans": [
{
"start": 102,
"end": 128,
"text": "Gildea and Jurafsky (2002)",
"ref_id": "BIBREF5"
},
{
"start": 877,
"end": 900,
"text": "Gildea & Jurafsky(2002)",
"ref_id": "BIBREF5"
},
{
"start": 912,
"end": 935,
"text": "Gildea and Palmer(2002)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 544,
"end": 552,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Governing Category.",
"sec_num": "3.2.3"
},
{
"text": "The head word is a useful but sparse feature. In our corpus, the 2716 roles have 1016 head word types, of which 646 occur only once. The top 20 words are given in Table 4 . Among the top 20 words, 4 are prepositions (\" /in /at /than /for\"), 3 are temporal nouns (\" /today /present /recently\"), and 2 are adverbs (\" /already, /will\"). These closed-class words are highly correlated with specific semantic roles. For example, \" /for\" occurs 195 times as the head of a constituent, of which 172 are non-roles, 19 are argM-BFYs, 3 are arg1s, and 1 is an argM-TPC. \" /in\" occurs 644 times as a head, of which 430 are non-roles, 174 are argM-LOCs, 24 are argM-TMPs, 9 are argM-RNGs, and 7 are argM-CNDs. \" /already\" occurs 135 times as a head, of which 97 are non-roles and 38 are argM-ADVs. \" /today\" occurs 69 times as a head, of which 41 are argM-TMPs and 28 are non-roles. Within the open-class words, some are closely correlated with the target verb. For example, \" /meeting; conference\" occurs 43 times as a head for roles, of which 24 are for the target \" /take place\" and 19 for \" /pass\". \" /ceremony\" occurs 28 times, and all are arguments of \" \"(take place). \" /statement\" occurs 19 times, 18 for \" /release; publish\" and one for \" /hope\". These statistics emphasize the key role of the lexicalized head word feature in capturing the collocation between verbs and their arguments. Due to the sparsity of the head word feature, we also use the part of speech of the head word, following Surdeanu et al (2003). For example, \"7 26 /July 26\" may not be seen in training, but its POS, NT (temporal noun), is a good indicator that it is a temporal adjunct.",
"cite_spans": [
{
"start": 1490,
"end": 1511,
"text": "Surdeanu et al (2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 176,
"end": 183,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Head word and its part of speech.",
"sec_num": "3.2.4"
},
{
"text": "The passive construction in English gives information about the surface location of arguments. In Chinese, marked passive voice is indicated by the preposition \" /by\" (POS tag LB in the Penn Chinese Treebank). This passive, however, is seldom used in Chinese text: in our entire 1138-sentence corpus, there are only 13 occurrences of \"LB\", and only one (in the training set) is related to the target verb. Thus we do not use the voice feature in our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voice.",
"sec_num": "3.2.5"
},
{
"text": "We now test the performance of our classifier, trained on the 1025-sentence training set and tested on the 113-sentence test set introduced in Section 2.2. Recall that in this 'stratified' test set, each verb has been seen in the training data. The last row in Table 5 shows the current best performance of our system on this test set. The preceding rows show various subsets of the feature set, beginning with the path feature. As Table 5 shows, the most important feature is path, followed by target verb and head word. In general, the lexicalized features are more important than the other features. The combined feature set outperforms any feature set with fewer features, with an F-score of 83.1. Performance is better for the arguments proper (arg0-arg2): 86.7 for arg0 and 89.4 for arg1.",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 431,
"end": 438,
"text": "Table 5",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Experimental Results for Seen Verbs",
"sec_num": "3.3"
},
{
"text": "To test the performance of the semantic parser on unseen verbs, we used cross-validation, selecting one verb as test and the other 9 as training, and iterating with each verb as test. All the results are given in Table 6 . The results for some verbs are almost equal to the performance on seen verbs. For example for \" \" and \"",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Table 6",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Experimental Results for Unseen Verbs",
"sec_num": "3.4"
},
{
"text": "\", the F-scores are over 80. However, for some verbs, the results are much worse. The worst case is the verb \" \", which has an F-score of 11. This is due to the special syntactic characteristics of this verb. This verb can only have one argument and this argument most often follows the verb, in object position. In the surface structure, there is often an NP before the verb working as its subject, but semantically this subject cannot be analyzed as arg0. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results for Unseen Verbs",
"sec_num": "3.4"
},
{
"text": "(1) /China /not /will /emerge /food /crisis. (A food crisis won't emerge in China.) (2) /Finland /economy /emerge /AUX /post-war /most /serious /AUX /depression. (The most severe post-war depression emerged in the Finland economy.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results for Unseen Verbs",
"sec_num": "3.4"
},
{
"text": "The subjects, \" /China\" in (1) and \" /Finland /economy\", are locatives, i.e., argM-LOC, and the objects, \" /food /crisis\" in (1) and \" /postwar /most /serious /AUX /depression\" in (2), are analyzed as arg0. But the parser classified the subjects as arg0 and the objects as arg1. These are correct for most common verbs but wrong for this particular verb. It is difficult to know how common this problem would be in a larger test set. The fact that we considered diversity of syntactic behavior when selecting verbs certainly helps make this test set reflect the difficult cases. If most verbs prove not to be as idiosyncratic as \" /emerge\", the real performance of the parser on unseen verbs may be better than the average given here. that he rarely gave his blessing to the claptrap that passes for consensus in various international institutions. In (a), arg2 represents the goal of \"give\", in (b), it represents the amount of increase, and in (c) it represents yet another role. These completely different semantic relations are given the same semantic label. For unseen verbs, this makes it difficult for the semantic parser to know what would count as an arg2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results for Unseen Verbs",
"sec_num": "3.4"
},
{
"text": "The results in the last section are based on the use of perfect (hand-corrected) parses drawn from the Penn Chinese Treebank. In practical use, of course, automatic parses will not be as accurate. In this section we describe experiments on semantic parsing when given automatic parses produced by the Collins (1999) parser, ported to Chinese. We first describe how we ported the Collins parser to Chinese and then present the results of the semantic parser with features drawn from the automatic parses.",
"cite_spans": [
{
"start": 322,
"end": 336,
"text": "Collins (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Automatic Parses",
"sec_num": "4"
},
{
"text": "The Collins parser is a state-of-the-art statistical parser that has high performance on English (Collins, 1999) and Czech (Collins et al. 1999). There have been attempts at applying other algorithms to Chinese parsing (Bikel and Chiang, 2000; Chiang and Bikel 2002; Levy and Manning 2003), but there has been no report on applying the Collins parser to Chinese. The Collins parser is a lexicalized statistical parser based on a head-driven extended PCFG model; thus the choice of head node is crucial to the success of the parser. We analyzed the Penn Chinese Treebank data and worked out head rules for the Chinese Treebank grammar (we were unable to find any published head rules for Chinese in the literature). There are two major differences in the head rules between English and Chinese. First, NP heads in Chinese are rigidly rightmost, that is to say, no modifiers of an NP can follow the head. In contrast, in English a modifier may follow the head. Second, just as with NPs in Chinese, the head of ADJP is rigidly rightmost. In English, by contrast, the head of an ADJP is mainly the leftmost constituent. Our head rules for the Chinese Treebank grammar are given in the Appendix. In addition to the head rules, we modified the POS tags for all punctuation. This is because all cases of punctuation in the Penn Chinese Treebank are assigned the same POS tag \"PU\". The Collins parser, on the other hand, expects the punctuation tags in the English TreeBank format, where the tag for a punctuation mark is the punctuation mark itself. We therefore replaced the POS tags for all punctuation marks in the Chinese data to conform to the English conventions. Finally, we made one further augmentation also related to punctuation. Chinese has one punctuation mark that does not exist in English. This commonly used mark, 'semi-stop', is used in Chinese to link coordinates within a sentence (for example, between elements of a list). 
This function is represented in English by a comma. But the comma in English is ambiguous; in addition to its use in coordination and lists, it can also represent the end of a clause. In Chinese, by contrast, the semi-stop has only the conjunction/list function. Chinese thus uses the regular comma only for representing clause boundaries. We investigated two ways to model the use of the Chinese semi-stop: (1) converting the semi-stop to the comma, thus conflating the two functions as in English; and (2) giving the semi-stop the POS tag \"CC\", a conjunction. We compared parsing results with these two methods; the latter (conjunction) method gained a 0.5% net improvement in F-score over the former. We therefore include it in our Collins parser port. We trained the Collins parser on the Penn Chinese Treebank (CTB) Release 2, with 250K words, first removing from the training set any sentences that occur in the test set for the semantic parsing experiments. We then tested on the test set used in the semantic parsing experiments, which includes 113 sentences (TEST1). The results of syntactic parsing on this test set are shown in Table 7. To compare the performance of the Collins parser on Chinese with that of other parsers, we conducted an experiment in which we used the same training and test data (Penn Chinese Treebank Release 1, with 100K words) as used in those reports. In this experiment, we used articles 1-270 for training and 271-300 as test (TEST2). Table 8 shows the results and the comparison with other parsers. Table 8 only shows the performance on sentences \u2264 40 words; our performance on all sentences in TEST2 is P/R/F=82.2/83.3/82.7. It may seem surprising that the overall F-score on TEST2 (82.7) is higher than the overall F-score on TEST1 (81.0), despite the fact that our TEST1 system had more than twice as much training data as our TEST2 system. 
The reason lies in the makeup of the two test sets; TEST1 consists of randomly selected long sentences; TEST2 consists of sequential text, including many short sentences. The average sentence length in TEST1 is 35.2 words, vs. 22.1 in TEST2. TEST1 has 32% long sentences (>40 words) while TEST2 has only 13%. ",
"cite_spans": [
{
"start": 97,
"end": 112,
"text": "(Collins, 1999)",
"ref_id": "BIBREF3"
},
{
"start": 123,
"end": 144,
"text": "(Collins et al. 1999)",
"ref_id": "BIBREF4"
},
{
"start": 220,
"end": 244,
"text": "(Bikel and Chiang, 2000;",
"ref_id": "BIBREF1"
},
{
"start": 245,
"end": 267,
"text": "Chiang and Bikel 2002;",
"ref_id": "BIBREF2"
},
{
"start": 268,
"end": 290,
"text": "Levy and Manning 2003)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 3076,
"end": 3084,
"text": "Table 7",
"ref_id": "TABREF6"
},
{
"start": 3413,
"end": 3420,
"text": "Table 8",
"ref_id": "TABREF7"
},
{
"start": 3478,
"end": 3485,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "The Collins parser for Chinese",
"sec_num": "4.1"
},
{
"text": "In the test set of 113 sentences, there are 3 sentences in which the target verbs are given the wrong POS tags, so they cannot be used for semantic parsing. For the remaining 110 sentences, we used the feature set containing eight features (path, pt, gov, position, subcat, target, head word, and head POS), the same as that used in the experiment on perfect parses. The results are shown in Table 9. Compared to the F-score using hand-corrected syntactic parses from the TreeBank, using automatic parses decreases the F-score by 6.4.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 9",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Semantic parsing using Collins parses",
"sec_num": "4.2"
},
{
"text": "Recent research on English semantic parsing has achieved quite good results by relying on the large amounts of training data available in the PropBank and FrameNet (Baker et al. 1998) databases. But in extending the semantic parsing approach to other languages, we are unlikely to always have large data sets available. Thus it is crucial to understand how small amounts of data affect semantic parsing. At the same time, there have been no comparisons between English and other languages with respect to semantic parsing. It is thus not clear what language-specific issues may arise in general with the automatic mapping of syntactic structures to semantic relations. In this section, we compare English and Chinese by using the same semantic parser, similar verbs, and similar amounts of data. Our goals are two-fold: (1) to compare the performance of the parser on English and Chinese; and (2) to understand differences between English and Chinese that affect automatic mapping between syntax and semantics. We first introduce the data used in the experiments and then present the results and our analysis.",
"cite_spans": [
{
"start": 164,
"end": 183,
"text": "(Baker et al. 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with English",
"sec_num": "5"
},
{
"text": "In order to create an English corpus which matched our small Chinese corpus, we selected 10 English verbs which corresponded to our 10 Chinese verbs in meaning and frequency: exact translations of the Chinese when possible, or the closest possible word when an exact translation did not exist. The English verbs and their Chinese correspondents are given in Table 10 . After the verbs were chosen, we extracted every sentence containing these verbs from section 02 to section 21 of the Wall Street Journal data from the Penn English PropBank. The number of sentences for each verb is given in Table 10.",
"cite_spans": [],
"ref_spans": [
{
"start": 360,
"end": 368,
"text": "Table 10",
"ref_id": "TABREF9"
},
{
"start": 595,
"end": 603,
"text": "Table 10",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "The English data",
"sec_num": "5.1"
},
{
"text": "As in our Chinese experiments, we used our SVM-based classifier with N one-versus-all classifiers. Table 11 shows the performance on our English test set (with Chinese for comparison), beginning with the path feature and incrementally adding features until, in the last row, we combine all 8 features. It is immediately clear from Table 11 that, using similar verbs, the same amount of data, the same classifier, the same number of roles, and the same features, the results for English are much worse than those for Chinese. While some part of the difference is probably due to idiosyncrasies of particular sentences in the English and Chinese data, other aspects of the difference might be accounted for systematically, as we discuss in the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 108,
"text": "Table 11",
"ref_id": "TABREF11"
},
{
"start": 340,
"end": 348,
"text": "Table 11",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.2"
},
{
"text": "We first investigated whether the differences between English and Chinese could be attributed to particular semantic roles. We found that this was indeed the case. The great bulk of the error rate difference between English and Chinese was caused by the 4 adjunct classes argM-ADV, argM-LOC, argM-MNR, and argM-TMP, which together account for 19.6% of the role tokens in our English corpus. The average F-score in English for the four roles is 36.7, while in Chinese the F-score for the four roles is 78.6. Why should these roles be so much more difficult to identify in English than Chinese?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion: English/Chinese differences",
"sec_num": "5.3"
},
{
"text": "We believe the answer lies in the analysis of the position feature in section 3.2.2, repeated with error-rate information in Table 12 . We see there that adjuncts in English have no strong preference for occurring before or after the verb. Chinese adjuncts, by contrast, are well known to have an extremely strong preference to be preverbal, as Table 12 shows. The relatively fixed word order of adjuncts makes it much easier in Chinese than in English to map these roles from surface syntactic constituents.",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 142,
"text": "Table 12",
"ref_id": "TABREF1"
},
{
"start": 354,
"end": 362,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion: English/Chinese differences",
"sec_num": "5.3"
},
{
"text": "If the average F-score of the four adjuncts in English were raised to the level of that in Chinese, the overall F-score on English would rise from 71.5 to 79.7, accounting for 8.2 of the 11.6 difference in F-scores between the two languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion: English/Chinese differences",
"sec_num": "5.3"
},
{
"text": "We next investigated the one feature from our original English-specific feature set that we had dropped in our Chinese system: passive. Recall that we dropped this feature because marked passives are extremely rare in Chinese. When we added this feature back into our English system, the performance rose from P/R/F=84.1/62.2/71.5 to 86.4/65.1/74.3. As might be expected, this effect of voice is mainly reflected in an improvement on arg0 and arg1, as Table 13 shows (Table 13: Improvement in English semantic parsing with the addition of the voice feature; each cell gives P/R/F without voice, then with voice): arg0",
"cite_spans": [],
"ref_spans": [
{
"start": 452,
"end": 460,
"text": "Table 13",
"ref_id": "TABREF2"
},
{
"start": 474,
"end": 482,
"text": "Table 13",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Discussion: English/Chinese differences",
"sec_num": "5.3"
},
{
"text": "improves from 88.9/75.3/81.5 to 94.4/80.0/86.6, and arg1 from 86.5/82.8/84.6 to 88.5/86.2/87.3. A third source of English-Chinese differences is the distribution of roles: the Chinese data has proportionally more adjuncts (ARGMs), while the English data has proportionally more oblique arguments (ARG2, ARG3, ARG4). Oblique arguments are more difficult to process than other arguments, as discussed in section 3.4. This difference is most likely caused by labeling factors rather than by true structural differences between English and Chinese. In summary, the higher performance of our Chinese system is due to three factors: the importance of passive in English, the strict word-order constraints on Chinese adverbials, and minor labeling differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion: English/Chinese differences",
"sec_num": "5.3"
},
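The P/R/F triples reported for the voice experiment are internally consistent: F1 is the harmonic mean of precision and recall, which a two-line check confirms for the +voice arg0 and arg1 rows above.

```python
# F1 is the harmonic mean of precision and recall.
def f1(p, r):
    return 2 * p * r / (p + r)

# arg0 and arg1 with the voice feature (values reported in Table 13)
print(round(f1(94.4, 80.0), 1))  # 86.6
print(round(f1(88.5, 86.2), 1))  # 87.3
```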
{
"text": "We can draw a number of conclusions from our investigation of semantic parsing in Chinese. First, reasonably good performance can be achieved with a very small (1100 sentences) training set. Second, the features that we extracted for English semantic parsing worked well when applied to Chinese. Many of these features required creating an automatic parse; in doing so we showed that the Collins (1999) parser, when ported to Chinese, achieved the best reported performance on Chinese syntactic parsing. Finally, we showed that semantic parsing is significantly easier in Chinese than in English. This counterintuitive result seems to be due to the strict constraints on adjunct ordering in Chinese, which make adjuncts easier to find and label.",
"cite_spans": [
{
"start": 388,
"end": 402,
"text": "Collins (1999)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Currently at Department of Computer Science, Queens College, City University of New York. Email: sunh@qc.edu. 2 Currently at Department of Linguistics, Stanford University. Email: jurafsky@stanford.edu.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was partially supported by the National Science Foundation via a KDD Supplement to NSF CISE/IRI/Interactive Systems Award IIS-9978025. Many thanks to Ying Chen for her help on the Collins parser port, and to Nianwen Xue and Sameer Pradhan for providing the data. Thanks to Kadri Hacioglu, Wayne Ward, James Martin, Martha Palmer, and three anonymous reviewers for helpful advice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Parent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix: Head rules for Chinese",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The Berkeley FrameNet Project",
"authors": [
{
"first": "Collin",
"middle": [
"F"
],
"last": "Baker",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Charles",
"suffix": ""
},
{
"first": "John",
"middle": [
"B"
],
"last": "Fillmore",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lowe",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of COLING/ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baker, Collin F., Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of COLING/ACL.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Two Statistical Parsing models Applied to the Chinese Treebank",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Bikel",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second Chinese Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel, Daniel and David Chiang. 2000. Two Statistical Parsing models Applied to the Chinese Treebank. In Proceedings of the Second Chinese Language Processing Workshop, pp. 1-6.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Recovering Latent Information in Treebanks",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Bikel",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING-2002",
"volume": "",
"issue": "",
"pages": "183--189",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chiang, David and Daniel Bikel. 2002. Recovering Latent Information in Treebanks. In Proceedings of COLING-2002, pp.183-189.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Head-driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. dissertation, University of Pennsylvania.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Statistical Parser for Czech",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Hajic",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 37th Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "505--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael, Jan Hajic, Lance Ramshaw and Christoph Tillmann. 1999. A Statistical Parser for Czech. In Proceedings of the 37th Meeting of the ACL, pp. 505-512.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic Labeling of Semantic Roles",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2002,
"venue": "Computational Linguistics",
"volume": "28",
"issue": "3",
"pages": "245--288",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245-288.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Necessity of Parsing for Predicate Argument Recognition",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40 th Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "239--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gildea, Daniel and Martha Palmer. 2002. The Necessity of Parsing for Predicate Argument Recognition, In Proceedings of the 40 th Meeting of the ACL, pp. 239-246.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adding semantic annotation to the Penn Treebank",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Kingsbury",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Mitch",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of HLT-02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kingsbury, Paul, Martha Palmer, and Mitch Marcus. 2002. Adding semantic annotation to the Penn Treebank. In Proceedings of HLT-02.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Use of support vector learning for chunk Identification",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 4th Conference on CoNLL",
"volume": "",
"issue": "",
"pages": "142--144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kudo, Taku and Yuji Matsumoto. 2000. Use of support vector learning for chunk identification. In Proceedings of the 4th Conference on CoNLL, pp. 142-144.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Chunking with Support Vector Machines",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2nd Meeting of the NAACL",
"volume": "",
"issue": "",
"pages": "192--199",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kudo, Taku and Yuji Matsumoto. 2001. Chunking with Support Vector Machines. In Proceedings of the 2nd Meeting of the NAACL, pp. 192-199.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Is it harder to parse Chinese, or the Chinese Treebank?",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "439--446",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Levy, Roger and Christopher Manning. 2003. Is it harder to parse Chinese, or the Chinese Treebank? In Proceedings of ACL 2003, pp. 439-446.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semantic Role Parsing: Adding Semantic Structure to Unstructured Text",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Kadri",
"middle": [],
"last": "Hacioglu",
"suffix": ""
},
{
"first": "Wayne",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2003,
"venue": "the Proceedings of the International Conference on Data Mining (ICDM-2003)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pradhan, Sameer, Kadri Hacioglu, Wayne Ward, James Martin, and Daniel Jurafsky. 2003. Semantic Role Parsing: Adding Semantic Structure to Unstructured Text. In Proceedings of the International Conference on Data Mining (ICDM-2003), Melbourne, FL.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using Predicate-Argument Structures for Information Extraction",
"authors": [
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Aarseth",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Surdeanu, Mihai, Sanda Harabagiu, John Williams and Paul Aarseth. 2003. Using Predicate-Argument Structures for Information Extraction, In Proceedings of ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Guidelines for the Penn Chinese Proposition Bank (1st Draft)",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen. 2002. Guidelines for the Penn Chinese Proposition Bank (1st Draft), UPenn.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Building a large-scale annotated Chinese corpus",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Fu-Dong",
"middle": [],
"last": "Chiou",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of COLING-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen, Fu-Dong Chiou and Martha Palmer. 2002. Building a large-scale annotated Chinese corpus. In Proceedings of COLING-2002.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Annotating the propositions in the Penn Chinese Treebank",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2nd SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xue, Nianwen and Martha Palmer. 2003. Annotating the propositions in the Penn Chinese Treebank. In Proceedings of the 2nd SIGHAN Workshop on Chinese Language Processing.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Olympic Conference held in Paris\" Figure 1 Example of DE construction",
"uris": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"3\">List of verbs for experiments</td></tr><tr><td>Verb</td><td># of</td><td>Arg</td><td>Freq</td></tr><tr><td/><td>senses</td><td>number</td><td/></tr><tr><td>/set up</td><td>1</td><td>2</td><td>106</td></tr><tr><td>/emerge</td><td>1</td><td>1</td><td>80</td></tr><tr><td>/publish</td><td>1</td><td>2</td><td>113</td></tr><tr><td>/give</td><td>2</td><td>3/2</td><td>41</td></tr><tr><td>/build into</td><td>2</td><td>2/3</td><td>113</td></tr><tr><td>/enter</td><td>1</td><td>2</td><td>123</td></tr><tr><td>/take place</td><td>1</td><td>2</td><td>230</td></tr><tr><td>/pass</td><td>3</td><td>2</td><td>75</td></tr><tr><td>/hope</td><td>1</td><td>2</td><td>90</td></tr><tr><td>/increase</td><td>1</td><td>2</td><td>167</td></tr></table>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "The positional distribution of roles",
"content": "<table><tr><td>Role</td><td>Before verb</td><td>After verb</td><td>Total</td></tr><tr><td>arg0</td><td>547</td><td>72</td><td>619</td></tr><tr><td>arg1</td><td>319</td><td>644</td><td>963</td></tr><tr><td>arg2</td><td/><td>28</td><td>28</td></tr><tr><td>argM-ADV</td><td>223</td><td/><td>223</td></tr><tr><td>argM-BFY</td><td>28</td><td/><td>28</td></tr><tr><td>argM-CMP</td><td>38</td><td/><td>38</td></tr><tr><td>argM-CND</td><td>15</td><td/><td>15</td></tr><tr><td>argM-CPN</td><td>10</td><td/><td>10</td></tr><tr><td>argM-DGR</td><td/><td>57</td><td>57</td></tr><tr><td>argM-FRQ</td><td/><td>3</td><td>3</td></tr><tr><td>argM-LOC</td><td>233</td><td>5</td><td>238</td></tr><tr><td>argM-MNR</td><td>11</td><td/><td>11</td></tr><tr><td>argM-PRP</td><td>11</td><td/><td>11</td></tr><tr><td>argM-RNG</td><td>9</td><td/><td>9</td></tr><tr><td>argM-RST</td><td/><td>16</td><td>16</td></tr><tr><td>argM-SRC</td><td>12</td><td/><td>12</td></tr><tr><td>argM-TMP</td><td>408</td><td>13</td><td>421</td></tr><tr><td>argM-TPC</td><td>14</td><td/><td>14</td></tr><tr><td>Total</td><td>1878</td><td>838</td><td>2716</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "Top 20 head words for roles",
"content": "<table><tr><td>Word</td><td colspan=\"2\">Freq Word</td><td>Freq</td></tr><tr><td>/in</td><td>214</td><td>/China</td><td>25</td></tr><tr><td>/meeting</td><td>43</td><td>/for</td><td>23</td></tr><tr><td>/today</td><td>41</td><td>/statement</td><td>19</td></tr><tr><td>/at</td><td>40</td><td>/speech</td><td>18</td></tr><tr><td>/already</td><td>38</td><td>/stage</td><td>17</td></tr><tr><td colspan=\"2\">/enterprise 35</td><td colspan=\"2\">/government 16</td></tr><tr><td colspan=\"2\">/company 32</td><td>/present</td><td>16</td></tr><tr><td>/than</td><td>31</td><td>/bank</td><td>15</td></tr><tr><td>/will</td><td>30</td><td>/recently</td><td>14</td></tr><tr><td colspan=\"2\">/ceremony 28</td><td>/base</td><td>14</td></tr></table>",
"html": null
},
"TABREF4": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"4\">Semantic parsing results on seen verbs</td></tr><tr><td>feature set</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>(%)</td><td>(%)</td><td>(%)</td></tr><tr><td>path</td><td>71.8</td><td>59.4</td><td>65.0</td></tr><tr><td>path + pt</td><td>72.9</td><td>62.9</td><td>67.5</td></tr><tr><td>path + position</td><td>72.5</td><td>60.8</td><td>66.2</td></tr><tr><td>path + head POS</td><td>77.6</td><td>63.3</td><td>69.7</td></tr><tr><td>path + sub-cat</td><td>80.8</td><td>63.6</td><td>71.2</td></tr><tr><td>path + head word</td><td>85.0</td><td>66.0</td><td>74.3</td></tr><tr><td>path + target verb</td><td>85.8</td><td>68.4</td><td>76.1</td></tr><tr><td>path + pt + gov + position</td><td/><td/><td/></tr><tr><td>+ subcat + target</td><td/><td/><td/></tr><tr><td>+ head word</td><td/><td/><td/></tr><tr><td>+ head POS</td><td>91.7</td><td>76.0</td><td>83.1</td></tr></table>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"4\">Experimental Results for Unseen Verbs</td></tr><tr><td>target</td><td>P(%)</td><td>R(%)</td><td>F(%)</td></tr><tr><td>/publish</td><td>90.7</td><td>72.9</td><td>80.8</td></tr><tr><td>/increase</td><td>49.6</td><td>34.3</td><td>40.5</td></tr><tr><td>/take place</td><td>90.1</td><td>63.3</td><td>74.4</td></tr><tr><td>/build into</td><td>65.2</td><td>55.5</td><td>60.0</td></tr><tr><td>/give</td><td>65.7</td><td>37.9</td><td>48.1</td></tr><tr><td>/pass</td><td>85.9</td><td>77.0</td><td>81.2</td></tr><tr><td>/emerge</td><td>12.6</td><td>10.2</td><td>11.3</td></tr><tr><td>/enter</td><td>81.9</td><td>58.8</td><td>68.4</td></tr><tr><td>/set up</td><td>79.0</td><td>61.1</td><td>68.9</td></tr><tr><td>/hope</td><td>77.7</td><td>35.9</td><td>49.1</td></tr><tr><td>Average</td><td>69.8</td><td>50.7</td><td>58.3</td></tr><tr><td colspan=\"4\">Another important difficulty in processing unseen</td></tr><tr><td colspan=\"4\">verbs is the fact that roles in PropBank are defined in a</td></tr><tr><td colspan=\"4\">verb-dependent way. This may be easiest to see with an</td></tr><tr><td colspan=\"4\">English example. The roles arg2, arg3, arg4 have</td></tr><tr><td colspan=\"4\">different meaning for different verbs; underlined in the</td></tr><tr><td colspan=\"3\">following are some examples of arg2:</td><td/></tr><tr><td colspan=\"4\">(a) The state gave CenTrust 30 days to sell the Rubens.</td></tr><tr><td colspan=\"4\">(b) Revenue increased 11 to 2.73 billion from 2.46</td></tr><tr><td>billion.</td><td/><td/><td/></tr><tr><td colspan=\"4\">(c) One of Ronald Reagan 's attributes as President was</td></tr></table>",
"html": null
},
"TABREF6": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"3\">Results for syntactic parsing, trained on</td></tr><tr><td colspan=\"4\">CTB Release 2, tested on test set in semantic parsing</td></tr><tr><td/><td>LP(%)</td><td>LR(%)</td><td>F1(%)</td></tr><tr><td>overall</td><td>81.6</td><td>82.1</td><td>81.0</td></tr><tr><td>len&lt;=40</td><td>86.1</td><td>85.5</td><td>86.7</td></tr></table>",
"html": null
},
"TABREF7": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"4\">Comparison with other parsers: TEST2</td></tr><tr><td/><td/><td>\u2264 40 words</td><td/></tr><tr><td/><td colspan=\"3\">LP(%) LR(%) F1(%)</td></tr><tr><td>Bikel &amp; Chiang 2000</td><td>77.2</td><td>76.2</td><td>76.7</td></tr><tr><td>Chiang &amp; Bikel 2002</td><td>81.1</td><td>78.8</td><td>79.9</td></tr><tr><td>Levy &amp; Manning 2003</td><td>78.4</td><td>79.2</td><td>78.8</td></tr><tr><td>Collins parser</td><td>86.4</td><td>85.5</td><td>85.9</td></tr></table>",
"html": null
},
"TABREF8": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"4\">Result for semantic parsing using automatic</td></tr><tr><td/><td colspan=\"2\">syntactic parses</td><td/></tr><tr><td/><td>P(%)</td><td>R(%)</td><td>F(%)</td></tr><tr><td>110 sentences</td><td>86.0</td><td>70.8</td><td>77.6</td></tr><tr><td>113 sentences</td><td>86.0</td><td>69.2</td><td>76.7</td></tr></table>",
"html": null
},
"TABREF9": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"4\">English verbs chosen for experiments</td></tr><tr><td colspan=\"2\">English Freq</td><td>Chinese English</td><td>Freq</td><td>Chinese</td></tr><tr><td>build</td><td>46</td><td>hold</td><td>120</td></tr><tr><td colspan=\"2\">emerge 30</td><td>hope</td><td>63</td></tr><tr><td>enter</td><td>108</td><td colspan=\"2\">increase 231</td></tr><tr><td>found</td><td>248</td><td>pass</td><td>143</td></tr><tr><td>give</td><td>124</td><td>publish</td><td/></tr></table>",
"html": null
},
"TABREF10": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td/><td/><td/><td colspan=\"6\">The comparison between adjuncts in English and Chinese</td><td/><td/><td/></tr><tr><td/><td/><td/><td>English</td><td/><td/><td/><td/><td>Chinese</td><td/><td/><td/></tr><tr><td>Role</td><td>Before</td><td>After</td><td>Freq in</td><td colspan=\"2\">P R F</td><td>Before</td><td>After</td><td>Freq in</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>verb</td><td>verb</td><td>test</td><td/><td>(%)</td><td>verb</td><td>verb</td><td>test</td><td/><td>(%)</td><td/></tr><tr><td>argM-ADV</td><td>22</td><td>43</td><td>5</td><td>0</td><td>0 0</td><td>223</td><td>0</td><td>37</td><td colspan=\"3\">91.3 56.8 70</td></tr><tr><td>argM-LOC</td><td>25</td><td>82</td><td>11</td><td colspan=\"2\">80 36.4 50</td><td>233</td><td>5</td><td>31</td><td colspan=\"3\">90.0 87.1 88.5</td></tr><tr><td>argM-MNR</td><td>22</td><td>75</td><td>14</td><td>0</td><td>0 0</td><td>11</td><td>0</td><td>1</td><td>0</td><td>0</td><td>0</td></tr><tr><td>argM-TMP</td><td>119</td><td>164</td><td>37</td><td colspan=\"2\">66.7 27 38.5</td><td>408</td><td>13</td><td>44</td><td colspan=\"3\">96.7 65.9 78.4</td></tr></table>",
"html": null
},
"TABREF11": {
"type_str": "table",
"num": null,
"text": "",
"content": "<table><tr><td/><td colspan=\"2\">Experimental results of English</td></tr><tr><td/><td>Chinese</td><td>English</td></tr><tr><td>feature set</td><td>R/F/P</td><td>P/R/F</td></tr><tr><td>path</td><td>71.8/59.4/65.0</td><td>78.2/48.3/59.7</td></tr><tr><td>path + pt</td><td>72.9/62.9/67.5</td><td>77.4/51.2/61.6</td></tr><tr><td colspan=\"2\">path + position 72.5/60.8/66.2</td><td>75.7/50.9/60.8</td></tr><tr><td colspan=\"2\">path + hd POS 77.6/63.3/69.7</td><td>79.1/49.7/61.0</td></tr><tr><td>path + sub-cat</td><td>80.8/63.6/71.2</td><td>79.9/45.3/57.8</td></tr><tr><td colspan=\"2\">path + hd word 85.0/66.0/74.3</td><td>84.0/47.7/60.8</td></tr><tr><td>path + target</td><td>85.8/68.4/76.1</td><td>85.7/49.1/62.5</td></tr><tr><td>COMBINED</td><td>91.7/76.0/83.1</td><td>84.1/62.2/71.5</td></tr></table>",
"html": null
}
}
}
}