{
"paper_id": "H01-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:31:13.291777Z"
},
"title": "Japanese Case Frame Construction by Coupling the Verb and its Closest Case Component",
"authors": [
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University Yoshida-Honmachi",
"location": {
"addrLine": "Sakyo-ku",
"postCode": "606-8501",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "kawahara@pine.kuee.kyoto-u.ac.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes a method to construct a case frame dictionary automatically from a raw corpus. The main problem is how to handle the diversity of verb usages. We collect predicate-argument examples, which are distinguished by the verb and its closest case component in order to deal with verb usages, from parsed results of a corpus. Since these couples multiply to millions of combinations, it is difficult to make a wide-coverage case frame dictionary from a small corpus like an analyzed corpus. We, however, use a raw corpus, so that this problem can be addressed. Furthermore, we cluster and merge predicate-argument examples which does not have different usages but belong to different case frames because of different closest case components. We also report on an experimental result of case structure analysis using the constructed case frame dictionary.",
"pdf_parse": {
"paper_id": "H01-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes a method to construct a case frame dictionary automatically from a raw corpus. The main problem is how to handle the diversity of verb usages. We collect predicate-argument examples, which are distinguished by the verb and its closest case component in order to deal with verb usages, from parsed results of a corpus. Since these couples multiply to millions of combinations, it is difficult to make a wide-coverage case frame dictionary from a small corpus like an analyzed corpus. We, however, use a raw corpus, so that this problem can be addressed. Furthermore, we cluster and merge predicate-argument examples which does not have different usages but belong to different case frames because of different closest case components. We also report on an experimental result of case structure analysis using the constructed case frame dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Syntactic analysis or parsing has been a main objective in Natural Language Processing. In case of Japanese, however, syntactic analysis cannot clarify relations between words in sentences because of several troublesome characteristics of Japanese such as scrambling, omission of case components, and disappearance of case markers. Therefore, in Japanese sentence analysis, case structure analysis is an important issue, and a case frame dictionary is necessary for the analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Some research institutes have constructed Japanese case frame dictionaries manually [2, 3] . However, it is quite expensive, or almost impossible to construct a wide-coverage case frame dictionary by hand.",
"cite_spans": [
{
"start": 84,
"end": 87,
"text": "[2,",
"ref_id": null
},
{
"start": 88,
"end": 90,
"text": "3]",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "Others have tried to construct a case frame dictionary automatically from analyzed corpora. However, existing syntactically analyzed corpora are too small to learn a dictionary, since case frame information consists of relations between nouns and verbs, which multiplies to millions of combinations. Based on such a consideration, we took the unsupervised learning strategy to Japanese case frame construction 1 .",
"cite_spans": [
{
"start": 410,
"end": 411,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "To construct a case frame dictionary from a raw corpus, we parse a raw corpus first, but parse errors are problematic in this case. However, if we use only reliable modifier-head relations to construct a case frame dictionary, this problem can be addressed. Verb sense ambiguity is rather problematic. Since verbs can have different cases and case components depending on their meanings, verbs which have different meanings should have different case frames. To deal with this problem, we collect predicate-argument examples, which are distinguished by the verb and its closest case component, and cluster them. That is, examples are not distinguished by verbs such as naru 'make, become' and tsumu 'load, accumulate', but by couples such as tomodachi ni naru 'make a friend', byouki ni naru 'become sick',nimotsu wo tsumu 'load baggage', and keiken wo tsumu 'accumulate experience'. Since these couples multiply to millions of combinations, it is difficult to make a wide-coverage case frame dictionary from a small corpus like an analyzed corpus. We, however, use a raw corpus, so that this problem can be addressed. The clustering process is to merge examples which does not have different usages but belong to different case frames because of different closest case components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": "1."
},
{
"text": "We employ the following procedure of case frame construction from raw corpus ( Figure 1 ): 1. A large raw corpus is parsed by KNP [5] , and reliable modifier-head relations are extracted from the parse results. We call these modifier-head relations examples.",
"cite_spans": [
{
"start": 130,
"end": 133,
"text": "[5]",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 79,
"end": 87,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "VARIOUS METHODS FOR CASE FRAME CONSTRUCTION",
"sec_num": "2."
},
{
"text": "The extracted examples are distinguished by the verb and its closest case component. We call these data example patterns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "3. The example patterns are clustered based on a thesaurus. We call the output of this process example case frames, which is the final result of the system. We call words which compose case components case examples, and a group of case examples case example group. In Figure 1 , nimotsu 'baggage', busshi 1 In English, several unsupervised methods have been proposed [7, 1] . However, it is different from those that combinations of nouns and verbs must be collected in Japanese. 'supply', and keiken 'experience' are case examples, and {nimotsu 'baggage', busshi 'supply'} (of wo case marker in the first example case frame of tsumu 'load, accumulate') is a case example group. A case component therefore consists of a case example and a case marker (CM).",
"cite_spans": [
{
"start": 305,
"end": 306,
"text": "1",
"ref_id": "BIBREF0"
},
{
"start": 367,
"end": 370,
"text": "[7,",
"ref_id": "BIBREF6"
},
{
"start": 371,
"end": 373,
"text": "1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 268,
"end": 276,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Let us now discuss several methods of case frame construction as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "First, examples (I of Figure 1 ) can be used individually, but this method cannot solve the sparse data problem. For example, even if these two examples occur in a corpus, it cannot be judged whether the expression \"kuruma ni busshi wo tsumu\" (load supply onto the car) is allowed or not.",
"cite_spans": [],
"ref_spans": [
{
"start": 22,
"end": 30,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Secondly, examples can be decomposed into binomial relations (II of Figure 1 ). These co-occurrences are utilized by statistical parsers, and can address the sparse data problem. In this case, however, verb sense ambiguity becomes a serious problem. For example, from these two examples, three co-occurrences (\"kuruma ni tsumu\", \"nimotsu wo tsumu\", and \"keiken wo tsumu\") are extracted. They, however, allow the incorrect expression \"kuruma ni keiken wo tsumu\" (load experience onto the car, accumulate experience onto the car).",
"cite_spans": [],
"ref_spans": [
{
"start": 68,
"end": 76,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "Thirdly, examples can be simply merged into one frame (III of Figure 1 ). However, information quantity of this is equivalent to that of the co-occurrences (II of Figure 1 ), so verb sense ambiguity becomes a problem as well.",
"cite_spans": [],
"ref_spans": [
{
"start": 62,
"end": 70,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 163,
"end": 171,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "We distinguish examples by the verb and its closest case component. Our method can address the two problems above: verb sense ambiguity and sparse data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "On the other hand, semantic markers can be used as case components instead of case examples. These we call semantic case frames (IV of Figure 1 ). Constructing semantic case frames by hand leads to the problem mentioned in Section 1. Utsuro et al. constructed semantic case frames from a corpus [8] . There are three main differences to our approach: they use an annotated corpus, depend deeply on a thesaurus, and did not resolve verb sense ambiguity.",
"cite_spans": [
{
"start": 295,
"end": 298,
"text": "[8]",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 135,
"end": 143,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "2.",
"sec_num": null
},
{
"text": "This section explains how to collect examples shown in Figure 1 . In order to improve the quality of collected examples, reliable modifier-head relations are extracted from the parsed corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "COLLECTING EXAMPLES",
"sec_num": "3."
},
{
"text": "When examples are collected, case markers, case examples, and case components must satisfy the following conditions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions of case components",
"sec_num": "3.1"
},
{
"text": "Case components which have the following case markers (CMs) are collected: ga (nominative), wo (accusative), ni (dative), to (with, that), de (optional), kara (from), yori (from), he (to), and made (to). We also handle compound case markers such as ni-tsuite 'in terms of', wo-megutte 'concerning', and others.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions of case markers",
"sec_num": null
},
{
"text": "In addition to these cases, we introduce time case marker. Case components which belong to the class <time>(see below) and contain a ni, kara, or made CM are merged into time CM. This is because it is important whether a verb deeply relates to time or not, but not to distinguish between surface CMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions of case markers",
"sec_num": null
},
{
"text": "Case examples which have definite meanings are generalized. We introduce the following three classes, and use these classes instead of words as case examples.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalization of case examples",
"sec_num": null
},
{
"text": "\u2022 nouns which mean time e.g. asa 'morning', haru 'spring', rainen 'next year'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<time>",
"sec_num": null
},
{
"text": "\u2022 case examples which contain a unit of time e.g. 1999nen 'year', 12gatsu 'month', 9ji 'o'clock'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<time>",
"sec_num": null
},
{
"text": "\u2022 words which are followed by the suffix mae 'before', tyu ' ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "<time>",
"sec_num": null
},
{
"text": "We collect examples not only for verbs, but also for adjectives and noun+copulas 3 . However, when a verb is followed by a causative auxiliary or a passive auxiliary, we do not collect examples, since the case pattern is changed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditions of verbs",
"sec_num": "3.2"
},
{
"text": "When examples are extracted from automatically parsed results, the problem is that the parsed results inevitably contain errors. Then, to decrease influences of such errors, we discard modifier-head relations whose parse accuracies are low and use only reliable relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "KNP employs the following heuristic rules to determine a head of a modifier:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "HR1 KNP narrows the scope of a head by finding a clear boundary of clauses in a sentence. When there is only one candidate verb in the scope, KNP determines this verb as the head of the modifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "HR2 Among the candidate verbs, verbs which rarely take case components are excluded.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "HR3 KNP determines the head according to the preference: a modifier which is not followed by a comma depends on the nearest candidate, and a modifier with a comma depends on the second nearest candidate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "Our approach trusts HR1 but not HR2 and HR3. That is, modifier-head relations which are decided in HR1 (there is only one candidate of the head in the scope) are extracted as examples, but relations which HR2 and HR3 are applied to are not extracted. The following examples illustrate the application of these rules. In this example, an example which can be extracted without ambiguity is \"Tokyo he okutta\" 'sent \u03c6 to Tokyo' at the end of the sentence. In addition, since node 'because' is analyzed as a clear boundary of clauses, the head candidate of hon wo 'book acc-CM' is only mitsuketa 'find', and this is also extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "Verbs excluded from head candidates by HR2 possibly become heads, so we do not use the examples which HR2 is applied to. For example, when there is a strong verb right after an adjective, this adjective tends not to be a head of a case component, so it is excluded from head candidates. (6) Hi no mawari ga hayaku fire of spread nom-CM rapidly sukuidase-nakatta. could not save (The fire spread rapidly, so \u03c61 could not save \u03c62.)",
"cite_spans": [
{
"start": 287,
"end": 290,
"text": "(6)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "In this example, the correct head of mawari ga 'spread' is hayaku 'rapidly'. However, since hayaku 'rapidly' is excluded from the head candidates, the head of mawari ga 'spread' is analyzed incorrectly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "We show an example of the process HR3: In this example, head candidates of shitsumon ni 'question acc-CM' are kitte 'take' and kotaeta 'answered'. According to the preference \"modify the nearer head\", KNP incorrectly decides the head is kitte 'take'. Like this example, when there are many head candidates, the decided head is not reliable, so we do not use examples in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "We extracted reliable examples from Kyoto University Corpus [6] , that is a syntactically analyzed corpus, and evaluated the accuracy of them. The accuracy of all the case examples which have the target cases was 90.9%, and the accuracy of the reliable examples was 97.2%. Accordingly, this process is very effective.",
"cite_spans": [
{
"start": 60,
"end": 63,
"text": "[6]",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extraction of reliable examples",
"sec_num": "3.3"
},
{
"text": "As shown in Section 2, when examples whose verbs have different meanings are merged, a case frame which allows an incorrect expression is created. So, for verbs with different meanings, different case frames should be acquired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTRUCTION OF EXAMPLE CASE FRAMES",
"sec_num": "4."
},
{
"text": "In most cases, an important case component which decides the sense of a verb is the closest one to the verb, that is, the verb sense ambiguity can be resolved by coupling the verb and its closest case component. Accordingly, we distinguish examples by the verb and its closest case component. We call the case marker of the closest case component closest case marker.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTRUCTION OF EXAMPLE CASE FRAMES",
"sec_num": "4."
},
{
"text": "The number of example patterns which one verb has is equal to that of the closest case components. That is, example patterns which have almost the same meaning are individually handled as follows: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTRUCTION OF EXAMPLE CASE FRAMES",
"sec_num": "4."
},
{
"text": "The clustering of example patterns is performed by using the similarity between example patterns. This similarity is based on the similarities between case examples and the ratio of common cases. Figure 2 shows an example of calculating the similarity between example patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 196,
"end": 204,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "First, the similarity between two examples e1, e2 is calculated using the NTT thesaurus as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "sime(e1, e2) = maxx\u2208s 1 ,y\u2208s 2 sim(x, y) sim(x, y) = 2L lx + ly",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "where x, y are semantic markers, and s1, s2 are sets of semantic markers of e1, e2 respectively 4 . lx, ly are the depths of x, y in the thesaurus, and the depth of their lowest (most specific) common node is L. If x and y are in the same node of the thesaurus, the similarity is 1.0, the maximum score based on this criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "Next, the similarity between the two case example groups E1, E2 is the normalized sum of the similarities of case examples as follows: where |e1| , |e2| represent the frequencies of e1, e2 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "simE(E1, E2) = \u00c8 e 1 \u2208E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "The ratio of common cases of example patterns F1, F2 is calculated as follows: The similarity between F1 and F2 is the product of the ratio of common cases and the similarities between case example groups of common cases of F1 and F2 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "cs = \u00d7 \u00c8 n i=1 |E1cc i | + \u00c8 n i=1 |E2cc i | \u00c8 l i=1 |E1c1 i | + \u00c8 m i=1 |E2c2 i |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "score = cs \u2022 \u00c8 n i=1 \u221a wi simE(E1cc i , E2cc i ) \u00c8 n i=1 \u221a wi wi = e 1 \u2208E 1cc i e 2 \u2208E 2cc i \u00d4 |e1| |e2|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "where wi is the weight of the similarities between case example groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity between example patterns",
"sec_num": "4.1"
},
{
"text": "The similarities between example patterns are deeply influenced by semantic markers of the closest case components. So, when the closest case components have semantic ambiguities, a problem arises. For example, when clustering example patterns of awaseru 'join, adjust', the pair of example patterns (te 'hand', kao, 'face') 5 is created with the common semantic marker <part of an animal>, and (te 'method', syouten 'focus') is created with the common semantic marker <logic, meaning>. From these two pairs, the pair (te 'hand', kao 'face', syouten 'focus') is created, though <part of an animal> is not similar to <logic, meaning> at all.",
"cite_spans": [
{
"start": 325,
"end": 326,
"text": "5",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of semantic markers of example patterns",
"sec_num": "4.2"
},
{
"text": "To address this problem, we select one semantic marker of the closest case component of each example pattern in order of the similarity between example patterns as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of semantic markers of example patterns",
"sec_num": "4.2"
},
{
"text": "1. In order of the similarity of a pair, (p, q), of two example patterns, we select semantic markers of the closest case components, np, nq of p, q. The selected semantic markers sp, sq maximize the similarity between np and nq .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of semantic markers of example patterns",
"sec_num": "4.2"
},
{
"text": "2. The similarities of example patterns related to p, q are recalculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of semantic markers of example patterns",
"sec_num": "4.2"
},
{
"text": "3. These two processes are iterated while there are pairs of two example patterns, of which the similarity is higher than a threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selection of semantic markers of example patterns",
"sec_num": "4.2"
},
{
"text": "The following is the clustering procedure:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering procedure",
"sec_num": "4.3"
},
{
"text": "1. Elimination of example patterns which occur infrequently",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering procedure",
"sec_num": "4.3"
},
{
"text": "Target example patterns of the clustering are those whose closest case components occur more frequently than a threshold. We set this threshold to 5. 5 Example patterns are represented by the closest case components.",
"cite_spans": [
{
"start": 150,
"end": 151,
"text": "5",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering procedure",
"sec_num": "4.3"
},
{
"text": "2. Clustering of example patterns which have the same closest CM (a) Similarities between pairs of two example patterns which have the same closest CM are calculated, and semantic markers of closest case components are selected. These two processes are iterated as mentioned in 4.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering procedure",
"sec_num": "4.3"
},
{
"text": "(b) Each example pattern pair whose similarity is higher than some threshold is merged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering procedure",
"sec_num": "4.3"
},
{
"text": "The example patterns which are output by 2 are clustered. In this phase, it is not considered whether the closest CMs are the same or not. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering of all the example patterns",
"sec_num": "3."
},
{
"text": "If a CM whose frequency is lower than other CMs, it might be collected because of parsing errors, or has little relation to its verb. So, we set the threshold for the CM frequency as 2 \u221a mf, where mf means the frequency of the most found CM. If the frequency of a CM is less than the threshold, it is discarded. For example, suppose the most frequent CM for a verb is wo, 100 times, and the frequency of ni CM for the verb is 16, ni CM is discarded (since it is less than the threshold, 20).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SELECTION OF OBLIGATORY CASE MARKERS",
"sec_num": "5."
},
{
"text": "However, since we can say that all the verbs have ga (nominative) CMs, ga CMs are not discarded. Furthermore, if an example case frame do not have a ga CM, we supplement its ga case with semantic marker <person>.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SELECTION OF OBLIGATORY CASE MARKERS",
"sec_num": "5."
},
{
"text": "We applied the above procedure to Mainichi Newspaper Corpus (9 years, 4,600,000 sentences). We set the threshold of the clustering 0.80. The criterion for setting this threshold is that case frames which have different case patterns or different meanings should not be merged into one case frame. Table1 shows examples of constructed example case frames.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTRUCTED CASE FRAME DICTIO-NARY",
"sec_num": "6."
},
{
"text": "From the corpus, example case frames of 71,000 verbs are constructed; the average number of example case frames of a verb is 1.9; the average number of case slots of a verb is 1.7; the average number of example nouns in a case slot is 4.3. The clustering led a decrease in the number of example case frames of 47%. As shown in Table1, example case frames of noun+copulas such as sanseida 'positiveness+copula (agree)', and compound case markers such as ni-tsuite 'in terms of' of tadasu 'examine' are acquired.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONSTRUCTED CASE FRAME DICTIO-NARY",
"sec_num": "6."
},
{
"text": "Since it is hard to evaluate the dictionary statically, we use the dictionary in case structure analysis and evaluate the analysis result. We used 200 sentences of Mainichi Newspaper Corpus as a test set. We analyzed case structures of the sentences using the method proposed by [4] . As the evaluation of the case structure analysis, we checked whether cases of ambiguous case components (topic markers and clausal modifiers) are correctly detected or not. The evaluation result is presented in Table 2 . The baseline is the result by assigning a vacant case in order of 'ga', 'wo', and 'ni'. When we do not consider parsing errors to evaluate the case detection, the accuracy of our method for topic markers was 96% and that for clausal modifiers was 76%. The baseline accuracy for topic markers was 91% and that for clausal modifiers was 62%. Thus we see our method is superior to the baseline. ",
"cite_spans": [
{
"start": 279,
"end": 282,
"text": "[4]",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 496,
"end": 503,
"text": "Table 2",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "EXPERIMENTS AND DISCUSSION",
"sec_num": "7."
},
{
"text": "We proposed an unsupervised method to construct a case frame dictionary by coupling the verb and its closest case component. We obtained a large case frame dictionary, which consists of 71,000 verbs. Using this dictionary, we can detect ambiguous case components accurately. We plan to exploit this dictionary in anaphora resolution in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CONCLUSION",
"sec_num": "8."
},
{
"text": "The research described in this paper was supported in part by JSPS-RFTF96P00502 (The Japan Society for the Promotion of Science, Research for the Future Program).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ACKNOWLEDGMENTS",
"sec_num": "9."
},
{
"text": "Most nouns must take a numeral classifier when they are quantified in Japanese. An English equivalent to it is 'piece'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper, we use 'verb' instead of 'verb/adjective or noun+copula' for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In many cases, nouns have many semantic markers in NTT thesaurus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The underlined words with are correctly analyzed, but ones with \u00d7 are not. The detected CMs are shown after the underlines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Automatic extraction of subcategorization from corpora",
"authors": [
{
"first": "T",
"middle": [],
"last": "Briscoe",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "356--363",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Briscoe and J. Carroll. Automatic extraction of subcategorization from corpora. In Proceedings of the 5th Conference on Applied Natural Language Processing, pages 356-363, 1997.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Japanese Verbs : A Guide to the IPA Lexicon of Basic Japanese Verbs",
"authors": [],
"year": 1987,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Information-Technology Promotion Agency, Japan. Japanese Verbs : A Guide to the IPA Lexicon of Basic Japanese Verbs. 1987.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "A method of case structure analysis for japanese sentences based on examples in case frame dictionary",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "IEICE Transactions on Information and Systems",
"volume": "77",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kurohashi and M. Nagao. A method of case structure analysis for japanese sentences based on examples in case frame dictionary. In IEICE Transactions on Information and Systems, volume E77-D No.2, 1994.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A syntactic analysis method of long japanese sentences based on the detection of conjunctive structures",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kurohashi and M. Nagao. A syntactic analysis method of long japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20(4), 1994.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building a japanese parsed corpus while improving the parsing system",
"authors": [
{
"first": "S",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of The First International Conference on Language Resources & Evaluation",
"volume": "",
"issue": "",
"pages": "719--724",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Kurohashi and M. Nagao. Building a japanese parsed corpus while improving the parsing system. In Proceedings of The First International Conference on Language Resources & Evaluation, pages 719-724, 1998.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatic acquisition of a large subcategorization dictionary from corpora",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 31th Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "235--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. D. Manning. Automatic acquisition of a large subcategorization dictionary from corpora. In Proceedings of the 31th Annual Meeting of ACL, pages 235-242, 1993.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Maximum entropy model learning of subcategorization preference",
"authors": [
{
"first": "T",
"middle": [],
"last": "Utsuro",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Miyata",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 5th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "246--260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Utsuro, T. Miyata, and Y. Matsumoto. Maximum entropy model learning of subcategorization preference. In Proceedings of the 5th Workshop on Very Large Corpora, pages 246-260, 1997.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Several methods for case frame construction."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "kuruma ni nimotsu wo tsumu car dat-CM baggage acc-CM load (load baggage onto the car) (4) keiken wo tsumu experience acc-CM accumulate (accumulate experience)"
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "jugyoin:ga kuruma:ni worker:nom-CM car:dat-CM nimotsu:wo tsumu baggage:acc-CM load (9) {truck,hikoki }:ni {truck,airplane}:dat-CM busshi :wo tsumu supply:acc-CM load In order to merge example patterns that have almost the same meaning, we cluster example patterns. The final ex-Example of calculating the similarity between example patterns (Numerals in the lower right of examples represent their frequencies.)ample case frames consist of the example pattern clusters. The detail of the clustering is described in the following section."
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "where the cases of example pattern F1 are c11, c12, \u2022 \u2022 \u2022 , c1l, the cases of example pattern F2 are c21, c22, \u2022 \u2022 \u2022 , c2m, and the common cases of F1 and F2 is cc1, cc2, \u2022 \u2022 \u2022 , ccn. E1cc i is the case example group of cci in F1. E2cc i , E1c1 i , and E2c2 i are defined in the same way. The square root in this equation decreases influences of the frequencies."
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "characteristic nom-CM have (These industries have the characteristic of strong political voice.) Analysis errors are mainly caused by two phenomena. The first is clausal modifiers which have no case relation to the modifees such as \"\u2022 \u2022 \u2022 wo mitomeru houshin\" 'policy of consenting \u2022 \u2022 \u2022 ' ( \u2020 above). The Second is verbs which take two ga 'nominative' case markers (one is wa superficially) such as \"gyokai wa \u2022 \u2022 \u2022 toiu tokutyo ga aru\" 'industries have the characteristic of \u2022 \u2022 \u2022 ' ( \u2021 above). Handling these phenomena is an area of future work."
},
"TABREF2": {
"text": "\u2022 numerals followed by a numeral classifier 2 such as tsu, ko, and nin.They are expressed with pairs of the class <quan-tity> and a numeral classifier: <quantity>tsu, <quan-tity>ko, and <quantity>nin. wo teian-shita. the assemblyman TM acc-CM proposed wa is a topic marker and giin wa 'assemblyman TM' depends on teian-shita 'proposed', but there is no case marker for giin 'assemblyman' in relation to teianshita 'proposed'. \u2022 \u2022 wo teian-shiteiru\" is a clausal modifier and teianshiteiru 'proposing' depends on giin 'assemblyman', but there is no case marker for giin 'assemblyman' in relation to teian-shiteiru 'proposing'.\u2022 Case components which contain a ni or de case marker are sometimes used adverbially. Since they have the optional relation to their verbs, we do not use them. On 30th the prime minister gave awards to those two people.) from this sentence, the following example is acquired.<time>:time-CM daijin:ga minister:nom-CM <quantity>nin:ni syou:wo okuru people:dat-CM award acc-CM give",
"html": null,
"num": null,
"content": "<table><tr><td>(</td><td/></tr><tr><td colspan=\"3\">e.g. 1tsu \u2192 &lt;quantity&gt;tsu</td></tr><tr><td/><td colspan=\"2\">2ko \u2192 &lt;quantity&gt;ko</td></tr><tr><td>&lt;clause&gt;</td><td/></tr><tr><td colspan=\"3\">\u2022 quotations (\"\u2022 \u2022 \u2022 to\" 'that \u2022 \u2022 \u2022 ') and expressions which</td></tr><tr><td colspan=\"3\">function as quotations (\"\u2022 \u2022 \u2022 koto wo\" 'that \u2022 \u2022 \u2022 ').</td></tr><tr><td colspan=\"3\">e.g. kaku to 'that \u2022 \u2022 \u2022 write',</td></tr><tr><td/><td colspan=\"2\">kaita koto wo 'that \u2022 \u2022 \u2022 wrote'</td></tr><tr><td colspan=\"3\">Exclusion of ambiguous case components</td></tr><tr><td colspan=\"3\">We do not use the following case components:</td></tr><tr><td colspan=\"3\">\u2022 Since case components which contain topic markers</td></tr><tr><td colspan=\"3\">(TMs) and clausal modifiers do not have surface case</td></tr><tr><td colspan=\"3\">markers, we do not use them. For example,</td></tr><tr><td colspan=\"3\">sono giin wa \u2022 \u2022 \u2022 \u2022 \u2022 \u2022 wo teian-shiteiru giin ga \u2022 \u2022 \u2022</td></tr><tr><td colspan=\"3\">acc-CM proposing</td><td>assemblyman</td></tr><tr><td colspan=\"3\">\"\u2022 e.g. tame ni 'because of',</td></tr><tr><td/><td colspan=\"2\">mujouken ni 'unconditionally',</td></tr><tr><td/><td colspan=\"2\">ue de 'in addition to'</td></tr><tr><td colspan=\"2\">For example,</td></tr><tr><td colspan=\"3\">30nichi ni souri daijin</td><td>ga</td></tr><tr><td>30th</td><td colspan=\"2\">on prime minister nom-CM</td></tr><tr><td colspan=\"2\">sono 2nin</td><td>ni</td></tr><tr><td colspan=\"3\">those two people dat-CM</td></tr><tr><td colspan=\"2\">syou wo</td><td>okutta</td></tr><tr><td colspan=\"3\">award acc-CM gave</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "Because he found a lot of books which he wants to buy, he sent them to Tokyo.)",
"html": null,
"num": null,
"content": "<table><tr><td colspan=\"3\">) kare wa kai-tai</td><td>hon wo</td></tr><tr><td>he</td><td colspan=\"3\">TM want to buy book acc-CM</td></tr><tr><td colspan=\"4\">takusan mitsuketa node,</td></tr><tr><td colspan=\"2\">a lot</td><td>found</td><td>because</td></tr><tr><td colspan=\"3\">Tokyo he okutta.</td></tr><tr><td colspan=\"3\">Tokyo to sent</td></tr><tr><td>(</td><td/><td/></tr></table>",
"type_str": "table"
},
"TABREF7": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td>verb</td><td>CM</td><td>case examples</td></tr><tr><td>kau1</td><td>ga</td><td>person, passenger</td></tr><tr><td>'buy'</td><td colspan=\"2\">wo* stock, land, dollar, ticket</td></tr><tr><td/><td>de</td><td>shop, station, yen</td></tr><tr><td>kau2</td><td>ga</td><td>treatment, welfare, postcard</td></tr><tr><td/><td colspan=\"2\">wo* anger, disgust, antipathy</td></tr><tr><td>. . .</td><td>. . .</td><td>. . .</td></tr><tr><td>yomu1</td><td>ga</td><td>student, prime minister</td></tr><tr><td>'read'</td><td colspan=\"2\">wo* book, article, news paper</td></tr><tr><td>yomu2</td><td>ga</td><td>&lt;person&gt;</td></tr><tr><td/><td>wo</td><td>talk, opinion, brutality</td></tr><tr><td/><td colspan=\"2\">de* news paper, book, textbook</td></tr><tr><td>yomu3</td><td>ga</td><td>&lt;person&gt;</td></tr><tr><td/><td colspan=\"2\">wo* future</td></tr><tr><td>. . .</td><td>. . .</td><td>. . .</td></tr><tr><td>tadasu1</td><td>ga</td><td>member, assemblyman</td></tr><tr><td colspan=\"3\">'examine' wo* opinion, intention, policy</td></tr><tr><td/><td colspan=\"2\">ni tsuite problem, &lt;clause&gt;, bill</td></tr><tr><td>tadasu2</td><td>ga</td><td>chairman, oneself</td></tr><tr><td colspan=\"3\">'improve' wo* position, form</td></tr><tr><td>. . .</td><td>. . .</td><td>. . .</td></tr><tr><td>kokuchi 1</td><td>ga</td><td>doctor</td></tr><tr><td>'inform'</td><td>ni *</td><td>the said person</td></tr><tr><td>kokuchi 2</td><td>ga</td><td>colleague</td></tr><tr><td/><td colspan=\"2\">wo* infection, cancer</td></tr><tr><td/><td>ni*</td><td>patient, family</td></tr><tr><td colspan=\"2\">sanseida1 ga</td><td>&lt;person&gt;</td></tr><tr><td>'agree'</td><td>ni *</td><td>opinion, idea, argument</td></tr><tr><td colspan=\"2\">sanseida2 ga</td><td>&lt;person&gt;</td></tr><tr><td/><td>ni*</td><td>&lt;clause&gt;</td></tr></table>",
"type_str": "table"
},
"TABREF8": {
"text": "",
"html": null,
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td>correct case detection</td><td>incorrect case detection</td><td>parsing error</td></tr><tr><td/><td/><td/><td>our method</td><td>topic marker clausal modifier</td><td>85 48</td><td>4 15</td><td>13 2</td></tr><tr><td/><td/><td/><td>baseline</td><td>topic marker clausal modifier</td><td>81 39</td><td>8 24</td><td>13 2</td></tr><tr><td colspan=\"5\">The following are examples of analysis results 6 :</td></tr><tr><td colspan=\"5\">wa ginko ga the Ministry of Finance TM bank nom-CM (1) 1 ookurasyo ga</td></tr><tr><td/><td colspan=\"4\">2 tsumitate-teiru 2 ryuhokin wo no</td></tr><tr><td/><td>deposit</td><td colspan=\"3\">reserve fund of</td></tr><tr><td/><td colspan=\"2\">torikuzushi wo</td><td>3 mitomeru</td><td/></tr><tr><td/><td>consume</td><td colspan=\"2\">acc-CM consent</td><td/></tr><tr><td/><td colspan=\"2\">3 houshin \u00d7ni \u2020 wo</td><td>1 kimeta .</td><td/></tr><tr><td/><td>policy</td><td colspan=\"2\">acc-CM decide</td><td/></tr><tr><td/><td colspan=\"4\">(The Ministry of Finance decided the policy of con-</td></tr><tr><td/><td colspan=\"4\">senting to consume the reserve fund which the banks</td></tr><tr><td/><td colspan=\"2\">have deposited.)</td><td/><td/></tr><tr><td>(2)</td><td colspan=\"4\">korera no 1 gyokai\u00d7wo \u2021 wa seijiteki these industry TM political</td></tr><tr><td/><td colspan=\"2\">hatsugenryoku ga</td><td colspan=\"2\">tsuyoi toiu</td></tr><tr><td/><td>voice</td><td colspan=\"2\">nom-CM strong</td><td/></tr><tr><td/><td>tokutyo</td><td>ga</td><td/><td/></tr></table>",
"type_str": "table"
}
}
}
}