| { |
| "paper_id": "Y99-1026", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:37:15.381667Z" |
| }, |
| "title": "A CLASSIFICATION TREE APPROACH TO AUTOMATIC SEGMENTATION OF JAPANESE COMPOUND SENTENCES", |
| "authors": [ |
| { |
| "first": "Yujie", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Electro-Communications", |
| "location": { |
| "settlement": "Tokyo", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Kazuhiko", |
| "middle": [], |
| "last": "Ozeki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "The University of Electro-Communications", |
| "location": { |
| "settlement": "Tokyo", |
| "country": "Japan" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "It is well known that direct parsing of a long Japanese compound sentence is extremely difficult. Various pre-processing methods have been proposed to segment such a sentence into shorter, simpler ones prior to parsing. The problem with the conventional methods is that segmentation patterns or heuristic preference scores must be given manually, and hence there is no guarantee of optimality. This paper proposes a new method of sentence segmentation based on a classification tree technique. In this method, optimal segmentation patterns and the optimal order of their application are automatically acquired from training data, linguistic phenomena together with their occurrence frequencies being taken into account. Generation of a classification tree is conducted on an EDR corpus, and evaluation results are reported. It is shown that pruning reduces the tree size to about 1/4 without affecting the performance.",
| "pdf_parse": { |
| "paper_id": "Y99-1026", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "It is well known that direct parsing of a long Japanese compound sentence is extremely difficult. Various pre-processing methods have been proposed to segment such a sentence into shorter, simpler ones prior to parsing. The problem with the conventional methods is that segmentation patterns or heuristic preference scores must be given manually, and hence there is no guarantee of optimality. This paper proposes a new method of sentence segmentation based on a classification tree technique. In this method, optimal segmentation patterns and the optimal order of their application are automatically acquired from training data, linguistic phenomena together with their occurrence frequencies being taken into account. Generation of a classification tree is conducted on an EDR corpus, and evaluation results are reported. It is shown that pruning reduces the tree size to about 1/4 without affecting the performance.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "It is well known that direct parsing of a long Japanese compound sentence, comprising many coordinate clauses, is extremely difficult. Various pre-processing methods have been proposed to segment such a sentence into shorter, simpler ones prior to parsing [1] . Sentence segmentation has also been discussed from the viewpoint of document revision support systems [2] , because a long compound sentence is difficult to understand even for humans. The techniques of sentence segmentation reported so far can be summarized as follows:",
| "cite_spans": [ |
| { |
| "start": 256, |
| "end": 259, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 363, |
| "end": 366, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
"text": "(1) Segmentation points are estimated by matching prescribed segmentation patterns against an input sentence. The segmentation patterns are described using the part-of-speech and the orthographic representation of morphemes obtained by morphological analysis [1] .",
| "cite_spans": [ |
| { |
| "start": 266, |
| "end": 269, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
"text": "(2) Dependency analysis on a clause sequence is conducted based on subordination relations among clauses [3] . Then dependency structure candidates are ranked by using heuristic dependency scores between clauses. Finally, the segmentation points are determined in accordance with the top candidate for the dependency structure [2] .",
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 107, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 326, |
| "end": 329, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
"text": "These techniques have been reported to be effective. However, the problem with these conventional methods is that the segmentation patterns or the heuristic dependency scores must be given manually, and hence there is no guarantee of optimality. This paper proposes a new method of automatic segmentation of long compound sentences using a classification tree technique [4, 5, 6, 7] based on the surface information obtained by morphological analysis. In this method, optimal segmentation patterns and the optimal order of their application are automatically acquired from training data, linguistic phenomena together with their occurrence frequencies being taken into account. The rest of the paper describes the details of the method and reports the experimental results on an EDR corpus, including the effects of pruning.",
| "cite_spans": [ |
| { |
| "start": 352, |
| "end": 355, |
| "text": "[4,", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 356, |
| "end": 358, |
| "text": "5,", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 359, |
| "end": 361, |
| "text": "6,", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 362, |
| "end": 364, |
| "text": "7]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "INTRODUCTION", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The classification tree employed in this work is of the following type:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "(1) It is a binary tree: each intermediate node has two child-nodes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "(2) Gini index [4] is employed for the measurement of impurity.", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 18, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
"text": "(3) In the tree generation stage, if there is no test that reduces the impurity at a node, the node is declared a leaf.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "(4) A leaf is labeled with \"YES\" (segmentation point) or \"NO\" (not segmentation point) by the majority rule for the training data that reach the leaf.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The data given to the tree and the tests at tree nodes will be described in detail in the following sections.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
| "text": "3. DATA AND ATTRIBUTES", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CLASSIFICATION TREE", |
| "sec_num": "2." |
| }, |
| { |
"text": "The syntactic unit employed here is the bunsetsu phrase, which consists of a content word, optionally followed by a string of function words. In segmenting a long compound sentence, it is important to define precisely what a correct segmentation point is. A correct segmentation point here is the boundary between two consecutive bunsetsu phrases X and Y, where X must satisfy the following conditions:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Points", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(1) X is not the sentence-final bunsetsu phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Points", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "(2) X is a predicate bunsetsu phrase containing a word such as a verb or an adjective.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Points", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(3) X modifies (in a wide sense) the sentence-final bunsetsu phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Points", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Segments obtained by dividing a Japanese sentence at such segmentation points are parallel, coordinate clauses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Segmentation Points", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "A brief explanation of Japanese grammatical terms relevant to the present work will be appropriate here. The values of the conjunctive attribute are listed in Table 1 . A, B, and C in the column \"value\" conform to the classification of conjunctive forms by Minami [3] . The value of the scope attribute is scope if the bunsetsu phrase contains a quotation particle 'to' or the formal noun 'koto', and null otherwise. The value of the punctuation attribute is punct if the bunsetsu phrase is followed by a comma, and null otherwise. The important bunsetsu phrases, represented by the sets of attribute values defined above, are referred to as essential phrases here. The following is an example of the conversion from an ordinary sentence to a sequence of essential phrases. The suffix is the bunsetsu phrase number in the original sentence. ",
| "cite_spans": [ |
| { |
| "start": 209, |
| "end": 212, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 105, |
| "end": 112, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Essential Phrases", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "It is probable that there is a correct segmentation point just after an essential phrase whose conjunctive attribute value is v-renyou, 'tame', A, B, C, an-renyou, or a-renyou. Such essential phrases are referred to as segmentation phrases. The boundary between a segmentation phrase and the immediately succeeding bunsetsu phrase is a segmentation point candidate.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Candidates for Segmentation Points", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "It is obvious that the attribute values of segmentation phrases are very important for the estimation of segmentation points. Also, whether a segmentation point candidate is a correct one or not is determined by the bunsetsu phrase modified by the segmentation phrase. Therefore, essential phrases that appear after the segmentation phrase are expected to play an important role in estimating a segmentation point, whereas essential phrases that appear before the segmentation phrase are considered unimportant. Thus only the segmentation phrase and the succeeding essential phrases are tested. An input datum given to a classification tree is an essential phrase sequence, with a segmentation phrase at the top. Thus n data are generated from an essential phrase sequence with n segmentation phrases. The task of the classification tree is then to judge whether a segmentation point candidate, which is the boundary between the segmentation phrase and the immediately succeeding bunsetsu phrase, is a correct one. Training data and evaluation data are labeled with \"YES\" (segmentation point) or \"NO\" (not segmentation point) using syntactic information obtained from the corpus.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data for Classification", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "The set of tests V is defined to be the product set ({conjunctive attribute values} U {*}) x {scope, null, *} x {punct, null, *}, where '*' denotes a wild card that matches any attribute value. Also introduced is the symbol '+', which matches any non-empty essential phrase sequence. Then a test is represented by",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "[X] < Y >,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "where X is an element of V U {+}, and Y is a sequence of elements of V U {+} with no two consecutive +'s. [X] checks matching between X and the segmentation phrase, and < Y > checks matching between Y and the sequence of essential phrases that appear after the segmentation phrase. For example, a datum that passes one particular test is one that satisfies the following conditions:",
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 111, |
| "text": "[X]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(1) The segmentation phrase has the conjunctive attribute value A and the punctuation attribute value punct. The scope attribute value does not matter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "(2) The essential phrase immediately after the candidate segmentation point has the conjunctive attribute value te'-renyou. The scope and punctuation attribute values do not matter.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "(3) There exists an essential phrase having the scope attribute value scope between the second essential phrase after the segmentation point candidate and the last essential phrase. The conjunctive and punctuation attribute values of the phrase do not matter. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Form of a Test", |
| "sec_num": "4.1" |
| }, |
| { |
"text": "From an EDR corpus [8] , 2000 sentences, each of which has more than 30 morphemes, were randomly selected. Then the dependency structure of each sentence was determined by using the bracket information given in the corpus. It turned out that there were 1835 well-formed sentences among the 2000. A well-formed sentence here is one that satisfies the conditions that each non-sentence-final bunsetsu phrase modifies one and only one succeeding bunsetsu phrase, and that two pairs of bunsetsu phrases in a modification relation never cross each other. The 1835 sentences were segmented into bunsetsu phrases, and the main word of each bunsetsu phrase was extracted by using the bracket information. Also, the conjugation form was determined for each phrase-final conjugating word by looking it up in a word dictionary attached to the corpus. Based on these results, the sentences were then converted to essential phrase sequences, and segmentation phrases were detected to make the experimental data. Each datum was labeled 'YES' or 'NO' depending on whether the segmentation point candidate is correct or not as indicated by the bracket information. The resulting total number of data was 2484. Fig.1 Part of the generated tree near the root.",
| "cite_spans": [ |
| { |
| "start": 19, |
| "end": 22, |
| "text": "[8]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1180, |
| "end": 1185, |
| "text": "Fig.1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Data", |
| "sec_num": "5.1" |
| }, |
| { |
"text": "From the 1835 sentences, 400 sentences were randomly selected for creating evaluation data. The remaining 1435 sentences were used to make training data. The resulting numbers of evaluation data and training data were 555 and 1435, respectively. A classification tree was generated from the training data by the method described above. The number of nodes in the generated tree was 771, among which 386 were leaves. A part of the tree near the root, related to the conjunctive attribute value C, is shown in Fig.1 . In this classification tree, an input datum having a segmentation phrase with the conjunctive attribute value C and the punctuation attribute value punct, for example, first goes to a 'yes' child-node. Then the essential phrase sequence after the segmentation point candidate is tested. If there is an essential phrase having the scope and punct attribute values between the essential phrase immediately after the segmentation point candidate and the final essential phrase in the sentence, then the datum goes to a leaf with the 'NO' class label, where the segmentation point candidate is judged not to be a segmentation point. The performance of the classification tree was measured on the evaluation data. There were 7 sentences among the 400 that had no segmentation point candidates. Some examples of segmentation results are shown below. The symbol '?' designates a segmentation point candidate, and the suffix is the serial number of the segmentation point candidate. In Example 1, the estimation results are correct for all the segmentation point candidates. The segmentation point candidate '?1' in Example 2, and '?2' and '?4' in Example 3, were wrongly estimated to be segmentation points. However, the last bunsetsu phrase in Example 3 could be 'suru-koto-mo-dekiru' instead of 'dekiru', because in some parts of the EDR corpus, 'dekiru' is labeled as a function word. If we employ such bunsetsu phrase segmentation, then the estimation results for all the segmentation point candidates in Example 3 turn into correct ones. Thus errors of the type of '?2' and '?4' in Example 3 are permissible ones. Such errors were corrected manually, and performance was evaluated in two different ways: one without such corrections, and the other with such corrections.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 518, |
| "end": 523, |
| "text": "Fig.1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Generation and Segmentation Experiment", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Evaluation measures employed in this work are as follows, and the evaluation results are shown in Table 2 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 98, |
| "end": 105, |
| "text": "Table 2", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Generation and Segmentation Experiment", |
| "sec_num": "5.2" |
| }, |
| { |
"text": "Precision = #correctly estimated segmentation points / #estimated segmentation points. The following problems were observed as to the causes of errors.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision", |
| "sec_num": null |
| }, |
| { |
"text": "(a) When a segmentation point candidate is followed (not necessarily immediately) by an essential phrase that has the conjunctive attribute value v-rentai or the scope attribute value scope, estimation results are unreliable. This shows that it is difficult to correctly estimate the scope of a rentai clause or a quotation clause, as has been pointed out. In Example 2 above, for example, the candidate '?1', which is followed by the essential phrase 'dashite-ita (issuing)' with the conjunctive attribute value v-rentai, was wrongly estimated as a segmentation point. In fact, the segmentation phrase 'susumi, (getting worse,)' modifies 'dashite-ita (issuing)', not the sentence-final bunsetsu phrase.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision", |
| "sec_num": null |
| }, |
| { |
| "text": "(b) Morphological information given in the corpus is insufficient. In the EDR corpus, the classification of particles is rather coarse. For example, the particle 'to' has three functions: 'quotation', 'conjunctive', and 'parallel conjunction'. However, there is no label in the corpus to indicate which function 'to' has when it appears in a particular context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision", |
| "sec_num": null |
| }, |
| { |
"text": "(c) Some errors obviously result from inconsistencies in the bracket information in the corpus.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision", |
| "sec_num": null |
| }, |
| { |
"text": "It is expected that pruning makes the classification tree more compact and improves its generalization property. Various pruning methods have been proposed so far [4, 7] , from which the one proposed by Gelfand et al. [9] was employed here. The method is briefly described as follows. Let T be a classification tree, and D a set of data. The error rate of T evaluated on D is denoted by R(T, D). Let the expression T' < T denote that T' is a pruned subtree of T.",
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 167, |
| "text": "[4,", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 168, |
| "end": 170, |
| "text": "7]", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 215, |
| "end": 218, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 385, |
| "end": 392, |
| "text": "RV , D)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EFFECTS OF PRUNING", |
| "sec_num": "6." |
| }, |
| { |
"text": "Prepare two independent training data sets D1 and D2. The pruning algorithm uses D1 and D2 alternately to grow and prune the tree in the following manner:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EFFECTS OF PRUNING", |
| "sec_num": "6." |
| }, |
| { |
"text": "(1) Initially, generate a fully grown classification tree T1 on D1.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EFFECTS OF PRUNING", |
| "sec_num": "6." |
| }, |
| { |
| "text": "(2) Find a pruned subtree Ti * of T1 that minimizes the error rate on D2: := arg min R(r , D2).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "EFFECTS OF PRUNING", |
| "sec_num": "6." |
| }, |
| { |
| "text": "(3) Generate a full grown classification tree T2 on D 2 extending branches from leaves of (4) Find a pruned subtree 7'; of T2 that minimizes the error rate on .131:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "EFFECTS OF PRUNING",
"sec_num": "6."
| }, |
| { |
"text": "T2* := arg min R(T', D1), the minimum being taken over all T' < T2. (5) Generate a fully grown classification tree T3 on D1 by extending branches from the leaves of T2*. (6) Repeat steps 2 through 5 above, incrementing the suffix of T, until a stopping condition is satisfied.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "EFFECTS OF PRUNING",
"sec_num": "6."
| }, |
| { |
"text": "This algorithm was applied to the current problem. Among the 1435 training sentences, 514 were unambiguous with respect to segmentation. The remaining 921 sentences were used as training sentences in this experiment. The 1339 training data generated from the 921 sentences were split into two sets D1 (670 data) and D2 (669 data). As the growing and pruning iteration proceeded, the size and the error rate of the classification tree changed as shown in Table 3 . After k=6, the same results as in steps 4 and 5 alternately appeared. Table 4 shows the size and the performance of the pruned trees measured on the 555 evaluation data described in 5.2. The last row of the table is for the unpruned tree described in 5.2. From this table, it is observed that pruning reduces the size of the tree to about 1/4 without affecting the performance, though it does not improve the performance on the evaluation data. Table 3 . Change of the tree size and the error rate by pruning. |T| denotes the size (the number of nodes) of a tree T, and j = (k mod 2) + 1. Table 4 . Performance of pruned trees measured on the evaluation data. The last row is for the unpruned tree.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 454, |
| "end": 461, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 534, |
| "end": 541, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 922, |
| "end": 929, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1067, |
| "end": 1074, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
"section": "EFFECTS OF PRUNING",
"sec_num": "6."
| }, |
| { |
"text": "After a brief review of conventional techniques for the segmentation of long Japanese compound sentences, a new method based on a classification tree technique was introduced. Generation of a classification tree was conducted on an EDR corpus, and evaluation results were reported. It was shown that pruning reduces the tree size to about 1/4 without affecting the performance, though it does not improve the performance in this application. To further improve the performance, more detailed morphological information will be necessary. Also, the problem of how to fully exploit morphological information in a classification tree technique, especially for determining the scope of a quotation clause and a rentai clause, remains open.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "CONCLUSION", |
| "sec_num": "7." |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was supported in part by the Okawa Foundation for Information and Telecommunications.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "ACKNOWLEDGMENT", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "An automatic sentence breaking and subject supplement method for J/E machine translation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ehara", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "Trans. IPSJ", |
| "volume": "36", |
| "issue": "6", |
| "pages": "1018--1028", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Kim and T. Ehara, \"An automatic sentence breaking and subject supplement method for J/E machine translation,\" Trans. IPSJ, Vol. 36, No. 6, pp. 1018-1028, 1984 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Dividing Japanese complex sentences based on conjunctive expressions analysis", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Takeishi", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Hayashi", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Trans. IPSJ", |
| "volume": "33", |
| "issue": "5", |
| "pages": "652--663", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Takeishi and Y. Hayashi, \"Dividing Japanese complex sentences based on conjunctive expressions analysis,\" Trans. IPSJ, Vol. 33, No. 5, pp.652-663, 1992 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Structure of Modern Japanese", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Minami", |
| "suffix": "" |
| } |
| ], |
| "year": 1974, |
| "venue": "Taishukan-Shoten", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Minami, \"The Structure of Modern Japanese,\" Taishukan-Shoten, 1974 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Classification and Regression Trees", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Breiman", |
| "suffix": "" |
| } |
| ], |
| "year": 1984, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Breiman et al., \"Classification and Regression Trees,\" Chapman & Hall, 1984.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Automatic bunsetsu segmentation of Japanese sentences using a classification tree", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Ozeki", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proc. PACLIC13", |
| "volume": "", |
| "issue": "", |
| "pages": "230--235", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Zhang and K. Ozeki, \"Automatic bunsetsu segmentation of Japanese sentences using a classification tree,\" Proc. PACLIC13, pp. 230-235, 1998.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "The application of semantic classification trees to natural language understanding", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "De Mori", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "IEEE Trans. PAMI", |
| "volume": "17", |
| "issue": "5", |
| "pages": "449--460", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Kuhn and R. de Mori, \"The application of semantic classification trees to natural language understanding,\" IEEE Trans. PAMI, Vol.17, No.5, pp.449-460, 1995.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "C4.5: Programs for Machine Learning", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Quinlan", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. R. Quinlan, \"C4.5: Programs for Machine Learning,\" Morgan Kaufmann Publishers, 1993.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Specification of EDR Electronic Dictionary Ver.1.5", |
| "authors": [], |
| "year": 1996, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Japan Electronic Dictionary Research Institute, \"Specification of EDR Electronic Dic- tionary Ver.1.5,\" 1996 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An iterative growing and pruning algorithm for classification tree design", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "B" |
| ], |
| "last": "Gelfand", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "S" |
| ], |
| "last": "Ravishankar", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [ |
| "J" |
| ], |
| "last": "Delp", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "IEEE Trans. PAMI", |
| "volume": "13", |
| "issue": "2", |
| "pages": "163--174", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. B. Gelfand, C. S. Ravishankar and E. J. Delp, \"An iterative growing and pruning al- gorithm for classification tree design,\" IEEE Trans. PAMI, Vol.13, No.2, pp.163-174, 1991.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
"text": "Estimation results are shown by (Y) (segmentation point) and (N) (not segmentation point). The symbol '|' indicates a correct segmentation point. Example 1 is the same sentence as the one that appeared in 3.2. [Example 1] 16-nichi-ni (on 16th) bei (American) senseki (of registry) tankah-ga (tanker [nominative]) hidan-shita (was shot) toki, (when,) ?1 (N) kuehto-gun-ha (Kuwaiti forces [nominative]) misairu-no (missile's) hirai-wo (coming [accusative]) tanchi, (detected,) ?2 (N) jigun-no (of their own forces) chi-tai-kuh-misairu-de (with a surface-to-air missile) geigeki-shiyou-toshita-ga, (tried to intercept, but,) ?3 (Y) | shippai-ni (in failure) owa-tta. (ended.) [Example 2] yamai-ga (disease [nominative]) susumi, (getting worse,) ?1 (Y) sutajio-no (in the studio) sofa-ni (on a sofa) yoko-ni (down) nari-nagara (lying) ?2 (N) shiji-wo (instructions [accusative]) dashite-ita (issuing) Kamei-san-ha, (Mr.Kamei [nominative],) saigo-no (last) rohru-ga (roll [nominative]) owatta-toki, (when finished,) namida-gunda. (eyes wet with tears.) [Example 3] yasumi-wo (holiday [accusative]) tora-nakere-ba, (if not take,) ?1 (N) tsugi-tsugi-ni (successively) kasan-sarete-yuku-node, (because of being accumulated,) ?2 (Y) matome-te (together) ?3 (N) moku, (Thursday,) kinyou-wo (Friday [accusative]) yasumi-ni (holiday) shite, (take, and,) ?4 (Y) shukyu-to (with a weekend) awase-te (joining together) ?5 (N) 4-renkyu-ni (4-day off) suru-koto-mo (taking) dekiru. (possible.)",
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "content": "<table><tr><td colspan=\"3\">Table 1. Values for the conjunctive attribute. Morphemes 'te', 'de', 'tame', 'nagara', 'ba', 'ga (conjunctive)', 'ha', 'mo', 'ga (case)' are particles.</td></tr><tr><td>value</td><td>main word</td><td>last morpheme</td></tr><tr><td>v-renyou</td><td>verb, taigen+copula</td><td>renyou form</td></tr><tr><td>'te'-renyou</td><td>yougen</td><td>conjunctive particles 'te', 'de'</td></tr><tr><td>'tame'</td><td>yougen</td><td>formal noun, temporal noun</td></tr><tr><td>A</td><td>yougen</td><td>conjunctive particle 'nagara', etc.</td></tr><tr><td>B</td><td>yougen</td><td>conjunctive particle 'ba', etc.</td></tr><tr><td>C</td><td>yougen</td><td>conjunctive particle 'ga', etc.</td></tr><tr><td>an-renyou</td><td>adjectival noun</td><td>renyou form</td></tr><tr><td>a-renyou</td><td>adjective</td><td>renyou form</td></tr><tr><td>v-rentai</td><td>verb, taigen+copula</td><td>rentai form</td></tr><tr><td>yougen</td><td>yougen</td><td>particles other than conjunctive particles</td></tr><tr><td>a-rentai</td><td>adjective, adjectival noun</td><td>rentai form</td></tr><tr><td>'ha'</td><td>taigen</td><td>kakari particle 'ha'</td></tr><tr><td>'mo'</td><td>taigen</td><td>kakari particle 'mo'</td></tr><tr><td>'ga'</td><td>taigen</td><td>case particle 'ga'</td></tr><tr><td>shushi</td><td>yougen</td><td>period</td></tr><tr><td colspan=\"3\">When a yougen modifies a taigen, it takes a rentai form.</td></tr><tr><td colspan=\"3\">Some bunsetsu phrases in a sentence play an important role in estimating segmentation points, while others do not. A bunsetsu phrase whose final morpheme is a kakari particle such as 'ha' or 'mo', or the case particle 'ga', is considered important. A predicate bunsetsu phrase containing a word such as a verb or an adjective is also important. Those important bunsetsu phrases are marked, and their attribute values are extracted. Three attributes, (1) conjunctive, (2) scope, and (3) punctuation, are employed here. The values of the conjunctive attribute are defined according to the main word and the last morpheme in a bunsetsu phrase as in Table 1.</td></tr></table>", |
| "type_str": "table", |
| "text": "Yougen refers to conjugating content words such as verbs, adjectives, adjectival nouns, and noun+copulas. A yougen changes its ending depending on its function. A base form is called a shushi form. When a yougen modifies a yougen, it takes a renyou form.", |
| "num": null |
| }, |
| "TABREF1": { |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "text": "hidan-shita (was shot) 5 toki, (when,) 6 kuehto-gun-ha (Kuwaiti forces [nominative]) 7 misairu-no (missile's) 8 hirai-wo (coming [accusative]) 9 tanchi, (detected,) 10 jigun-no (of their own forces) 11 chi-tai-kuh-misairu-de (with a surface-to-air missile) 12 geigeki-shiyou-to-shita-ga, (tried to intercept, but,) 13 shippai-ni (in failure) 14 owa-tta. (ended.) 15 [Essential Phrase Sequence] ('ga', null, null)4 (v-rentai, null, null)5 ('tame', null, punct)6 ('ha', null, null)7 (v-renyou, null, punct)10 (C, scope, punct)13 (shushi, null, null)15", |
| "num": null |
| }, |
| "TABREF5": { |
| "html": null, |
| "content": "<table><tr><td>Precision</td><td>#correctly estimated segmentation points / #estimated segmentation points</td></tr><tr><td>Recall</td><td>#correctly estimated segmentation points / #correct segmentation points</td></tr><tr><td>Accuracy</td><td>#correctly segmented sentences / #evaluation sentences</td></tr><tr><td>Precision %</td><td>81 (84)</td></tr><tr><td>Recall %</td><td>84 (84)</td></tr><tr><td>Accuracy %</td><td>72 (77)</td></tr></table>", |
| "type_str": "table", |
| "text": "Evaluation of segmentation results. Figures in the parentheses show the results with manual corrections described above.", |
| "num": null |
| }, |
| "TABREF6": { |
| "html": null, |
| "content": "<table><tr><td/><td colspan=\"2\">Before pruning</td><td colspan=\"2\">After pruning</td></tr><tr><td>Step k</td><td>|Tk|</td><td>R(Tk, D3)</td><td>|Tk*|</td><td>R(Tk*, D3)</td></tr><tr><td>1</td><td>431</td><td>0.283</td><td>167</td><td>0.214</td></tr><tr><td>2</td><td>519</td><td>0.312</td><td>183</td><td>0.175</td></tr><tr><td>3</td><td>457</td><td>0.275</td><td>201</td><td>0.197</td></tr><tr><td>4</td><td>519</td><td>0.312</td><td>183</td><td>0.175</td></tr><tr><td>5</td><td>457</td><td>0.275</td><td>201</td><td>0.197</td></tr><tr><td/><td>Tree size</td><td>Precision %</td><td>Recall %</td><td>Accuracy %</td></tr><tr><td>With pruning: T1*</td><td>167</td><td>85</td><td>83</td><td>76</td></tr><tr><td>T2*</td><td>183</td><td>84</td><td>85</td><td>76</td></tr><tr><td>T3*</td><td>201</td><td>84</td><td>85</td><td>77</td></tr><tr><td>T4*</td><td>183</td><td>84</td><td>85</td><td>76</td></tr><tr><td>Without pruning</td><td>771</td><td>84</td><td>84</td><td>77</td></tr></table>", |
| "type_str": "table", |
| "text": "Generate a fully grown classification tree T3 on D1, extending branches from the leaves of T2*.", |
| "num": null |
| } |
| } |
| } |
| } |