{
"paper_id": "U03-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:40.003594Z"
},
"title": "S-clause Segmentation for Efficient Syntactic Analysis Using Decision Trees",
"authors": [
{
"first": "Mi-Young",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pohang University of Science and Technology (POSTECH) and Advanced Informa-tion Technology Research Center(AlTrc)",
"location": {
"addrLine": "San 31 Hyoja-dong, Nam-gu",
"postCode": "790-784",
"settlement": "Pohang",
"country": "R. of Korea"
}
},
"email": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Pohang University of Science and Technology (POSTECH) and Advanced Informa-tion Technology Research Center(AlTrc)",
"location": {
"addrLine": "San 31 Hyoja-dong, Nam-gu",
"postCode": "790-784",
"settlement": "Pohang",
"country": "R. of Korea"
}
},
"email": "jhlee@postech.ac.kr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an \"Sclause\" segmentation method, where an S(ubject)clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",
"pdf_parse": {
"paper_id": "U03-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "In dependency parsing of long sentences with fewer subjects than predicates, it is difficult to recognize which predicate governs which subject. To handle such syntactic ambiguity between subjects and predicates, this paper proposes an \"Sclause\" segmentation method, where an S(ubject)clause is defined as a group of words containing several predicates and their common subject. We propose an automatic S-clause segmentation method using decision trees. The S-clause information was shown to be very effective in analyzing long sentences, with an improved performance of 5 percent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The longer the input sentences, the worse the parsing results are, since problems with syntactic ambiguity increase drastically. In our parser, subject errors form the second largest error portion, as 24.15% of syntactic parsing errors (see Table 1 ). Although the dependency errors in NP form the largest error portion, these errors are not significant since many applications (e.g. MT systems) using parsers deal with the NP structure as a one unit and do not analyze the syntactic relations within NP. So, this paper proposes a method to resolve subject dependency error problems. To improve the dependency parsing performance, we need to determine the correct dependency relations of subjects.",
"cite_spans": [],
"ref_spans": [
{
"start": 241,
"end": 248,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In most cases, a long sentence has fewer subjects than predicates. The reason is that several predicates can share one subject if they require the same word as their subject, or that the subject of a predicate is often omitted in a Korean sentence. So, in a long sentence, it is difficult to recognize the correct subject of some subjectless VPs. This paper proposes an S(ubject)-clause segmentation method to reduce ambiguity in determining the governor of a subject in dependency parsing. An S(ubject)-clause is defined as a group of words containing several predicates and their common subject. An S-clause includes one subject and several predicates which share the subject. The Sclause segmentation algorithm detects the boundary of predicates which share a common subjective word. We employ the C4.5 decision tree learning algorithm for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The next section presents the background of previous work on sentence segmentation and clause detection. Next, dependency analysis procedure using S-clauses in Korean will be described. Afterwards, the features for decision tree learning to detect S-clauses will be explained, and some experimental results will show that the proposed S-clause segmentation method is effective in dependency parsing. Finally, a conclusion will be given. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A considerable number of studies have been conducted on the syntactic analysis of long sentences. First, conjunctive structure identification methods have been proposed (Agarwal 1992; Jang 2002; Kurohashi 1994; Yoon 1997) . These methods are based on structural parallelism and the lexical similarity of coordinate structures. While they perform well in detecting the boundary of a coordinate structure, they cannot determine the boundary of predicates that share a common subject. In addition, some papers insist that coordinate structure identification is impossible since Korean coordinate sentences do not maintain structural parallelism (Ko 1999) .",
"cite_spans": [
{
"start": 169,
"end": 183,
"text": "(Agarwal 1992;",
"ref_id": "BIBREF0"
},
{
"start": 184,
"end": 194,
"text": "Jang 2002;",
"ref_id": "BIBREF5"
},
{
"start": 195,
"end": 210,
"text": "Kurohashi 1994;",
"ref_id": "BIBREF12"
},
{
"start": 211,
"end": 221,
"text": "Yoon 1997)",
"ref_id": "BIBREF24"
},
{
"start": 642,
"end": 651,
"text": "(Ko 1999)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "Second, several studies have been made on clause segmentation (identification, splitting) (Sang and Dejean 2001). The clause seems to be a natural structure above the chunk (Ejerhed 1998) . Clause identification splits sentences that center around a verb. The major problem with clause identification concerns the sharing of the same subject by different clauses (Vilson 1998) . When a subject is omitted in a clause, Vilson(1998) attached the features of the previous subject to the conjunctions. However, the subject of a clause is not always the nearest subject. Therefore, a new method is required to detect the correct subject of a clause.",
"cite_spans": [
{
"start": 173,
"end": 187,
"text": "(Ejerhed 1998)",
"ref_id": "BIBREF3"
},
{
"start": 363,
"end": 376,
"text": "(Vilson 1998)",
"ref_id": "BIBREF13"
},
{
"start": 418,
"end": 430,
"text": "Vilson(1998)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "In addition, many studies have focused on segmentation in long sentences. Some try to segment a long sentence using patterns and rules and to analyze each segment independently (Doi 1993; Kim 1995; Kim 2002 ; Li 1990) . Similarly, an intrasentence segmentation method using machine learning is proposed . Although this method reduces the complexity of syntactic analysis by segmenting a long sentence, the ambiguity problem with the dependency of subjects remains unsolved. Further, Lyon and Dickerson take advantage of the fact that declarative sentences can almost always be segmented into three concatena-In addition, many studies have focused on segmentation in long sentences. Some try to segment a long sentence using patterns and rules and to analyze each segment independently (Doi 1993; Kim 1995; Kim 2002 ; Li 1990) . Similarly, an intrasentence segmentation method using machine learning is proposed . Although this method reduces the complexity of syntactic analysis by segmenting a long sentence, the ambiguity problem with the dependency of subjects remains unsolved. Further, Lyon and Dickerson take advantage of the fact that declarative sentences can almost always be segmented into three concatena-ted sections (pre-subject, subject, predicate) which can reduce the complexity of parsing English sentences (Lyon and Dickerson 1995; Lyon and Dickerson 1997) . This approach is useful for a simple sentence that contains a subject and a predicate. A long sentence generally contains more than a subject and a predicate. Therefore, the segmentation methods proposed by Lyon and Dickerson are inefficient for parsing long sentences. In studies on segmenting long sentences, little attention has been paid to detecting the boundaries of predicates which share a common subject.",
"cite_spans": [
{
"start": 177,
"end": 187,
"text": "(Doi 1993;",
"ref_id": "BIBREF2"
},
{
"start": 188,
"end": 197,
"text": "Kim 1995;",
"ref_id": "BIBREF9"
},
{
"start": 198,
"end": 208,
"text": "Kim 2002 ;",
"ref_id": "BIBREF7"
},
{
"start": 209,
"end": 217,
"text": "Li 1990)",
"ref_id": "BIBREF14"
},
{
"start": 785,
"end": 795,
"text": "(Doi 1993;",
"ref_id": "BIBREF2"
},
{
"start": 796,
"end": 805,
"text": "Kim 1995;",
"ref_id": "BIBREF9"
},
{
"start": 806,
"end": 816,
"text": "Kim 2002 ;",
"ref_id": "BIBREF7"
},
{
"start": 817,
"end": 825,
"text": "Li 1990)",
"ref_id": "BIBREF14"
},
{
"start": 1324,
"end": 1349,
"text": "(Lyon and Dickerson 1995;",
"ref_id": "BIBREF16"
},
{
"start": 1350,
"end": 1374,
"text": "Lyon and Dickerson 1997)",
"ref_id": "BIBREF17"
},
{
"start": 1584,
"end": 1592,
"text": "Lyon and",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "To determine the correct subject of some subjectless VPs, we define the 'S-clause' and propose an S-clause segmentation method. In previous work, a clause is defined as a group of words containing a verb, and previous researchers split sentences centering around a verb to detect clauses. By contrast, we split sentences centering around a subject. So we call the proposed segment 'S(ubject)-clause' to distinguish it from a clause. This section overviews our dependency analysis procedure for the Korean language. First, we determine NP-and VP-chunks following the method of Kim (Kim et al, 2000) . Next, First, we determine NP-and VP-chunks following the method of Kim (Kim et al, 2000) . Next, we bind a predicate and its arguments to determine the heads of arguments using subcategorization and the selectional restriction information of predicates. This procedure is similar to the clause detection procedure. In this procedure, we also determine the grammatical function of unknown case words according to Lee's method (Lee et al, 2003) .",
"cite_spans": [
{
"start": 580,
"end": 597,
"text": "(Kim et al, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 671,
"end": 688,
"text": "(Kim et al, 2000)",
"ref_id": "BIBREF6"
},
{
"start": 1025,
"end": 1042,
"text": "(Lee et al, 2003)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "It is important to identify the subject grammatical function of unknown case words correctly, since one S-clause is constructed per subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "In Korean, arguments of predicates, especially subjects, are often omitted in a sentence. We leave the dependency relations of subjects unconnected, since ambiguity occurs when detecting the heads of subjects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "Third, using predicate information and grammatical function detection results after clause detection, we detect S-clauses. And then, using Sclauses, we determine the dependency relations between subjects and predicates. It can be also helpful in determining the heads of adjuncts, since their heads can be found within an S-clause boundary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Work",
"sec_num": null
},
{
"text": "Before S-clause segmentation, we have determined the dependency relations between arguments (except subjects) and predicates. Next, we determine the heads of subjects after S-clause segmentation. Although we assume that all the predicates in an S-clause require the subject within the S-clause, some S-clause segmentation errors may exist. To recover the S-clause segmentation errors, we use selectional restriction information to find the relevant head of a subject. We regard the head of the subject within an Sclause as the farthest predicate in the S-clause which requires the concept of the subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Analysis based on S-clauses",
"sec_num": "4.2"
},
{
"text": "Still, the dependency relations of adjuncts and those of predicates are not determined. The heads of adjuncts and those of predicates are dependent on the nearest head candidate not giving rise to crossing links. Using S-clauses, we can accomplish dependency parsing simply and effectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency Analysis based on S-clauses",
"sec_num": "4.2"
},
{
"text": "Decision tree induction algorithms have been successfully applied to NLP problems such as parsing (Magerman 1995; Haruno et al 1998) , discourse analysis (Nomoto and Matsumoto 1998) , sentence boundary disambiguation (Palmer 1997) , phrase break prediction (Kim 2000) and word segmentation (Sornertlamvanich et al 2000) . We employed a C4.5 (Quinlan 1993 ) decision tree induction program as the learning algorithm for Sclause segmentation.",
"cite_spans": [
{
"start": 98,
"end": 113,
"text": "(Magerman 1995;",
"ref_id": "BIBREF18"
},
{
"start": 114,
"end": 132,
"text": "Haruno et al 1998)",
"ref_id": "BIBREF4"
},
{
"start": 154,
"end": 181,
"text": "(Nomoto and Matsumoto 1998)",
"ref_id": "BIBREF19"
},
{
"start": 217,
"end": 230,
"text": "(Palmer 1997)",
"ref_id": "BIBREF20"
},
{
"start": 257,
"end": 267,
"text": "(Kim 2000)",
"ref_id": "BIBREF6"
},
{
"start": 290,
"end": 319,
"text": "(Sornertlamvanich et al 2000)",
"ref_id": "BIBREF23"
},
{
"start": 341,
"end": 354,
"text": "(Quinlan 1993",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "S-clause Segmentation based on Decision Tree Learning The C4.5 Learning Algorithm",
"sec_num": null
},
{
"text": "The induction algorithm proceeds by evaluating the information content of a series of attributes and iteratively building a tree from the attribute values, with the leaves of the decision tree representing the values of the goal attributes. At each step of the learning procedure, the evolving tree branches from the attribute that divides the data items with the highest gain in information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-clause Segmentation based on Decision Tree Learning The C4.5 Learning Algorithm",
"sec_num": null
},
{
"text": "Branches will be added to the tree until the decision tree can classify all items in the training set. To reduce the effects of overfitting, C4.5 prunes the entire decision tree after construction. It recursively examines each subtree to determine whether replacing it with a leaf or branch will reduce the expected error rate. Pruning improves the ability of the decision tree to handle data which is different from the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S-clause Segmentation based on Decision Tree Learning The C4.5 Learning Algorithm",
"sec_num": null
},
{
"text": "This section explains the concrete feature setting we used for learning. The S-clause is a broader concept than the clause. In order to determine the S-clauses, we must choose the clauses that are suitable for addition to the S-clause. Since the head word of a clause is the predicate in the clause, we merely use predicate information. The feature set focuses on the predicates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "An S-clause can be embedded in another Sclause. Therefore, we should learn two methods to detect the left boundary and right boundary of an S-clause independently. We should include one subject between the left boundary and the right boundary of an S-clause. We call the subject to include in an S-clause the 'target subject'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "First, when we detect the left boundary of an Sclause, we consider the predicates between the 1 st Feature Type of a predicate 2 nd Feature Surface form of the last ending of a predicate 3 rd Feature Comma 'target subject' and the nearest subject which precedes the 'target subject'. Each predicate has 3 features, as shown in Table 2. The 1 st feature concerns the type of a predicate. Next, the 2 nd feature takes the value of the surface form of the last ending of a predicate. Korean is an agglutinative language and the ending of a predicate indicates the connective function with the next VP (e.g. '\uc73c\ubbc0\ub85c(because)' indicates it functions as a reason for the next VP).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "The 3 rd feature deals with the information whether a predicate is followed by a comma or not. The use of a comma to insert a pause in a sentence is an important key to detect an S-clause boundary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "We use 12 features for left boundary detection -4 predicates, and 3 features for each predicate as summarized in Table 2 . The class set consists of 5 values (0~4) to indicate the position of the predicate that becomes a left boundary. If the class value is 0, it means the S-clause includes no predicates preceding the 'target subject'. Other wise, if the class value is 1, it means that that the S-clause includes one nearest predicate which appears preceding the 'target subject'. The window size of predicates for the left boundary is 4. If there are less than 4 predicates, then we fill the empty features with 'null'. For right boundary detection, we consider the predicates between the 'target subject' and the next subject following the 'target subject'.",
"cite_spans": [],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "We use 15 features for right boundary detection -5 predicates, and the same 3 features for each predicate as in Table 2 . Among the predicates between 'target subject' and the next subject following the 'target subject', we consider 4 predicates which appear near the 'target subject' and 1 predicate which locates last. The reason that 1 predicate which locates last is considered is as follows: If all the predicates between 'target subject' and the next subject following the 'target subject' require the 'target subject' as their common subject, the right boundary becomes the last predicate among them, since Korean is a head-final language.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "Although the feature set is the same as that for right boundary detection, the window size for the right boundary is 5, which is larger than that for the left boundary. The reason is that Korean is a head-final language and the predicates of a subject appear after the subject. The detailed values of each feature type are summarized in Table 3 . We first detect the S-clause which includes the last subject of an input word set. If an S-clause is detected, we exclude the words which are included in the S-clause from the input word set. Then, we recursively detect the S-clause including the last subject in the remaining word set until there are no subjects in the modified word set. ",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 344,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": null
},
{
"text": "The test set is different from the training set and the average length of the test sentence is 19.27 words/sentence while that of the training sentence is 14.63 words/sentence. We selected longer sentences as a test set since the S-clause segmentation method is proposed to improve the performance of syntactic analysis in long sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "The parsing accuracy is calculated as (correct dependency links)/(all the dependency links). The number of detected dependency links and that of the true dependency links are equal, so parsing accuracy is the same as parsing recall. For the reason, we do not measure the parsing recall separately. However, in the case of S-clauses, the Sclause precision is different from the S-clause recall, since the subject grammatical function detection results for unknown case words are not perfectly correct. We measured the S-clause precision as (correct S-clauses)/(all the detected Sclauses), and the S-clause recall as (correct Sclauses)/(all the true S-clauses). To show the effectiveness of S-clauses, we compare the parsing result using S-clauses and without S-clauses, and also compare our parser performance with others which analyze similar languages with Korean.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "In the experiments, we obtained the following two results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "1. The better the S-clause segmentation performance, the better the parsing accuracy that results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "2. The maximum S-clause accuracy is 84.40% and the maximum parsing accuracy is 89.12% with 50000 training sentences. The test set size is 10,000 sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "We will discuss the maximum accuracy of 89.12% compared with the Japanese KN parser, which shows the highest performance in Japanese dependency parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "The characteristics of Japanese are similar to the Korean language. So, the mechanism of syntactic analysis in Korean can also be applied to Japanese. In Japanese dependency parsers and Korean dependency parsers, the KN parser shows the highest performance. In addition, we can freely obtain the programs. So, we compare the performance of our parser with that of the KN parser. To do this, we need a bilingual test corpus. We obtain 10,000 Japanese test set by translating the 10,000 Korean test set using Korean-to-Japanese machine translation system of our own. Then, several researchers specializing in Japanese manually corrected the translation results. We experimented on the performance of the KN parser using these 10,000 Japanese sets. To detect the head of a subject, the KN parser uses only some heuristics (Kurohashi 1994) . As shown in Table 5 , the performance of our parser",
"cite_spans": [
{
"start": 819,
"end": 835,
"text": "(Kurohashi 1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 850,
"end": 857,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "The Amount of Training Data vs. Sclause Segmentation Accuracy vs. Parsing Accuracy",
"sec_num": "5.1"
},
{
"text": "Feature Accuracy Change 1 st type -7.34% 3 rd type -0.04% 1 st surface form -1.15% 3 rd surface form -0.02% 1 st comma -2.42% 3 rd comma -0.82% 2 nd type -0.32% 4 th type -0.0% 2 nd surface form -0.23% 4 th surface form -0.0% 2 nd comma -5.29% 4 th comma -0.01% -0.8 % 4 th type -0.2 % 1 st comma -2.7 % 4 th surface form -0.3 % 2 nd type -0.3 % 4 th comma 0.0 % 2 nd surface form -1.3 % 5 th type -0.8 % 2 nd comma -1.9 % 5 th Surface form. -0.1 % 3 rd type 0.0 % 5 th comma 0.0 % 3 rd surface form 0.0 % Table 8 : The Type of S-clause Errors without S-clause segmentation is worse than that of the KN parser. In our parser without S-clause segmentation, a word simply depends on the nearest head not giving rise to crossing links. However, after S-clause segmentation, the performance of our parser is similar to that of the KN parser. The accuracy of our parser in detecting the head of a subject is also better than that of the KN parser. We also compare the performance of our parser with a Korean Yon-sei dependency parser, as shown in Table 5 . The parser using S-clauses outperforms the Yon-sei parser by 1 percent. Since the Yon-sei dependency parser is not an open resource, we simply compare the performance of our parser with that of Yon-sei parser written in Kim (2002) . Therefore, the comparison of the performance between our parser and the Korean Yonsei dependency parser may not be so reasonable.",
"cite_spans": [
{
"start": 1272,
"end": 1282,
"text": "Kim (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 506,
"end": 513,
"text": "Table 8",
"ref_id": null
},
{
"start": 1042,
"end": 1049,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Accuracy Change",
"sec_num": null
},
{
"text": "Next, we will summarize the significance of each feature introduced in Section 4.2. Table 6 and Table 7 illustrate how the S-clause accuracy is reduced when each feature is removed. Table 6 clearly demonstrates that the most significant feature for the left boundary is the type of the previous 1 st predicate-we obtain the information from the decision rules that, especially, the 'adnominal' type of the previous 1 st predicate is a significant feature. As shown Table 6 , 4 th predicate information has no effect on the left boundary. Table 7 demonstrates that the most significant feature for the right boundary is comma information, since the S-clause accuracy without 1 st , 2 nd or 3 rd comma information shows high accuracy decrease. The 5 th predicate information is more useful than the 4 th predicate. In other words, the last predicate can be the head of a subject than the intermediate predicate.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 91,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 182,
"end": 189,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 465,
"end": 472,
"text": "Table 6",
"ref_id": "TABREF7"
},
{
"start": 538,
"end": 545,
"text": "Table 7",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "2 Significance of Features",
"sec_num": "5."
},
{
"text": "This result may partially support heuristics; the left boundary would be an adnominal predicate since only adnominal predicates are followed by their subjects (other predicates are preceded by their subjects). Next, after the comma, a boundary mostly occurs. In particular, we need to concentrate on the types of predicates to attain a higher level of accuracy. To some extent, most features contribute to the parsing performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2 Significance of Features",
"sec_num": "5."
},
{
"text": "In our experiment, only the surface form of the endings of conjunctive predicates, rather than other predicates, is effective on performance. The reason is that the surface form of the ending of the non-conjunctive predicates does not indicate the connective function with the next VPs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2 Significance of Features",
"sec_num": "5."
},
{
"text": "We classify the S-clause errors, as shown in Table 8 . Table 8 shows that many S-clause errors are due to the Korean characteristics.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 8",
"ref_id": null
},
{
"start": 55,
"end": 62,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "Among the S-clause errors, subject detection errors rank first, which occupy 25.15%. So, the Sclause accuracy result is different from the Sclause recall result.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "Next, POS tagging errors result in the S-clause segmentation errors of 20.66 percent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "These two errors occur before S-clause segmentation. So, this is another issue that remains for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "Also, double subject errors are 11.38%. Some Korean predicates can require two subjects. This is contrary to our assumption of S-clauses. Since 11.38% is large portion of all the errors, we should consider double subject construction and identify the characteristics of the predicates in double subject constructions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "The right boundary errors are more than left boundary errors. It means that the right boundary detection is more difficult.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "Finally, some adverbials, not predicates, can function as predicates of subjects. Since we only detect boundaries focusing on predicates, these adverbials information cannot be used. We should include these adverbials that function as predicates into the S-clause boundary candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion about S-clause Errors",
"sec_num": "5.3"
},
{
"text": "This paper proposes an S-clause segmentation method to reduce syntactic ambiguity in long sentences. An S(ubject)-clause is defined as a group of words containing several predicates and their common subject. An S-clause includes one subject and several predicates that share the subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We have described an S-clause segmentation method that uses decision trees. The experimental results show that the parser using S-clauses outperforms the parser without S-clauses by 5% and also outperforms conventional Korean dependency parsers by 1 percent. To improve the Sclause accuracy, we should detect double subject constructions and adverbials which function as predicates. We plan to continue our research in two directions. First, we will combine our Sclause segmentation method with a coordinate structure detection method and test the parsing results. Second, we will apply it to a machine translation system and translate each S-clause independently and test the translation performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Experimental ResultsWe evaluated the proposed S-clause segmentation method using the Matec99' 1 test set. We evaluated the following 2 properties of the Sclause segmentation program.1. The amount of training data vs. S-clause segmentation accuracy vs. parsing accuracy 1 Morphological Analyzer and Tagger Evaluation Contest in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the Advanced Information Technology Research Center(AITrc) and by the Brain Korea 21 Project in 2003.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A simple but useful approach to conjunct identification",
"authors": [
{
"first": "Rajeev",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Lois",
"middle": [],
"last": "Boggess",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 30 th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "15--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rajeev Agarwal, Lois Boggess. A simple but useful approach to conjunct identification. In Proceedings of the 30 th Annual Meeting of the Association for Computational Linguistics, p.15-21, 1992.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A hybrid method for clause splitting in unrestricted English texts",
"authors": [
{
"first": "Orasan",
"middle": [],
"last": "Constantin",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of ACIDCA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orasan Constantin. A hybrid method for clause splitting in unrestricted English texts. In Proceedings of ACIDCA 2000",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long sentence analysis by domain-specific pattern grammar",
"authors": [
{
"first": "Shinchi",
"middle": [],
"last": "Doi",
"suffix": ""
},
{
"first": "Kazunori",
"middle": [],
"last": "Muraki",
"suffix": ""
},
{
"first": "Shinichiro",
"middle": [],
"last": "Kamei",
"suffix": ""
},
{
"first": "Kiyoshi",
"middle": [],
"last": "Yamabana",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 6 th Conference on the European Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinchi Doi, Kazunori Muraki, Shinichiro Kamei and Kiyoshi Yamabana. Long sentence analysis by do- main-specific pattern grammar, In Proceedings of the 6 th Conference on the European Chapter of the Association of Computational Linguistics, p.466, 1993.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Finding clauses in unrestricted text by finitary and stochastic methods",
"authors": [
{
"first": "Eva",
"middle": [],
"last": "Ejerhed",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 2 nd Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "219--227",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eva Ejerhed. Finding clauses in unrestricted text by finitary and stochastic methods. In Proceedings of the 2 nd Conference on Applied Natural Language Processing, p.219-227, 1998.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Using Decision Trees to Construct a Practical Parser",
"authors": [
{
"first": "Masahiko",
"middle": [],
"last": "Haruno",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Shirai",
"suffix": ""
},
{
"first": "Yoshifumi",
"middle": [],
"last": "Ooyama",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36 th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "505--511",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masahiko Haruno, Satoshi Shirai, and Yoshifumi Ooyama. Using Decision Trees to Construct a Prac- tical Parser, In Proceedings of the 36 th Annual Meet- ing of the Association for Computational Linguistics, p.505-511, 1998.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Korean conjunctive structure analysis based on sentence segmentation (In Korean)",
"authors": [
{
"first": "Jaechol",
"middle": [],
"last": "Jang",
"suffix": ""
},
{
"first": "Euikyu",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Dongryeol",
"middle": [],
"last": "Ra",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 14 th Hangul and Korean Information Processing",
"volume": "",
"issue": "",
"pages": "139--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jaechol Jang, Euikyu Park, and Dongryeol Ra. A Ko- rean conjunctive structure analysis based on sen- tence segmentation (In Korean). In Proceedings of 14 th Hangul and Korean Information Processing, p.139-146, 2002.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean",
"authors": [
{
"first": "Byeongchang",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Geunbae",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18 th International conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1051--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byeongchang Kim and Geunbae Lee. Decision-Tree based Error Correction for Statistical Phrase Break Prediction in Korean. In Proceedings of the 18 th In- ternational conference on Computational Linguis- tics, p.1051-1055, 2000",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Dongryeol Ra and Juntae Yoon. A method of Korean parsing based on sentence segmentation",
"authors": [
{
"first": "Kwangbaek",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Euikyu",
"middle": [],
"last": "Park",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 14 th Hangul and Korean Information Processing",
"volume": "",
"issue": "",
"pages": "163--168",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwangbaek Kim, Euikyu Park, Dongryeol Ra and Jun- tae Yoon. A method of Korean parsing based on sentence segmentation. In Proceedings of 14 th Han- gul and Korean Information Processing, p.163-168, 2002",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text Chunking by Rule and Lexical Information",
"authors": [
{
"first": "Miyoung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "-Jae",
"middle": [],
"last": "Sin",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": null,
"venue": "proceedings of the 12 th Hangul and Korean Information Processing Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miyoung Kim, Sin-Jae Kang and Jong-Hyeok Lee. Text Chunking by Rule and Lexical Information. In proceedings of the 12 th Hangul and Korean Infor- mation Processing Conference, Chonju, Korea. pp 103~109. 2000",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Sentence analysis using pattern matching in English-Korean machine translation",
"authors": [
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yungtaek",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the International Conference on Computer Processing of Oriental Languages",
"volume": "",
"issue": "",
"pages": "199--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungdong Kim and Yungtaek Kim. Sentence analysis using pattern matching in English-Korean machine translation, In Proceedings of the International Con- ference on Computer Processing of Oriental Lan- guages, p.199-206, 1995.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning-based intrasentence segmentation for efficient translation of long sentences",
"authors": [
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Byungtak",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yungtaek",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2001,
"venue": "Machine Translation",
"volume": "16",
"issue": "",
"pages": "151--174",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungdong Kim, Byungtak Zhang and Yungtaek Kim. Learning-based intrasentence segmentation for effi- cient translation of long sentences. Machine Trans- lation 16:151-174, 2001.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Review of coordinate sentences",
"authors": [
{
"first": "Kwangju",
"middle": [],
"last": "Ko",
"suffix": ""
}
],
"year": 1999,
"venue": "Korean)",
"volume": "9",
"issue": "",
"pages": "39--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwangju Ko. Review of coordinate sentences (in Ko- rean), Journal of Korean, 9:39~80, 1999.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "4",
"pages": "507--534",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi and Makoto Nagao, A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures, Computa- tional Linguistics, 20(4):507-534,1994",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Clause processing in complex sentences",
"authors": [
{
"first": ".",
"middle": [
"J"
],
"last": "Vilson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Leffa",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 1 st International Conference on Language Resources and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vilson. J. Leffa. Clause processing in complex sen- tences. In Proceedings of the 1 st International Con- ference on Language Resources and Evaluation, 1998.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Parsing long English sentences with pattern rules",
"authors": [
{
"first": "Wei-Chuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tzusheng",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Bing-Huang",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Chuei-Feng",
"middle": [],
"last": "Chiou",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the 13 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "410--412",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Chuan Li, Tzusheng Pei, Bing-Huang Lee and Chuei-Feng Chiou. Parsing long English sentences with pattern rules. In Proceedings of the 13 th Inter- national Conference on Computational Linguistics, p.410-412, 1990.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Grammatical Role Determination of Unknown Cases in Korean Coordinate Structures",
"authors": [
{
"first": "Yonghun",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Mi-Young",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Jong-Hyeok",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 30 th Korea Information Science Society Spring Conference",
"volume": "",
"issue": "",
"pages": "543--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "YongHun Lee, Mi-Young Kim and Jong-Hyeok Lee. Grammatical Role Determination of Unknown Cases in Korean Coordinate Structures. In Proceed- ings of the 30 th Korea Information Science Society Spring Conference, p.543-545, 2003.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A fast partial parse of natural language sentences using a connectionist method",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Lyon",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Dickerson",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 7th conference on the European Chapter of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "215--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Lyon, and Bob Dickerson. A fast partial parse of natural language sentences using a connectionist method, In Proceedings of the 7th conference on the European Chapter of the Association of Computa- tional Linguistics, p.215-222, 1995",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Reducing the complexity of parsing by a method of decomposition",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Lyon",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Dickerson",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 6 th International Workshop on Parsing Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Lyon, and Bob Dickerson. Reducing the com- plexity of parsing by a method of decomposition. In Proceedings of the 6 th International Workshop on Parsing Technology, 1997.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Statistical Decision-Tree Models for Parsing",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Magerman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33 rd Annual Meeting of Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "276--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Magerman. Statistical Decision-Tree Models for Parsing, In Proceedings of the 33 rd Annual Meet- ing of Association for Computational Linguistics, p.276-283. 1995",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Discourse Parsing: A Decision Tree Approach",
"authors": [
{
"first": "Tadashi",
"middle": [],
"last": "Nomoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 1998,
"venue": "proceedings of the 6 th Workshop on Very Large Corpora",
"volume": "",
"issue": "",
"pages": "216--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadashi Nomoto and Yuji Matsumoto. Discourse Pars- ing: A Decision Tree Approach. In proceedings of the 6 th Workshop on Very Large Corpora, p.216-224, 1998",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Adaptive Multilingual Sentence Boundary Disambiguation",
"authors": [
{
"first": "David",
"middle": [
"D"
],
"last": "Palmer",
"suffix": ""
},
{
"first": "Marti",
"middle": [
"A"
],
"last": "Hearst",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "",
"pages": "241--261",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David D. Palmer, Marti A. Hearst. Adaptive Multilin- gual Sentence Boundary Disambiguation, Computa- tional Linguistics, 27:241-261, 1997",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "C4.5 Programs for Machine Learning",
"authors": [
{
"first": "J",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Ross Quinlan. C4.5 Programs for Machine Learning. Morgan Kaufmann Publishers. 1993",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction to the CoNLL-2001 Shared Task: Clause Identification",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Dejean",
"suffix": ""
}
],
"year": 2001,
"venue": "proceedings of CoNLL-2001",
"volume": "",
"issue": "",
"pages": "53--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Herve Dejean. Introduc- tion to the CoNLL-2001 Shared Task: Clause Identi- fication. In proceedings of CoNLL-2001, p. 53-57, 2001.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm",
"authors": [
{
"first": "Virach",
"middle": [],
"last": "Sornertlamvanich",
"suffix": ""
},
{
"first": "Tanapong",
"middle": [],
"last": "Potipiti",
"suffix": ""
},
{
"first": "Thatsanee",
"middle": [],
"last": "Charoenporn",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "802--807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Virach Sornertlamvanich, Tanapong Potipiti and That- sanee Charoenporn. Automatic Corpus-Based Thai Word Extraction with the C4.5 Learning Algorithm, In Proceedings of the 18 th International Conference on Computational Linguistics, p.802-807, 2000",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Analysis of coordinate conjunctive phrase in Korean (in Korean)",
"authors": [
{
"first": "Juntae",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Song",
"suffix": ""
}
],
"year": 1997,
"venue": "Journal of Korea Information Science Society",
"volume": "24",
"issue": "",
"pages": "326--336",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juntae Yoon, M. Song. Analysis of coordinate conjunc- tive phrase in Korean (in Korean), Journal of Korea Information Science Society, 24:326-336, 1997",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "This section overviews our dependency analysis procedure for the Korean language.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF2": {
"text": "Linguistic Feature Types Used for Learning",
"type_str": "table",
"content": "<table><tr><td>Feature</td><td>Values</td></tr><tr><td>Type</td><td/></tr><tr><td>1 st</td><td>adnominal, conjunctive, quotative, nominal, final, null</td></tr><tr><td>2 nd</td><td>\u3134, \u3141, \uae30, \uc74c, \u3134\ub370, \u3134\uc989, \u3134\uc9c0, \u3139\uc9c0, \u3139\uc9c0\ub2c8, \uac70\ub098, \uac8c, \uace0, \ub098, \ub294\ub370, \ub294\uc9c0, \ub2c8,</td></tr><tr><td/><td>\ub2e4\uac00, \ub3c4\ub85d, \ub4e0\uc9c0, \ub4ef\uc774, \ub77c, \ub824\uace0, \uba70, \uba74, \uba74\uc11c, \ubbc0\ub85c, \uc544, \uc544\uc11c, \uc5b4, \uc5b4\ub3c4, \uc5b4\uc11c,</td></tr><tr><td/><td>\uc5b4\uc57c, \uc73c\ub098, \uc73c\ub2c8, \uc73c\ub824\uace0, \uc73c\uba70, \uc73c\uba74, \uc73c\uba74\uc11c, \uc73c\ubbc0\ub85c, \uc790, \uc9c0, \uc9c0\ub9c8\ub294, null\u2026.</td></tr><tr><td>3 rd</td><td>1, 0, null</td></tr></table>",
"html": null,
"num": null
},
"TABREF3": {
"text": "Values for Each Feature Type",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF5": {
"text": "The Amount of Training Sentences vs. S-clause Accuracy vs.",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"3\">Parsing Accuracy for the 10000 test sentences</td><td/></tr><tr><td/><td colspan=\"2\">Our parser without S-</td><td>Our parser with S-</td><td>KN Parser</td><td>Korean</td></tr><tr><td/><td>clause</td><td>segmentation</td><td>clause segmentation</td><td/><td>Yon-sei</td></tr><tr><td/><td>procedure</td><td/><td>procedure</td><td/><td>parser</td></tr><tr><td>Accuracy in de-</td><td>51.60 %</td><td/><td>84.03 %</td><td>74.21 %</td><td>Unknown</td></tr><tr><td>tecting the head of</td><td/><td/><td/><td/></tr><tr><td>a subject</td><td/><td/><td/><td/></tr><tr><td colspan=\"2\">Parsing Accuracy 84.29%</td><td/><td>89.12%</td><td>89.93 %</td><td>87.30%</td></tr></table>",
"html": null,
"num": null
},
"TABREF6": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: Parsing Accuracy Comparison</td><td>(avg 19.27word/sentence)</td></tr><tr><td>2. Significance of features</td><td/></tr></table>",
"html": null,
"num": null
},
"TABREF7": {
"text": "S-clause Accuracy Change When Each Attribute for Left Boundary Removed",
"type_str": "table",
"content": "<table><tr><td>Feature</td><td>Accuracy Change</td><td>Feature</td><td>Accuracy Change</td></tr><tr><td>1 st type</td><td>-3 %</td><td>3 rd comma</td><td>-3 %</td></tr><tr><td>1 st Surface form.</td><td/><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF8": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"6\">: S-clause Accuracy Change When Each Attribute for Right Boundary Removed</td><td/><td/></tr><tr><td>S-clause</td><td>Subject</td><td>Pos-tag</td><td>Double</td><td>Left</td><td>Right</td><td colspan=\"2\">Predicate</td><td>Other-</td></tr><tr><td>errors</td><td>detection</td><td>errors</td><td>subject</td><td>boundary</td><td>boundary</td><td>role</td><td>of</td><td>wise</td></tr><tr><td/><td>errors</td><td/><td>errors</td><td>errors</td><td>errors</td><td colspan=\"2\">adverbials</td><td/></tr><tr><td>Error %</td><td>25.15%</td><td>20.66%</td><td>11.38%</td><td>16.47%</td><td>20.96%</td><td colspan=\"2\">2.00%</td><td>3.38%</td></tr></table>",
"html": null,
"num": null
}
}
}
}