{
"paper_id": "O05-2006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:58:58.851305Z"
},
"title": "Similarity Based Chinese Synonym Collocation Extraction",
"authors": [
{
"first": "Wanyin",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Hung Hom",
"settlement": "Kowloon",
"country": "Hong Kong"
}
},
"email": ""
},
{
"first": "Qin",
"middle": [],
"last": "Lu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Hung Hom",
"settlement": "Kowloon",
"country": "Hong Kong"
}
},
"email": ""
},
{
"first": "Ruifeng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The Hong Kong Polytechnic University",
"location": {
"addrLine": "Hung Hom",
"settlement": "Kowloon",
"country": "Hong Kong"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Collocation extraction systems based on pure statistical methods suffer from two major problems. The first problem is their relatively low precision and recall rates. The second problem is their difficulty in dealing with sparse collocations. In order to improve performance, both statistical and lexicographic approaches should be considered. This paper presents a new method to extract synonymous collocations using semantic information. The semantic information is obtained by calculating similarities from HowNet. We have successfully extracted synonymous collocations which normally cannot be extracted using lexical statistics. Our evaluation conducted on a 60MB tagged corpus shows that we can extract synonymous collocations that occur with very low frequency and that the improvement in the recall rate is close to 100%. In addition, compared with a collocation extraction system based on the Xtract system for English, our algorithm can improve the precision rate by about 44%.",
"pdf_parse": {
"paper_id": "O05-2006",
"_pdf_hash": "",
"abstract": [
{
"text": "Collocation extraction systems based on pure statistical methods suffer from two major problems. The first problem is their relatively low precision and recall rates. The second problem is their difficulty in dealing with sparse collocations. In order to improve performance, both statistical and lexicographic approaches should be considered. This paper presents a new method to extract synonymous collocations using semantic information. The semantic information is obtained by calculating similarities from HowNet. We have successfully extracted synonymous collocations which normally cannot be extracted using lexical statistics. Our evaluation conducted on a 60MB tagged corpus shows that we can extract synonymous collocations that occur with very low frequency and that the improvement in the recall rate is close to 100%. In addition, compared with a collocation extraction system based on the Xtract system for English, our algorithm can improve the precision rate by about 44%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A collocation refers to the conventional use of two or more adjacent or distant words which hold syntactic and semantic relations. For example, the conventional expressions \"warm greetings\", \"broad daylight\", \"\u601d\u60f3\u5305\u88b1\", and \"\u6258\u8fd0\ufa08\uf9e1\" all are collocations. Collocations bear certain properties that have been used to develop feasible methods to extract them automatically from running text. Since collocations are commonly found, they must be recurrent. Therefore, their appearance in running text should be statistically significant, making it feasible to extract them using the statistical approach. A collocation extraction system normally starts with a so-called headword (sometimes also called a keyword) and proceeds to find co-occurring words called the collocated words. For example, given the headword \"\u57fa\u672c\", such bi-gram collocations as \"\u57fa\u672c\uf9e4\u8bba\", \"\u57fa\u672c\u5de5 \u4f5c\", and, \"\u57fa\u672c\u539f\u56e0\" can be found using an extraction system where \"\uf9e4\u8bba\", \"\u5de5\u4f5c\", and \"\u539f \u56e0\" are called collocated words with respect to the headword \"\u57fa\u672c.\" Many collocation extraction algorithms and systems are based on lexical statistics [Church and Hanks 1990; Smadja 1993; Choueka 1993; ]. As the lexical statistical approach was developed based on the recurrence property of collocations, only collocations with reasonably good recurrence can be extracted. Collocations with low occurrence frequency cannot be extracted, thus affecting both the recall rate and precision rate. The precision rate achieved using the lexical statistics approach can reach around 60% if both word bi-gram extraction and n-gram extraction are employed [Smadja 1993; Lin 1997; Lu et al. 2003 ]. The low precision rate is mainly due to the low precision rate of word bi-gram extraction as only about a 30% -40% precision rate can be achieved for word bi-grams. 
The semantic information is largely ignored by statistics-based collocation extraction systems even though there exist multiple resources for lexical semantic knowledge, such as WordNet [Miller 98] and HowNet [Dong and Dong 99] .",
"cite_spans": [
{
"start": 1082,
"end": 1105,
"text": "[Church and Hanks 1990;",
"ref_id": "BIBREF2"
},
{
"start": 1106,
"end": 1118,
"text": "Smadja 1993;",
"ref_id": "BIBREF14"
},
{
"start": 1119,
"end": 1132,
"text": "Choueka 1993;",
"ref_id": "BIBREF1"
},
{
"start": 1578,
"end": 1591,
"text": "[Smadja 1993;",
"ref_id": "BIBREF14"
},
{
"start": 1592,
"end": 1601,
"text": "Lin 1997;",
"ref_id": "BIBREF5"
},
{
"start": 1602,
"end": 1616,
"text": "Lu et al. 2003",
"ref_id": "BIBREF9"
},
{
"start": 1971,
"end": 1982,
"text": "[Miller 98]",
"ref_id": null
},
{
"start": 1994,
"end": 2012,
"text": "[Dong and Dong 99]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In many collocations, the headword and its collocated words hold specific semantic relations, hence allowing collocate substitutability. The substitutability property provides the possibility of extracting collocations by finding synonyms of headwords and collocate words. Based on the above properties of collocations, this paper presents a new method that uses synonymous relationships to extract synonym word bi-gram collocations. The objective is to make use of synonym relations to extract synonym collocations, thus increasing the recall rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Lin [Lin 1997 ] proposed a distributional hypothesis which says that if two words have similar sets of collocations, then they are probably similar. According to one definition [Miller 1992] , two expressions are synonymous in a context C if the substitution of one for the other in C does not change the truth-value of a sentence in which the substitution is made. Similarly, in HowNet, Liu Qun [Liu et al. 2002] defined word similarity as two words that can substitute for each other in a context and keep the sentence consistent in syntax and semantic structure. This means, naturally, that two similar words are very close to each other and they can be used in place of each other in certain contexts. For example, we may either say \"\u4e70\u4e66\"or \"\u8ba2\u4e66\" since \"\u4e70\" and \"\u8ba2\" are semantically close to each other when used in the context of buying books. We can apply this lexical phenomena after a lexical statistics-based extractor is applied to find low frequency synonymous collocations, thus increasing the recall rate.",
"cite_spans": [
{
"start": 4,
"end": 13,
"text": "[Lin 1997",
"ref_id": "BIBREF5"
},
{
"start": 177,
"end": 190,
"text": "[Miller 1992]",
"ref_id": "BIBREF12"
},
{
"start": 396,
"end": 413,
"text": "[Liu et al. 2002]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The rest of this paper is organized as follows. Section 2 describes related existing collocation extraction techniques that are based on both lexical statistics and synonymous collocation. Section 3 describes our approach to collocation extraction. Section 4 describes the data set and evaluation method. Section 5 evaluates the proposed method. Section 6 presents our conclusions and possible future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Methods have been proposed to extract collocations based on lexical statistics. Choueka [Choueka 1993 ] applied quantitative selection criteria based on a frequency threshold to extract adjacent n-grams (including bi-grams). Church and Hanks [Church and Hanks 1990] employed mutual information to extract both adjacent and distant bi-grams that tend to co-occur within a fixed-size window. However, the method can not be extended to extract n-grams. Smadja [Smadja 1993 ] proposed a statistical model that measures the spread of the distribution of co-occurring pairs of words with higher strength. This method can successfully extract both adjacent and distant bi-grams, and n-grams. However, it can not extract bi-grams with lower frequency. The precision rate of bi-grams collocation is very low, only around 30%. Generally speaking, it is difficult to measure the recall rate in collocation extraction (there are almost no reports on recall estimation) even though it is understood that low occurrence collocations cannot be extracted. Sun [Sun 1997 ] performed a preliminary Quantitative analysis of the strength, spread and peak of Chinese collocation extraction using different statistical functions. That study suggested that the statistical model is very limited and that syntax structures can perhaps be used to help identify pseudo collocations.",
"cite_spans": [
{
"start": 88,
"end": 101,
"text": "[Choueka 1993",
"ref_id": "BIBREF1"
},
{
"start": 242,
"end": 265,
"text": "[Church and Hanks 1990]",
"ref_id": "BIBREF2"
},
{
"start": 457,
"end": 469,
"text": "[Smadja 1993",
"ref_id": "BIBREF14"
},
{
"start": 1044,
"end": 1053,
"text": "[Sun 1997",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Our research group has further applied the Xtract system to Chinese [Lu et al. 2003 ] by adjusting the parameters so at to optimize the algorithm for Chinese and developed a new weighted algorithm based on mutual information to acquire word bi-grams which are constructed with one higher frequency word and one lower frequency word. This method has achieved an estimated 5% improvement in the recall rate and a 15% improvement in the precision rate compared with the Xtract system.",
"cite_spans": [
{
"start": 68,
"end": 83,
"text": "[Lu et al. 2003",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "A method proposed by Lin ] applies a dependency parser for information extraction to collocation extraction, where a collocation is defined as a dependency triple which specifies the type of relationship between a word and the modifiee. This method collects dependency statistics over a parsed collocation corpus to cover the syntactic patterns of bi-gram collocations. Since it is statistically based, therefore it still is unable to extract bi-gram collocations with lower frequency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Based on the availability of collocation dictionaries and semantic relations of words combinatorial possibilities, such as those in WordNet and HowNet, some researches have made a wide range of lexical resources, especially synonym information. Pearce [Pearce 2001 ] presented a collocation extraction technique that relies on a mapping from one word to its synonyms for each of its senses. The underlying intuition is that if the difference between the occurrence counts of a synonym pair with respect to a particular word is at least two, then they can be considered a collocation. To apply this approach, knowledge of word (concept) semantics and relations with other words must be available, such as that provided by WordNet. Dagan [Dagan 1997 ] applied a similarity-based smoothing method to solve the problem of data sparseness in statistical natural language processing. Experiments conducted in his later research showed that this method could achieve much better results than back-off smoothing methods in terms of word sense disambiguation. Similarly, Hua [Wu 2003 ] applied synonyms relationships between two different languages to automatically acquire English synonymous collocations. This was the first time that the concept of synonymous collocations was proposed. A side intuition raised here is that a natural language is full of synonymous collocations. As many of them have low occurrence rates, they can not be retrieved by using lexical statistical methods. Dong et al. [Dong and Dong 1999] is the best publicly available resource for Chinese semantics. Since semantic similarities of words are employed, synonyms can be defined by the closeness of their related concepts and this closeness can be calculated. In Section 3, we will present our method for extracting synonyms from HowNet and using synonym relations to further extract collocations. 
While a Chinese synonym dictionary, Tong Yi Ci Lin ( \u300a\u540c\u4e49\u8f9e\uf9f4\u300b ), is available in electronic form, it lacks structured knowledge, and the synonyms listed in it are too loosely defined and are not applicable to collocation extraction.",
"cite_spans": [
{
"start": 252,
"end": 264,
"text": "[Pearce 2001",
"ref_id": "BIBREF13"
},
{
"start": 736,
"end": 747,
"text": "[Dagan 1997",
"ref_id": "BIBREF3"
},
{
"start": 1066,
"end": 1074,
"text": "[Wu 2003",
"ref_id": "BIBREF16"
},
{
"start": 1479,
"end": 1511,
"text": "Dong et al. [Dong and Dong 1999]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2."
},
{
"text": "Our method to extract Chinese collocations consists of three steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},
{
"text": "Step 1: We first take the output of any lexical statistical algorithm that extracts word bi-gram collocations. This data is then sorted according to each headword, w h , along with its collocated word, w c .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},
{
"text": "Step 2: For each headword, w h, used to extract bi-grams, we acquire its synonyms based on a similarity function using HowNet. Any word in HowNet having a similarity value exceeding a threshold is considered a synonym headword, w s, for additional extractions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},
{
"text": "Step 3: For each synonym headword, w s , and the collocated word, w c, of w h , if the bi-gram (w s , w c ) is not in the output of the lexical statistical algorithm applied in Step 1, then we take this bi-gram (w s , w c ) as a collocation if the pair appears in the corpus by applying an additional search on the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},
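{
"text": "The three steps above can be sketched as follows (an illustrative Python sketch, not the authors' implementation; `similarity` and `cooccurs_in_corpus` are hypothetical stand-ins for the HowNet-based similarity function of Section 3.2 and for a corpus lookup):

```python
# Sketch of the three-step synonym collocation extraction described above.
# `bigram_collocations` is the output of a lexical statistical extractor,
# given here as (headword, collocated word) pairs.

def extract_synonym_collocations(bigram_collocations, vocabulary,
                                 similarity, cooccurs_in_corpus,
                                 threshold=0.8):
    # Step 1: index the statistically extracted bi-grams by headword w_h.
    collocated = {}
    for w_h, w_c in bigram_collocations:
        collocated.setdefault(w_h, set()).add(w_c)

    known = set(bigram_collocations)
    new_collocations = set()
    for w_h, w_cs in collocated.items():
        # Step 2: words whose similarity to w_h exceeds the threshold
        # are treated as synonym headwords w_s.
        synonyms = [w_s for w_s in vocabulary
                    if w_s != w_h and similarity(w_h, w_s) >= threshold]
        # Step 3: keep (w_s, w_c) if it was missed by the statistical
        # extractor but actually appears in the corpus.
        for w_s in synonyms:
            for w_c in w_cs:
                if (w_s, w_c) not in known and cooccurs_in_corpus(w_s, w_c):
                    new_collocations.add((w_s, w_c))
    return new_collocations
```

The returned pairs are exactly the low-frequency synonymous collocations that the statistical pass alone could not recover.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach",
"sec_num": "3."
},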
{
"text": "In order to extract Chinese collocations from a corpus and to obtain result in Step 1 of our algorithm, we use an automatic collocation extraction system named CXtract, developed by a research group at Hong Kong Polytechnic University [Lu et al. 2003 ]. This collocation extraction system is based on English Xtract [Smaja 1993] with two improvements. First, the parameters (K 0 , K 1 , U 0 ) used in Xtract are adjusted so as to optimize them for a Chinese collocation extraction system, resulting in an 8% improvement in the precision rate. Secondly, a solution is provided to the so-called high-low problem in Xtract, where bi-grams with a high frequency the head word, w h , but a relatively low frequency collocated word, w i can not be extracted. We will explain the algorithm briefly here. According to Xtract, a word concurrence is denoted by a triplet (w h , w i , d), where w h is a given headword and w i is a collocated word appeared in the corpus with a distance d within the window [-5, 5 ]. The frequency, f i , of the collocated word, w i , in the window [-5, 5 ] is defined as",
"cite_spans": [
{
"start": 235,
"end": 250,
"text": "[Lu et al. 2003",
"ref_id": "BIBREF9"
},
{
"start": 996,
"end": 1002,
"text": "[-5, 5",
"ref_id": null
},
{
"start": 1071,
"end": 1077,
"text": "[-5, 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "5 , 5 i ij j f f =\u2212 = \u2211 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "where f i, j is the frequency of the collocated word w i at position j in the corpus within the window. The average frequency of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f i , denoted by i f , is given by 5 , 5 /10 i ij j f f =\u2212 = \u2211 .",
"eq_num": "(2)"
}
],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "Then, the average frequency, f , and the standard deviation, \u03c3, are defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 1 n i i f f n = = \u2211 ; 2 1 1 ( ) n i i f f n \u03c3 = = \u2212 \u2211 .",
"eq_num": "(3)"
}
],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "The Strength of co-occurrence for the pair (w h , w i ,), denoted by k i , is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i i f f k \u03c3 \u2212 = .",
"eq_num": "(4)"
}
],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "Furthermore, the Spread of (w h , w i ,), denoted by U i , which characterizes the distribution of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "w i around w h, is define as 2 , ( ) 10 i j i i f f U \u2212 = \u2211 . (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "To eliminate bi-grams which are unlikely to co-occur, the following set of threshold values is defined:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "0 1: i i f f C k K \u03c3 \u2212 = \u2265 (6) 0 2 : i C U U \u2265 (7) , 1 3 : ( ) i j i i C f f K U \u2265 + \u22c5 (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "where the threshold value set (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "K 0 , K 1 , U 0 ) is obtained through experiments. A bi-gram (w h , w i , d)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "will be filtered out as a collocation if it does not satisfy one of the above conditional thresholds. Condition C1 is used to measure the \"recurrence\" property of collocations when the bi-grams (w h , w i , d) with co-occurrences frequencies higher than K 0 times the standard deviation over the average are selected. C2 is used to select bi-gram pairs (w h , w i , d) having a spread values that are larger than a given threshold, U 0. A lower U value implies that the bi-gram is evenly distributed in all 10 positions and thus is not considered a \"rigid combination\". C3 is used to select bi-grams in these \"certain positions\". Only if certain peak positions exist, the co-occurrence bi-grams are considered collocations. The values of (K 0 , K 1 , U 0 ) are set to (1, 1, 10), which are the optimal parameters for English according to Xtract. For the CXtract, the values of (K 0 , K 1 , U 0 ) are adjusted to (1.2, 1.2, 12) which are suitable for the Chinese collocation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "However, Xtract cannot extract high-low collocations when w h has a quite high frequency and its co-word w i has a relatively low frequency. For example, \"\u68d8\u624b\u95ee\u9898\" is a bi-gram collocation. But because freq (\u68d8\u624b) is much lower than the freq (\u95ee\u9898), this bi-gram collocation cannot be identified, resulting in a lower recall rate. In CXtract, an additional step is used to identify such high-low collocations by measuring the conditional probability as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "h 0 (w , ) , ( ) i i i f w R R f w = \u2265 (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "which measures the likelihood of occurrence of w h given w i, thus discounting the absolute frequency of w i . CXtract outputs a list of triplets (w h , w i , d), where (w h , w i ,) is considered to be a collocation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bi-gram Collocation Extraction",
"sec_num": "3.1"
},
{
"text": "In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct Synonyms Set",
"sec_num": "3.2"
},
{
"text": "Step 2 of our system, for each given headword w h , we first need to find its synonym set W syn, which contains all the words that are said to be the synonyms of w h . As stated earlier, we estimate the synonym relation between words based on semantic similarity calculation in HowNet. Therefore, before explaining how the synonym set can be constructed, we will introduce the semantic structure of HowNet and the similarity model built based on HowNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Construct Synonyms Set",
"sec_num": "3.2"
},
{
"text": "Because we hope to explore the different semantics meanings that each word carries, word sense disambiguation is the main issue when we calculate the similarity of words. For example, the word \"\u6253\" used with the words \"\u9171\u6cb9\" as in \"\u6253\u9171\u6cb9\" and \"\u7f51\u7403\" as in \"\u6253\u7f51\u7403\" has the meanings of buy( \"\u5356\") and exercise(\"\u953b\u70bc\"), respectively. As a bilingual semantic and syntactic knowledge base, HowNet provides separate entries when the same word contains more than one concept. Unlike WordNet, in which a semantic relation is a relation between synsets, HowNet adopts a constructive approach to semantic representation. It describes words as a set of concepts (\u4e49\u9879) and describes each concept using a set of primitives(\u4e49\u5143),which is the smallest semantic unit in HowNet and cannot be decomposed further. The template of word concepts is organized in HowNet as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "NO.= the record number of the lexical entries W_C/E = concept of the language (Chinese or English)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "E_C/E = example of W_C/E G_C/E = Part-of-speech of the W_C/E DEF = Definition, which is constructed by primitives and pointers For example, in the following, for the word \"\u6253\", we list the two of its corresponding concepts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "NO.=000001 W_C=\u6253 G_C=V E_C=~\u9171\u6cb9\uff0c~\u5f20\u7968\uff0c~\u996d\uff0c\u53bb~\u74f6\u9152\uff0c\u918b~\u6765\uf9ba W_E=buy G_E=V E_E= DEF=buy|\u4e70 NO.=017144 W_C=\u6253 G_C=V E_C=~\u7f51\u7403\uff0c~\u724c\uff0c~\u79cb\u5343\uff0c~\u592a\u6781\uff0c\u7403~\u5f97\u5f88\u68d2 W_E=play G_E=V E_E=DEF=exercise|\u953b\u7ec3, sport|\u4f53\u80b2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "Note: Replace all the graphics above by simple text. In the above records, DEFs are where the primitives are specified. DEF contains up to four types of primitives: basic independent primitives (\u57fa\u672c\u72ec\uf9f7\u4e49\u5143), other independent primitives (\u5176\u4ed6\u72ec\uf9f7\u4e49\u5143), relation primitives (\u5173\u7cfb\u4e49\u5143), and symbol primitives (\u7b26\u53f7\u4e49\u5143), where basic independent primitives and other independent primitives are used to indicate the basic concept, and the other types are used to indicate syntactical relationships. For example, the word \"\u751f\u65e5\" has all four types of primitives as shown below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "NO.=072280 W_C=\u751f\u65e5 G_C=n E_C=\u795d\u8d3a~\uff0c\u8fc7~\uff0c~\u805a\u4f1a W_E=birthday G_E=n E_E= DEF=time|\u65f6\u95f4, day|\u65e5, @ComeToWorld|\u95ee\u4e16, $congratulate|\u795d\u8d3a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "The basic independent primitive \"time| \u65f6 \u95f4 \" defines the general classification of \"birthday|\u751f\u65e5\". The other independent primitive \"day|\u65e5\" indicates that \"birthday|\u751f\u65e5\" is related to \"day|\u65e5\". The symbol primitives \"@ComeToWorld|\u95ee\u4e16\" and \"$congratdulate|\u795d \u8d3a\" provide more specific, distinguishing features to indicate syntactical relationships. The pointer \"@\" specifies \"time or space\", indicating that \"birthday|\u751f \u65e5 \" is the time of \"ComeToWorld|\u95ee\u4e16\". Another pointer \"$\" specifies \"object of V\", which means that \"birthday|\u751f\u65e5\" is the object of \"congratulate|\u795d\u8d3a\". In summary, we find that \"birthday|\u751f \u65e5\" belongs to \"time|\u65f6\u95f4\" in general and is related to \"day|\u65e5\" which specifies the time of \"ComeToWorld|\u95ee\u4e16\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "The primitives are then linked by a hierarchical tree to indicate the parent-child relationships as shown in the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "-entity|\u5b9e\u4f53 \u251c thing|\u4e07\u7269 \u2026 \u251c physical|\u7269\u8d28 \u2026 \u251c animate|\u751f\u7269 \u2026 \u251c AnimalHuman|\u52a8\u7269 \u2026 \u251c human|\u4eba \u2502 \u2514 humanized|\u62df\u4eba \u2514 animal|\u517d \u251c beast|\u8d70\u517d \u2026",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "This hierarchical structure provides a way to link a concept with any other concept in HowNet, and the closeness of concepts can be represented by the distance between the two concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Structure of HowNet",
"sec_num": "3.2.1"
},
{
"text": "Liu Qun [Liu 2002 ] defined word similarity as two words that can substitute for each other in the same context and still keep the sentence syntactically and semantically consistent. This is very close to our definition of synonyms. Thus, in this work, we will directly use the similarity function provided by Liu Qun, which is stated below.",
"cite_spans": [
{
"start": 8,
"end": 17,
"text": "[Liu 2002",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "A word in HowNet is defined as a set of concepts, and each concept is represented by primitives. We describe HowNet as a collection of n words, W:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "W = {w 1 , w 2 , \u2026 w n }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "Each word w i is, in turn, described by a set of concepts S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "w i = {S i1, S i2 , ... S ix },",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "and, each concept S i is, in turn, described by a set of primitives:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "S i = {p i1 , p i2, \u2026 p iy }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "For each word pair, w 1 and w 2 , the similarity function is defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 2 1 2 1 , 1 ( , ) max ( , ) i j i n j m Sim w w Sim S S = = =",
"eq_num": "(10)"
}
],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "where S 1i is the list of concepts associated with w 1 and S 2j is the list of concepts associated with w 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "As any concept, S i is represented by its primitives. The similarity of primitives for any p 1 and p 2 of the same type can be expressed by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "1 2 1 2 ( , ) ( , ) Sim p p Dis p p \u03b1 \u03b1 = + (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "where \u03b1 is an adjustable parameter with a value of 1.6 according to Liu [Liu 2002 ].",
"cite_spans": [
{
"start": 72,
"end": 81,
"text": "[Liu 2002",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "1 2 ( , ) Dis p p is the path length between p 1 and p 2 based on the semantic tree structure. The above formula does not explicitly indicate that the depth of a pair of nodes in the tree affects their similarity. For two pairs of nodes (p 1 , p 2 ) and (p 3 , p 4 ) with the same distance, the deeper the depth, the more commonly shared ancestors they have, which means that they are semantically closer to each other. In the following two tree structures, the pair of nodes (p 1, p 2 ) in the left tree should be more similar than (p 3 , p 4 ) in the right tree:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "Root p 2 p 1 root P 4 P 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "To clarify this observation, \u03b1 is modified as a function of the tree depths of the nodes using the formula \u03b1=min(d(p 1 ),d(p 2 )). Consequently, the formula (11) was rewritten as formula (11\u00aa) below for our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": ")) ( ), ( min( ) , ( )) ( ), ( min( ) , ( 2 1 2 1 2 1 2 1 p d p d p p Dis p d p d p p Sim + = (11\u00aa) where d(p i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "is the depth of node p i in the tree. Calculating the word similarity by applying formulas (11) and (11\u00aa) will be discussed in Section 4.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
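{
"text": "As an illustration with assumed values (not taken from HowNet), let Dis(p_1, p_2) = 2. Formula (11) with \u03b1 = 1.6 gives Sim = 1.6 / (2 + 1.6) \u2248 0.44 regardless of depth, whereas formula (11a) with min(d(p_1), d(p_2)) = 5 gives Sim = 5 / (2 + 5) \u2248 0.71, so node pairs deeper in the tree are judged more similar at the same distance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},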
{
"text": "Based on the DEF descriptions in HowNet, different primitive types play different roles, and only some are directly related to semantics. To make use of both semantic and syntactic information, the similarity between two concepts should take into consideration all the primitive types with weighted considerations; and thus, the formula is 4 1 2 1 2 1 1 ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Sim(S_1, S_2) = \\sum_{i=1}^{4} \\beta_i \\prod_{j=1}^{i} Sim_j(S_1, S_2)",
"eq_num": "(12)"
}
],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "where \u03b2 i is a weighting factor given in [Liu 2002] , where the sum of \u03b2 1 + \u03b2 2 + \u03b2 3 + \u03b2 4 is 1 and",
"cite_spans": [
{
"start": 41,
"end": 51,
"text": "[Liu 2002]",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "\u03b2 1 \u2265 \u03b2 2 \u2265 \u03b2 3 \u2265 \u03b2 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "The distribution of the weighting factors is given for each concept a priori in HowNet to indicate the importance of primitive p i for the corresponding concept S. The similarity model given here is the basis for building a synonyms set where \u03b2 1 and \u03b2 2 represent the semantic information, and \u03b2 3 and \u03b2 4 represent the syntactic relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity Model Based on HowNet",
"sec_num": "3.2.2"
},
{
"text": "For each given headword w h , we apply the similarity formula in Equation (10) to generate its synonym set, W syn, which is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Set of Synonyms Headwords",
"sec_num": "3.2.3"
},
{
"text": "} ) , ( : { \u03b8 > = s h s syn w w Sim w W (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Set of Synonyms Headwords",
"sec_num": "3.2.3"
},
{
"text": "where 0 <\u03b8 <1 is an algorithm parameter which is adjusted based on experience. We set it to 0.85 based on our experiment because we wanted to balance the strength of the synonym relationship and the coverage of the synonym set. Setting the parameter \u03b8 < 0.85 will weaken the similarity strength of the extracted synonyms. For example, a given collocation \"\u6539\u5584\u5173 \u7cfb\" is unlikely to include the candidates \"\u6539\u5584\u62a4\u7167\" and \uff0c\"\u6539\u5584\uf909\u636e\". On the other hand, setting the parameter \u03b8 > 0.85 will limit the coverage of the synonym set, thus valuable synonyms will be lost. For example, for a given bi-gram \"\u91cd\u5927\u8d21\u732e\", we hope to include candidate synonymous collocations such as \"\u91cd\u5927\u6210\u679c\", \"\u91cd\u5927\u6210\u7ee9\", and \"\u91cd\u5927\u6210\u5c31\". We will discuss the test on \u03b8 in section 5.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Set of Synonyms Headwords",
"sec_num": "3.2.3"
},
{
"text": "H. Wu [Wu 2003 ] defined a synonymous collocation pair as two collocations that are similar in meaning, but not identical in wording. Actually, in natural language, there exist many synonym collocations. For example, \"switch on light\" and \"turn on light\", \"\u8d22\u52a1\u95ee\u9898\" and \"\u8d22 \u653f\u95ee\u9898\". However, the sparse appearance of word combinations in a training corpus due to the limitation on the corpus size itself, some synonym collocations may not be extracted by the statistical method because of their lower co-occurrence frequencies. Based on this observation, we perform a further step. Our basic idea is to use a bi-gram collocation (w h, w c , d) to further obtain the synonym set W syn of w h, quantified by the similarity function. Then, for each w s in W syn, we consider (w s, w c , d ) as a collocation if it indeed appears in the corpus at least a given number of times.",
"cite_spans": [
{
"start": 6,
"end": 14,
"text": "[Wu 2003",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Synonyms Collocation",
"sec_num": "3.3"
},
{
"text": "Our definition of a synonym collocation as follows. For a given collocation (w s , w c, , d), if w s \u220a W syn , then we deem the triple (w s , w c, , d) to be a synonyms collocation with respect to the collocation (w h , w c, , d) if ( w s , w c, d) appears in the corpus N times, where N is a threshold value which we set to 2 in our experiment. Therefore, we define the collection of synonym collocations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonyms Collocation",
"sec_num": "3.3"
},
{
"text": "C syn as } ) , , ( : ) , , {( N d w w Freq d w w C c s c s syn >= = (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonyms Collocation",
"sec_num": "3.3"
},
{
"text": "where w s \u220a W syn .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonyms Collocation",
"sec_num": "3.3"
},
{
"text": "Our experimental results show that the precision rate of synonym collocation extraction is around 80% when we use the knowledge of HowNet. Some pseudo collocations can be automatically excluded because of the fact that they do not appear in the corpus. For example, for the headword \"\u589e\u957f\" in the collocation \"\u589e\u957f\u89c1\u8bc6\", the synonym set extracted from our system contains {\"\u589e\u52a0\", \"\u589e\u9ad8\", \"\u589e\u591a\"}, so the pseudo-collocations \"\u589e\u9ad8\u89c1\u8bc6\", \"\u589e\u52a0\u89c1 \u8bc6\", and \"\u589e\u591a\u89c1\u8bc6\" will be excluded because they are not being used customarily used and, thus, do not appear in the corpus. We checked them using Google and found that they did not appear either. On the other hand, for the collocated word \"\u89c1\u8bc6\", our system extracts the synonyms set {\"\u773c\u5149\", \"\u773c\u754c\"}, and the word combination \"\u589e\u957f\u773c\u754c\" appears twice in our corpus, thus according to our definition, it is a collocation. Therefore, the collocations \"\u589e\u957f \u89c1\u8bc6\" and \"\u589e\u957f\u773c\u754c\" are synonym collocations, and we can successfully extract \"\u589e\u957f\u773c\u754c\" even though its frequency is very low (below 10 in our system).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synonyms Collocation",
"sec_num": "3.3"
},
{
"text": "We modified Liu Qun's similarity model based on HowNet to obtain the synonyms of specified words. HowNet is a Chinese-English Bilingual Knowledge Dictionary. It includes both word entries and concept entries. There are more than 60 thousand Chinese concept entries and around 70 thousand English concept entries in HowNet. Both Chinese and English word entries are more than 50 thousand.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "The corpus we used contains over 60MB of tagged sentences. Our experiment was conducted using tagged corpus of 11 million words collected six months from the People's Daily. For word bi-gram extraction, we considered only content words, thus, headwords were nouns, verbs or adjectives only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "In order to illustrate the effect of our algorithm, we used the statistically based system discussed in Section 3.1 as our baseline systems where the output data is called Set A. Using the output of the baseline system, we could further apply our algorithm to produce a data set called Set B.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "The collocation performance is normally evaluated based on precision and recall as defined below: However, in collocation extraction, the absolute recall rate is rarely used because there are no benchmark \"standard answers\". Alternatively, we can use recall improvement to evaluate our system as defined below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
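{
"text": "Here, precision is the number of true collocations extracted divided by the total number of bi-gram pairs extracted, and recall is the number of true collocations extracted divided by the total number of true collocations in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},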
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "X N X N X N N recall syn syn none syn syn none / / / ) ( _ _ \u2212 + = ,",
"eq_num": "(17)"
}
],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "where N none-syn stands for the number of non-synonyms collocations extracted by a statistical model, N syn stands for the number of synonym collocations extracted based on synonym relationships, and X stands for the total number of collocations in the corpus with respect to the given headwords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "Because there are no readily available \"standard answers\" for collocations, our results were checked manually to verify whether each candidate bi-gram was a true collocation or not. Since the output from the baseline system obtained using 60MB of tagged data consisted of over 200,000 collocations, we had to use the random sampling method to conduct an evaluation. In order to perform a fair evaluation, we tried to avoid subjective selection of words. Therefore, we randomly selected 5 words for each of the three types of words, namely, 5 nouns, 5 verbs, and 5 adjectives. Because headwords we chose were completely random and we did not target any particular words, our results should be statistically sound. Following is a list of the 15 randomly selected words used for the purpose evaluation: nouns: \u57fa\u7840, \u601d\u60f3, \u7814\u7a76, \u6761\u4ef6, \u8bc4\u9009;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "verbs: \u6539\u5584, \u52a0\u5927, \u589e\u957f, \u63d0\u8d77, \u9881\u53d1;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "adjectives: \u660e\u663e, \u5168\u9762, \u91cd\u8981, \u4f18\u79c0, \u5927\u597d Table 1 shows samples of word bi-grams extracted using our algorithm that are considered collocations of the headwords \"\u91cd\u5927\", \"\u6539\u5584\" and \"\u52a0\u5927\". Table 2 shows bi-grams extracted by our algorithm that are not considered true collocations. ",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 38,
"text": "Table 1",
"ref_id": "TABREF2"
},
{
"start": 171,
"end": 178,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set and Evaluation Method",
"sec_num": "4."
},
{
"text": "In",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improvement in precision and recall rates",
"sec_num": "5.1"
},
{
"text": "Step 1 of the algorithm, 15 headwords were used to extract bi-gram collocations from the corpus, and 703 pairs of collocations were extracted. Evaluation by hand identified 232 true collocations in the set A test set. The overall precision rate was 31.7% (see Table 3 ). Step 2 of our algorithm, where \u03b8 = 0.85 was used, we obtained 94 synonym headwords (including the original 15 headwords). Out of these 94 synonym headwords, 841 bi-gram pairs were then extracted from the baseline system, and 243 were considered true collocations. Then, in Step 3 of our algorithm, we extracted an additional 311 bi-gram pairs; among them, 261 were considered true collocations. Because the synonym collocation extraction algorithm has achieved a high precision rate of around 84% (261/311 = 83.9%) according to our experimental result as shown in Table 4 . Since the data for Set B consisted of the additional extracted collocations. When we employed both Set A and Set B together as an overall system, the precision increased to 44 % ((243+261)/(841+311) = 43.7%), an improvement of almost 33% (43.7%-32.9%)/32.9% = 32.8%) comparing with the precision rate of the baseline system as shown in Table 5 . As stated earlier, we are not able to evaluate the recall rate. However, compared with the statistical method indicated by Set A, an additional 261 collocations were recalled. Thus, we can record the recall the improvement which is ((243+261) -243) /243= 107.4% as shown in Table 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 267,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 835,
"end": 842,
"text": "Table 4",
"ref_id": "TABREF4"
},
{
"start": 1181,
"end": 1188,
"text": "Table 5",
"ref_id": null
},
{
"start": 1465,
"end": 1472,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Improvement in precision and recall rates",
"sec_num": "5.1"
},
{
"text": "To test the average recall improvement achieved with synonym collocation extraction, we experimented on three set tests with 9, 15, and 21 distinct headwords respectively. The results are shown in Table 6 . The above table shows that the average recall improvement was close to 100% when using the synonyms relationships were used in the collocation extraction. With different choices of headwords, the improvement averaged about 100% with a standard deviation of 0.106, which indicates that our sampling approach to evaluation is reasonable.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "A analysis of the loss / gain in recall",
"sec_num": "5.2"
},
{
"text": "We also conducted a set of experiments to choose the best value for the similarity function's threshold \u03b8. We tested the best value of \u03b8 based on both the precision rate and the estimated recall rate using so-called remainder bi-grams. The remainder bi-grams are all the bi-grams extracted by the algorithm. When the precision goes up, the size of the result is smaller, indicating a decreasing of recalled collocations. Figure 1 shows the precision rate and estimated recall rate recorded when we tested the value of \u03b8. Table 8 lists the similarity values calculated using equation 11, where \u03b1 is a constant with a given value of 1.6, and equation 11\u00aa, where \u03b1 is replaced with a function of the depths of the nodes. Results show that (11\u00aa) is finer tuned, and that it also reflects the nature of the data better. For example, \u5de5\u4eba and \u519c\u6c11 are more similar than \u5de5\u4eba and \u8fd0\u52a8\u5458. \u7c89\u7ea2 and \u7ea2 are similar but not the same. The above example shows for the collocation \"\u8fc5\u901f\u589e\u957f\", how each word is substituted and the statistical data for the synonym collocations. Our system extracts twenty candidate synonym collocations. Seven of them are synonym collocations with frequencies below than 10. Four of them have frequencies above 10, which means that they can be extracted by using statistical models only. Another nine of them do not appear in our corpus, which including two pseudo collocations \"\u9ebb\uf9dd\u589e\u957f\"and \"\u6e4d\u6025\u589e\u957f\".",
"cite_spans": [],
"ref_spans": [
{
"start": 421,
"end": 429,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 521,
"end": 528,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "The choice of \u03b8",
"sec_num": "5.3"
},
{
"text": "In this paper, we have presented a method to extract bi-gram collocations using a lexical statistics model with synonym information. Our method achieved a precision rate of 44% for the tested data. Comparing with the precision of 32% obtained using lexical statistics only, our method results in an improvement of close to 33%. In addition, the recall improvement achieved reached 100% on average. The main contribution of our method is that we make use of synonym information to extract collocations which otherwise cannot be extracted using a lexical statistical method alone. Our method can supplement a lexical statistical method to increase the recall quite significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "Our work focuses on synonym collocation extraction. However, Manning [Manning 99 ] claimed that the lack of valid substitutions for synonyms is a characteristic of collocations in general [Manning and Schutze 1999] . Nevertheless, our method shows that synonym collocations do exist and that they are not a minimal collection that can be ignored in collocation extraction.",
"cite_spans": [
{
"start": 69,
"end": 80,
"text": "[Manning 99",
"ref_id": null
},
{
"start": 188,
"end": 214,
"text": "[Manning and Schutze 1999]",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "To extend our work, we will further apply synonym information to identify collocations of different types. Our preliminary study has suggested that collocations can be classified into 4 types: Type 0 collocations: These are fully fixed collocations which including some idioms, proverbs, and sayings, such as \"\u7f18\u6728\u6c42\u9c7c\"\uff0c\"\u91dc\u5e95\u62bd\u85aa\" and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "Type 1 collocations: These are fixed collocation in which the appearance of one word implies the co-occurrence of another one as in \"\u5386\u53f2\u5305\u88b1\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "Type 2 collocations: These are strong collocation which allow very limited substitution of components, as in, \"\u88c1\u51cf\u804c\u4f4d\", \"\u51cf\u5c11\u804c\u4f4d\", \"\u7f29\u51cf\u804c\u4f4d\" and so on. These collocations are classified with type 3 collocations when substitution can occur at only one end, not both ends.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "Type 3 collocations: These are loose collocations which allow more substitutions of components; however a limitation is still required to restrict the substitution as in \"\u51cf\u5c11\u5f00\u652f\", \"\u7f29\u51cf\u5f00\u652f\", \"\u538b\u7f29\u5f00\u652f\", \"\u6d88\u51cf\u5f00\u652f\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
},
{
"text": "By using synonym information and defining substitutability, we can validate whether collocations are fixed collocations, strong collocations with very limited substitutions, or general collocations that can be substituted more freely. Based on this observation, we are currently working on a synonym substitution model for classifying the collocations into different types automatically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and On-Going Work",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "Our great thanks go to Dr. Liu Qun of the Chinese Language Research Center of Peking University for letting us share their data structure in the Synonym Similarity Calculation. This work was partially supported by Hong Kong Polytechnic University (Project Code A-P203) and a CERG Grant (Project code 5087/01E). Ms. Wanyin Li is currently a lecturer in the department of Computer Science of Chu Hai College, Hong Kong.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements and notes",
"sec_num": null
},
{
"text": "From Figure 1 , it is obvious that at \u03b8=0.85, the recall rate starts to drop more drastically without much improvement in precision.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 13,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
},
{
"text": "The original threshold for CXtract is (1.2, 1.2, 12) for the parameters (K 0 , K 1 , U 0 ). However, with respect to synonym collocations, we also conducted some experiments to see whether the parameters should be adjusted. Table 7 shows the statistics used to test the value of (K 0 , K 1 , U 0 ). The similarity threshold \u03b8 was fixed at 0.85 throughout the experiments.",
"cite_spans": [
{
"start": 38,
"end": 52,
"text": "(1.2, 1.2, 12)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "The test of (K 0 , K 1 , U 0 )",
"sec_num": "5.4"
},
{
"text": "Bi-grams extracted using lexical statistics The experimental results show that varying the value of (K 0 , K 1 ) does not benefit our algorithm. However, increasing the value of U 0 does improve the extraction of synonymous collocations. Figure 2 shows that U 0 =14 provides a good trade-off between the precision rate and the remainder Bi-grams. This result is reasonable. According to Smadja, U 0 as defined in equation 8represents the co-occurrence distribution of the candidate collocation (w h, w c ) at the position d (-5 \u2264 d \u2264 5). For a true collocation (w h , w c, , d), its co-occurrence frequency at the position d is much higher than those at other positions, which leads to a peak in the co-occurrence distribution. Therefore, it is selected by the statistical algorithm based on equation 10. Based on the physical meaning, one way to improve the precision rate is to increase the value of the threshold U 0. A side effect of increasing the value of U 0 is a decreased recall rate because some true collocations do not meet the condition of co-occurrence frequency in the ten positions greater than U 0.Step 2 of the new algorithm regains some true collocations that are lost because of the higher value of U 0. in Step 1. ",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 246,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Table 7.Values of (K0, K1, U0)",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Collocations and General Purpose Dictionaries",
"authors": [
{
"first": "M",
"middle": [],
"last": "Benson",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "1",
"pages": "23--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benson, M., \"Collocations and General Purpose Dictionaries,\" International Journal of Lexicography, 3(1), 1990, pp. 23-35.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Looking for Needles in a Haystack or Locating Interesting Collocation Expressions in Large Textual Database",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of RIAO Conference on User-oriented Content-based Text and Image Handling",
"volume": "",
"issue": "",
"pages": "21--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y., \"Looking for Needles in a Haystack or Locating Interesting Collocation Expressions in Large Textual Database,\" Proceedings of RIAO Conference on User-oriented Content-based Text and Image Handling, 1993, pp. 21-24, Cambridge.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Word Association Norms, Mutual Information, and Lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "6",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. and P. Hanks, \"Word Association Norms, Mutual Information, and Lexicography,\" Computational Linguistics, 6(1), 1990, pp. 22-29.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Similarity-based method for word sense disambiguation",
"authors": [
{
"first": "I",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dagan, I., L. Lee and F. Pereira, \"Similarity-based method for word sense disambiguation,\" Proceedings of the 35th Annual Meeting of ACL, 1997, pp. 56-63, Madrid, Spain.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using Syntactic Dependency as Local Context to Resolve Word Sense Ambiguity",
"authors": [
{
"first": "D",
"middle": [
"K"
],
"last": "Lin",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of ACL/EACL-97",
"volume": "",
"issue": "",
"pages": "64--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. K., \"Using Syntactic Dependency as Local Context to Resolve Word Sense Ambiguity,\" Proceedings of ACL/EACL-97, 1997, pp. 64-71, Madrid, Spain",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Extracting collocations from text corpora",
"authors": [
{
"first": "D",
"middle": [
"K"
],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. First Workshop on Computational Terminology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. K., \"Extracting collocations from text corpora,\"Proc. First Workshop on Computational Terminology, 1998, Montreal, Canada.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using Collocation Statistics in Information Extraction",
"authors": [
{
"first": "D",
"middle": [
"K"
],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Seventh Message Understanding Conference (MUC-7)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin, D. K., \"Using Collocation Statistics in Information Extraction,\" Proceedings of the Seventh Message Understanding Conference (MUC-7), 1998.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The Word Similarity Calculation on <<HowNet>>",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 3 rd Conference on Chinese lexicography",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liu, Q.,\"The Word Similarity Calculation on <<HowNet>>,\" Proceedings of 3 rd Conference on Chinese lexicography, 2002, TaiBei.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improving Xtract for Chinese Collocation Extraction",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "R",
"middle": [
"F"
],
"last": "Xu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of IEEE International Conference on Natural Language Processing and Knowledge Engineering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lu, Q., Y. Li and R. F. Xu, \"Improving Xtract for Chinese Collocation Extraction,\" Proceedings of IEEE International Conference on Natural Language Processing and Knowledge Engineering, 2003, Beijing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Foundations of Statistical Natural Language Processing",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Schutze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C. D. and H. Schutze, \"Foundations of Statistical Natural Language Processing,\" The MIT Press, 1999, Cambridge, Massachusetts.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic networks of English",
"authors": [
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1992,
"venue": "Lexical and conceptual semantics",
"volume": "",
"issue": "",
"pages": "197--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G and C. Fellbaum, \"Semantic networks of English,\" In Beth Levin & Steven Pinker (eds.), Lexical and conceptual semantics, 1992, pp. 197-229.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Synonymy in Collocation Extraction",
"authors": [
{
"first": "D",
"middle": [],
"last": "Pearce",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of NAACL'01 Workshop on Wordnet and Other Lexical Resources: Applications, Extensions and Customizations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pearce, D., \"Synonymy in Collocation Extraction,\" Proceedings of NAACL'01 Workshop on Wordnet and Other Lexical Resources: Applications, Extensions and Customizations, 2001.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Retrieving collocations from text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F., \"Retrieving collocations from text: Xtract,\" Computational Linguistics, 19(1), 1993, pp. 143-177",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Preliminary Study on Quantitative Study on Chinese Collocations",
"authors": [
{
"first": "M",
"middle": [
"S"
],
"last": "Sun",
"suffix": ""
},
{
"first": "C",
"middle": [
"N"
],
"last": "Huang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Fang",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "29--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sun, M. S., C. N. Huang and J. Fang, \"Preliminary Study on Quantitative Study on Chinese Collocations,\" ZhongGuoYuWen, No.1, 1997, pp. 29-38, (in Chinese).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Synonymous Collocation Extraction Using Translation Information",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceeding of the 41st Annual Meeting of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wu, H. and M. Zhou, \"Synonymous Collocation Extraction Using Translation Information,\" Proceeding of the 41st Annual Meeting of ACL, 2003.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The Research of Word Sense Disambiguation Method Based on Co-occurrence Frequency of HowNet",
"authors": [
{
"first": "E",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 1999,
"venue": "Communication of COLIPS",
"volume": "8",
"issue": "2",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, E., G. Zhang and Y. Zhang, \"The Research of Word Sense Disambiguation Method Based on Co-occurrence Frequency of HowNet,\" Communication of COLIPS, 8(2), 1999, pp. 129-136.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "CHINERS: A Chinese Named Entity Recognition System for the Sports Domain",
"authors": [
{
"first": "T",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Erbach",
"suffix": ""
}
],
"year": 2003,
"venue": "Second SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "55--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao, T., W. Ding and G. Erbach, \"CHINERS: A Chinese Named Entity Recognition System for the Sports Domain,\" Second SIGHAN Workshop on Chinese Language Processing, 2003, pp. 55-62.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "Precision rate vs. the value of \u03b8",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Precision rate vs. the value of U 0",
"num": null,
"uris": null
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"3\">F_5 F_4 F_3</td><td colspan=\"2\">F_2 F_1</td><td>Headword</td><td>F1</td><td>F2</td><td colspan=\"2\">F3 F4</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u91cd\u5927</td><td colspan=\"2\">\u610f\u4e49 *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u91cd\u5927</td><td colspan=\"2\">\u5f71\u54cd *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u91cd\u5927</td><td colspan=\"2\">\u4f5c\u7528 *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td colspan=\"2\">\u5173\u7cfb *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td>*</td><td colspan=\"2\">\u73af\u5883 *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td>*</td><td colspan=\"2\">\u4ea4\u901a *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td>*</td><td colspan=\"2\">\u7ed3\u6784 *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>\u8fdb\u4e00\u6b65</td><td>\u6539\u5584</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>\u660e\u663e</td><td>\u6539\u5584</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td>*</td><td colspan=\"2\">\u6761\u4ef6 *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u6539\u5584</td><td>*</td><td colspan=\"2\">\u72b6\u51b5 *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>\u8fdb\u4e00\u6b65</td><td>\u52a0\u5927</td><td>*</td><td>*</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u52a0\u5927</td><td>*</td><td colspan=\"2\">\u529b\u5ea6 *</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u63d0\u8d77</td><td colspan=\"2\">\u516c\u8bc9 *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u63d0\u8d77</td><td colspan=\"2\">\u8bc9\u8bbc *</td><td>*</td><td>*</td></tr><tr><td>*</td><td>*</td><td>*</td><td>*</td><td>*</td><td>\u589e\u52a0</td><td>*</td><td colspan=\"2\">\u8d1f\u62c5 *</td><td>*</td></tr></table>"
},
"TABREF3": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">Statistics of the test set for set A</td></tr><tr><td/><td>n. + v. + a.</td></tr><tr><td>Headwords</td><td>15</td></tr><tr><td>Extracted Bi-grams</td><td>703</td></tr><tr><td>True collocations obtained using lexical statistics only</td><td>232</td></tr><tr><td>Precision rate</td><td>31.7 %</td></tr></table>"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td>B</td></tr><tr><td/><td>n. + v. + a.</td></tr><tr><td>Synonym headwords</td><td>94</td></tr><tr><td>Bi-grams (lexical statistics)</td><td>841</td></tr><tr><td>Non-synonym collocations (lexical statistics only)</td><td>243</td></tr><tr><td>Synonym collocations extracted in Step 3</td><td>311</td></tr><tr><td>True synonym collocations obtained in Step 3</td><td>261</td></tr><tr><td>Overall precision rate</td><td>83.9%</td></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>\u589e\u957f\"</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Substitution</td><td>Substitution</td><td>Freq.</td><td>Freq. in</td><td>Substitution</td><td>Freq.</td><td>Freq. in</td></tr><tr><td>headword</td><td>collocated</td><td>in</td><td>Google</td><td>collocated</td><td>in</td><td>Google</td></tr><tr><td/><td>word</td><td>corpus</td><td>results</td><td>word</td><td>corpus</td><td>results</td></tr><tr><td>\u8fc5\u901f\u589e\u52a0</td><td/><td>15</td><td>17,000</td><td>\u8fc5\u6377\u589e\u957f</td><td>0</td><td>7</td></tr><tr><td>\u8fc5\u901f\u589e\u591a</td><td/><td>2</td><td>14,900</td><td>\u8fc5\u901f\u589e\u957f</td><td>20</td><td>224,000</td></tr><tr><td>\u8fc5\u901f\u589e\u9ad8</td><td/><td>0</td><td>744</td><td>\u98de\u5feb\u589e\u957f</td><td>0</td><td>2,530</td></tr><tr><td/><td>\u5feb\u901f\u589e\u957f</td><td>111</td><td colspan=\"2\">1,280,000 \u98de\u901f\u589e\u957f</td><td>4</td><td>48,100</td></tr><tr><td/><td>\u6025\u907d\u589e\u957f</td><td>4</td><td>64,100</td><td>\u9ad8\u901f\u589e\u957f</td><td>60</td><td>543,000</td></tr><tr><td/><td>\u6025\u4fc3\u589e\u957f</td><td>0</td><td>201</td><td>\u706b\u901f\u589e\u957f</td><td>2</td><td>211</td></tr><tr><td/><td>\u6025\u901f\u589e\u957f</td><td>2</td><td>19,700</td><td>\u5168\u901f\u589e\u957f</td><td>3</td><td>607</td></tr><tr><td/><td>\u6025\u9aa4\u589e\u957f</td><td>0</td><td>1,020</td><td>\u795e\u901f\u589e\u957f</td><td>0</td><td>55</td></tr><tr><td/><td>\u8fc5\u731b\u589e\u957f</td><td>4</td><td>84,600</td><td>\u9ebb\u5229\u589e\u957f</td><td>0</td><td>0</td></tr><tr><td/><td>\u8fc5\u75be\u589e\u957f</td><td>0</td><td>98</td><td>\u6e4d\u6025\u589e\u957f</td><td>0</td><td>0</td></tr></table>"
}
}
}
}