{
"paper_id": "Y07-1012",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:46:46.505133Z"
},
"title": "Computing Thresholds of Linguistic Saliency *",
"authors": [
{
"first": "Siaw-Fong",
"middle": [],
"last": "Chung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {
"addrLine": "No.1, Roosevelt Road, Section 4, Taipei 106",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Ahrens",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {
"addrLine": "No.1, Roosevelt Road, Section 4, Taipei 106",
"country": "Taiwan"
}
},
"email": "kathleenahrens@yahoo.com"
},
{
"first": "Chung-Ping",
"middle": [],
"last": "Cheng",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Chengchi University",
"location": {
"addrLine": "No. 64, ZhiNan Road, Section 2",
"postCode": "11605",
"settlement": "Taipei",
"country": "Taiwan"
}
},
"email": "cpcheng@nccu.edu.tw"
},
{
"first": "Chu-Ren",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"addrLine": "No. 128, Academia Road, Section 2, Nangang, Taipei 115",
"country": "Taiwan"
}
},
"email": ""
},
{
"first": "Petr",
"middle": [],
"last": "\u0160imon",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Academia Sinica",
"location": {
"addrLine": "No. 128, Academia Road, Section 2, Nangang, Taipei 115",
"country": "Taiwan"
}
},
"email": "petr.simon@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We propose and test several computational methods to automatically determine possible saliency cutoff points in Sketch Engine (Kilgarriff and Tugwell, 2001). Sketch Engine currently displays collocations in descending order of importance, as well as according to grammatical relations. However, Sketch Engine does not suggest a cutoff point such that any items above it may be considered significantly salient. This proposal suggests an improvement to the present Sketch Engine interface: three different methods for calculating cutoff points, so that the presentation of results can be made more meaningful to users. In addition, our findings also contribute to linguistic analyses based on empirical data.",
"pdf_parse": {
"paper_id": "Y07-1012",
"_pdf_hash": "",
"abstract": [
{
"text": "We propose and test several computational methods to automatically determine possible saliency cutoff points in Sketch Engine (Kilgarriff and Tugwell, 2001). Sketch Engine currently displays collocations in descending order of importance, as well as according to grammatical relations. However, Sketch Engine does not suggest a cutoff point such that any items above it may be considered significantly salient. This proposal suggests an improvement to the present Sketch Engine interface: three different methods for calculating cutoff points, so that the presentation of results can be made more meaningful to users. In addition, our findings also contribute to linguistic analyses based on empirical data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "All lexical resources, at the point of their design, will take into consideration whether the resources are useful to a target group. For example, WordNet (Fellbaum, 1998) was originally designed for psychologists, but later was used extensively by computational linguists. Similarly, corpora such as British National Corpus (BNC), the Academia Sinica Corpus of Mandarin Chinese (Chen et al., 1996) and the Gigaword corpus were also designed for the use of target groups such as lexicographers, linguists, language teachers, language learners, etc. These corpora usually provide some forms of statistical analyses so that users will be able to summarize their research results quickly. For example, many corpora provide collocational measures such as Mutual Information values (Church and Hanks, 1989) so that collocated words can be sorted according to their frequency of co-occurrence. Sketch Engine (Kilgarriff and Tugwell, 2001 ) is a powerful resource that displays search summaries as collocational patterns, as well as according to grammatical relations. However, like many other resources, Sketch Engine is unable to determine which of the results in the list are linguistically meaningful.",
"cite_spans": [
{
"start": 155,
"end": 171,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 379,
"end": 398,
"text": "(Chen et al., 1996)",
"ref_id": "BIBREF0"
},
{
"start": 777,
"end": 801,
"text": "(Church and Hanks, 1989)",
"ref_id": "BIBREF1"
},
{
"start": 902,
"end": 931,
"text": "(Kilgarriff and Tugwell, 2001",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Therefore, when provided with collocation lists, most linguists report the top \"few,\" based on their preferences. Some linguists report the top one or two and keep the rest in appendixes. In fact, the current search summary from corpora or lexical resources does not give enough information regarding which of the collocational patterns are significantly different from the bottom words. In this paper, a research question is asked, i.e., whether or not one can select top rankings from linguistic results using principled measures. This selection of top rankings is useful because it will provide an automatic identification of significant linguistic results from the data. This also involves deciding which significant results are likely to be prototypically used in certain linguistic environments (Rosch and Mervis, 1975) . In this paper, we propose three methods by which thresholds for linguistic listings can be extracted. In the following section, data presentation in Sketch Engine is first discussed.",
"cite_spans": [
{
"start": 801,
"end": 825,
"text": "(Rosch and Mervis, 1975)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sketch Engine is a system that provides the collocations of words according to grammatical relations. It has been used to analyze large-scale corpus data such as the British National Corpus (BNC) and the Chinese Gigaword corpus. The Chinese Sketch Engine was created by Kilgarriff, Huang, Rychly et al. (2005) . It has the same function as the English Sketch Engine, arranging the collocates of query words by grammatical relation. For example, when a query word is searched in Sketch Engine, the system returns the collocates for this query word. Sketch Engine then arranges them in grammatical relations such as 'objects of the query word,' 'subjects of the query word,' 'modifiers of the query word,' etc.",
"cite_spans": [
{
"start": 271,
"end": 310,
"text": "Kilgarriff, Huang, Rychly et al. (2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Presentation in the Sketch Engine",
"sec_num": "2."
},
{
"text": "The following Figure 1 shows an example of the search result for \u7d93\u6fdf jing1ji4 'economy' in the Chinese Sketch Engine. In Figure 1 , the query word and its frequency in the entire Gigaword corpus are shown (i.e. 1,295,965 instances). The frequency for a pair of collocates, such as \u7d93\u6fdf jing1ji4 'economy' and \u632f\u8208 zheng4xing4 'to give life to' under the 'object-of' relation (arrow in Figure 1 ), is also given. In this case, it is 4,046 (in the second column for each relation), indicating that \u7d93\u6fdf jing1ji4 'economy' appears as the 'object of' the verb \u632f\u8208 zheng4xing4 'to give life to' 4,046 times in the whole Gigaword corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 22,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 120,
"end": 128,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 377,
"end": 385,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data Presentation in the Sketch Engine",
"sec_num": "2."
},
{
"text": "In addition to frequency, Sketch Engine provides an additional score for the ranking of saliency of collocates. This is because Kilgarriff and Tugwell (2001) suggest that frequency alone may not be a reliable score, since the frequency of a collocate pair is relative to the frequencies of both words in the whole corpus. Therefore, they suggest a more reliable measure that standardizes the frequencies of the collocations based on the overall behavior of the collocates in a particular condition. However, while the presentation of saliency in Sketch Engine is robust and useful, it does not indicate which of the collocates in each relation are meaningfully salient.",
"cite_spans": [
{
"start": 128,
"end": 157,
"text": "Kilgarriff and Tugwell (2001)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Saliency of each collocate and query word",
"sec_num": null
},
{
"text": "WordNet (http://wordnet.princeton.edu/) can also display search results based on a \"high frequency count\" (see Figure 2 ). This frequency count is ordered from the most frequent sense to the least frequent sense (Tengi, 1999) and is computed using a semantic concordance created by Landes, Leacock and Tengi (1999) based on two corpora -the Brown corpus and Stephen Crane's novella The Red Badge of Courage. 1 From Figure 2 , one can see that the sense frequencies for 'depart' are 11, 5, 3 and 1. There is a larger gap between the frequency of the first sense (11) and that of the second sense (5). Based on this gap, we may say that the first sense is used more often than the second. It is also possible to say that the first sense is more prototypical than the other senses. There is thus possibly a threshold after the first sense that makes it more distinctive in use than the others. This paper therefore suggests that there should be objective methods that can help determine thresholds in such linguistic listings. It proposes three methods to determine how many of the top results should be considered significant in Sketch Engine. These methods are elaborated below.",
"cite_spans": [
{
"start": 212,
"end": 225,
"text": "(Tengi, 1999)",
"ref_id": "BIBREF7"
},
{
"start": 283,
"end": 315,
"text": "Landes, Leacock and Tengi (1999)",
"ref_id": "BIBREF5"
},
{
"start": 418,
"end": 419,
"text": "1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 111,
"end": 119,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 425,
"end": 433,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Saliency of each collocate and query word",
"sec_num": null
},
{
"text": "This paper will discuss three methods. Methods One and Two are based on the characteristics of distributional listings, which usually follow Zipf's law (Zipf, 1932) . Therefore, these two methods will be discussed together in section 3.1 below. Section 3.2 will discuss Method Three, which is different from both Methods One and Two. Section 4 will present results from all three methods.",
"cite_spans": [
{
"start": 156,
"end": 168,
"text": "(Zipf, 1932)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Thresholds of Linguistic Listings",
"sec_num": "3."
},
{
"text": "Zipf's law states that the most frequent item is likely to occur about twice as often as the second most frequent item. When the sample size is large enough, a frequency listing is therefore likely to follow this distributional pattern. For instance, the expression \u8d77\u98db qi3fei1 'takeoff' in (1) below has the following collocates in Sketch Engine (Figure 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 370,
"text": "(Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "(1) \u4f46 \u5728 \u53f0\u7063 \u7d93\u6fdf \u8d77\u98db \u5f8c (Central News Agency of Taiwan) dan4 zai4 tai2wan1 jing1ji4 qi3fei1 hou4 but at Taiwan economy takeoff after \"But after Taiwan's economy took off\u2026\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "The collocates of \u8d77\u98db qi3fei1 'takeoff' that stand in the same grammatical relation as \u7d93\u6fdf jing1ji4 'economy' (the 'subject' relation) can be seen in Figure 3 (such as \u98db\u6a5f fei1ji1 'airplane,' \u73ed\u6a5f ban1ji1 'flight,' \u8dd1\u9053 pao3dao4 'path' as well as \u7d93\u6fdf jing1ji4 'economy').",
"cite_spans": [],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "We can see that in Figure 3 , the saliency values of the collocates are arranged in descending order (from 55.67, 48.31, 38.64, and so on down to the lowest value, which is zero). Most frequency lists follow Zipf's law: the values of the top few items are usually very high and then decrease until the changes become minimal. For example, when the saliency list in Figure 3 is plotted on a graph, the result is shown in Figure 4 below. In Figure 4 , the x-axis is the 'Chinese subject' and the y-axis is the 'saliency' (Figure 4 uses the rank of each Chinese word to represent the Chinese character -rank 1, 2, 3\u2026). All these Chinese words are collocates of \u8d77\u98db qi3fei1 'takeoff.' The curve in Figure 4 can be fitted with the function in (2), where any point on the graph is (x, f(x)): x is the rank of the Chinese subject on the x-axis and f(x) is the function that calculates the saliency value on the y-axis: f(x) = bx^a (2)",
"cite_spans": [],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 394,
"end": 402,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 462,
"end": 471,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 482,
"end": 490,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 562,
"end": 571,
"text": "(Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 764,
"end": 772,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "Using this formula, Methods One and Two find a point that separates any distributional listing into two lists, i.e., a significant and an insignificant list. The purpose is to determine which items in the list should be considered significant and which insignificant. Methods One and Two are based on the assumption that there is a point where the curve changes the most as it descends from the y-axis toward the x-axis. Method One calculates the position (w, z) that is of shortest distance from (0, 0): among all lines departing from the starting point (0, 0), there will be one that covers the shortest distance to the curve, and the point where this line touches the curve is the point where the curve changes the most from the y-axis to the x-axis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "Method Two calculates where the slope of the curve is most slanted between the x-axis and the y-axis. Where the slope is most slanted, the curve is most likely to change the most at that point (w, z): the higher the curve sits on the y-axis, the more vertical the slope will be, and the further the curve moves away from (0, 0) along the x-axis, the more horizontal the slope will be. Therefore, the most slanted slope between the vertical and the horizontal marks the possible threshold where the curve has changed the most.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "The formulas for the two methods are shown in (3a) and (3b) below. In these two formulas, a and b are the variables in the function of the nonlinear regression, while i is the threshold value and n is the total number of collocates in the relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3.1.Methods One and Two",
"sec_num": null
},
{
"text": "Method One:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "y = bx^a",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "i = (-ab^2)^(1/(2-2a)) (3a) Method Two: i = (-ab)^(1/(1-a))",
"eq_num": "(3b)"
}
],
"section": "y = bx^a",
"sec_num": null
},
{
"text": "Method Three is elaborated below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "y = bx^a",
"sec_num": null
},
{
"text": "Method Three is called 'mean of means,' where a series of means is calculated. For example, for the saliency list in Figure 3 , the first mean is the mean of collocates one (55.67) and two (43.81); the second mean is the mean of collocates one (55.67), two (43.81), and three (38.64); that is, a new collocate is added each time. When the means have been calculated for all collocates, an overall mean is obtained from all the means (thus, a mean of means). This overall mean is used as the threshold value for the cut-off point. The computation of the mean of means is shown in Figure 6 below. From Figure 6 , we can see that a series of means is produced by increasing the number of collocates in the calculation each time. In the following section, we will discuss the overall results for the three methods.",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 589,
"end": 597,
"text": "Figure 6",
"ref_id": "FIGREF8"
},
{
"start": 610,
"end": 618,
"text": "Figure 6",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "Method Three",
"sec_num": "3.2."
},
{
"text": "For both Methods One and Two, normalization is used because the ranking on the x-axis (1, 2, 3\u2026) is not comparable to the y-axis (between 0 and about 50). 2 (For the x-axis, each rank from 1 to n is divided by n: for example, if a Chinese word has 200 collocates in a particular relation, the ranks 1 to 200 are each divided by 200, i.e., 1/200, 2/200, \u2026, 200/200, so the x-axis values range from 0 to 1. For the y-axis, each saliency value is divided by the sum of all 200 saliency values, so the y-axis values also range from 0 to 1 and represent the percentage of the saliency values.) The results for Methods One and Two are shown in Table 1 below for three metaphorical expressions, i.e., \u6210\u9577 cheng2zhang3 'grow/growth,' \u8d77\u98db qi3fei1 'takeoff' and \u7671\u7613 tan1huan4 'paralytic.' In this table, the first column shows the metaphorical expressions, followed by the total number of collocates each grammatical relation possesses. \"Pseudo-R-square\" in column four shows how well the curve fits the non-linear regression (or, colloquially, the \"curve fitting\"). For example, the first relation (subject) of \u6210\u9577 cheng2zhang3 'grow/growth' shows a \"curve fitting\" of 91%. The results for Methods One and Two are given in columns five and six. The 'subject' relation of \u6210\u9577 cheng2zhang3 'grow/growth' has threshold values above collocate number 5 in Method One and collocate number 4 in Method Two. Similar results can be seen for \u8d77\u98db qi3fei1 'takeoff' and \u7671\u7613 tan1huan4 'paralytic' in Table 1 . Table 2 provides the mean values in the last column using Method Three; as a comparison, the results for all three methods are shown in Table 2 . Therefore, from the results, we can see that the three different methods provide different threshold values. These methods are useful depending on the purpose of the research. For example, Methods One and Two can be applied to calculate smaller samplings of thresholds (about the top 1 to 6), while Method Three allows the calculation of larger samplings. For different purposes of linguistic research, these three methods provide principled choices as to how to select top results.",
"cite_spans": [
{
"start": 154,
"end": 155,
"text": "2",
"ref_id": null
},
{
"start": 1484,
"end": 1517,
"text": "(thus, 200 200 ,... 200 2 , 200 1",
"ref_id": null
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1069,
"end": 1076,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1084,
"end": 1091,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1220,
"end": 1227,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4."
},
{
"text": "In this paper we have proposed three methods to help linguists ascertain which distributional patterns are linguistically meaningful. We suggest calculating a cut-off point for the saliency listings in Sketch Engine, since most empirical studies do not know where to stop when listing results. Most studies tend to list the top few items, and how many count as the top few depends on the researchers' choice. Criterion-based methods for determining thresholds in linguistic listings would reduce the subjectivity involved in choosing which collocational patterns to report. Furthermore, most lexical resources provide wordlists according to different criteria such as frequency, Mutual Information values, collocation, saliency values, etc. However, a cut-off point for any one of these lists has yet to be suggested. This paper, therefore, addresses the general problem of such listings and suggests three possible solutions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Future work includes incorporating the calculation of threshold values into lexical resources such as the Sinica Corpus and the English and Chinese Sketch Engines. The proposed idea should contribute to computational linguistic research, to linguistic research that relies on statistical methods to analyze linguistic data, and to researchers who need to run psycholinguistic experiments related to word meaning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5."
},
{
"text": "Frequency counts (in brackets) are shown only for senses that were found in the two corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A small number of words in Sketch Engine are wrongly tagged. For example, \u79cb\u9580 ciu1men2 is a location from which airplanes take off, but it is wrongly tagged. These errors are due to problems in Sketch Engine, but such items will be removed automatically during clustering because they may not fall into any cluster within the list of collocates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "SINICA CORPUS: Design Methodology for Balanced Corpora",
"authors": [
{
"first": "K.-J",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "C.-R",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the Eleventh Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "167--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, K.-J. and C.-R. Huang. 1996. SINICA CORPUS: Design Methodology for Balanced Corpora. Proceedings of the Eleventh Pacific Asia Conference on Language, Information and Computation, 167-176.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Word Association Norms, Mutual Information and Lexicography",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the 27th Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church, K. W. and P. Hanks. 1989. Word Association Norms, Mutual Information and Lexicography. In the Proceedings of the 27th Annual Meeting of ACL, Vancouver, 76-83.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fellbaum, C. ed., 1998. WordNet: An Electronic Lexical Database. MIT Press.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "WORD SKETCH: Extraction and Display of Significant Collocations for Lexicography",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tugwell",
"suffix": ""
}
],
"year": 2001,
"venue": "the Proceedings of the ACL Workshop on COLLOCATION: Computational Extraction",
"volume": "",
"issue": "",
"pages": "32--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilgarriff, A. and D. Tugwell. 2001. WORD SKETCH: Extraction and Display of Significant Collocations for Lexicography. In the Proceedings of the ACL Workshop on COLLOCATION: Computational Extraction, Analysis and Exploitation, 32-38.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Chinese Word Sketches",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgarriff",
"suffix": ""
},
{
"first": "C.-R",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Rychly",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Tugwell",
"suffix": ""
}
],
"year": 2005,
"venue": "the Proceedings of Asialex",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilgarriff, A., C.-R. Huang, P. Rychly, S. Smith, D. Tugwell. 2005. Chinese Word Sketches. In the Proceedings of Asialex, Singapore.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Building Semantic Concordance",
"authors": [
{
"first": "S",
"middle": [],
"last": "Landes",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Leacock",
"suffix": ""
},
{
"first": "R",
"middle": [
"I"
],
"last": "Tengi",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "199--216",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landes, S., C. Leacock, and R. I. Tengi. 1999. Building Semantic Concordance. In C. Fellbaum. Ed., WordNet: An Electronic Lexical Database. MIT: Cambridge, Mass. and London, England, 199-216.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Family Resemblances: Studies in the Internal Structure of Categories",
"authors": [
{
"first": "E",
"middle": [],
"last": "Rosch",
"suffix": ""
},
{
"first": "C",
"middle": [
"B"
],
"last": "Mervis",
"suffix": ""
}
],
"year": 1975,
"venue": "Cognitive Psychology",
"volume": "7",
"issue": "",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rosch, E. and C. B. Mervis. 1975. Family Resemblances: Studies in the Internal Structure of Categories. Cognitive Psychology, 7, 573-605.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Design and Implementation of the WordNet Lexical Database and Searching Software",
"authors": [
{
"first": "Randee",
"middle": [
"I"
],
"last": "Tengi",
"suffix": ""
}
],
"year": 1999,
"venue": "In Christiane Fellbaum. Ed",
"volume": "",
"issue": "",
"pages": "105--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tengi, Randee I. 1999. \"Design and Implementation of the WordNet Lexical Database and Searching Software.\" In Christiane Fellbaum. Ed., WordNet: An Electronic Lexical Database. MIT: Cambridge, Mass. and London, England, 105-127.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Selected Studies of the Principle of Relative Frequency in Language",
"authors": [
{
"first": "George",
"middle": [],
"last": "Zipf",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kingsley",
"suffix": ""
}
],
"year": 1932,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zipf, George Kingsley. 1932. Selected Studies of the Principle of Relative Frequency in Language. Cambridge (Mass.)",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Collocates for the Query Word \u7d93\u6fdf jing1ji4 'Economy' in the Chinese Sketch Engine",
"num": null
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"text": "Displayed by Frequency Counts in WordNet 3.0",
"num": null
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"text": "Collocates of 'Subjects' of \u8d77\u98db qi3fei1 'takeoff' in the CNA in the Sketch Engine",
"num": null
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"text": "Pattern of Distributional Data for \u8d77\u98db qi3fei1 'takeoff' following Zipf's Law",
"num": null
},
"FIGREF6": {
"type_str": "figure",
"uris": null,
"text": "Three Ways to find Threshold Values",
"num": null
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"text": "Computing 'Means' for the Collocates of 'Subjects' of \u8d77\u98db qi3fei1 'takeoff' (CNA)",
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table><tr><td>'Types of Metaphorical Expressions'</td><td>Relations</td><td>Total Collocates</td><td>Pseudo-R-square</td><td>Method One</td><td>Method Two</td></tr><tr><td>\u6210\u9577</td><td/><td/><td/><td/><td/></tr><tr><td>cheng2zhang3</td><td>Subject</td><td>1490</td><td>0.906935</td><td>5.472613</td><td>4.211427</td></tr><tr><td>'grow/growth'</td><td/><td/><td/><td/><td/></tr><tr><td>\u8d77\u98db</td><td/><td/><td/><td/><td/></tr><tr><td>qi3fei1</td><td>Subject</td><td>268</td><td>0.933048</td><td>3.630461</td><td>2.926560</td></tr><tr><td>'takeoff'</td><td/><td/><td/><td/><td/></tr><tr><td>\u7671\u7613</td><td>Subject</td><td>276</td><td>0.935357</td><td>4.384251</td><td>3.748123</td></tr><tr><td>tan1huan4 'paralytic'</td><td>Modifies</td><td>221</td><td>0.967868</td><td>3.787687</td><td>3.173571</td></tr></table>",
"text": "",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table><tr><td/><td>Chinese Collocates</td><td>English Gloss</td><td>Frequency</td><td>Saliency</td><td>Means</td><td/></tr><tr><td>1</td><td>\u98db\u6a5f fei1ji1</td><td>airplane</td><td>538</td><td>60.19</td><td>---</td><td/></tr><tr><td>2 3</td><td>\u73ed\u6a5f ban1ji1 \u8dd1\u9053 pao3dao4</td><td>airliner runway</td><td>248 71</td><td>48.40 39.86</td><td>54.30 49.48</td><td>Method Two</td></tr><tr><td>4</td><td>\u7d93\u6fdf jing1ji4</td><td>economy</td><td>591</td><td>37.79</td><td>46.56</td><td/></tr><tr><td>5</td><td>\u5922\u60f3 meng4xiang3</td><td>dream</td><td>35</td><td>36.09</td><td>44.47</td><td/></tr><tr><td>6 7</td><td>\u5ba2\u6a5f ke4ji1 hang2kung1 mu3jian4 \u822a\u7a7a\u6bcd\u8266</td><td>passenger plane aircraft carrier</td><td>67 33</td><td>33.92 32.32</td><td>42.71 41.22</td><td>Method One</td></tr><tr><td>8</td><td>\uf904\ufa08\u9053 hua2xing2dao4</td><td>taxiway</td><td>8</td><td>30.8</td><td>39.92</td><td/></tr><tr><td>9</td><td>\u5c08\u6a5f zhuan1ji1</td><td>special plane</td><td>28</td><td>27.07</td><td>38.49</td><td/></tr><tr><td>10</td><td>\u5c0f\u6642 xiao3shi2</td><td>hour</td><td>35</td><td>24.6</td><td>37.10</td><td/></tr><tr><td>11</td><td>\u822a\u6a5f hang2ji1</td><td>flight</td><td>14</td><td>24.43</td><td>35.95</td><td/></tr><tr><td>12</td><td>\u5305\u6a5f bao1ji1</td><td>charter plane</td><td>15</td><td>21.96</td><td>34.79</td><td/></tr><tr><td>13</td><td>\u822a\u73ed hang2ban1</td><td>flight</td><td>14</td><td>21.5</td><td>33.76</td><td/></tr><tr><td>14</td><td>\u8ecd\u6a5f jun1ji1</td><td>military plane</td><td>16</td><td>21.41</td><td>32.88</td><td/></tr><tr><td>15</td><td>\u6230\u6a5f zhan4ji1</td><td>fighter plane</td><td>26</td><td>21.19</td><td>32.10</td><td/></tr><tr><td>16</td><td>\u76f4\u6607\u6a5f zhi2shen1ji1</td><td>helicopter</td><td>18</td><td>20.41</td><td>31.37</td><td/></tr><tr><td>17</td><td>\u73ed\u6b21 ban1ci4</td><td>flight order</td><td>13</td><td>19.93</td><td>27.85</td><td/></tr><tr><td>\u2026..</td><td>..\u2026..</td><td>\u2026..</td><td>..\u2026..</td><td>\u2026..</td><td>..\u2026..</td><td/></tr><tr><td>87</td><td>\u99d5\u99db\u54e1 jia4shi3yuan2</td><td>driver</td><td>3</td><td>7.94</td><td>15.28</td><td/></tr><tr><td>88 89</td><td>\u7279\u865f te4hao4 \u79cb\u9580 ciu1men2</td><td>special number a state in Siberia</td><td>1 1</td><td>7.85 7.83</td><td>15.20 15.11</td><td>Method Three</td></tr><tr><td>90</td><td>\u7522\u696d chan3ye4</td><td>industry</td><td>15</td><td>7.82</td><td>15.03</td><td/></tr><tr><td>91</td><td>\u96d9\u6a5f shuang1ji1</td><td>dual machines</td><td>1</td><td>7.78</td><td>14.95</td><td/></tr><tr><td>92</td><td>\u7238\u7238\u7bc0 ba1ba1jie2</td><td>father's day</td><td>1</td><td>7.66</td><td>14.87</td><td/></tr><tr><td>\u2026..</td><td>..\u2026..</td><td>\u2026..</td><td>..\u2026..</td><td>\u2026..</td><td>..\u2026..</td><td/></tr><tr><td>267</td><td>\u80fd\uf98a neng2li4</td><td>capability</td><td>1</td><td>0.04</td><td>7.45</td><td/></tr><tr><td>268</td><td>\u76ee\u6a19 mu4biao1</td><td>goal</td><td>1</td><td>0.03</td><td>7.42</td><td/></tr><tr><td/><td colspan=\"2\">Mean of Means (Threshold)</td><td/><td/><td>15.03</td><td/></tr></table>",
"text": "",
"html": null,
"num": null
}
}
}
}