{
"paper_id": "U08-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:11:54.383835Z"
},
"title": "Text Mining Based Query Expansion for Chinese IR",
"authors": [
{
"first": "Zhihan",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology Brisbane",
"location": {
"postCode": "4001",
"region": "QLD",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Yue",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology Brisbane",
"location": {
"postCode": "4001",
"region": "QLD",
"country": "Australia"
}
},
"email": "yue.xu@qut.edu.au"
},
{
"first": "Shlomo",
"middle": [],
"last": "Geva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Queensland University of Technology Brisbane",
"location": {
"postCode": "4001",
"region": "QLD",
"country": "Australia"
}
},
"email": "s.geva@qut.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Query expansion has long been suggested as a technique for dealing with the word mismatch problem in information retrieval. In this paper, we describe a novel query expansion method which incorporates text mining techniques into query expansion to improve Chinese information retrieval performance. Unlike most existing query expansion strategies, which generally select indexing terms from the top N retrieved documents and use them to expand the query, our proposed method applies text mining techniques to find patterns in the retrieved documents that contain terms relevant to the query terms, and then uses these relevant terms, which can be indexing terms or indexing term patterns, to expand the query. Experiments with the NTCIR-5 collection show a clear improvement in both precision and recall.",
"pdf_parse": {
"paper_id": "U08-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "Query expansion has long been suggested as a technique for dealing with the word mismatch problem in information retrieval. In this paper, we describe a novel query expansion method which incorporates text mining techniques into query expansion to improve Chinese information retrieval performance. Unlike most existing query expansion strategies, which generally select indexing terms from the top N retrieved documents and use them to expand the query, our proposed method applies text mining techniques to find patterns in the retrieved documents that contain terms relevant to the query terms, and then uses these relevant terms, which can be indexing terms or indexing term patterns, to expand the query. Experiments with the NTCIR-5 collection show a clear improvement in both precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The rapid growth in the number of Chinese Internet users indicates that Chinese information retrieval systems are in great demand. This paper presents a method aimed at improving the performance of Chinese document retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike English text in which sentences are sequences of words delimited by white spaces, in Chinese text, sentences are represented as strings of Chinese characters without separating spaces between words. For Chinese information retrieval, the query is usually a set of Chinese words rather than a sequence of Chinese characters. For character based Chinese information retrieval, since the texts are not segmented, the retrieved documents which contain the character sequence of the query may not be relevant to the query since they may not contain the words in the query. Therefore, the quality of character based Chinese information retrieval is not satisfactory. On the other hand, a study has shown that the relationship between segmentation and retrieval is in fact non-monotonic, that is, high precision of segmentation alone may not improve retrieval performance (Peng, Huang, Schuurmans and Cercone 2002) . In this paper we propose a Chinese information retrieval model which combines character based retrieval and word based ranking to achieve better performance.",
"cite_spans": [
{
"start": 872,
"end": 914,
"text": "(Peng, Huang, Schuurmans and Cercone 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Another critical problem is that the original query may not represent adequately what a user wants. By appending additional terms to the original query, Query Expansion attempts to construct a richer expression to better represent the user's information need. Pseudo relevance feedback is a popular technique for query expansion. The basic idea is to automatically extract additional relevant terms from the top ranked documents in the initial result list. These terms are added to the original query, and the extended query is executed with the expectation of improved performance. In this paper, we propose a new approach to improving the performance of Chinese document retrieval by expanding queries with highly correlated segmented words, generated by using text mining techniques.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is structured as follows. Section 2 briefly reviews some related work. In Section 3, we describe our retrieval model, and we then present the text mining based query expansion method in Section 4. Section 5 presents experimental results and Section 6 concludes the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Unlike English text in which sentences consist of words delimited by white spaces, in Chinese text, sentences are represented as strings of Chinese characters without delimiters. Therefore, Chinese word segmentation is the first phase in Chinese language processing and has been widely studied for many years (Gao and Li 2005; Xue 2003; Sproat and Shih 2002; Wang, Liu and Qin 2006) . Both Chinese characters and words can be used as the indexing units for Chinese IR. Several approaches have shown that single character indexing can produce good results, but word and bi-gram indexing can achieve slightly better performance. This however incurs greater time and space complexity with limited performance improvement (Sproat and Shih 2002; Li 1999; Kwok 1997; Peng, Huang, Schuurmans and Cercone 2002) . In this paper, we propose a ranking method that combines character indexing and segmented word indexing to re-rank retrieved documents and promote relevant documents to higher positions.",
"cite_spans": [
{
"start": 309,
"end": 326,
"text": "(Gao and Li 2005;",
"ref_id": "BIBREF3"
},
{
"start": 327,
"end": 336,
"text": "Xue 2003;",
"ref_id": "BIBREF6"
},
{
"start": 337,
"end": 358,
"text": "Sproat and Shih 2002;",
"ref_id": "BIBREF12"
},
{
"start": 359,
"end": 382,
"text": "Wang, Liu and Qin 2006)",
"ref_id": "BIBREF17"
},
{
"start": 718,
"end": 740,
"text": "(Sproat and Shih 2002;",
"ref_id": "BIBREF12"
},
{
"start": 741,
"end": 749,
"text": "Li 1999;",
"ref_id": "BIBREF7"
},
{
"start": 750,
"end": 760,
"text": "Kwok 1997;",
"ref_id": "BIBREF4"
},
{
"start": 761,
"end": 802,
"text": "Peng, Huang, Schuurmans and Cercone 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Pseudo-relevance feedback is an important query expansion technique for improving IR performance (Qiu and Frei 1993; Sun, Ong and Chua 2006; Robertson and Jones 1976) . The basic insight which motivates pseudo relevance feedback is that often the top of the initially ranked list of results contains a relatively high proportion of relevant documents. The conjecture is that despite the presence of some irrelevant documents, these retrieved documents might still be used to identify relevant terms that co-occur in the relevant documents. These terms are then used to modify the original query and better reflect the user's information needs. With the expanded query, a second retrieval round is performed and the returned result is expected to contain more relevant documents which have been missed in the first retrieval round. For pseudo relevance feedback query expansion, the most important task is to find the terms from the retrieved documents that are considered relevant to the query. Therefore, relevant term selection is crucial in pseudo relevance feedback query expansion. The standard criteria for selecting relevant terms have been proposed using tf/idf in vector space model (Rocchio 1997) and probabilistic model (Robertson and Jones 1976) . Query length has been considered in (Kwok, Grunfeld and Chan 2000) for weighting expansion terms and some linguistic features also have been tried in (Smeaton and Rijsbergen 1983) . We are proposing to use text mining techniques to find the relevant terms.",
"cite_spans": [
{
"start": 97,
"end": 116,
"text": "(Qiu and Frei 1993;",
"ref_id": "BIBREF9"
},
{
"start": 117,
"end": 140,
"text": "Sun, Ong and Chua 2006;",
"ref_id": "BIBREF11"
},
{
"start": 141,
"end": 166,
"text": "Robertson and Jones 1976)",
"ref_id": "BIBREF13"
},
{
"start": 1192,
"end": 1206,
"text": "(Rocchio 1997)",
"ref_id": "BIBREF14"
},
{
"start": 1231,
"end": 1257,
"text": "(Robertson and Jones 1976)",
"ref_id": "BIBREF13"
},
{
"start": 1296,
"end": 1326,
"text": "(Kwok, Grunfeld and Chan 2000)",
"ref_id": "BIBREF5"
},
{
"start": 1410,
"end": 1439,
"text": "(Smeaton and Rijsbergen 1983)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Data mining is about analyzing data and finding hidden patterns by automatic or semi-automatic means. Text mining is a subfield of data mining which refers to the process of deriving high quality patterns and trends from text. We propose to apply text mining techniques to find frequent patterns, in the documents retrieved in the first retrieval round, that contain query terms. These patterns provide candidate sequences for finding more terms that are relevant to the original query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The application of text mining to information retrieval may improve precision and recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In general, character indexing based IR can retrieve most of the relevant documents as long as they contain the query terms (the query terms are sequences of Chinese characters but not necessarily sequences of segmented words in the retrieved documents, since the documents are not segmented). However, the retrieval performance is not necessarily good, because many irrelevant documents are highly ranked due to high query term frequency corresponding to instances of the query term sequences which are not actually valid words but rather correspond to incorrect word segmentations. On the other hand, word indexing based IR can apply better ranking and therefore achieves somewhat better performance than character indexing based IR. The improvement is limited, since some relevant documents may not contain the query terms as segmented words and thus won't be retrieved. In this section, we describe our approach to Chinese information retrieval: we first create two indexing tables from the data collection, a Chinese character indexing table and a segmented Chinese word indexing table; the relevant documents are then retrieved based on the character indexing, and the retrieved documents are ranked by a method, proposed in this paper, that uses both character indexing and word indexing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Retrieval Model",
"sec_num": "3"
},
{
"text": "The first task is word segmentation. We used an online program provided by the Institute of Information Science, Academia Sinica in Taiwan to segment the documents. The segmentation precision of the system is approximately 95% as reported in (Ma and Chen 2002) , and most importantly, this system not only accomplishes word segmentation but also incorporates POS (part of speech) annotation into the segmented documents, which is very important for this research since it enables us to utilize the POS information to improve the efficiency and effectiveness of Chinese IR based on word indexing. For example, consider the following original Chinese text:",
"cite_spans": [
{
"start": 242,
"end": 260,
"text": "(Ma and Chen 2002)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Indexing and Retrieval Model",
"sec_num": "3.1"
},
{
"text": "\u5728\u4fc4\u7f85\u65af\u7684\u6551\u63f4\u884c\u52d5\u5931\u6557\u4e4b\u5f8c\uff0c\u4e00\u8258\u82f1\u570b\u8ff7\u4f60\u6f5b\u8247\u6b63\u7dca\u6025\u99b3\u63f4\u9014\u4e2d\uff0c\u9810\u8a08 19 \u65e5\u665a\u4e0a 7 \u6642\u53f0\u5317\u6642\u9593 19 \u65e5\u665a 23 \u6642\u53ef\u62b5\u9054\u79d1\u65af\u514b\u865f\u6838\u5b50\u52d5\u529b\u6f5b\u8247\u7684\u6c89\u6c92\u73fe\u5834\uff0c\u5c55\u958b\u6551\u63f4\u5de5\u4f5c\u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indexing and Retrieval Model",
"sec_num": "3.1"
},
{
"text": "We can get the segmented text shown in Figure 1. The left part of Figure 1 is the segmented document, and the right part lists all words in the segmented document, each word associated with a POS tag immediately after it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Indexing and Retrieval Model",
"sec_num": "3.1"
},
{
"text": "We built two indexing tables, one is character based and the other is segmented word based. In earlier research (Lu, Xu and Geva 2007) , we have developed a character based indexing system to perform character based retrieval and ranking. In this paper, we directly use the existing retrieval model, which is the traditional Boolean model, a simple model based on set theory and Boolean algebra, to perform document retrieval.",
"cite_spans": [
{
"start": 112,
"end": 134,
"text": "(Lu, Xu and Geva 2007)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Indexing and Retrieval Model",
"sec_num": "3.1"
},
{
"text": "In our previous research, we used the following ranking model to calculate the ranking value of a retrieved document to determine the top N relevant documents:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Method",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = n^5 \\times \\sum_{i=1}^{n} tf_i \\times idf_i",
"eq_num": "(1)"
}
],
"section": "Ranking Method",
"sec_num": "3.2"
},
{
"text": "Here, m is the number of query terms and n is the number of distinct query terms that appear in the document as character sequences (not necessarily segmented words). tf_i is the frequency of the i th term in the document and idf_i is the inverse document frequency of the i th term in the collection. The equation ensures two things. Firstly, the more distinct query terms are matched in a document, the higher the rank of the document; for example, a document that contains four distinct query terms will almost always rank higher than a document that contains three distinct query terms, regardless of the query term frequencies in the documents. Secondly, when documents contain a similar number of distinct terms, the score of a document is determined by the sum of the query terms' tf-idf values, as in traditional information retrieval.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Method",
"sec_num": "3.2"
},
{
"text": "In this paper, we use the following equation to calculate documents' ranking scores, which is simply the average of character based ranking and word based ranking:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Method",
"sec_num": "3.2"
},
{
"text": "(2) where n_c is the number of distinct query terms which appear in the document as character sequences (not necessarily segmented words), tf_i and idf_i are the same as in Equation 1, tf_i^w is the frequency of the i th term in the document as a segmented word, idf_i^w is the inverse document frequency of the i th term in the collection as a segmented word, and n_w is the number of query terms which appear in the document as a segmented word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking Method",
"sec_num": "3.2"
},
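The ranking described in this section can be sketched in Python. This is a hypothetical reconstruction assuming Equation 1 has the form n^5 times the sum of tf_i x idf_i over the matched query terms, and Equation 2 averages the character based and word based scores; all function and variable names are illustrative, not from the paper.

```python
import math

def rank_score(query_terms, doc_tf, collection_df, num_docs):
    """Score one document: n^5 * sum(tf_i * idf_i) over matched query terms.

    doc_tf:        term -> frequency of the term in this document
    collection_df: term -> number of documents containing the term
    """
    matched = [t for t in query_terms if doc_tf.get(t, 0) > 0]
    n = len(matched)  # distinct query terms present in the document
    tfidf = sum(doc_tf[t] * math.log(num_docs / collection_df[t]) for t in matched)
    # n^5 makes the count of distinct matched terms dominate the tf-idf sum
    return (n ** 5) * tfidf

def combined_score(query_terms, char_tf, word_tf, char_df, word_df, num_docs):
    """Equation 2: average of character based and word based ranking scores."""
    r_char = rank_score(query_terms, char_tf, char_df, num_docs)
    r_word = rank_score(query_terms, word_tf, word_df, num_docs)
    return (r_char + r_word) / 2
```

With this form, a document matching two distinct query terms outranks a document matching one, even when the latter has a much higher term frequency, which is the behavior the surrounding text describes.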
{
"text": "It has been recognized that user queries consisting of only a few terms are inadequate to fully reflect users' information needs and lead to poor coverage of the relevant documents. Query expansion is a technique widely used to deal with this problem. In this section, we describe a new method that applies text mining techniques to find terms in the retrieved documents that are highly correlated with the query, and then performs a second retrieval using the query expanded with these relevant terms. In the first part of this section, we introduce the method to generate a set of candidate relevant terms from the retrieved documents. In the second part, we introduce a method to select the most relevant terms to expand the original query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "In this stage, the top N retrieved documents from the first retrieval round are converted to a set of transactions which are used to mine frequent patterns using text mining methods such as FP-Tree method (Han, Pei and Yin 2000; Zou, Chu, Johnson and Chiu 2001; Agrawal and Srikant 1994) . Query terms are usually nouns. So, it is reasonable to only extract patterns of nouns rather than patterns of all words in the retrieved documents. Therefore, from the retrieved documents we first eliminate all nonnoun words based on POS tag information, then construct a collection of transactions each of which consists of the nouns in a sentence from a retrieved document. Thus, all sentences in the retrieved documents are included in the transaction collection if they contain nouns. A sentence will be excluded from the transaction collection if it does not contain nouns. For example, we extracted 18 unique nouns from the example in Figure 1 . The 18 nouns form a vector with size 18:",
"cite_spans": [
{
"start": 205,
"end": 228,
"text": "(Han, Pei and Yin 2000;",
"ref_id": "BIBREF2"
},
{
"start": 229,
"end": 261,
"text": "Zou, Chu, Johnson and Chiu 2001;",
"ref_id": "BIBREF8"
},
{
"start": 262,
"end": 287,
"text": "Agrawal and Srikant 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 931,
"end": 939,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "\u4fc4\u7f85\u65af (Russia), \u884c\u52d5 (Action), \u82f1\u570b (England), \u6f5b\u8247 (Submarine), \u9014\u4e2d (Way), 19\u65e5 (19th), \u665a\u4e0a (Evening), 7\u6642 (7PM), \u53f0\u5317 (TaiBei), \u6642\u9593 (Time), \u665a (Night), 23\u6642 (23PM), \u79d1\u65af (Kesi), \u865f (Name), \u6838\u5b50 (Nucleon), \u52d5\u529b (Power), \u73fe\u5834 (Scene), \u5de5\u4f5c (Work).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "Each transaction created from one sentence in a retrieved document is a vector of 1s and 0s; each element of the transaction corresponds to a noun, with value 1 indicating that the corresponding noun appears in the sentence and 0 otherwise. The number of transactions is the number of sentences in the retrieved top N documents. The size of each transaction is the number of unique nouns in the top N documents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
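The transaction construction described above can be sketched as follows. The (word, POS) pair representation and the "tag starts with N" noun test are simplifying assumptions; the paper uses the Academia Sinica tag set, and all names here are illustrative.

```python
def build_transactions(segmented_sentences):
    """Turn POS-tagged sentences into binary noun-occurrence transactions.

    segmented_sentences: list of sentences, each a list of (word, pos) pairs.
    Returns (noun_vocabulary, transactions), where each transaction is a
    0/1 vector over the noun vocabulary; sentences without nouns are dropped.
    """
    nouns = []
    seen = set()
    for sent in segmented_sentences:
        for word, pos in sent:
            # assumption: noun POS tags start with 'N'
            if pos.startswith("N") and word not in seen:
                seen.add(word)
                nouns.append(word)
    transactions = []
    for sent in segmented_sentences:
        sent_nouns = {w for w, pos in sent if pos.startswith("N")}
        if sent_nouns:  # a sentence with no nouns is excluded
            transactions.append([1 if n in sent_nouns else 0 for n in nouns])
    return nouns, transactions
```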
{
"text": "The collection generated from the example is shown in Table 1. Any pattern mining algorithm can be used to generate frequent patterns from the constructed collection described above. In this paper, we choose the popular FP-Tree algorithm to perform the pattern mining task. Let Q={q 1 , \u2026, q m } be the query containing m query terms, P be the set of mined frequent patterns, and Supp(p k ) be the support value of pattern p k . For a query term q i , the set of patterns which contain the query term is denoted as P(q i ). For a pattern p j and a query term q i , the set of patterns which contain both p j and q i is denoted as P(p j , q i ), and the set of patterns which contain p j is denoted as P(p j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
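As a stand-in for the FP-Tree algorithm named above, a brute-force Apriori-style sketch shows what the mining step computes: the support of each noun itemset over the binary transactions. This is not the paper's implementation, only an illustration of the output the rest of the method consumes.

```python
from itertools import combinations

def frequent_patterns(transactions, min_support, max_size=3):
    """Brute-force frequent itemset mining over binary transactions.

    Returns every itemset of noun indices (up to max_size items) whose
    support ratio is at least min_support, mapped to that support, i.e.
    an itemset -> Supp(p_k) dictionary.
    """
    n_trans = len(transactions)
    items = range(len(transactions[0]))
    patterns = {}
    for size in range(1, max_size + 1):
        for itemset in combinations(items, size):
            # count transactions containing every item of the itemset
            count = sum(1 for t in transactions if all(t[i] for i in itemset))
            support = count / n_trans
            if support >= min_support:
                patterns[itemset] = support
    return patterns
```

FP-growth produces the same itemset/support pairs without enumerating all candidate subsets, which is why the paper uses it on real collections.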
{
"text": "The following equation is proposed to calculate the relevancy between a query term q i and a pattern p j , which is denoted as R(q i , p j ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "Here |P(q i )|, |P(p j , q i )| and |P(p j )| are the numbers of frequent patterns in the corresponding sets. The first part in Equation 3 measures the ratio of the patterns which contain both the query term q i and the pattern p j to the patterns which contain q i ; the higher the ratio, the more relevant the pattern p j is to the query term q i . The second part in the equation is the average support value of the patterns containing both the query term q i and the pattern p j . A high average support indicates that the co-occurrence of the query term q i and the pattern p j is high, which in turn indicates that q i and p j are highly correlated with each other. Similarly, the third part is the average support value of the patterns containing only the pattern p j . A high average support of pattern p j means that p j is a popular pattern in the retrieved documents and may have a high relevance to the query term. Equation 3 is a linear combination of the three parts, with W 1 , W 2 and W 3 as coefficients which can be adjusted by users to weight the three parts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
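A sketch of Equation 3 as reconstructed from its verbal description (ratio of co-occurring patterns, plus two average-support terms, linearly combined). The set representation, the subset-based membership test, and the default weights are assumptions based on the surrounding text, not the paper's code.

```python
def relevancy(q_i, p_j, patterns, w1=0.8, w2=0.18, w3=0.02):
    """Reconstruction of Equation 3: R(q_i, p_j).

    patterns: dict mapping each frequent pattern (a frozenset of terms)
    to its support value Supp(p_k).
    """
    P_qi = [p for p in patterns if q_i in p]       # patterns containing q_i
    P_both = [p for p in P_qi if p_j <= p]         # containing q_i and p_j
    P_pj = [p for p in patterns if p_j <= p]       # patterns containing p_j
    # part 1: ratio of patterns with both q_i and p_j to patterns with q_i
    ratio = len(P_both) / len(P_qi) if P_qi else 0.0
    # part 2: average support of patterns containing both q_i and p_j
    avg_both = sum(patterns[p] for p in P_both) / len(P_both) if P_both else 0.0
    # part 3: average support of patterns containing p_j
    avg_pj = sum(patterns[p] for p in P_pj) / len(P_pj) if P_pj else 0.0
    return w1 * ratio + w2 * avg_both + w3 * avg_pj
```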
{
"text": "For the whole query, the relevancy value is calculated as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R_{p_j} = 100 \\times \\sum_{q_i \\in Q} R(q_i, p_j)",
"eq_num": "(4)"
}
],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "All these patterns are ranked by their relevancy values. Based on the relevancy value R(p j ), we select the top 3 patterns and use the words in the patterns to expand the original query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Expansion",
"sec_num": "4"
},
{
"text": "We select NTCIR-5 Chinese corpus as our dataset, which includes 434,882 documents in traditional Chinese. We built two indexing tables, one contains indexes of all Chinese characters appearing in the documents, and the other contains all Chinese words with POS information produced by the online segmentation program developed by the Institute of Information Science, Academia Sinica in Taiwan. Fifty queries are used and each query is a simple description which consists of one or several terms. We use the average precision (denoted as P in Table 2 ) and average recall (denoted as R) of the top 10, 15, 20, 30 and 100 retrieved documents to evaluate the performance of the proposed Chinese IR model with the text mining based query expansion (denoted as QE(TM)) by comparing with the character based model without query expansion (denoted as C) , the word-character based model without query expansion (denoted as W-C), and the proposed Chinese IR model with the popularly used standard query expansion method Rocchio (denoted as QE(R)). In the experiments, W1, W2, W3 are set to 0.8, 0.18, 0.02, respectively. Additionally, we set the support value as 0.1 for using FP-Tree method to mine frequent patterns. The experiment results are given in Table 2 . Table 2 shows the precision and recall of the four different retrieval approaches. From Table 2 , we can see that the performance is improved slightly from the character based model to the word-character based model: the precision is improved by 0.2% on average from 29.9% to 30.1% and the recall is improved by 0.1% on average from 33.4% to 33.5%. With the Rocchio standard query expansion, we achieved a little more improvement: the precision is improved by 2.4% on average from 30.1% to 32.5% and the recall is improved by 3.3% from 33.5% to 36.8%. However, with our text mining based query expansion method, we achieved much larger improvements: precision is improved by 6.3% on average from 30.1% to 36.4% and recall is improved by 8.2% from 33.5% to 41.7%.",
"cite_spans": [],
"ref_spans": [
{
"start": 542,
"end": 549,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1247,
"end": 1254,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1257,
"end": 1264,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 1345,
"end": 1352,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5"
},
{
"text": "In this paper, we proposed an approach to improving the performance of Chinese information retrieval. The approach includes two aspects: retrieval based on segmented words, and query expansion using text mining techniques. The experiment results show that Chinese word segmentation alone does not bring significant improvement to Chinese information retrieval. However, the proposed text mining based query expansion method can effectively improve the performance of Chinese IR, and the improvement is much greater than that achieved by the standard Rocchio query expansion method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Translation disambiguation in web-based translation extraction for English-Chinese CLIR",
"authors": [
{
"first": "Chengye",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Shlomo",
"middle": [],
"last": "Geva",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 ACM symposium on Applied computing",
"volume": "",
"issue": "",
"pages": "819--823",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chengye Lu, Yue Xu, Shlomo Geva. 2007. Translation disambiguation in web-based translation extraction for English-Chinese CLIR. Proceedings of the 2007 ACM symposium on Applied computing, 819-823.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Investigating the relationship between word segmentation performance and retrieval performance in Chinese IR",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Schuurmans",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Cercone",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 19th international conference on Computational linguistics",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Peng, X. Huang, D. Schuurmans, and N. Cercone. 2002. Investigating the relationship between word segmentation performance and retrieval perfor- mance in Chinese IR, Proceedings of the 19th inter- national conference on Computational linguistics, 1- 7.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Mining Frequent Patterns without Candidate Generation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Pei",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc of the 2000 ACM International Conference on Management of Data",
"volume": "",
"issue": "",
"pages": "3--12",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Han, J. Pei and Y. Yin. 2000. Mining Frequent Pat- terns without Candidate Generation. Proc of the 2000 ACM International Conference on Manage- ment of Data, Dallas, TX, 2000, 3-12.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach. Computational Linguistics, MIT",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "31",
"issue": "",
"pages": "531--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao and Mu Li. 2005. Chinese Word Segmenta- tion and Named Entity Recognition: A Pragmatic Approach. Computational Linguistics, MIT. 531- 574, Vol 31, Issue 4.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Comparing representations in Chinese information retrieval. Proc. Of the ACM SIGIR97",
"authors": [
{
"first": "K",
"middle": [
"L"
],
"last": "Kwok",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "34--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. L. Kwok. 1997. Comparing representations in Chi- nese information retrieval. Proc. Of the ACM SIGIR97, 34-41.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "TREC-8 ad-hoc, query and filtering track experiments using PIRCS",
"authors": [
{
"first": "K",
"middle": [
"L"
],
"last": "Kwok",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Grunfeld",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Chan",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwok, K.L., Grunfeld, L., Chan, K. 2000. TREC-8 ad-hoc, query and filtering track experiments using PIRCS, In TREC10.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Chinese Word Segmentation as Character Tagging. Computational Linguistics and Chinese Language Processing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "8",
"issue": "",
"pages": "29--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue. 2003. Chinese Word Segmentation as Character Tagging. Computational Linguistics and Chinese Language Processing, Vol 8, No 1, Pages 29-48.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Research on improvement of single Chinese character indexing method",
"authors": [
{
"first": "P",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 1999,
"venue": "Journal of the China Society for Scientific and Technical Information",
"volume": "18",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Li. 1999. Research on improvement of single Chinese character indexing method. Journal of the China So- ciety for Scientific and Technical Information, 18(5).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A Pattern Decomposition (PD) Algorithm for Finding all Frequent Patterns in Large Datasets",
"authors": [
{
"first": "Q",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Chu",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Chiu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc of the 2001 IEEE International Conference on Data Mining (ICDM01)",
"volume": "",
"issue": "",
"pages": "673--674",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Q. Zou, W. Chu, D. Johnson and H. Chiu. 2001. A Pat- tern Decomposition (PD) Algorithm for Finding all Frequent Patterns in Large Datasets. Proc of the 2001 IEEE International Conference on Data Min- ing (ICDM01), San Jose, California, 673-674.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Concept based query expansion",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Frei",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of SIGIR 1993",
"volume": "",
"issue": "",
"pages": "160--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiu Y. and Frei H. 1993. Concept based query expan- sion. In Proceedings of SIGIR 1993, pp. 160-169.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Fast algorithms for mining association rules",
"authors": [
{
"first": "R",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Srikant",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. Of the 1994 International Conference on Very Large Data Bases",
"volume": "",
"issue": "",
"pages": "487--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Agrawal and R. Srikant. 1994. Fast algorithms for mining association rules. Proc. Of the 1994 Interna- tional Conference on Very Large Data Bases, San- tiago, Chile, 487-499.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Mining dependency relations for query expansion in passage retrieval",
"authors": [
{
"first": "Renxu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Chai-Huat",
"middle": [],
"last": "Ong",
"suffix": ""
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval",
"volume": "",
"issue": "",
"pages": "382--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Renxu Sun, Chai-Huat Ong, Tat-Seng Chua. 2006. Min- ing dependency relations for query expansion in passage retrieval. Proceedings of the 29th annual in- ternational ACM SIGIR conference on Research and development in information retrieval, Pages: 382 - 389.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Corpus-Based Methods in Chinese Morphology and Phonology",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Chilin",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat and Chilin Shih. 2002. Corpus-Based Methods in Chinese Morphology and Phonology.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Relevance Weighting of Search Terms",
"authors": [
{
"first": "S",
"middle": [
"E"
],
"last": "Robertson",
"suffix": ""
},
{
"first": "K. Sparck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1976,
"venue": "Journal of the American Society for Information Science",
"volume": "27",
"issue": "3",
"pages": "129--146",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robertson, S.E. and K. Sparck Jones. 1976. Relevance Weighting of Search Terms. Journal of the American Society for Information Science, 27(3): 129-146.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Relevance feedback in information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rocchio",
"suffix": ""
}
],
"year": 1997,
"venue": "the Smart Retrieval System: Experiment in Automatic Document Processing",
"volume": "",
"issue": "",
"pages": "313--323",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rocchio, J. 1997. Relevance feedback in information retrieval. In the Smart Retrieval System: Experiment in Automatic Document Processing, Pages 313-323.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The retrieval effects of query expansion on a feedback document retrieval system",
"authors": [
{
"first": "A",
"middle": [
"F"
],
"last": "Smeaton",
"suffix": ""
},
{
"first": "C",
"middle": [
"J"
],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 1983,
"venue": "Computer Journal",
"volume": "26",
"issue": "3",
"pages": "239--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smeaton, A. F. and Van Rijsbergen, C. J. 1983. The re- trieval effects of query expansion on a feedback document retrieval system.Computer Journal, 26(3): 239-246.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Wei-Yun",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Keh-Jiann",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Yun Ma, Keh-Jiann Chen. 2002. Introduction to CKIP Chinese Word Segmentation System for the First International Chinese Word Segmentation Ba- keoff. Institute of Information science, Academia Sinica.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A Searchbased Chinese Word Segmentation Method",
"authors": [
{
"first": "Xinjing",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Qin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinjing Wang, Wen Liu and Yong Qin. 2006. A Search- based Chinese Word Segmentation Method. Proceed- ings of the 16th international conference on World Wide Web.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"text": "Chinese Word Segmentation Example",
"uris": null
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"text": "Figure 2 Precision",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td>sentence</td><td>Transaction</td></tr><tr><td>\u5728\u4fc4\u7f85\u65af\u7684\u6551\u63f4\u884c\u52d5\u5931</td><td>110000000000000000</td></tr><tr><td>\u6557\u4e4b\u5f8c</td><td/></tr><tr><td>\u4e00\u8258\u82f1\u570b\u8ff7\u4f60\u6f5b\u8247\u6b63\u7dca</td><td>001110000000000000</td></tr><tr><td>\u6025\u99b3\u63f4\u9014\u4e2d</td><td/></tr><tr><td>\u9810\u8a08 19 \u65e5\u665a\u4e0a 7 \u6642\u53f0\u5317</td><td>000101111111111110</td></tr><tr><td>\u6642\u9593 19 \u65e5\u665a 23 \u6642\u53ef\u62b5</td><td/></tr><tr><td>\u9054\u79d1\u65af\u514b\u865f\u6838\u5b50\u52d5\u529b\u6f5b</td><td/></tr><tr><td>\u8247\u7684\u6c89\u6c92\u73fe\u5834</td><td/></tr><tr><td>\u5c55\u958b\u6551\u63f4\u5de5\u4f5c</td><td>000000000000000001</td></tr><tr><td colspan=\"2\">Table 1. Example collection of transactions</td></tr></table>",
"num": null,
"type_str": "table",
"text": "",
"html": null
},
"TABREF2": {
"content": "<table/>",
"num": null,
"type_str": "table",
"text": "Precision and Recall",
"html": null
}
}
}
}