{
"paper_id": "D13-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:43:32.343259Z"
},
"title": "Improving Web Search Ranking by Incorporating Structured Annotation of Queries*",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ding",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Zhicheng",
"middle": [],
"last": "Dou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": {
"postCode": "100190",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Harbin Institute of Technology",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Renmin University of China",
"location": {
"settlement": "Beijing",
"country": "China"
}
},
"email": "jirong.wen@gmail.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Web users are increasingly looking for structured data, such as lyrics, jobs, or recipes, using unstructured queries on the web. However, retrieving relevant results from such data is a challenging problem due to the unstructured language of web queries. In this paper, we propose a method to improve web search ranking by detecting structured annotations of queries based on top search results. In a structured annotation, the original query is split into different units that are associated with semantic attributes in the corresponding domain. We evaluate our techniques using real-world queries and achieve significant improvements.",
"pdf_parse": {
"paper_id": "D13-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "Web users are increasingly looking for structured data, such as lyrics, jobs, or recipes, using unstructured queries on the web. However, retrieving relevant results from such data is a challenging problem due to the unstructured language of web queries. In this paper, we propose a method to improve web search ranking by detecting structured annotations of queries based on top search results. In a structured annotation, the original query is split into different units that are associated with semantic attributes in the corresponding domain. We evaluate our techniques using real-world queries and achieve significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Search engines are getting more sophisticated by utilizing information from multiple diverse sources. One such valuable source of information is structured and semi-structured data, which is not very difficult to access, owing to information extraction (Wong et al., 2009; Etzioni et al., 2008; Zhai and Liu 2006) and semantic web efforts.",
"cite_spans": [
{
"start": 253,
"end": 272,
"text": "(Wong et al., 2009;",
"ref_id": "BIBREF22"
},
{
"start": 273,
"end": 294,
"text": "Etzioni et al., 2008;",
"ref_id": "BIBREF8"
},
{
"start": 295,
"end": 313,
"text": "Zhai and Liu 2006)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "*Work was done while the first author was visiting Microsoft Research Asia. User needs are driving the evolution of web search. Users usually have a template in mind when formulating queries to search for information. Agarwal et al., (2010) surveyed a search log of 15 million queries from a commercial search engine. They found that 90% of queries follow certain templates. For example, by issuing the query \"taylor swift lyrics falling in love\", users are actually seeking the lyrics of the song \"Mary's Song (oh my my my)\" by the artist Taylor Swift. The words \"falling in love\" are actually part of the lyrics they are searching for. However, some top search results are irrelevant to the query, although they contain all the query terms. For example, the first top search result shown in Figure 1 (a) does not contain the required lyrics. It contains the lyrics of another song by Taylor Swift, rather than the song that users are seeking.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "Agarwal et al., (2010)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 798,
"end": 806,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A possible way to solve the above ranking problem is to understand the underlying query structure. For example, after recognizing that \"taylor swift\" is an artist name and \"falling in love\" is part of the lyrics, we can improve the ranking by comparing the structured query with the corresponding structured data in documents (shown in Figure 1(b) ). Some previous studies investigated how to extract structured information from user queries, such as query segmentation (Bergsma and Wang, 2007) . The task of query segmentation is to separate the query words into disjoint segments so that each segment maps to a semantic unit (Li et al., 2011) . For example, the segmentation of the query \"taylor swift lyrics falling in love\" can be \"taylor swift | lyrics | falling in love\". Since query segmentation cannot tell that \"taylor swift\" is an artist name and \"falling in love\" is part of the lyrics, it is still difficult for us to judge whether each segment of the query matches the right field of the documents or not (e.g., whether \"taylor swift\" matches the artist name in the document). Recently, a lot of work (Sarkas et al., 2010; Li et al., 2009) has proposed the task of structured annotation of queries, which aims to detect the structure of the query and assign a specific label to it. However, to our knowledge, the previous methods do not exploit an effective approach for improving web search ranking by incorporating structured annotation of queries.",
"cite_spans": [
{
"start": 471,
"end": 495,
"text": "(Bergsma and Wang, 2007)",
"ref_id": null
},
{
"start": 630,
"end": 647,
"text": "(Li et al., 2011)",
"ref_id": null
},
{
"start": 1130,
"end": 1151,
"text": "(Sarkas et al., 2010;",
"ref_id": null
},
{
"start": 1152,
"end": 1168,
"text": "Li et al., 2009)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 337,
"end": 348,
"text": "Figure 1(b)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate the possibility of using structured annotation of queries to improve web search ranking. Specifically, we propose a greedy algorithm which uses the structured data (named annotated tokens in Figure 1 (b)) extracted from the top search results to annotate the latent structured semantics in web queries. We then compute matching scores between the annotated query and the corresponding structured information contained in documents. The top search results can be re-ranked according to the matching scores. However, it is very difficult to extract structured data from all of the search results.",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 229,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Hence, we propose a relevance feedback based reranking model. We use these structured documents whose matching scores are greater than a threshold as feedback documents, to effectively re-rank other search results to bring more relevant and novel information to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments on a large web search dataset from a major commercial search engine show that the F-measure of the structured annotations generated by our approach is as high as 91%. On this dataset, our re-ranking model using the structured annotations significantly outperforms two baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The main contributions of our work include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1. We propose a novel approach to generate structured annotation of queries based on top search results. 2. Although structured annotation of queries has been studied previously, to the best of our knowledge this is the first paper that attempts to improve web search ranking by incorporating structured annotation of queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. We briefly introduce related work in Section 2. Section 3 presents our method for generating structured annotation of queries. We then propose two novel re-ranking models based on structured annotation in Section 4. Section 5 introduces the data used in this paper. We report experimental results in Section 6. Finally, we conclude the work in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There is a great deal of prior research that identifies query structured information. We summarize this research according to their different approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Recently, a lot of work has been done on understanding query structure (Sarkas et al., 2010; Li et al., 2009; Bendersky et al., 2010) . One important method is structured annotation of queries, which aims to detect the structure of the query and assign a specific label to it. Li et al., (2009) proposed web query tagging, whose goal is to assign to each query term a specified category, roughly corresponding to a list of attributes. A semi-supervised Conditional Random Field (CRF) is used to capture dependencies between query words and to identify the most likely joint assignment of words to \"categories.\" Compared with previous work, our approach has the following advantages. First, we generate structured annotation of queries based on top search results, not a global knowledge base or query logs. Second, previous studies mainly focus on methods for generating structured annotation of queries, rather than leveraging the generated query structures to improve web search ranking. In this paper, we not only offer a novel solution for generating structured annotation of queries, but also propose a re-ranking approach to improve web search based on structured annotation of queries. Bendersky et al., (2011) also used top search results to generate structured annotation of queries. However, the annotations in their definition are capitalization, POS tags, and segmentation indicators, which are different from ours.",
"cite_spans": [
{
"start": 71,
"end": 92,
"text": "(Sarkas et al., 2010;",
"ref_id": null
},
{
"start": 93,
"end": 109,
"text": "Li et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 110,
"end": 133,
"text": "Bendersky et al., 2010)",
"ref_id": "BIBREF2"
},
{
"start": 276,
"end": 293,
"text": "Li et al., (2009)",
"ref_id": "BIBREF9"
},
{
"start": 1204,
"end": 1228,
"text": "Bendersky et al., (2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Annotation of Queries",
"sec_num": "2.1"
},
{
"text": "The concept of query template has been discussed in a few recent papers (Agarwal et al., 2010; Pasca 2011; Liu et al., 2011; Szpektor et al., 2011) . A query template is a sequence of terms, where each term could be a word or an attribute. For example, <#artist_name lyrics #lyrics> is a query template, \"#artist_name\" and \"#lyrics\" are attributes, and \"lyrics\" is a word. Structured annotation of queries is different from query templates, as a query template can instantiate multiple queries while a structured annotation serves only a specific query. Unlike query template work, ours is ranking-oriented. We aim to automatically annotate query structure based on top search results, and further use these structured annotations to re-rank top search results for improving search performance.",
"cite_spans": [
{
"start": 72,
"end": 94,
"text": "(Agarwal et al., 2010;",
"ref_id": null
},
{
"start": 95,
"end": 106,
"text": "Pasca 2011;",
"ref_id": null
},
{
"start": 107,
"end": 124,
"text": "Liu et al., 2011;",
"ref_id": null
},
{
"start": 125,
"end": 147,
"text": "Szpektor et al., 2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Template Generation",
"sec_num": "2.2"
},
{
"text": "The task of query segmentation is to separate the query words into disjoint segments so that each segment maps to a semantic unit (Li et al., 2011) . Query segmentation techniques have been well studied in the recent literature (Tan and Peng, 2008; Yu and Shi, 2009) . However, structured annotation of queries not only separates the query words into disjoint segments but also assigns each segment a semantic label, which can help the search engine judge whether each part of the query segmentation matches the right field of the documents or not.",
"cite_spans": [
{
"start": 132,
"end": 149,
"text": "(Li et al., 2011)",
"ref_id": null
},
{
"start": 226,
"end": 246,
"text": "(Tan and Peng, 2008;",
"ref_id": null
},
{
"start": 247,
"end": 264,
"text": "Yu and Shi, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Segmentation",
"sec_num": "2.3"
},
{
"text": "The problem of entity search has received a great deal of attention in recent years (Guo et al., 2009; Bron et al., 2010; Cheng et al., 2007) . Its goal is to answer information needs that focus on entities. The problem of structured annotation of queries is related to entity search because for some queries, structured annotation items are entities or attributes. Some existing entity search approaches also exploit knowledge from the structure of webpages (Zhao et al., 2005) . Annotating query structured information differs from entity search in the following aspects. First, structured annotation based ranking is applicable to all queries, rather than just entity-related queries. Second, the result of an entity search is usually a list of entities, their attributes, and associated homepages, whereas our work uses the structured information from webpages to annotate query structured information and further leverages structured annotation of queries to re-rank top search results.",
"cite_spans": [
{
"start": 84,
"end": 102,
"text": "(Guo et al., 2009;",
"ref_id": "BIBREF9"
},
{
"start": 103,
"end": 121,
"text": "Bron et al., 2010;",
"ref_id": null
},
{
"start": 122,
"end": 141,
"text": "Cheng et al., 2007)",
"ref_id": null
},
{
"start": 459,
"end": 478,
"text": "(Zhao et al., 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity Search",
"sec_num": "2.4"
},
{
"text": "We start our discussion by defining some basic concepts. A token is a sequence of one or more words, possibly containing spaces. For example, the bigram \"taylor swift\" can be a single token. As our objective is to find structured annotation of queries in a specific domain, we begin with a definition of domain schema. Definition 1 (Domain Schema): For a given domain of interest, the domain schema is the set of attributes. We denote the domain schema as A = {a1, a2, \u22ef, an}, where each ai is the name of an attribute of the domain. Sample domain schemas are shown in Table 1 . In contrast to previous methods (Agarwal et al., 2010) , our definition of domain schema does not require attribute values. For the sake of simplicity, this paper assumes that the attributes in a domain schema are available; it is not difficult to pre-specify attributes for a specific domain. Definition 2 (Annotated Token): An annotated token in a specific domain is a pair [v, a] , where v is a token and a is a corresponding attribute for v in this domain. [hey jude, #song_name] is an example of an annotated token for the \"lyrics\" domain shown in Table 1 . The words \"hey jude\" comprise a token, and its corresponding attribute name is #song_name. If a token does not have any corresponding attribute, we call it a free token.",
"cite_spans": [
{
"start": 618,
"end": 640,
"text": "(Agarwal et al., 2010)",
"ref_id": null
},
{
"start": 961,
"end": 966,
"text": "[ , ]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 576,
"end": 583,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1137,
"end": 1144,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "A structured annotation p is a sequence of terms <s1, s2, \u22ef, sk>, where each si could be a free token or an annotated token, and at least one of the terms is an annotated token, i.e., \u2203 i \u2208 [1, k] for which si is an annotated token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 (Structured Annotation):",
"sec_num": null
},
{
"text": "Given the schema for the domain \"lyrics\", <[taylor swift, #artist_name] lyrics [falling in love, #lyrics]> is a possible structured annotation for the query \"taylor swift lyrics falling in love\". In this annotation, [taylor swift, #artist_name] and [falling in love, #lyrics] are two annotated tokens. The word \"lyrics\" is a free token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 (Structured Annotation):",
"sec_num": null
},
{
"text": "Intuitively, a structured annotation corresponds to an interpretation of the query as a request for some structured information from documents. The set of annotated tokens expresses the information need of the documents that have been requested. The free tokens may provide more diverse information. Annotated tokens and free tokens together cover all query terms, reflecting the complete user intent of the query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Definition 3 (Structured Annotation):",
"sec_num": null
},
{
"text": "In this paper, given a domain schema A, we generate structured annotation for a query q based on the top search results of q. We propose using top search results, rather than some global knowledge base or query logs, because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "(1) Top search results have been proven to be a successful technique for query explanation (Bendersky et al., 2010) .",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "(Bendersky et al., 2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "(2) We have observed that in most cases, a reasonable percentage of the top search results are relevant to the query. By aggregating structured information from the top search results, we can obtain more query-dependent annotated tokens than by using global data sources, which may contain more noise and outdated information.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "(3) Our goal for generating structured annotation is to improve the ranking quality of queries. Using top search results enables simultaneous and consistent detection of structured information from documents and queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "As mentioned in Section 3.1, we generate structured annotation of queries based on annotated tokens, which are actually structured data (shown in Figure 1 (b)) embedded in web documents. In this paper, we assume that the annotated tokens are Algorithm 1: Query Structured Annotation Generation Input: a list of weighted annotated tokens T = {t1, \u2026 , tm} ; a query q = \"w1, \u2026 , wn\" where wi \u2208 W; a pre-defined threshold score \u03b8. Output: a query structured annotation p = <s1, \u2026 , sk>.",
"cite_spans": [],
"ref_spans": [
{
"start": 146,
"end": 154,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "1: Set p = q = <s1, \u2026, sn>, where si = wi 2: for u = 1 to T.size do 3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "compute match(q, tu) = match(q, tu.v) = tu.w \u00d7 max_{0\u2264i<j\u2264n} sim(pij, tu.v), where pij = si,\u2026,sj, s.t. sl \u2208 W for l \u2208 [i, j]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": ". //pij covers only the remaining (not yet annotated) query words 4: end for 5: find the maximum matching tu with tmax = argmax_{1\u2264u\u2264m} match(q, tu) 6: if match(q, tmax) > \u03b8 then 7: replace si,\u2026,sj in p with [si,\u2026,sj, tmax.a] 8: remove tmax from T 9: n \u2190 n - (j - i) 10: go to step 2 11: else 12: return p 13: end if available and we mainly focus on how to use these annotated tokens from top search results to generate structured annotation of queries. The approach comprises two parts: one weights annotated tokens, and the other generates structured annotation of queries based on the weighted annotated tokens. Weighting: As shown in Figure 1 , annotated tokens extracted from top results may be inconsistent, and hence some of the extracted annotated tokens are less useful or even useless for generating structured annotation.",
"cite_spans": [],
"ref_spans": [
{
"start": 601,
"end": 609,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "We assume that a better annotated token is supported by more top results, while a worse annotated token appears in fewer results. Hence we aggregate all the annotated tokens extracted from the top search results, and evaluate the importance of each unique one with a ranking-aware voting model as follows. For an annotated token [v, a] , its weight w is defined as:",
"cite_spans": [
{
"start": 333,
"end": 339,
"text": "[v, a]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "w = (1/N) \u2211_{1\u2264j\u2264N} wj",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "(1) where wj is the vote from document dj, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "wj = { N \u2212 j + 1, if [v, a] \u2208 dj; 0, else",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "Here, N is the number of top search results and j is the ranking position of document dj. We then generate a weighted annotated token [v, a, w] for each original unique token [v, a] . Generating: The process by which we map a query q to a structured annotation is shown in Algorithm 1. The algorithm takes as input a list of weighted annotated tokens and the query q, and outputs the structured annotation of q. It first partitions the query q by comparing each sub-sequence of the query with all the weighted annotated tokens, and finds the maximum matching annotated token (line 1 to line 5). Then, if the degree of match is greater than \u03b8, the pre-defined threshold for fuzzy string matching, the query substring is assigned the attribute label of the maximum matching annotated token (line 6 to line 8). The algorithm stops when all the weighted annotated tokens have been scanned, and outputs the structured annotation of the query.",
"cite_spans": [
{
"start": 134,
"end": 143,
"text": "[v, a, w]",
"ref_id": null
},
{
"start": 175,
"end": 181,
"text": "[v, a]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "Note that in some cases, the query may fail to exactly match with the annotated tokens, due to spelling errors, acronyms or abbreviations in users' queries. For example, in the query \"broken and beatuful lyrics\", \"broken and beatuful\" is a misspelling of \"broken and beautiful.\" We adopt a fuzzy string matching function for comparing a sub-sequence string s with a token v:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "sim(s, v) = 1 \u2212 EditDistance(s, v) / max(|s|, |v|) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "where EditDistance (s, v) measures the edit distance between the two strings, |s| is the length of string s, and |v| is the length of string v.",
"cite_spans": [
{
"start": 19,
"end": 25,
"text": "(s, v)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating Structured Annotation",
"sec_num": "3.2"
},
{
"text": "Given a domain schema A = {a1, a2, \u22ef, an} and a query q, suppose that p = <s1, s2, \u22ef, sk> is the structured annotation for query q obtained using the method introduced in the above sections. p can better reflect the user's real search intent than the original q, as it presents the structured semantic information needed instead of a simple word string. Therefore, a document di can better satisfy a user's information need if it contains the structured semantic information corresponding to p. Supposing that Ti is the set of annotated tokens extracted from document di, we compute a re-ranking score, denoted by RScore, for document di as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking with Structured Annotation",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "RScore(q, di) = Match(p, di) = Match(p, Ti) = \u2211_{1\u2264j\u2264k} \u2211_{t\u2208Ti} match(sj, t), where match(sj, t) = { sim(sj.v, t.v), if sj.a = t.a; 0, else",
"eq_num": "(3)"
}
],
"section": "Ranking with Structured Annotation",
"sec_num": "4"
},
{
"text": "where sj is an annotated token in p and t is an annotated token in di. We use Equation (2) to compute the similarity between values in query annotated tokens and values in document annotated tokens. We propose two re-ranking models to re-rank top results based on RScore: the conservative re-ranking model and the relevance feedback based re-ranking model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ranking with Structured Annotation",
"sec_num": "4"
},
{
"text": "A natural way to re-rank top search results is according to their RScore. However, annotated tokens cannot be obtained from some retrieved documents, and hence the RScore of these documents is not available. In the conservative re-ranking model, we only re-rank search results that have an RScore. For example, suppose there are five retrieved documents {d1, d2, d3, d4, d5} for query q; we can extract structured information from documents d3 and d4, and RScore(q, d4) > RScore (q, d3) . Note that we cannot obtain structured information from d1, d2, and d5. In the conservative re-ranking method, d1, d2, and d5 retain their original positions, while d3 and d4 are re-ranked according to their RScore. Therefore, the final ranking generated by our conservative re-ranking model is {d1, d2, d4, d3, d5}, in which the documents are re-ranked among the affected positions. There is also useful information in the documents without structured data, such as community question answering websites. However, in the conservative re-ranking model they will not be re-ranked, which may hurt performance. One reasonable solution is a relevance feedback model.",
"cite_spans": [
{
"start": 476,
"end": 483,
"text": "(q, d3)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conservative Re-ranking Model",
"sec_num": "4.1"
},
{
"text": "The disadvantage of the conservative re-ranking model is that it can only re-rank those top search results with structured data. To make up for this limitation, we propose a relevance feedback based re-ranking model. The key idea of this model is based on the observation that search results with correct annotated tokens can give implicit feedback information. Hence, we use the structured documents whose RScore is greater than a threshold \u03b3 (empirically set to 0.6) as feedback documents, to effectively re-rank the other search results and bring more relevant and novel information to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Formally, given a query Q and a document collection C, a retrieval system returns a ranked list of documents D. Let di denote the i-th ranked document in the ranked list. Our goal is to study how to use these feedback documents, J \u2286 {d1,\u2026, dk}, to effectively re-rank the other r search results: U \u2286 {dk+1,\u2026, dk+r}. A general formula of relevance feedback model (Salton et al, 1990 ) R is as follows:",
"cite_spans": [
{
"start": 362,
"end": 381,
"text": "(Salton et al, 1990",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "R(Q\u2032) = (1 \u2212 \u03b1) MQ(Q) + \u03b1 MJ(J) (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "where \u03b1 \u2208 [0, 1] is the feedback coefficient, and MQ and MJ are two models that map a query and a set of relevant documents, respectively, into some comparable representations. For example, they can be represented as vectors of weighted terms or language models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "In this paper, we explore the problem in the language model framework, particularly the KL-divergence retrieval model and the mixture-model feedback method, mainly because language models deliver state-of-the-art retrieval performance and mixture-model based feedback is one of the most effective feedback techniques, outperforming Rocchio feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "The KL-divergence retrieval model was introduced as a special case of the risk minimization retrieval framework and can support feedback more naturally. In this model, queries and documents are represented by unigram language models. Assuming that these language models can be appropriately estimated, the KL-divergence retrieval model measures the relevance of a document D with respect to a query Q by computing the negative Kullback-Leibler divergence between the query language model and the document language model as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
{
"text": "Score(Q, D) = \u2212D(\u03b8Q || \u03b8D) = \u2212\u2211_{w\u2208V} p(w|\u03b8Q) log( p(w|\u03b8Q) / p(w|\u03b8D) ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
{
"text": "where V is the set of words in our vocabulary. Intuitively, the retrieval performance of the KLdivergence relies on the estimation of the document model and the query model . For the set of k relevant documents, the document model is estimated as (w| )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
{
"text": "= 1 \u2211 ( , ) | | =1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
{
"text": ", where ( , ) is the count of word w in the i-th relevant document, and | | is the total number of words in that document. The document model needs to be smoothed and an effective method is Dirichlet smoothing ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
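A minimal sketch of scoring with Equation (5), using Dirichlet-smoothed document models, follows. The vocabulary, the mu value, and the toy texts are illustrative assumptions, not values from the paper.

```python
import math
from collections import Counter

# Sketch of the KL-divergence score (5):
# S(Q, D) = -sum_{w in V} p(w|theta_Q) * log(p(w|theta_Q) / p(w|theta_D)),
# with Dirichlet smoothing p(w|theta_D) = (c(w, D) + mu * p(w|C)) / (|D| + mu).
# All concrete values below are illustrative.

def dirichlet_doc_model(doc_tokens, p_collection, mu):
    counts = Counter(doc_tokens)
    n = len(doc_tokens)
    return lambda w: (counts[w] + mu * p_collection(w)) / (n + mu)

def kl_score(p_query, p_doc, vocab):
    return -sum(p_query(w) * math.log(p_query(w) / p_doc(w))
                for w in vocab if p_query(w) > 0)

vocab = ["chicken", "recipe", "easy", "soup"]
p_collection = lambda w: 1.0 / len(vocab)  # uniform background, for simplicity
p_query = lambda w: {"chicken": 0.5, "recipe": 0.5}.get(w, 0.0)

relevant = dirichlet_doc_model(["chicken", "soup", "recipe"], p_collection, mu=10.0)
off_topic = dirichlet_doc_model(["easy", "easy", "easy"], p_collection, mu=10.0)
# The document sharing query terms receives the higher (less negative) score.
```

Since the score is the negative KL divergence between two proper distributions, it is at most 0, with 0 reached only when the two models coincide on the query's support.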
{
"text": "The query model intuitively captures what the user is interested in, and thus would affect retrieval performance. With feedback documents, is estimated by the mixture-model feedback method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KL-Divergence Retrieval Model",
"sec_num": "4.2.1"
},
{
"text": "As the problem definition in Equation (4), the query model can be estimated by the original query where c(w,Q) is the count of word w in the query Q, and |Q| is the total number of words in the query) and the feedback document model. proposed a mixture model feedback method to estimate the feedback document model. More specifically, the model assumes that the feedback documents can be generated by a background language model ( | ) estimated using the whole collection and an unknown topic language model to be estimated. Formally, let F \u2282 C be a set of feedback documents. In this paper, F is comprised of documents that RScore are greater than\u03b3. The log-likelihood function of the mixture model is:",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 110,
"text": "where c(w,Q)",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
{
"text": "model ( | ) = ( , ) | | (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
{
"text": "( | ) = \u2211 \u2211 ( , ) \u2208 log [(1 \u2212 ) ( | ) + ( | )] \u2208 (6) where \u2208 [0,1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
{
"text": "is a mixture noise parameter which controls the weight of the background model. Given a fixed , a standard EM algorithm can then be used to estimate ( | ), which is then interpolated with the original query model ( |Q) to obtain an improved estimation of the query model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
{
"text": "( | ) = (1 \u2212 ) ( | ) + ( | ) (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
{
"text": "where is the feedback coefficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Mixture Model Feedback Method",
"sec_num": "4.2.2"
},
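The EM estimation of the topic model p(w|theta_F) in Equation (6) can be sketched as follows; the toy feedback documents, the background model, and the lambda value are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

# EM sketch for the mixture feedback model of Eq. (6): each word occurrence in
# F is generated by the background model p(w|C) with probability lambda_, and
# by the topic model p(w|theta_F) otherwise. Values below are illustrative.

def mixture_feedback(feedback_docs, p_background, lambda_, iters=30):
    counts = Counter(w for doc in feedback_docs for w in doc)
    total = sum(counts.values())
    p_f = {w: c / total for w, c in counts.items()}  # empirical start
    for _ in range(iters):
        # E-step: probability each occurrence of w came from the topic model.
        t = {w: (1 - lambda_) * p_f[w] /
                ((1 - lambda_) * p_f[w] + lambda_ * p_background(w))
             for w in counts}
        # M-step: re-normalize the expected topic counts.
        z = sum(counts[w] * t[w] for w in counts)
        p_f = {w: counts[w] * t[w] / z for w in counts}
    return p_f

docs = [["the", "adele", "lyrics", "the"], ["adele", "hello", "the"]]
p_background = lambda w: 0.7 if w == "the" else 0.1  # favors stopwords
p_topic = mixture_feedback(docs, p_background, lambda_=0.5)
# EM shifts probability mass away from background-dominated words like "the".
```

This is the intended behavior of the mixture: common words are explained by the background model, so the learned topic model concentrates on the discriminative feedback terms.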
{
"text": "We used a dataset composed of 12,396 queries randomly sampled from query logs of a search engine. For each query, we retrieved its top 100 results from a commercial search engine. The documents were judged by human editors. A fivegrade (from 0 to 4 meaning from bad to perfect) relevance rating was assigned for each document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5"
},
{
"text": "We used a proprietary query domain classifier to identify queries in three domains, namely \"lyrics,\" \"recipe,\" and \"job,\" from the dataset. The statistics about these domains are shown in Table 2 . To investigate how many queries may potentially have structured annotations, we manually created structured annotations for these queries. The last column of Table 2 shows the percentage of queries that have structured annotations created by annotators. We found that for each domain, there was on average more than 90% of queries identified by us that had a certain structured annotation. This indicates that a large percentage of these queries contain structured information, as we expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 2",
"ref_id": null
},
{
"start": 356,
"end": 363,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "5"
},
{
"text": "In this section, we present the structured annotation of queries and further re-rank the top search results for the three domains introduced in Section 5. We used the ranking returned by a commercial search engine as our one of the Baselines. Note that as the baseline already uses a large number of ranking signals, it is very difficult to improve it any further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "We evaluate the ranking quality using the widely used Normalized Discounted Cumulative Gain measure (NDCG) (Javelin and Kekalainen., 2000). We use the same configuration for NDCG as (Burges et al. 2005) . More specifically, for a given query q, the NDCG@K is computed as:",
"cite_spans": [
{
"start": 182,
"end": 202,
"text": "(Burges et al. 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= 1 \u2211 (2 ( ) \u22121) =1 log (1 + )",
"eq_num": "(4)"
}
],
"section": "Experimental Results",
"sec_num": "6"
},
{
"text": "Mq is a normalization constant (the ideal NDCG) so that a perfect ordering would obtain an NDCG of 1; and r(j) is the rating score of the j-th document in the ranking list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "6"
},
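The NDCG@K computation described above can be sketched directly. Log base 2 is assumed here, as is conventional, since the base is not stated in this excerpt; the example ratings are illustrative.

```python
import math

# NDCG@K = (1 / M_q) * sum_{j=1..K} (2^r(j) - 1) / log2(1 + j),
# where M_q is the DCG of the ideal (rating-descending) ordering.

def dcg_at_k(ratings, k):
    return sum((2 ** r - 1) / math.log2(1 + j)
               for j, r in enumerate(ratings[:k], start=1))

def ndcg_at_k(ratings, k):
    ideal = dcg_at_k(sorted(ratings, reverse=True), k)
    return dcg_at_k(ratings, k) / ideal if ideal > 0 else 0.0

# Ratings use the paper's five-grade 0-4 scale.
perfect = ndcg_at_k([4, 3, 2, 0], 3)   # ideal ordering -> 1.0
swapped = ndcg_at_k([4, 2, 3, 0], 3)   # one swap -> slightly below 1.0
```

The gain term 2^r(j) - 1 rewards highly rated documents sharply, while the log discount makes positions near the top of the list dominate the score.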
{
"text": "We generated the structured annotation of queries based on the top 10 search results and used = 0.04 for Algorithm 1. We used several existing metrics, P (Precision), R (Recall), and F-Measure to evaluate the quality of the structured annotation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quality of Structured Annotation of Queries",
"sec_num": "6.1.1"
},
{
"text": "As a query structured annotation may contain more than one annotated token, we concluded that the annotation was correct only if the entire annotation was completely the same as the annotation labeled by annotators. Otherwise we treated the structured annotation as incorrect. Experimental results for the three domains are shown in Table 3 . We compare our approach with Xiao Li, (2010) (denoted as baseline), on the dataset described in Section 5. They labeled the semantic structure of noun phrase queries based on semi-Markov CRFs. Our approach achieves better performance than the baseline (about 5.5% significant improvement on F-Measure). This indicates that the approach of generating structured annotation based on the top search results is more effective. With the highquality structured annotation of queries in hand, it may be possible to obtain better ranking results using our proposed re-ranking models.",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 340,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Quality of Structured Annotation of Queries",
"sec_num": "6.1.1"
},
{
"text": "We used the models introduced in Section 4 to rerank the top 10 search results, based on structured annotation of queries and annotated tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-ranking Result",
"sec_num": "6.1.2"
},
{
"text": "Recall that our goal is to quantify the effectiveness of structured annotation of queries for real web search. One dimension is to compare with the original search results of a commercial search engine (denoted as Ori-Ranker). The other is to compare with the query segmentation based re-ranking model (denoted as Seg-Ranker; Li et al., 2011) which tries to improve web search ranking by incorporating query segmentation. Li et al., (2011) incorporated query segmentation in the BM25, unigram language model and bigram language model retrieval framework, and bigram language model achieved the best performance. In this paper, Seg-Ranker integrates bigram language model with query segmentation.",
"cite_spans": [
{
"start": 326,
"end": 342,
"text": "Li et al., 2011)",
"ref_id": null
},
{
"start": 422,
"end": 439,
"text": "Li et al., (2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Re-ranking Result",
"sec_num": "6.1.2"
},
{
"text": "The ranking results of these models are shown in Figure 2 . This figure shows that all our two rankers significantly outperform the Ori-Rankerthe original search results of a commercial search engine. This means that using high-quality structured annotation does help better understanding of user intent. By comparing these structured annotations and the annotated tokens in documents, we can re-rank the more relevant results higher and yield better ranking quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 49,
"end": 57,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-ranking Result",
"sec_num": "6.1.2"
},
{
"text": "Figure 2 also suggests that structured annotation based re-ranking models outperform query segmentation based re-ranking model. This is mainly because structured annotation can not only separate the query words into disjoint segments but can also assign each segment a semantic label. Taking full advantage of the semantic label can lead to better ranking performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Re-ranking Result",
"sec_num": "6.1.2"
},
{
"text": "Furthermore, Figure 2 shows that FB-Ranker outperforms Con-Ranker. The main reason is that in Con-Ranker, we can only reasonably re-rank the search results with structured data. However, in FB-Ranker we can not only re-rank the structured search results but also can re-rank other documents by incorporating implicit information from those structured documents. On average, FB-Ranker achieves the best ranking performance. Table 4 shows more detailed Figure 3 . Quality of re-ranking and quality of query structured annotation with different number of search results results for the three selected domains. This table shows that FB-Ranker consistently outperforms the two baseline rankers on these domains. In the remaining part of this paper, we will only report the results for this ranker, due to space limitations. Table 4 also indicates that we can get robust ranking improvement in different domains, and we will consider applying it to more domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 2",
"ref_id": null
},
{
"start": 423,
"end": 430,
"text": "Table 4",
"ref_id": "TABREF3"
},
{
"start": 819,
"end": 826,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Re-ranking Result",
"sec_num": "6.1.2"
},
{
"text": "As introduced in Algorithm 1, we pre-defined a threshold \u03b4 for fuzzy string matching. We evaluated the quality of re-ranking and query structured annotation with different settings for \u03b4. The results are shown in Figure 3 . We found that:",
"cite_spans": [],
"ref_spans": [
{
"start": 213,
"end": 221,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with Different Thresholds of Query Structured Annotation Algorithm",
"sec_num": "6.2"
},
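Algorithm 1 itself is not reproduced in this excerpt, so the following is only a plausible sketch of delta-thresholded fuzzy string matching between a query segment and an annotated token, using normalized edit-distance similarity; the similarity function is an assumption, not necessarily the paper's.

```python
# Hypothetical delta-thresholded fuzzy matching between a query segment and an
# annotated token. The normalized edit-distance similarity is an assumption.

def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance, computed row by row.
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[-1]

def fuzzy_match(segment, token, delta):
    """Accept the token if its normalized similarity to the segment is >= delta."""
    sim = 1 - edit_distance(segment, token) / max(len(segment), len(token), 1)
    return sim >= delta
```

With delta = 0 every candidate is accepted; larger delta values reject progressively looser matches, trading recall for precision.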
{
"text": "(1) When we use \u03b4 = 0, which means that the structured annotations can be generated no matter how small the similarity between the query string and a weighted annotated token is, we can get a significant NDCG@3 gain of 2.15%. Figure 3(b) shows that the precision of the structured annotation is lowest when \u03b4 = 0 . However, the precision is still as high as 0.7375, and the highest recall is obtained in this case. This means that the quality of the generated structured annotations is still reasonable, and hence we can get a ranking improvement when \u03b4 = 0, as shown in Figure 3(a) .",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 237,
"text": "Figure 3(b)",
"ref_id": null
},
{
"start": 571,
"end": 582,
"text": "Figure 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with Different Thresholds of Query Structured Annotation Algorithm",
"sec_num": "6.2"
},
{
"text": "(2) Figure 3(a) suggests that the quality of reranking increases when the threshold \u03b4 increases from 0 to 0.05. It then decreases when \u03b4 increases from 0.06 to 0.5. Comparing these two figures shows that the trend of re-ranking performance adheres to the quality of the structured annotation. The settings for \u03b4 dramatically affect the recall and precision of the structured annotation; and hence the ranking quality is impacted. The larger \u03b4 is, the lower the recall of the structured annotation is.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 15,
"text": "Figure 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with Different Thresholds of Query Structured Annotation Algorithm",
"sec_num": "6.2"
},
{
"text": "(3) Since the re-ranking performance dramatically changes along with the quality of the structured annotation, we conducted a re-ranking experiment with perfect structured annotations (F-Measure equal to 1.0). Perfect structured annotations mean the annotations created by annotators as introduced in Section 5. The results are shown in the last bar of Figure 3(a) . We did not find a large space for ranking improvement. The NDCG@3 when using perfect structured annotations was 0.606, which is just slightly better than our best result (yield when \u03b4=0.05). It indicates that our structured annotation generation algorithm is already quite effective.",
"cite_spans": [],
"ref_spans": [
{
"start": 353,
"end": 364,
"text": "Figure 3(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with Different Thresholds of Query Structured Annotation Algorithm",
"sec_num": "6.2"
},
{
"text": "(4) Figure 3 (a) shows that our approach outperforms the two baseline approaches with most settings for \u03b4. This indicates that our approach is relatively stable with different settings for \u03b4.",
"cite_spans": [],
"ref_spans": [
{
"start": 4,
"end": 12,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment with Different Thresholds of Query Structured Annotation Algorithm",
"sec_num": "6.2"
},
{
"text": "The above experiments are conducted based on the top 10 search results. In this section, by adjusting the number of top search results, ranging from 2 to 100, we investigate whether the quality of structured annotation of queries and the performance of re-ranking are affected by the quantity of search results. The results shown in Figure 4 indicate that the number of search results does affect the quality of structured annotation of queries and the performance of re-ranking. Structured annotations of queries become better when more search results are used from 2 to 20. This is because more search results cover more websites in our domain list, and hence can generate more annotated tokens. More results also provide more evidence for voting the importance of 2 3 4 5 6 7 8 9 10 20 30 40 50 60 70 80 90 100NDCG@3",
"cite_spans": [],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": null
},
{
"text": "Ori-Ranker FB-Ranker (a) Quality of re-ranking (b) Quality of query structured annotation Figure 4 . Quality of re-ranking and quality of query structured annotation with different number of search results annotated tokens, and hence can improve the quality of structured annotation of queries.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Seg-Ranker",
"sec_num": null
},
{
"text": "In addition, we also found that structured annotation of queries become worse when too many lower ranked results are used (e.g, using results ranked lower than 20). This is because the lower ranked results are less relevant than the higher ranked results. They may contain more irrelevant or noisy annotated tokens than higher ranked documents; and hence using them may harm the precision of the structured annotations. Figure 4 also indicates that the quality of ranking and the accuracy of structured annotations are correlated.",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 428,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Seg-Ranker",
"sec_num": null
},
{
"text": "In this paper, we studied the problem of improving web search ranking by incorporating structured annotation of queries. We proposed a systematic solution, first to generate structured annotation of queries based on top search results, and then launching two structured annotation based reranking models. We performed a large-scale evaluation over 12,396 queries from a major search engine. The experiment results show that the F-Measure of query structured annotation generated by our approach is as high as 91%. In the same dataset, our structured annotation based re-ranking model significantly outperforms the original ranker -the ranking of a major search engine, with improvements 5.2%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
}
],
"back_matter": [
{
"text": "This work was supported by National Natural Science Foundation of China (NSFC) via grant 61273321, 61133012 and the Nation-al 863 Leading Technology Research Project via grant 2012AA011102.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Towards rich query interpretation: walking back and forth for mining query templates",
"authors": [
{
"first": "G",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Kabra",
"suffix": ""
},
{
"first": "K. C.-C",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of WWW '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Agarwal, G. Kabra, and K. C.-C. Chang. Towards rich query interpretation: walking back and forth for mining query templates. In Proc. of WWW '10.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Joint Annotation of Search Queries",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bendersky",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2011,
"venue": "Proc. of ACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bendersky, W. Bruce Croft and D. A. Smith. Joint Annotation of Search Queries, In Proc. of ACL-HLT 2011.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Structural Annotation of Search Queries Using Pseudo-Relevance Feedback",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bendersky",
"suffix": ""
},
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Croft",
"suffix": ""
},
{
"first": "D",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. Of CIKM",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bendersky, W. Bruce Croft and D. A. Smith. Structural Annotation of Search Queries Using Pseudo-Relevance Feedback, In Proc. Of CIKM 2010.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning noun phrase query segmentation",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Q",
"middle": [
"I"
],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of EMNLP-CoNLL'07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bergsma and Q. I. Wang. Learning noun phrase query segmentation. In Proceedings of EMNLP- CoNLL'07.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ranking related entities: components and analyses",
"authors": [
{
"first": "M",
"middle": [],
"last": "Bron",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Balog",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "De Rijke",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of CIKM '10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Bron, K. Balog, and M. de Rijke. Ranking related entities: components and analyses. In Proc. of CIKM '10.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automatic query expansion using SMART. InProc. of TREC-3",
"authors": [
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "69--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Buckley. Automatic query expansion using SMART. InProc. of TREC-3, pages 69-80, 1995.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Learning to rank usinggradient descent",
"authors": [
{
"first": "C",
"middle": [],
"last": "Burges",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Shaked",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Renshaw",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Lazier",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Deeds",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Hamilton",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hullender",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of ICML '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank usinggradient descent. In Proceedings of ICML '05.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Supporting entity search: a large-scale prototype search engine",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "K. C.-C",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of SIGMOD '07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Cheng, X. Yan, and K. C.-C. Chang. Supporting entity search: a large-scale prototype search engine. In Proc. of SIGMOD '07.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Open Information Extraction from the Web",
"authors": [
{
"first": "O",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Banko",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "D",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2008,
"venue": "Communications of the ACM",
"volume": "51",
"issue": "12",
"pages": "68--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "O. Etzioni, M. Banko, S. Soderland, and D.S. Weld, (2008). Open Information Extraction from the Web, Communications of the ACM, 51(12): 68-74.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Named entity recognition in query",
"authors": [
{
"first": "J",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. Of SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Guo, G. Xu, X. Cheng, and H. Li. Named entity recognition in query. In Proc. Of SIGIR' 2009.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Ir evaluation methods for retrieving highly relevant documents",
"authors": [
{
"first": "K",
"middle": [],
"last": "Jarvelin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kekalainen",
"suffix": ""
}
],
"year": null,
"venue": "SIGIR '00",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Jarvelin and J. Kekalainen. Ir evaluation methods for retrieving highly relevant documents. In SIGIR '00.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Document language models, query models, and risk minimization for information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR'01",
"volume": "",
"issue": "",
"pages": "111--119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty and C. Zhai, Document language models, query models, and risk minimization for information retrieval, In Proceedings of SIGIR'01, pages 111-119, 2001.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Relevance based language models",
"authors": [
{
"first": "V",
"middle": [],
"last": "Lavrenko",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of SIGIR",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Lavrenko and W. B. Croft. Relevance based language models. In Proc. of SIGIR, pages 120-127, 2001.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised Query Segmentation Using Clickthrough for Information Retrieval",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bjp",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "C",
"middle": [
"X"
],
"last": "Zhai",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of SIGIR'11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Li, BJP. Hsu, CX. Zhai and K. Wang. Unsupervised Query Segmentation Using Clickthrough for Information Retrieval. In Proc. of SIGIR'11.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Extracting structured information from user queries with semi-supervised conditional random fields",
"authors": [
{
"first": "X",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Y.-Y.",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of SIGIR'09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Li, Y.-Y. Wang, and A. Acero. Extracting structured information from user queries with semi-supervised conditional random fields. In Proc. of SIGIR'09.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Unsupervised Transactional Query Classification Based on Webpage Form Understanding",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "J-T",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of CIKM '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Liu, X. Ni, J-T. Sun, Z. Chen. Unsupervised Transactional Query Classification Based on Webpage Form Understanding. In Proc. of CIKM '11.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Automatic query type identification based on click-through information",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ru",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2006,
"venue": "LNCS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Liu, M. Zhang, L. Ru, and S. Ma. Automatic query type identification based on click-through information. In LNCS, 2006.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Asking What No One Has Asked Before: Using Phrase Similarities To Generate Synthetic Web Search Queries",
"authors": [
{
"first": "M",
"middle": [],
"last": "Pasca",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of CIKM '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Pasca. Asking What No One Has Asked Before: Using Phrase Similarities To Generate Synthetic Web Search Queries. In Proc. of CIKM '11.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving retrieval performance by relevance feedback",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckley",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "4",
"pages": "288--297",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton and C. Buckley. Improving retrieval performance by relevance feedback. Journal of the American Society for Information Science, 41(4):288-297, 1990.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Structured annotations of web queries",
"authors": [
{
"first": "N",
"middle": [],
"last": "Sarkas",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Paparizos",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Tsaparas",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of SIGMOD'10",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "N. Sarkas, S. Paparizos, and P. Tsaparas. Structured annotations of web queries. In Proc. of SIGMOD'10.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Improving recommendation for long-tail queries via templates",
"authors": [
{
"first": "I",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gionis",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Maarek",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of WWW '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Szpektor, A. Gionis, and Y. Maarek. Improving recommendation for long-tail queries via templates. In Proc. of WWW '11",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Unsupervised query segmentation using generative language models and wikipedia",
"authors": [
{
"first": "B",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Peng",
"suffix": ""
}
],
"year": null,
"venue": "WWW'08",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Tan and F. Peng. Unsupervised query segmentation using generative language models and wikipedia. In WWW'08.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mining employment market via text block detection and adaptive cross-domain information extraction",
"authors": [
{
"first": "T.-L",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Lam",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. SIGIR",
"volume": "",
"issue": "",
"pages": "283--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T.-L. Wong, W. Lam, and B. Chen. Mining employment market via text block detection and adaptive cross-domain information extraction. In Proc. SIGIR, pages 283-290, 2009.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Query segmentation using conditional random fields",
"authors": [
{
"first": "X",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Shi",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of KEYS '09",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Yu and H. Shi. Query segmentation using conditional random fields. In Proceedings of KEYS '09.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Model-based feedback in the language modeling approach to information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of CIKM'01",
"volume": "",
"issue": "",
"pages": "403--410",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. Model-based feedback in the language modeling approach to information retrieval. In Proceedings of CIKM'01, pages 403-410, 2001.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A study of smoothing methods for language models applied to ad hoc information retrieval",
"authors": [
{
"first": "C",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of SIGIR'01",
"volume": "",
"issue": "",
"pages": "334--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Zhai and J. Lafferty. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR'01, pages 334-342, 2001.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Structured data extraction from the Web based on partial tree alignment",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2006,
"venue": "IEEE Trans. Knowl. Data Eng",
"volume": "18",
"issue": "12",
"pages": "1614--1628",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Zhai and B. Liu. Structured data extraction from the Web based on partial tree alignment. IEEE Trans. Knowl. Data Eng., 18(12):1614-1628, Dec. 2006.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Fully automatic wrapper generation for search engines",
"authors": [
{
"first": "H",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Raghavan",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of WWW '05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Zhao, W. Meng, Z. Wu, V. Raghavan, and C. Yu. Fully automatic wrapper generation for search engines. In Proceedings of WWW '05.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Joint optimization of wrapper generation and template detection",
"authors": [
{
"first": "S",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "J.-R",
"middle": [],
"last": "Wen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of SIGKDD'07",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Zheng, R. Song, J.-R. Wen, and D. Wu. Joint optimization of wrapper generation and template detection. In Proc. of SIGKDD'07.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 1. Search results of query \"taylor swift lyrics falling in love\" and processing pipeline"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 2. Ranking Quality (* indicates significant improvement)"
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td/><td/><td>Annotated Tokens (b)</td></tr><tr><td>1.</td><td/><td>d1 [Taylor Swift, #artist_name]</td></tr><tr><td/><td/><td>[Crazier, #song_name]</td></tr><tr><td/><td/><td>[Feel like I'm falling and \u2026, #lyrics]</td></tr><tr><td>2.</td><td/><td>d2 [Taylor Swift, #artist_name]</td></tr><tr><td/><td/><td>[Mary's Song (oh my my my), #song_name]</td></tr><tr><td/><td/><td>[Growing up and falling in love\u2026, #lyrics]</td></tr><tr><td>3.</td><td/><td>d3 [Taylor Swift, #artist_name]</td></tr><tr><td/><td/><td>[Jump Then Fall, #song_name]</td></tr><tr><td>4.</td><td/><td>[I realize your love is the best \u2026, #lyrics]</td></tr><tr><td/><td/><td>d4 [Taylor Swift, #artist_name]</td></tr><tr><td/><td/><td>[Mary's Song (oh my my my), #song_name]</td></tr><tr><td colspan=\"2\">Search Results (a)</td><td>[Growing up and falling in love\u2026, #lyrics]</td></tr><tr><td>1.</td><td/><td>[Taylor Swift, #artist_name, 0.34]</td></tr><tr><td/><td/><td>...</td></tr><tr><td/><td/><td>[Mary's Song (oh my my my), #song_name, 0.16]</td></tr><tr><td/><td/><td>[Crazier, #song_name, 0.1]</td></tr><tr><td/><td>&lt;[taylor swift, #artist_name] lyrics</td><td>[Jump Then Fall, #song_name, 0.08]</td></tr><tr><td/><td>[falling in love, #lyrics]&gt;</td><td>...</td></tr><tr><td/><td/><td>[Growing up and falling in love\u2026, #lyrics, 0.16]</td></tr><tr><td/><td/><td>[Feel like I'm falling and \u2026, #lyrics, 0.1]</td></tr><tr><td/><td/><td>[I realize your love is the best \u2026, #lyrics, 0.08]</td></tr><tr><td>Top Results Re-ranking (e)</td><td>Query Structured Annotation Generation (d)</td><td>Weighted Annotated Tokens (c)</td></tr></table>",
"text": ""
},
"TABREF1": {
"num": null,
"html": null,
"type_str": "table",
"content": "<table><tr><td>Domain</td><td>Schema</td><td>Example structured annotations</td></tr><tr><td>lyrics</td><td>#artist_name, #song_name, #lyrics</td><td>&lt;lyrics of [hey jude, #song_name] [beatles, #artist_name]&gt;</td></tr><tr><td>job</td><td>#category, #location</td><td>&lt;[teacher, #category] job in [America, #location]&gt;</td></tr><tr><td>recipe</td><td>#directions, #ingredients</td><td>&lt;[baking, #directions] [bread, #ingredients] recipe&gt;</td></tr></table>",
"text": "Example domain schemas"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"content": "Table 2. Domain queries used in our experiment<table><tr><td>Domain</td><td>Containing Keyword</td><td>Queries</td><td>Structured Annotation%</td></tr><tr><td>lyrics</td><td>\"lyrics\"</td><td>196</td><td>95%</td></tr><tr><td>job</td><td>\"job\"</td><td>124</td><td>92%</td></tr><tr><td>recipe</td><td>\"recipe\"</td><td>76</td><td>93%</td></tr></table>Quality of Structured Annotation. All the improvements are significant (p &lt; 0.05)<table><tr><td>Domain</td><td>Method</td><td>Precision</td><td>Recall</td><td>F-Measure</td></tr><tr><td rowspan=\"2\">lyrics</td><td>Baseline</td><td>90.06%</td><td>84.92%</td><td>87.41%</td></tr><tr><td>Our</td><td>95.45%</td><td>89.83%</td><td>92.55%</td></tr><tr><td rowspan=\"2\">job</td><td>Baseline</td><td>89.62%</td><td>80.14%</td><td>84.62%</td></tr><tr><td>Our</td><td>95.31%</td><td>84.93%</td><td>89.82%</td></tr><tr><td rowspan=\"2\">recipe</td><td>Baseline</td><td>83.96%</td><td>84.23%</td><td>84.09%</td></tr><tr><td>Our</td><td>89.68%</td><td>88.44%</td><td>89.06%</td></tr><tr><td rowspan=\"2\">All</td><td>Baseline</td><td>87.88%</td><td>83.10%</td><td>85.42%</td></tr><tr><td>Our</td><td>93.61%</td><td>88.45%</td><td>90.96%</td></tr></table>",
"text": ""
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"content": "Panels (a) Quality of re-ranking (NDCG@3 vs. query structured annotation generation threshold \u03b4, for Seg-Ranker, Ori-Ranker, and FB-Ranker) and (b) Quality of query structured annotation (Precision, Recall, and F-Measure vs. \u03b4). All the improvements are significant (p &lt; 0.05).<table><tr><td>Domain</td><td>Ranking Method</td><td>NDCG@1</td><td>NDCG@3</td><td>NDCG@5</td></tr><tr><td rowspan=\"3\">lyrics</td><td>Seg-Ranker</td><td>0.572</td><td>0.574</td><td>0.575</td></tr><tr><td>Ori-Ranker</td><td>0.621</td><td>0.628</td><td>0.636</td></tr><tr><td>FB-Ranker</td><td>0.637</td><td>0.639</td><td>0.647</td></tr><tr><td rowspan=\"3\">recipe</td><td>Seg-Ranker</td><td>0.629</td><td>0.631</td><td>0.634</td></tr><tr><td>Ori-Ranker</td><td>0.678</td><td>0.687</td><td>0.696</td></tr><tr><td>FB-Ranker</td><td>0.707</td><td>0.704</td><td>0.709</td></tr><tr><td rowspan=\"3\">job</td><td>Seg-Ranker</td><td>0.438</td><td>0.413</td><td>0.408</td></tr><tr><td>Ori-Ranker</td><td>0.470</td><td>0.453</td><td>0.442</td></tr><tr><td>FB-Ranker</td><td>0.504</td><td>0.474</td><td>0.459</td></tr></table>",
"text": "Detailed ranking results on three domains."
}
}
}
}