{
"paper_id": "P08-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:35:14.141474Z"
},
"title": "Searching Questions by Identifying Question Topic and Question Focus",
"authors": [
{
"first": "Huizhong",
"middle": [],
"last": "Duan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
},
{
"first": "Yunbo",
"middle": [],
"last": "Cao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": "yunbo.cao@microsoft.com"
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": {
"postCode": "100080",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong University",
"location": {
"postCode": "200240",
"settlement": "Shanghai",
"country": "China"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper is concerned with the problem of question search. In question search, given a question as query, we are to return questions semantically equivalent or close to the queried question. In this paper, we propose to conduct question search by identifying question topic and question focus. More specifically, we first summarize questions in a data structure consisting of question topic and question focus. Then we model question topic and question focus in a language modeling framework for search. We also propose to use the MDL-based tree cut model for identifying question topic and question focus automatically. Experimental results indicate that our approach of identifying question topic and question focus for search significantly outperforms the baseline methods such as Vector Space Model (VSM) and Language Model for Information Retrieval (LMIR).",
"pdf_parse": {
"paper_id": "P08-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper is concerned with the problem of question search. In question search, given a question as query, we are to return questions semantically equivalent or close to the queried question. In this paper, we propose to conduct question search by identifying question topic and question focus. More specifically, we first summarize questions in a data structure consisting of question topic and question focus. Then we model question topic and question focus in a language modeling framework for search. We also propose to use the MDL-based tree cut model for identifying question topic and question focus automatically. Experimental results indicate that our approach of identifying question topic and question focus for search significantly outperforms the baseline methods such as Vector Space Model (VSM) and Language Model for Information Retrieval (LMIR).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Over the past few years, online services have been building up very large archives of questions and their answers, for example, traditional FAQ services and emerging community-based Q&A services (e.g., Yahoo! Answers, Live QnA, and Baidu Zhidao).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To make use of the large archives of questions and their answers, it is critical to have functionality facilitating users to search previous answers. Typically, such functionality is achieved by first retrieving questions expected to have the same answers as a queried question and then returning the related answers to users. For example, given question Q1 in Table 1 , question Q2 can be returned and its answer will then be used to answer Q1 because the answer of Q2 is expected to partially satisfy the queried question Q1. This is what we call question search. In question search, returned questions are semantically equivalent or close to the queried question. Many methods have been investigated for tackling the problem of question search. For example, Jeon et al. compared the use of four different retrieval methods, i.e., vector space model, Okapi, language model, and translation-based model, within the setting of question search (Jeon et al., 2005b) . However, all the existing methods treat questions just as plain texts (without considering question structure). For example, Q2 can obviously be considered semantically closer to Q1 than Q3-Q5, although all the questions (Q2-Q5) are related to Q1. The existing methods are not able to tell the difference between question Q2 and questions Q3, Q4, and Q5 in terms of their relevance to question Q1. We will clarify this in the following.",
"cite_spans": [
{
"start": 951,
"end": 971,
"text": "(Jeon et al., 2005b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 361,
"end": 368,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose to conduct question search by identifying question topic and question focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The question topic usually represents the major context/constraint of a question (e.g., Berlin, Hamburg) which characterizes users' interests. In contrast, question focus (e.g., cool club, cheap hotel) presents certain aspect (or descriptive features) of the question topic. For the aim of retrieving semantically equivalent (or close) questions, we need to assure that returned questions are related to the queried question with respect to both question topic and question focus. For example, in Table 1 , Q2 preserves certain useful information of Q1 in the aspects of both question topic (Berlin) and question focus (fun club) although it loses some useful information in question topic (Hamburg). In contrast, questions Q3-Q5 are not related to Q1 in question focus (although being related in question topic, e.g. Hamburg, Berlin), which makes them unsuitable as the results of question search.",
"cite_spans": [],
"ref_spans": [
{
"start": 497,
"end": 504,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We also propose to use the MDL-based (Minimum Description Length) tree cut model for automatically identifying question topic and question focus. Given a question as query, a structure called question tree is constructed over the question collection including the queried question and all the related questions, and then the MDL principle is applied to find a cut of the question tree specifying the question topic and the question focus of each question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In summary, we summarize questions in a data structure consisting of question topic and question focus. On this basis, we then propose to model question topic and question focus in a language modeling framework for search. To the best of our knowledge, none of the existing studies addresses question search by modeling both question topic and question focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We empirically conduct the question search with questions about 'travel' and 'computers & internet'. Both kinds of questions are from Yahoo! Answers. Experimental results show that our approach can significantly improve traditional methods (e.g. VSM, LMIR) in retrieving relevant questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. In Section 2, we present our approach to question search, which is based on identifying question topic and question focus. In Section 3, we empirically verify the effectiveness of our approach to question search. In Section 4, we employ a translation-based retrieval framework for extending our approach to address the issue called 'lexical chasm'. Section 5 surveys the related work. Section 6 concludes the paper by summarizing our work and discussing future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our approach to question search consists of two steps: (a) summarize questions in a data structure consisting of question topic and question focus; (b) model question topic and question focus in a language modeling framework for search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Question Search",
"sec_num": "2"
},
{
"text": "In step (a), we employ the MDL-based (Minimum Description Length) tree cut model for automatically identifying question topic and question focus. Thus, this section begins with a brief review of the MDL-based tree cut model, followed by an explanation of steps (a) and (b).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Approach to Question Search",
"sec_num": "2"
},
{
"text": "Formally, a tree cut model (Li and Abe, 1998) can be represented by a pair consisting of a tree cut \u0393 and a probability parameter vector \u03b8 of the same length, that is, M = (\u0393, \u03b8),",
"cite_spans": [
{
"start": 27,
"end": 45,
"text": "(Li and Abe, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The MDL-based tree cut model",
"sec_num": "2.1"
},
{
"text": "where \u0393 and \u03b8 are",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The MDL-based tree cut model",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u0393 = [C1, C2, \u2026, Ck], \u03b8 = [P(C1), P(C2), \u2026, P(Ck)]",
"eq_num": "(2)"
}
],
"section": "The MDL-based tree cut model",
"sec_num": "2.1"
},
{
"text": "where C1, C2, \u2026, Ck are classes determined by a cut \u0393 in the tree and \u2211i P(Ci) = 1. A 'cut' in a tree is any set of nodes in the tree that defines a partition of all the nodes, viewing each node as representing the set of its child nodes as well as itself. For example, the cut indicated by the dashed line in Figure 1 corresponds to three classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The MDL-based tree cut model",
"sec_num": "2.1"
},
{
"text": "A straightforward way of determining a cut of a tree is to collapse the nodes of low frequency into their parent nodes. However, this method is too heuristic because it relies heavily on a manually tuned frequency threshold. In our practice, we instead use a theoretically well-motivated method based on the MDL principle. MDL is a principle of data compression and statistical estimation from information theory (Rissanen, 1978) .",
"cite_spans": [
{
"start": 405,
"end": 421,
"text": "(Rissanen, 1978)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ". An Example on the Tree Cut Model",
"sec_num": null
},
{
"text": "Given a sample and a tree cut , we employ MLE to estimate the parameters of the corresponding tree cut model , , where denotes the estimated parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ". An Example on the Tree Cut Model",
"sec_num": null
},
{
"text": "According to the MDL principle, the description length L(M, S) (Li and Abe, 1998) of the tree cut model M and the sample S is the sum of the model description length L(\u0393), the parameter description length L(\u03b8|\u0393), and the data description length L(S|\u0393, \u03b8), i.e.",
"cite_spans": [
{
"start": 55,
"end": 73,
"text": "(Li and Abe, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ". An Example on the Tree Cut Model",
"sec_num": null
},
{
"text": "The model description length is a subjective quantity which depends on the coding scheme employed. Here, we simply assume that each tree cut model is equally likely a priori.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "The parameter description length L(\u03b8|\u0393) is calculated as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(\u03b8|\u0393) = (k/2) \u2022 log|S|",
"eq_num": "(4)"
}
],
"section": ", | ,",
"sec_num": null
},
{
"text": "where |S| denotes the sample size and k denotes the number of free parameters in the tree cut model, i.e., k equals the number of nodes in \u0393 minus one. The data description length L(S|\u0393, \u03b8\u0302) is calculated as L(S|\u0393, \u03b8\u0302) = \u2212\u2211x\u2208S log P\u0302(x) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "P\u0302(x) = f(C) / (|C| \u2022 |S|) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "where C is the class that x belongs to and f(C) denotes the total frequency of instances in class C in the sample S.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "With the description length defined as in (3), we wish to select the tree cut model with the minimum description length and output it as the result. Note that the model description length L(\u0393) can be ignored because it is the same for all tree cut models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "The MDL-based tree cut model was originally introduced for handling the problem of generalizing case frames using a thesaurus (Li and Abe, 1998) . To the best of our knowledge, no existing work utilizes it for question search. This may be partially because of the unavailability of the resources (e.g., thesaurus) which can be used for embodying the questions in a tree structure. In Section 2.2, we will introduce a tree structure called question tree for representing questions.",
"cite_spans": [
{
"start": 126,
"end": 144,
"text": "(Li and Abe, 1998)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": ", | ,",
"sec_num": null
},
{
"text": "In principle, it is possible to identify question topic and question focus of a question by only parsing the question itself (for example, utilizing a syntactic parser). However, such a method requires accurate parsing results which cannot be obtained from the noisy data from online services. Instead, we propose using the MDL-based tree cut model which identifies question topics and question foci for a set of questions together. More specifically, the method consists of two phases: 1) Constructing a question tree: represent the queried question and all the related questions in a tree structure called question tree; 2) Determining a tree cut: apply the MDL principle to the question tree, which yields the cut specifying question topic and question focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying question topic and question focus",
"sec_num": "2.2"
},
{
"text": "In the following, with a series of definitions, we describe how a question tree is constructed from a collection of questions. We begin by explaining the representation of a question. A straightforward method is to represent a question as a bag-of-words (possibly ignoring stop words). However, this method cannot discern 'the hotels in Paris' from 'the Paris hotel'. Thus, we instead use linguistic units carrying more semantic information. Specifically, we make use of two kinds of units: BaseNP (Base Noun Phrase) and WH-ngram. A BaseNP is defined as a simple and non-recursive noun phrase (Cao and Li, 2002) . A WH-ngram is an ngram beginning with a WH-word. The WH-words that we consider include 'when', 'what', 'where', 'which', and 'how'. We refer to these two kinds of units as 'topic terms'. With topic terms, we represent a question as a topic chain and a set of questions as a question tree.",
"cite_spans": [
{
"start": 610,
"end": 628,
"text": "(Cao and Li, 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "Definition 1 (Topic Profile) The topic profile \u03b8(t) of a topic term t in a categorized question collection is a probability distribution of categories {P(c|t)}, c \u2208 C, where C is a set of categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(c|t) = count(t, c) / \u2211c\u2032\u2208C count(t, c\u2032)",
"eq_num": "(7)"
}
],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "where count(t, c) is the frequency of the topic term t within category c. Clearly, we have \u2211c\u2208C P(c|t) = 1. By 'categorized questions', we refer to the questions that are organized in a tree of taxonomy. For example, at Yahoo! Answers, the question \"How do I install my wireless router\" is categorized as \"Computers & Internet \u2192 Computer Networking\". Actually, we can find categorized questions at other online services such as FAQ sites, too.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "Definition 2 (Specificity) The specificity s(t) of a topic term t is the inverse of the entropy of its topic profile \u03b8(t). More specifically,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "s(t) = 1 / (\u03b5 \u2212 \u2211c\u2208C P(c|t) log P(c|t)) (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "where \u03b5 is a smoothing parameter used to cope with the topic terms whose entropy is 0. In our experiments, the value of \u03b5 was set to 0.001. We use the term specificity to denote how specific a topic term is in characterizing the information needs of users who post questions. A topic term of high specificity (e.g., Hamburg, Berlin) usually specifies the question topic corresponding to the main context of a question because it tends to occur only in a few categories. A topic term of low specificity is usually used to represent the question focus (e.g., cool club, where to see), which is relatively volatile and might occur in many categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "Definition 3 (Topic Chain) A topic chain of a question q is a sequence of ordered topic terms t1 \u2192 t2 \u2192 \u2026 \u2192 tm such that 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "ti is included in q, 1 \u2264 i \u2264 m; 2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "s(ti) \u2265 s(ti+1), 1 \u2264 i \u2264 m \u2212 1. For example, the topic chain of \"any cool clubs in Berlin or Hamburg?\" is \"Hamburg \u2192 Berlin \u2192 cool club\" because the specificities for 'Hamburg', 'Berlin', and 'cool club' are 0.99, 0.62, and 0.36 respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "Definition 4 (Question Tree) A question tree of a question set Q is a prefix tree built over the topic chains of the question set Q. Clearly, if a question set contains only one question, its question tree will be exactly the same as the topic chain of the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "Note that the root node of a question tree is associated with an empty string, as the definition of a prefix tree requires (Fredkin, 1960) .",
"cite_spans": [
{
"start": 117,
"end": 132,
"text": "(Fredkin, 1960)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constructing a question tree",
"sec_num": "2.2.1"
},
{
"text": "According to the definition of a topic chain, the topic terms in a topic chain of a question are ordered by their specificity values. Thus, a cut of a topic chain naturally separates the topic terms of low specificity (representing question focus) from the topic terms of high specificity (representing question topic). Given a topic chain of a question consisting of m topic terms, there exist (m + 1) possible cuts. The question is: which cut is the best?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Determining the tree cut",
"sec_num": "2.2.2"
},
{
"text": "We propose using the MDL-based tree cut model for the search of the best cut in a topic chain. Instead of dealing with each topic chain individually, the proposed method handles a set of questions together. Specifically, given a queried question, we construct a question tree consisting of both the queried question and the related questions, and then apply the MDL principle to select the best cut of the question tree. For example, in Figure 2 , we hope to get the cut indicated by the dashed line. The topic terms on the left of the dashed line represent the question topic and those on the right of the dashed line represent the question focus. Note that the tree cut yields a cut for each individual topic chain (each path) within the question tree accordingly.",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 445,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Determining the tree cut",
"sec_num": "2.2.2"
},
{
"text": "A cut of a topic chain of a question q separates the topic chain into two parts: HEAD and TAIL. HEAD (denoted as Hq) is the subsequence of the original topic chain before the cut. TAIL (denoted as Tq) is the subsequence after the cut. Thus, the topic chain is the concatenation of Hq and Tq. For instance, given the tree cut specified in Figure 2 , for the topic chain of Q1 \"Hamburg \u2192 Berlin \u2192 cool club\", the HEAD and TAIL are \"Hamburg \u2192 Berlin\" and \"cool club\" respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 294,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Determining the tree cut",
"sec_num": "2.2.2"
},
{
"text": "We employ the framework of language modeling (for information retrieval) to develop our approach to question search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "In the language modeling approach to information retrieval, the relevance of a targeted question Q to a queried question q is given by the probability P(q|Q) of generating the queried question q from Q. Following the framework, we propose a mixture model for modeling question structure (namely, question topic and question focus) within the process of searching questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(q|Q) = \u03bb \u2022 P(Hq|HQ) + (1 \u2212 \u03bb) \u2022 P(Tq|TQ)",
"eq_num": "(9)"
}
],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "In the mixture model, it is assumed that the process of generating question topics and the process of generating question foci are independent from each other.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "In traditional language modeling, a single multinomial model P(w|Q) over terms is estimated for each targeted question Q. In our case, two multinomial models, P(w|HQ) and P(w|TQ), need to be estimated for each targeted question Q.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "If unigram document language models are used, equation (9) can then be rewritten as,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(q|Q) = \u03bb \u2022 \u220fw\u2208Hq P(w|HQ)^c(w,Hq) + (1 \u2212 \u03bb) \u2022 \u220fw\u2208Tq P(w|TQ)^c(w,Tq)",
"eq_num": "(10)"
}
],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "where c(w, x) is the frequency of w within x. To avoid zero probabilities and to estimate more accurate language models, the HEAD and TAIL of questions are smoothed using the background collection,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(w|HQ) = \u03b1 \u2022 P\u0302(w|HQ) + (1 \u2212 \u03b1) \u2022 P\u0302(w|C) (11) P(w|TQ) = \u03b2 \u2022 P\u0302(w|TQ) + (1 \u2212 \u03b2) \u2022 P\u0302(w|C)",
"eq_num": "(12)"
}
],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "where P\u0302(w|HQ), P\u0302(w|TQ), and P\u0302(w|C) are the MLE estimators with respect to the HEAD of Q, the TAIL of Q, and the collection C.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling question topic and question focus for search",
"sec_num": "2.3"
},
{
"text": "We have conducted experiments to verify the effectiveness of our approach to question search. Particularly, we have investigated the use of identifying question topic and question focus for search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "3"
},
{
"text": "We made use of the questions obtained from Yahoo! Answers for the evaluation. More specifically, we utilized the resolved questions under two of the top-level categories at Yahoo! Answers, namely 'travel' and 'computers & internet'. The questions include 314,616 items from the 'travel' category and 210,785 items from the 'computers & internet' category. Each resolved question consists of three fields: 'title', 'description', and 'answers'. For search we use only the 'title' field. It is assumed that the titles of the questions already provide enough semantic information for understanding users' information needs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and evaluation measures",
"sec_num": "3.1"
},
{
"text": "We developed two test sets, one for the category 'travel' denoted as 'TRL-TST', and the other for 'computers & internet' denoted as 'CI-TST'. In order to create the test sets, we randomly selected 200 questions for each category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and evaluation measures",
"sec_num": "3.1"
},
{
"text": "To obtain the ground-truth of question search, we employed the Vector Space Model (VSM) (Salton et al., 1975) to retrieve the top 20 results and obtained manual judgments. The top 20 results don't include the queried question itself. Given a returned result by VSM, an assessor is asked to label it with 'relevant' or 'irrelevant'. If a returned result is considered semantically equivalent (or close) to the queried question, the assessor will label it as 'relevant'; otherwise, the assessor will label it as 'irrelevant'. Two assessors were involved in the manual judgments. Each of them was asked to label 100 questions from 'TRL-TST' and 100 from 'CI-TST'. In the process of manually judging questions, the assessors were presented only the titles of the questions (for both the queried questions and the returned questions). We utilized two baseline methods for demonstrating the effectiveness of our approach, the VSM and the LMIR (language modeling method for information retrieval) (Ponte and Croft, 1998) .",
"cite_spans": [
{
"start": 88,
"end": 109,
"text": "(Salton et al., 1975)",
"ref_id": "BIBREF6"
},
{
"start": 990,
"end": 1013,
"text": "(Ponte and Croft, 1998)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and evaluation measures",
"sec_num": "3.1"
},
{
"text": "We made use of three measures for evaluating the results of question search methods. They are MAP, R-precision, and MRR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and evaluation measures",
"sec_num": "3.1"
},
{
"text": "In the experiments, we made use of the questions about 'travel' to test the performance of our approach to question search. More specifically, we used the 200 queries in the test set 'TRL-TST' to search for 'relevant' questions from the 314,616 questions categorized as 'travel'. Note that only the questions occurring in the test set can be evaluated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching questions about 'travel'",
"sec_num": "3.2"
},
{
"text": "We made use of the taxonomy of questions provided at Yahoo! Answers for the calculation of the specificity of topic terms. The taxonomy is organized in a tree structure. In the following experiments, we utilized only the leaf nodes of the taxonomy tree (regarding 'travel') as the categories of questions, which yields 355 categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching questions about 'travel'",
"sec_num": "3.2"
},
{
"text": "We randomly divided the test queries into five even subsets and conducted 5-fold cross-validation experiments. In each trial, we tuned the parameters \u03bb, \u03b1, and \u03b2 in equations (10)-(12) with four of the five subsets and then applied them to the remaining subset. The experimental results reported below are those averaged over the five trials. In Table 3 , our approach, denoted by LMIR-CUT, is implemented exactly as equation (10). Neither VSM nor LMIR uses the data structure composed of question topic and question focus.",
"cite_spans": [],
"ref_spans": [
{
"start": 341,
"end": 348,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Searching questions about 'travel'",
"sec_num": "3.2"
},
{
"text": "From Table 3 , we see that our approach outperforms the baseline approaches VSM and LMIR in terms of all the measures. We conducted a significance test (t-test) on the improvements of our approach over VSM and LMIR. The result indicates that the improvements are statistically significant (p-value < 0.05) in terms of all the evaluation measures. In equation (9), we use the parameter \u03bb to balance the contribution of question topic and the contribution of question focus. Figure 3 illustrates how influential the value of \u03bb is on the performance of question search in terms of MRR. The result was obtained with the 200 queries directly, instead of 5-fold cross-validation. From Figure 3 , we see that our approach performs best when \u03bb is around 0.7. That is, our approach tends to emphasize question topic more than question focus.",
"cite_spans": [],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 3",
"ref_id": null
},
{
"start": 471,
"end": 479,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 677,
"end": 685,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "MAP R-Precision",
"sec_num": null
},
{
"text": "We also examined the correctness of the question topics and question foci of the 200 queried questions. The question topics and question foci were obtained with the MDL-based tree cut model automatically. In the result, 69 questions have incorrect question topics or question foci. Further analysis shows that the errors come from two categories: (a) 59 questions have only the HEAD parts (that is, none of the topic terms fall within the TAIL part), and (b) 10 have incorrect orders of topic terms because the specificities of the topic terms were estimated inaccurately. For questions having only the HEAD parts, our approach (equation (9)) reduces to the traditional language modeling approach. Thus, even when the errors of category (a) occur, our approach still performs no worse than the traditional language modeling approach. This also explains why our approach performs best when \u03bb is around 0.7: the error category (a) pushes our model to emphasize question topic more. Table 4 . Search Results for \"How cold does it get in winters in Alaska?\" Table 4 provides the TOP-3 search results given by VSM, LMIR, and LMIR-CUT (our approach) respectively. The questions in bold are labeled as 'relevant' in the evaluation set. The queried question seeks the 'weather' information about 'Alaska'. Both VSM and LMIR rank certain 'irrelevant' questions higher than 'relevant' questions. The 'irrelevant' questions are not about 'Alaska' although they are about 'weather'. The reason is that neither VSM nor LMIR is aware that the query consists of the two aspects 'weather' (how cold, winter) and 'Alaska'. In contrast, our approach assures that both aspects are matched. Note that the HEAD part of the topic chain of the queried question given by our approach is \"Alaska\" and the TAIL part is \"winter \u2192 how cold\".",
"cite_spans": [],
"ref_spans": [
{
"start": 970,
"end": 977,
"text": "Table 4",
"ref_id": null
},
{
"start": 1044,
"end": 1051,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "MAP R-Precision",
"sec_num": null
},
{
"text": "In the experiments, we made use of the questions about 'computers & internet' to test the performance of our proposed approach to question search. More specifically, we used the 200 queries in the test set 'CI-TST'' to search for 'relevant' questions from the 210,785 questions categorized as 'computers & internet'. For the calculation of specificity of topic terms, we utilized as the categories of questions the leaf nodes of the taxonomy tree regarding 'computers & Internet', which include 23 categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Searching questions about 'computers & internet'",
"sec_num": "3.3"
},
{
"text": "We conducted 5-fold cross-validation for the parameter tuning. The experimental results reported in Table 5 are averaged over the five trials. Again, we see that our approach outperforms the baseline approaches VSM and LMIR in terms of all the measures. We conducted a significance test (t-test) on the improvements of our approach over VSM and LMIR. The result indicates that the improvements are statistically significant (p-value < 0.05) in terms of all the evaluation measures.",
"cite_spans": [],
"ref_spans": [
{
"start": 100,
"end": 107,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Searching questions about 'computers & internet'",
"sec_num": "3.3"
},
{
"text": "We also conducted the experiment similar to that in Figure 3. Figure 4 provides the result. The trend is consistent with that in Figure 3 .",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 70,
"text": "Figure 3. Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 129,
"end": 137,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "MAP R-Precision",
"sec_num": null
},
{
"text": "We examined the correctness of (automatically identified) question topics and question foci of the 200 queried questions, too. In the result, 65 questions have incorrect question topics or question foci. Among them, 47 fall in the error category (a) and 18 in the error category (b). The distribution of errors is also similar to that in Section 3.2, which also justifies the trend presented in Figure 4 . ",
"cite_spans": [],
"ref_spans": [
{
"start": 395,
"end": 403,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "MAP R-Precision",
"sec_num": null
},
{
"text": "In the setting of question search, besides the topic what we address in the previous sections, another research topic is to fix lexical chasm between questions. Sometimes, two questions that have the same meaning use very different wording. For example, the questions \"where to stay in Hamburg?\" and \"the best hotel in Hamburg?\" have almost the same meaning but are lexically different in question focus (where to stay vs. best hotel). This is the socalled 'lexical chasm'. Jeon and Bruce (2007) proposed a mixture model for fixing the lexical chasm between questions. The model is a combination of the language modeling approach (for information retrieval) and translation-based approach (for information retrieval). Our idea of modeling question structure for search can naturally extend to Jeon et al.'s model. More specifically, by using translation probabilities, we can rewrite equation 11and (12) as follow:",
"cite_spans": [
{
"start": 474,
"end": 495,
"text": "Jeon and Bruce (2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Translation Probability",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2022 \u0302 \u2022 \u2211 | \u2022 \u0302 1 \u2022 \u0302 | (13) \u2022 \u0302 \u2022 \u2211 | \u2022 \u0302 1 \u2022 \u0302 |",
"eq_num": "(14)"
}
],
"section": "Using Translation Probability",
"sec_num": "4"
},
{
"text": "where | denotes the probability that topic term is the translation of . In our experiments, to estimate the probability | , we used the collections of question titles and question descriptions as the parallel corpus and the IBM model 1 (Brown et al., 1993) as the alignment model. ",
"cite_spans": [
{
"start": 236,
"end": 256,
"text": "(Brown et al., 1993)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Translation Probability",
"sec_num": "4"
},
{
"text": "Usually, users reiterate or paraphrase their questions (already described in question titles) in question descriptions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "MRR \u03bb",
"sec_num": null
},
{
"text": "We utilized the new model elaborated by equation (13) and (14) for searching questions about 'travel' and 'computers & internet'. The new model is denoted as 'SMT-CUT'. Table 6 provides the evaluation results. The evaluation was conducted with exactly the same setting as in Section 3. From Table 6 , we see that the performance of our approach can be further boosted by using translation probability. ",
"cite_spans": [],
"ref_spans": [
{
"start": 169,
"end": 176,
"text": "Table 6",
"ref_id": null
},
{
"start": 291,
"end": 298,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "MRR \u03bb",
"sec_num": null
},
{
"text": "The major focus of previous research efforts on question search is to tackle the lexical chasm problem between questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The research of question search is first conducted using FAQ data. FAQ Finder (Burke et al., 1997) heuristically combines statistical similarities and semantic similarities between questions to rank FAQs. Conventional vector space models are used to calculate the statistical similarity and WordNet (Fellbaum, 1998) is used to estimate the semantic similarity. Sneiders (2002) proposed template based FAQ retrieval systems. Lai et al. (2002) proposed an approach to automatically mine FAQs from the Web. Jijkoun and Rijke (2005) used supervised learning methods to extend heuristic extraction of Q/A pairs from FAQ pages, and treated Q/A pair retrieval as a fielded search task. Harabagiu et al. (2005) used a Question Answer Database (known as QUAB) to support interactive question answering. They compared seven different similarity metrics for selecting related questions from QUAB and found that the conceptbased metric performed best.",
"cite_spans": [
{
"start": 78,
"end": 98,
"text": "(Burke et al., 1997)",
"ref_id": "BIBREF15"
},
{
"start": 299,
"end": 315,
"text": "(Fellbaum, 1998)",
"ref_id": "BIBREF1"
},
{
"start": 361,
"end": 376,
"text": "Sneiders (2002)",
"ref_id": "BIBREF5"
},
{
"start": 424,
"end": 441,
"text": "Lai et al. (2002)",
"ref_id": "BIBREF19"
},
{
"start": 504,
"end": 528,
"text": "Jijkoun and Rijke (2005)",
"ref_id": "BIBREF17"
},
{
"start": 679,
"end": 702,
"text": "Harabagiu et al. (2005)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Recently, the research of question search has been further extended to the community-based Q&A data. For example, Jeon et al. (Jeon et al., 2005a; Jeon et al., 2005b) compared four different retrieval methods, i.e. vector space model, Okapi, language model (LM), and translation-based model, for automatically fixing the lexical chasm between questions of question search. They found that the translation-based model performed best.",
"cite_spans": [
{
"start": 126,
"end": 146,
"text": "(Jeon et al., 2005a;",
"ref_id": "BIBREF9"
},
{
"start": 147,
"end": 166,
"text": "Jeon et al., 2005b)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "However, all the existing methods treat questions just as plain texts (without considering question structure). In this paper, we proposed to conduct question search by identifying question topic and question focus. To the best of our knowledge, none of the existing studies addressed question search by modeling both question topic and question focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Question answering (e.g., Pasca and Harabagiu, 2001; Echihabi and Marcu, 2003; Voorhees, 2004; Metzler and Croft, 2005) relates to question search. Question answering automatically extracts short answers for a relatively limited class of question types from document collections. In contrast to that, question search retrieves answers for an unlimited range of questions by focusing on finding semantically similar questions in an archive.",
"cite_spans": [
{
"start": 26,
"end": 52,
"text": "Pasca and Harabagiu, 2001;",
"ref_id": "BIBREF13"
},
{
"start": 53,
"end": 78,
"text": "Echihabi and Marcu, 2003;",
"ref_id": "BIBREF0"
},
{
"start": 79,
"end": 94,
"text": "Voorhees, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 95,
"end": 119,
"text": "Metzler and Croft, 2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this paper, we have proposed an approach to question search which models question topic and question focus in a language modeling framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "The contribution of this paper can be summarized in 4-fold: (1) A data structure consisting of question topic and question focus was proposed for summarizing questions; (2) The MDL-based tree cut model was employed to identify question topic and question focus automatically; (3) A new form of language modeling using question topic and question focus was developed for question search; (4) Extensive experiments have been conducted to evaluate the proposed approach using a large collection of real questions obtained from Yahoo! Answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "Though we only utilize data from communitybased question answering service in our experiments, we could also use categorized questions from forum sites and FAQ sites. Thus, as future work, we will try to investigate the use of the proposed approach for other kinds of web services.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "6"
},
{
"text": "http://answers.yahoo.com 2 http://qna.live.com 3 http://zhidao.baidu.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Xinying Song, Shasha Li, and Shilin Ding for their efforts on developing the evaluation data. We would also like to thank Stephan H. Stiller for his proof-reading of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Noisy-Channel Approach to Question Answering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Echihabi",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proc. of ACL'03",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Echihabi and D. Marcu. 2003. A Noisy-Channel Ap- proach to Question Answering. In Proc. of ACL'03.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "WordNet: An electronic lexical database",
"authors": [
{
"first": "C",
"middle": [],
"last": "Fellbaum",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Fellbaum. 1998. WordNet: An electronic lexical da- tabase. MIT Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Analysis of statistical question classification for fact-based questions",
"authors": [
{
"first": "D",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2005,
"venue": "Information Retrieval",
"volume": "8",
"issue": "3",
"pages": "481--504",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Metzler and W. B. Croft. 2005. Analysis of statistical question classification for fact-based questions. In- formation Retrieval, 8(3), pages 481-504.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Trie memory",
"authors": [
{
"first": "E",
"middle": [],
"last": "Fredkin",
"suffix": ""
}
],
"year": 1960,
"venue": "Communications of the ACM, D",
"volume": "3",
"issue": "9",
"pages": "490--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Fredkin. 1960. Trie memory. Communications of the ACM, D. 3(9):490-499.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Overview of the TREC 2004 question answering track",
"authors": [
{
"first": "E",
"middle": [
"M"
],
"last": "Voorhees",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of TREC'04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. M. Voorhees. 2004. Overview of the TREC 2004 question answering track. In Proc. of TREC'04.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Automated question answering using question templates that cover the conceptual model of the database",
"authors": [
{
"first": "E",
"middle": [],
"last": "Sneiders",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the 6th International Conference on Applications of Natural Language to Information Systems",
"volume": "",
"issue": "",
"pages": "235--239",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Sneiders. 2002. Automated question answering using question templates that cover the conceptual model of the database. In Proc. of the 6th International Conference on Applications of Natural Language to Information Systems, pages 235-239.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A vector space model for automatic indexing",
"authors": [
{
"first": "G",
"middle": [],
"last": "Salton",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "C",
"middle": [
"S"
],
"last": "Yang",
"suffix": ""
}
],
"year": 1975,
"venue": "Communications of the ACM",
"volume": "18",
"issue": "11",
"pages": "613--620",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Salton, A. Wong, and C. S. Yang 1975. A vector space model for automatic indexing. Communica- tions of the ACM, vol. 18, nr. 11, pages 613-620.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generalizing case frames using a thesaurus and the MDL principle",
"authors": [
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Abe",
"suffix": ""
}
],
"year": 1998,
"venue": "Computational Linguistics",
"volume": "24",
"issue": "2",
"pages": "217--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Li and N. Abe. 1998. Generalizing case frames us- ing a thesaurus and the MDL principle. Computa- tional Linguistics, 24(2), pages 217-244.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning translationbased language models using Q&A archives",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Jeon and W.B. Croft. 2007. Learning translation- based language models using Q&A archives. Tech- nical report, University of Massachusetts.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Finding semantically similar questions based on their answers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of SIGIR'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Jeon, W. B. Croft, and J. Lee. 2005a. Finding seman- tically similar questions based on their answers. In Proc. of SIGIR'05.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Finding similar questions in large question and answer archives",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jeon",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of CIKM '05",
"volume": "",
"issue": "",
"pages": "84--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Jeon, W. B. Croft, and J. Lee. 2005b. Finding similar questions in large question and answer archives. In Proc. of CIKM '05, pages 84-90.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Modeling by shortest data description",
"authors": [
{
"first": "J",
"middle": [],
"last": "Rissanen",
"suffix": ""
}
],
"year": 1978,
"venue": "Automatica",
"volume": "14",
"issue": "",
"pages": "465--471",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Rissanen. 1978. Modeling by shortest data description. Automatica, vol. 14, pages. 465-471",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A language modeling approach to information retrieval",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Ponte",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1998,
"venue": "Proc. of SIGIR'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.M. Ponte, W.B. Croft. 1998. A language modeling approach to information retrieval. In Proc. of SIGIR'98.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "High performance question/answering",
"authors": [
{
"first": "M",
"middle": [
"A"
],
"last": "Pasca",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of SIGIR'01",
"volume": "",
"issue": "",
"pages": "366--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. A. Pasca and S. M. Harabagiu. 2001. High perfor- mance question/answering. In Proc. of SIGIR'01, pages 366-374.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The mathematics of statistical machine translation: parameter estimation",
"authors": [
{
"first": "P",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "263--311",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. F. Brown, V. J. D. Pietra, S. A. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, 19(2):263-311.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Question answering from frequently asked question files: Experiences with the FAQ finder system",
"authors": [
{
"first": "R",
"middle": [
"D"
],
"last": "Burke",
"suffix": ""
},
{
"first": "K",
"middle": [
"J"
],
"last": "Hammond",
"suffix": ""
},
{
"first": "V",
"middle": [
"A"
],
"last": "Kulyukin",
"suffix": ""
},
{
"first": "S",
"middle": [
"L"
],
"last": "Lytinen",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Tomuro",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Schoenberg",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. D. Burke, K. J. Hammond, V. A. Kulyukin, S. L. Lytinen, N. Tomuro, and S. Schoenberg. 1997. Ques- tion answering from frequently asked question files: Experiences with the FAQ finder system. Technical report, University of Chicago.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Experiments with Interactive Question-Answering",
"authors": [
{
"first": "S",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Hickl",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of ACL'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Harabagiu, A. Hickl, J. Lehmann and D. Moldovan. 2005. Experiments with Interactive Question- Answering. In Proc. of ACL'05.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Retrieving Answers from Frequently Asked Questions Pages on the Web",
"authors": [
{
"first": "V",
"middle": [],
"last": "Jijkoun",
"suffix": ""
},
{
"first": "M",
"middle": [
"D"
],
"last": "Rijke",
"suffix": ""
}
],
"year": 2005,
"venue": "Proc. of CIKM'05",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Jijkoun, M. D. Rijke. 2005. Retrieving Answers from Frequently Asked Questions Pages on the Web. In Proc. of CIKM'05.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Base noun phrase translation using web data and the EM algorithm",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of COLING'02",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Cao and H. Li. 2002. Base noun phrase translation using web data and the EM algorithm. In Proc. of COLING'02.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Faq mining via list detection",
"authors": [
{
"first": "Y.-S",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "K.-A",
"middle": [],
"last": "Fung",
"suffix": ""
},
{
"first": "C.-H",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of the Workshop on Multilingual Summarization and Question Answering",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y.-S. Lai, K.-A. Fung, and C.-H. Wu. 2002. Faq mining via list detection. In Proc. of the Workshop on Multi- lingual Summarization and Question Answering, 2002.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "An Example of a Question TreeGiven the topic chains with respect to the questions inTable 1as follow, \u2022 Q1: Hamburg Berlin cool club \u2022 Q2: Berlin fun club \u2022 Q3: Hamburg Berlin nice hotel \u2022 Q4: Hamburg Berlin how long does it take \u2022 Q5: Berlin cheap hotel we can have the question tree presented inFigure 2.",
"num": null,
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"text": "Balancing between Question Topic and Question Focus",
"num": null,
"uris": null
},
"FIGREF3": {
"type_str": "figure",
"text": "Balancing between Question Topic and Question Focus",
"num": null,
"uris": null
},
"TABREF0": {
"text": "Any nice hotels in Berlin or Hamburg? Q4: How long does it take to Hamburg from Berlin? Q5: Cheap hotels in Berlin? Table 1. An Example on Question Search",
"num": null,
"content": "<table><tr><td>Query:</td></tr><tr><td>Q1: Any cool clubs in Berlin or Hamburg?</td></tr><tr><td>Expected:</td></tr><tr><td>Q2: What are the best/most fun clubs in Berlin?</td></tr><tr><td>Not Expected:</td></tr><tr><td>Q3:</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF1": {
"text": "model formed by the targeted question . The targeted question is from a collection of questions.",
"num": null,
"content": "<table><tr><td>from the language</td><td/><td/></tr><tr><td/><td/><td>cool club</td></tr><tr><td>Hamburg</td><td>Berlin</td><td>nice hotel</td></tr><tr><td>ROOT</td><td/><td>how long does it take</td></tr><tr><td>Berlin</td><td/><td>cheap hotel</td></tr><tr><td/><td/><td>fun club</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"num": null,
"content": "<table><tr><td>pro-</td></tr></table>",
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "",
"num": null,
"content": "<table><tr><td/><td>1. How cold does it usually get in Charlotte,</td></tr><tr><td/><td>NC during winters?</td></tr><tr><td/><td>2. How long and cold are the winters in</td></tr><tr><td/><td>Rochester, NY?</td></tr><tr><td/><td>2. How cold does it get really in Toronto in</td></tr><tr><td/><td>the winter?</td></tr><tr><td/><td>3. How cold does the Mojave Desert get in</td></tr><tr><td/><td>the winter?</td></tr><tr><td/><td>1. How cold is it in Alaska?</td></tr><tr><td>LMIR-CUT</td><td>2. How cold is Alaska in March and out-door activities? 3. How cold does it get in Nova Scotia in the</td></tr><tr><td/><td>winter?</td></tr></table>",
"html": null,
"type_str": "table"
}
}
}
}