{
"paper_id": "D10-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:52:43.706438Z"
},
"title": "Learning the Relative Usefulness of Questions in Community QA",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ohio University Athens",
"location": {
"postCode": "43201",
"region": "OH",
"country": "USA"
}
},
"email": "bunescu@ohio.edu"
},
{
"first": "Yunfeng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Ohio University",
"location": {
"postCode": "43201",
"settlement": "Athens",
"region": "OH",
"country": "USA"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a machine learning approach for the task of ranking previously answered questions in a question repository with respect to their relevance to a new, unanswered reference question. The ranking model is trained on a collection of question groups manually annotated with a partial order relation reflecting the relative utility of questions inside each group. Based on a set of meaning and structure aware features, the new ranking model is able to substantially outperform more straightforward, unsupervised similarity measures.",
"pdf_parse": {
"paper_id": "D10-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a machine learning approach for the task of ranking previously answered questions in a question repository with respect to their relevance to a new, unanswered reference question. The ranking model is trained on a collection of question groups manually annotated with a partial order relation reflecting the relative utility of questions inside each group. Based on a set of meaning and structure aware features, the new ranking model is able to substantially outperform more straightforward, unsupervised similarity measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Open domain Question Answering (QA) is one of the most complex and challenging tasks in natural language processing. In general, a question answering system may need to integrate knowledge coming from a wide variety of linguistic processing tasks such as syntactic parsing, semantic role labeling, named entity recognition, and anaphora resolution (Prager, 2006) . State of the art implementations of these linguistic analysis tasks are still limited in their performance, with errors that compound and propagate into the final performance of the QA system (Moldovan et al., 2002) . Consequently, the performance of open domain QA systems has yet to arrive at a level at which it would become a feasible alternative to the current paradigms for information access based on keyword searches.",
"cite_spans": [
{
"start": 348,
"end": 362,
"text": "(Prager, 2006)",
"ref_id": "BIBREF18"
},
{
"start": 557,
"end": 580,
"text": "(Moldovan et al., 2002)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, community-driven QA sites such as Yahoo! Answers and WikiAnswers 1 have established 1 answers.yahoo.com, wiki.answers.com a new approach to question answering that shifts the inherent complexity of open domain QA from the computer system to volunteer contributors. The computer is no longer required to perform a deep linguistic analysis of questions and generate corresponding answers, and instead acts as a mediator between users submitting questions and volunteers providing the answers.",
"cite_spans": [
{
"start": 75,
"end": 76,
"text": "1",
"ref_id": "BIBREF0"
},
{
"start": 94,
"end": 95,
"text": "1",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An important objective in community QA is to minimize the time elapsed between the submission of questions by users and the subsequent posting of answers by volunteer contributors. One useful strategy for minimizing the response latency is to search the QA repository for similar questions that have already been answered, and provide the corresponding ranked list of answers, if such a question is found. The success of this approach depends on the definition and implementation of the question-to-question similarity function. In the simplest solution, the system searches for previously answered questions based on exact string matching with the reference question. Alternatively, sites such as WikiAnswers allow the users to mark questions they think are rephrasings (\"alternate wordings\", or paraphrases) of existing questions. These question clusters are then taken into account when performing exact string matching, therefore increasing the likelihood of finding previously answered questions that are semantically equivalent to the reference question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to lessen the amount of work required from the contributors, an alternative approach is to build a system that automatically finds rephrasings of questions, especially since question rephrasing seems to be computationally less demanding than question answering. According to previous work in this domain, a question is considered a rephrasing of a reference question Q 0 if it uses an alternate wording to express an identical information need. For example, Q 0 and Q 1 below are rephrasings of each other, and consequently they are expected to have the same answer. Paraphrasings of a new question cannot always be found in the community QA repository. We believe that computing a ranked list of existing questions that at least partially address the original information need could also be useful to the user, at least until other users volunteer to give an exact answer to the original, unanswered reference question. For example, in the absence of any additional information about the reference question Q 0 , the expected answers to questions Q 2 and Q 3 below may be seen as partially overlapping in information content with the expected answer for the reference question Q 0 . An answer to question Q 4 , on the other hand, is less likely to benefit the user, even though it has a significant lexical overlap with the reference question. In this paper, we propose a supervised learning approach to the question ranking problem, a generalization of the question paraphrasing problem in which questions are ranked in a partial order based on the relative information overlap between their expected answers and the expected answer of the reference question. Underlying the question ranking task is the expectation that the user who submits a reference question will find the answers of the highly ranked questions to be more useful than the answers associated with the lower ranked questions. 
For the reference question Q 0 above, the learned ranking model is expected to produce a partial order in which Q 1 is ranked higher than Q 2 , Q 3 and Q 4 , whereas Q 2 and Q 3 are ranked higher than Q 4 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to enable the evaluation of question ranking approaches, we have previously created a dataset of 60 groups of questions (Bunescu and Huang, 2010b) . Each group consists of a reference question (e.g. Q 0 above) that is associated with a partially ordered set of questions (e.g. Q 1 to Q 4 above). For each reference questions, its corresponding partially ordered set is created from questions in Yahoo! Answers and other online repositories that have a high cosine similarity with the reference question. Out of the 26 top categories in Yahoo! Answers, the 60 reference questions span a diverse set of categories. Figure 1 lists the 20 categories covered, where each category is shown with the number of corresponding reference questions between parentheses. Inside each group, the questions are manually annotated with a partial order relation, according to their utility with respect to the reference question. We use the notation Q i \u227b Q j |Q r to encode the fact that question Q i is more useful than question Q j with respect to the reference question Q r . Similarly, Q i = Q j will be used to express the fact that questions Q i and Q j are reformulations of each other (the reformulation relation is independent of the reference question). The partial ordering among the questions Q 0 to Q 4 above can therefore be expressed concisely as follows:",
"cite_spans": [
{
"start": 129,
"end": 155,
"text": "(Bunescu and Huang, 2010b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 622,
"end": 630,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Travel (10), Computers & Internet (6), Beauty & Style (5), Entertainment & Music (5), Food & Drink (5), Health (5), Arts & Humanities (3), Cars & Transportation (3), Consumer Electronics (3), Pets (3), Family & Relationships (2), Science & Mathematics (2), Education & Reference (1), Environment (1), Local Businesses (1), Pregnancy & Parenting (1), Society & Culture (1), Sports (1), Yahoo! Products (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Q 0 = Q 1 , Q 1 \u227b Q 2 |Q 0 , Q 1 \u227b Q 3 |Q 0 , Q 2 \u227b Q 4 |Q 0 , Q 3 \u227b Q 4 |Q 0 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Note that we do not explicitly annotate the relation Q 1 \u227b Q 4 |Q 0 , since it can be inferred based on the transitivity of the more useful than relation: Also note that no relation is specified between Q 2 and Q 3 , and similarly no relation can be inferred between these two questions. This reflects our belief that, in the absence of any additional information regarding the user or the \"turtle\" referenced in Q 0 , we cannot compare questions Q 2 and Q 3 in terms of their usefulness with respect to Q 0 . Table 1 shows another reference question Q 5 from our dataset, together with its annotated group of questions Q 6 to Q 20 . In order to make the annotation process easier and reproducible, we have divided it into two levels of annotation. During the first annotation stage, each question group is partitioned manually into 3 subgroups of questions:",
"cite_spans": [],
"ref_spans": [
{
"start": 510,
"end": 517,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Q 1 \u227b Q 2 |Q 0 \u2227 Q 2 \u227b Q 4 |Q 0 \u21d2 Q 1 \u227b Q 4 |Q 0 . REFERENCE QUESTION (Q r ) Q 5 What'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "\u2022 P is the set of paraphrasing questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "\u2022 U is the set of useful questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "\u2022 N is the set of neutral questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "A question is deemed useful if its expected answer may overlap in information content with the expected answer of the reference question. The expected answer of a neutral question, on the other hand, should be irrelevant with respect to the reference question. Let Q r be the reference question, Q p \u2208 P a paraphrasing question, Q u \u2208 U a useful question, and Q n \u2208 N a neutral question. Then the following relations are assumed to hold among these questions:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "1. Q p \u227b Q u |Q r : a paraphrasing question is more useful than a useful question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "2. Q u \u227b Q n |Q r : a useful question is more useful than a neutral question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Note that as long as these relations hold between the 3 types of questions, the names of the subgroups and their definitions are irrelevant with respect to the implied set of more useful than relations, since only the implied ternary relations will be used for training and evaluating question ranking approaches. We also assume that, by transitivity, the following ternary relations also hold:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "Q p \u227b Q n |Q r , i.e. a paraphrasing question is more useful than a neutral question. Furthermore, if Q p 1 , Q p 2 \u2208 P are two paraphrasing questions, this implies Q p 1 = Q p 2 |Q r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "For the vast majority of questions, the first annotation stage is straightforward and non-controversial. In the second annotation stage, we perform a finer annotation of relations between questions in the middle group U. Table 1 shows two such relations (using indentation): Q 8 \u227b Q 9 |Q 5 and Q 8 \u227b Q 10 |Q 5 . Question Q 8 would have been a rephrasing of the reference question, were it not for the noun \"art\" modifying the focus noun phrase \"summer camp\". Therefore, the information content of the answer to Q 8 is strictly subsumed in the information content associated with the answer to Q 5 . Similarly, in Q 9 the focus noun phrase is further specialized through the prepositional phrase \"for girls\". Therefore, (an answer to) Q 9 is less useful to Q 5 than (an answer to) Q 8 , i.e. Q 8 \u227b Q 9 |Q 5 . Furthermore, the focus \"art summer camp\" in Q 8 conceptually subsumes the focus \"summer camps for singing\" in Q 10 , therefore Q 8 \u227b Q 10 |Q 5 .",
"cite_spans": [],
"ref_spans": [
{
"start": 221,
"end": 228,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "We call this dataset simple since most of the reference questions are shorter than the other questions in their group. We have also created a complex version of the same dataset, by selecting as the reference question in each group a longer question from the same group. For example, if Q 0 were a reference question, it would be replaced with a more complex question, such as Q 2 , or Q 3 . The annotation is redone to reflect the relative usefulness relations with respect to the new reference questions. We believe that the new complex dataset is closer to the actual distribution of questions in community QA repositories: unanswered questions tend to be more specific (longer), whereas general questions (shorter) are more likely to have been answered already. Each dataset is annotated by two annotators, leading to a total of 4 datasets: Simple 1 , Simple 2 , Complex 1 , and Complex 2 . Table 2 presents the following statistics on the two types of datasets (Simple, Complex) for each annotator (1, 2) : the total number of paraphrasings (P), the total number of useful questions (U), the total number of neutral questions (N ), the total number of more useful than ordered pairs encoded in the dataset, either explicitly or through transitivity, and the Inter-Annotator Agreement (ITA). We compute the ITA as the precision (P) and recall (R) with respect to the more useful than ordered pairs encoded in one annotation (P airs 1 ) relative to the ordered pairs encoded in the other annotation (P airs 2 ).",
"cite_spans": [
{
"start": 1003,
"end": 1006,
"text": "(1,",
"ref_id": "BIBREF0"
},
{
"start": 1007,
"end": 1009,
"text": "2)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 895,
"end": 902,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "P = |P airs 1 \u2229 P airs 2 | P airs 1 R = |P airs 1 \u2229 P airs 2 | P airs 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "The statistics in Table 2 indicate that the second annotator was in general more conservative in tagging questions as paraphrases or useful questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Partially Ordered Datasets for Question Ranking",
"sec_num": "2"
},
{
"text": "An ideal question ranking method would take an arbitrary triplet of questions Q r , Q i and Q j as input, and output an ordering between Q i and Q j with respect to the reference question Q r , i.e. one of Q i \u227b Q j |Q r , Q i = Q j |Q r , or Q j \u227b Q i |Q r . One approach is to design a usefulness function u(Q i , Q r ) that measures how useful question Q i is for the reference question Q r , and define the more useful than (\u227b) relation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "Q i \u227b Q j |Q r \u21d4 u(Q i , Q r ) > u(Q j , Q r )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "If we define I(Q) to be the information need associated with question Q, then u(Q i , Q r ) could be defined as a measure of the relative overlap between I(Q i ) and I(Q r ). Unfortunately, the information need is a concept that, in general, is defined only intensionally and therefore it is difficult to measure. For lack of an operational definition of the information need, we will approximate u(Q i , Q r ) directly as a measure of the similarity between Q i and Q r . The similarity between two questions can be seen as a special case of text-to-text similarity, consequently one possibility is to use a general text-to-text similarity function such as cosine similarity in the vector space model (Baeza-Yates and Ribeiro-Neto, 1999) :",
"cite_spans": [
{
"start": 702,
"end": 738,
"text": "(Baeza-Yates and Ribeiro-Neto, 1999)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "cos(Q i , Q r ) = Q T i Q r Q i Q r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "Here, Q i and Q r denote the corresponding tf\u00d7idf vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "As a measure of question similarity, one major drawback of cosine similarity is that it is oblivious of the meanings of words in each question. This particular problem is illustrated by the three questions below. Q 22 and Q 23 have the same cosine similarity with Q 21 , they are therefore indistinguishable in terms of their usefulness to the reference question Q 21 , even though we expect Q 22 to be more useful than Q 23 (a place that sells hydrangea often sells other types of plants too, possibly including cacti). To alleviate the lexical chasm, we can redefine u(Q i , Q r ) to be the similarity measure proposed by (Mihalcea et al., 2006) as follows:",
"cite_spans": [
{
"start": 624,
"end": 647,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "mcs(Q i , Q r ) = w\u2208{Qi} maxSim(w, Q r ) * idf (w) w\u2208{Qi} idf (w) + w\u2208{Qr} maxSim(w, Q i ) * idf (w) w\u2208{Qr} idf (w)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "Since scaling factors are immaterial for ranking, we have ignored the normalization constant contained in the original measure. For each word w \u2208 Q i , maxSim(w, Q r ) computes the maximum semantic similarity between w and any word w r \u2208 Q r . The similarity scores are weighted by the corresponding idf's, and normalized. A similar score is computed for each word w \u2208 Q r . The score computed by maxSim depends on the actual function used to compute the word-to-word semantic similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "In this paper, we evaluated four of the knowledgebased measures explored in (Mihalcea et al., 2006) : wup (Wu and Palmer, 1994) , res (Resnik, 1995) , lin (Lin, 1998) , and jcn (Jiang and Conrath, 1997) .",
"cite_spans": [
{
"start": 76,
"end": 99,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF16"
},
{
"start": 106,
"end": 127,
"text": "(Wu and Palmer, 1994)",
"ref_id": "BIBREF21"
},
{
"start": 134,
"end": 148,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF19"
},
{
"start": 155,
"end": 166,
"text": "(Lin, 1998)",
"ref_id": "BIBREF15"
},
{
"start": 177,
"end": 202,
"text": "(Jiang and Conrath, 1997)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unsupervised Methods for Question Ranking",
"sec_num": "3"
},
{
"text": "Cosine similarity, henceforth referred as cos, treats questions as bags-of-words. The meta-measure proposed in (Mihalcea et al., 2006) , henceforth called mcs, treats questions as bags-of-concepts. Both cos and mcs ignore the syntactic relations between the words in a question, and therefore may miss important structural information. In the next three sections we describe a set of structural features that we believe are relevant for judging question similarity. These and other types of features will be integrated in an SVM model for ranking, as described later in Section 4.4.",
"cite_spans": [
{
"start": 111,
"end": 134,
"text": "(Mihalcea et al., 2006)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Supervised Learning for Question Ranking",
"sec_num": "4"
},
{
"text": "If we consider the question Q 24 below as reference, question Q 26 will be deemed more useful than Q 25 when using cos or mcs because of the higher relative lexical and conceptual overlap with Q 24 . However, this is contrary to the actual ordering Q 25 \u227b Q 26 |Q 24 , which reflects the fact that Q 25 , which expects the same answer type as Q 24 , should be deemed more useful than Q 26 , which has a different answer type. The analysis above shows the importance of using the answer type when computing the similarity between two questions. However, instead of relying exclusively on a predefined hierarchy of answer types, we identify the question focus of a question, defined as the set of maximal noun phrases in the question that corefer with the expected answer (Bunescu and Huang, 2010a) . Focus nouns such as movies and songs provide more discriminative information than general answer types such as products. We use answer types only for questions such as Q 27 or Q 28 below that lack an explicit question focus. In such cases, an artificial question focus is created from the answer type (e.g. location for Q 27 , or method for Q 28 ).",
"cite_spans": [
{
"start": 770,
"end": 796,
"text": "(Bunescu and Huang, 2010a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "Q 27 Where can I buy a good coffee maker?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "Q 28 How do I make a pizza?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "Let f i and f r be the focus words corresponding to questions Q i and Q r . We introduce a focus feature \u03c6 f , and set its value to be equal with the similarity between the focus words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 f (Q i , Q r ) = wsim(f i , f r )",
"eq_num": "(1)"
}
],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "We use wsim to denote a generic word meaning similarity measure (e.g. wup, res, lin or jcn). When computing the focus feature, the non-focus word \"movie\" in Q 26 will not be compared with the focus word \"movies\" in Q 24 , and therefore Q 26 will have a lower value for this feature than Q 25 , i.e. \u03c6 f (Q 26 , Q 24 ) < \u03c6 f (Q 25 , Q 24 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Focus Words",
"sec_num": "4.1"
},
{
"text": "In addition to the question focus, the main verb of a question can also provide key information in estimating question-to-question similarity. We define the main verb to be the content verb that is highest in the dependency tree of the question, e.g. buy for Q 27 , or make for Q 28 . If the question does not contain a content verb, the main verb is defined to be the highest verb in the dependency tree, as for example are in Q 24 to Q 26 . The utility of a question's main verb in judging its similarity to other questions can be seen more clearly in the questions below, where Q 29 is the reference: The fact that upload, as the main verb of Q 30 , is more semantically related to transfer is essential in deciding that Q 30 \u227b Q 31 |Q 29 , i.e. Q 30 is more useful than Q 31 to Q 29 . Let v i and v r be the main verbs corresponding to questions Q i and Q r . We introduce a main verb feature \u03c6 v as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Main Verbs",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 v (Q i , Q r ) = wsim(v i , v r )",
"eq_num": "(2)"
}
],
"section": "Matching the Main Verbs",
"sec_num": "4.2"
},
{
"text": "If Q 29 is considered as reference question, it is expected that the main verb feature for question Q 30 will have a higher value than the main verb feature for Q 31 , i.e. \u03c6 f (Q 31 , Q 29 ) < \u03c6 f (Q 30 , Q 29 ). ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Main Verbs",
"sec_num": "4.2"
},
{
"text": "The question focus and the main verb are only two of the nodes in the syntactic dependency tree of a question. In general, all the words in a question are important when judging its semantic similarity with another question. We therefore propose a more general feature that exploits the dependency structure of the question and, in doing so, it also considers all the words in the question, like cos and mcs. For any given question we initially ignore the direction of the dependency arcs and change the question dependency tree to be rooted at the focus word, as illustrated in Figure 2 for questions Q 5 and Q 9 . Interrogative patterns such as \"What is\" or \"Are there any\" are automatically eliminated from the dependency trees. We define the dependency tree similarity between two questions Q i and Q r to be a function of similarities wsim",
"cite_spans": [],
"ref_spans": [
{
"start": 579,
"end": 587,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "(v i , v r ) computed between aligned nodes v i \u2208 Q i and v r \u2208 Q r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "The nodes of two dependency trees are aligned through a function MaxMatch(u i .C, u r .C) that takes two sets of children nodes as arguments, one from Q i and one from Q r , and finds the maximum weighted bipartite matching between u i .C and u r .C. Given two children nodes v i \u2208 u i .C and v r \u2208 u r .C, the weight of a potential matching between v i and v r is defined simply as wsim(v i , v r ). MaxMatch(u i .C, u r .C) is furthermore constrained to match only nodes that have compatible part-of-speech tags (e.g. nouns are matched to nouns, verbs are matched to verbs), and children nodes that have the same head-modifier relationship with their parents (i.e. they are both heads, or they are both dependents of their parents). Table 3 shows the recursive algorithm used",
"cite_spans": [],
"ref_spans": [
{
"start": 735,
"end": 742,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "TreeMatch(u i , u r ) [In]: Two dependency tree nodes u i , u r .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "[Out]: A set of node pairs M. for finding a matching between two question dependency trees rooted at the focus words. The initial arguments of the algorithm are the two focus words u i = f i and u r = f r . Thus, the pair (f i , f r ) is the first pair of nodes to be added to the matching M in step 1. In the next step, we compute the maximum weighted matching between the children nodes u i .C and u r .C, and recursively call the matching algorithm on pairs of matched nodes",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "1. set M \u2190 {(u i , u r )} 2. for each (v i , v r ) \u2208 MaxMatch(u i .C, u r .C): 3. set M \u2190 M \u222a TreeMatch(v i , v r ) 4. return M",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "(v i , v r ) from M.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "The algorithm stops when MaxMatch returns an empty matching, which may happen when reaching leaf nodes, or when no pair of children nodes has compatible POS tags, or child-parent dependencies. Figure 2 shows the results of applying the tree matching algorithm on questions Q 5 and Q 9 . Matched nodes share the same index and are shown in circles, whereas unmatched nodes are shown in italics.",
"cite_spans": [],
"ref_spans": [
{
"start": 193,
"end": 201,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "We introduce a new feature \u03c6 t (Q i , Q r ) whose value is defined as the dependency tree similarity between questions Q i and Q r . Once the optimum matching M(Q i , Q r ) between dependency trees has been found, \u03c6 t (Q i , Q r ) is computed as the normalized sum of the similarities between pairs of matched nodes v i and v r , as shown in Equations 3 and 4 below. When computing the similarity between two matched nodes, we factor in the similarities between corresponding pairs of words on the paths f i ; v i , f r ; v r between the focus words f i , f r and the nodes v i , v r , as shown in Equation 5. This has the effect of reducing the importance of words that are farther away from the focus word in the dependency tree.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c6 t (Q i , Q r ) = sim(Q i , Q r ) sim(Q i , Q i )sim(Q r , Q r ) (3) sim(Q i , Q r ) = (vi,vr)\u2208M(Qi,Qr) sim(f i ; v i , f r ; v r ) (4) sim(u 1 ; u n , v 1 ; v n ) = n i=1 wsim(u i , v i )",
"eq_num": "(5)"
}
],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
{
"text": "If the word similarity function is normalized and defined to return 1 for identical words, the normalizer in Equation 3 becomes equivalent with |Q i ||Q r |. Thus, words that are left unmatched implicitly decrease the dependency tree similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matching the Dependency Trees",
"sec_num": "4.3"
},
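Equations 3-5 can be sketched directly. The representation of the matching as a list of focus-to-node path pairs, and the toy `wsim`, are assumptions made for illustration; with a normalized `wsim` the self-similarities reduce to the question lengths, as the paragraph above notes.

```python
import math

# A sketch of Equations 3-5: each matched node pair contributes the
# product of word similarities along the two focus-to-node paths, and
# the total is normalized cosine-style.  Paths and wsim are illustrative.
def wsim(a, b):
    return 1.0 if a == b else 0.0

def path_sim(path_i, path_r):            # Equation 5
    return math.prod(wsim(u, v) for u, v in zip(path_i, path_r))

def tree_sim(matched_paths):             # Equation 4
    return sum(path_sim(pi, pr) for pi, pr in matched_paths)

def phi_t(matched_paths, n_i, n_r):      # Equation 3
    # With a normalized wsim, the self-similarities reduce to the
    # question lengths n_i and n_r (number of dependency tree nodes).
    return tree_sim(matched_paths) / math.sqrt(n_i * n_r)

# Two 3-node questions whose trees match perfectly except for one word.
paths = [(["camp"], ["camp"]),
         (["camp", "good"], ["camp", "good"]),
         (["camp", "FL"], ["camp", "Florida"])]
print(phi_t(paths, 3, 3))
```

Note how the path product implements the distance discounting described above: a node far from the focus contributes only if every word on its path matches well.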
{
"text": "We consider learning a usefulness function u(Q i , Q r ) of the following general, linear form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "u(Q i , Q r ) = w T \u03c6(Q i , Q r )",
"eq_num": "(6)"
}
],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "The vector \u03c6(Q i , Q r ) is defined to contain the following generic features: 5. mcs(Q i , Q r ) = the bag-of-concepts similarity between the two questions, as described in Section 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "Each of the generic features \u03c6 f , \u03c6 v , \u03c6 t , and mcs corresponds to four actual features, one for each possible choice of the word similarity function wsim (i.e. wup, res, lin or jcn). An additional pair of features is targeted at questions containing locations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "6. \u03c6 l (Q i , Q r ) = 1 if both questions contain locations, 0 otherwise. 7. \u03c6 d (Q i , Q r ) = the normalized geographical distance between the locations in Q i and Q r , 0 if \u03c6 l (Q i , Q r ) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
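The distance feature described next can be sketched as follows. The normalization by the Earth's half-circumference (so that antipodal points map to 1) is an assumption on our part; the paper only states that the distance is normalized. Coordinates below are approximate.

```python
import math

# A sketch of the geographical distance feature: spherical distance via
# the haversine formula, normalized to [0, 1] by the half-circumference
# of the Earth.  The paper obtains coordinates from Google Maps.
EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def phi_d(loc1, loc2):
    """Normalized distance feature for two (lat, lon) pairs."""
    return haversine_km(*loc1, *loc2) / (math.pi * EARTH_RADIUS_KM)

tampa, sarasota = (27.95, -82.46), (27.34, -82.53)
print(round(haversine_km(*tampa, *sarasota), 1))  # roughly 68 km apart
```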
{
"text": "Given two location names, we first find their latitude and longitude using Google Maps, and then compute the spherical distance between them using the haversine formula. The corresponding parameters w will be trained on pairs from one of the partially ordered datasets described in Section 2. We use the kernel version of the large-margin ranking approach from (Joachims, 2002) which solves the optimization problem in Figure 3 below. The aim of this formulation is to find a minimize:",
"cite_spans": [
{
"start": 361,
"end": 377,
"text": "(Joachims, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 419,
"end": 427,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "J(w, \u03be) = 1 2 w 2 + C \u03be rij subject to: weight vector w such that 1) the number of ranking constraints u(Q i , Q r ) \u2265 u(Q j , Q r ) from the training data D that are violated is minimized, and 2) the ranking function u(Q i , Q r ) generalizes well beyond the training data. The learned w is a linear combination of the feature vectors \u03c6(Q i , Q r ), which makes it possible to use kernels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
{
"text": "w T \u03c6(Q i , Q r ) \u2212 w T \u03c6(Q j , Q r ) \u2265 1 \u2212 \u03be rij \u03be rij \u2265 0 \u2200Q r , Q i , Q j \u2208 D, Q i \u227b Q j |Q r",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An SVM Model for Ranking Questions",
"sec_num": "4.4"
},
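The way the partial-order annotations feed the learner can be sketched as below: each annotated relation Q i ≻ Q j | Q r yields one pairwise example whose feature vector is the difference φ(Q i , Q r ) − φ(Q j , Q r ), which is what SVM-rank-style tools consume. The function name, the dictionary encoding, and the feature values are all invented for illustration.

```python
# A sketch of how the partial order Q_i > Q_j | Q_r becomes pairwise
# training examples for a ranking SVM: each constraint contributes the
# difference vector phi(Q_i, Q_r) - phi(Q_j, Q_r), which the learner
# must score above the margin.  Feature values here are made up.
def pairwise_examples(order, features):
    """order: list of (better, worse, reference) question ids.
    features: maps (question, reference) -> feature vector."""
    examples = []
    for qi, qj, qr in order:
        fi, fj = features[(qi, qr)], features[(qj, qr)]
        examples.append([a - b for a, b in zip(fi, fj)])
    return examples

features = {("Q9", "Q5"): [0.9, 0.8], ("Q18", "Q5"): [0.4, 0.1]}
order = [("Q9", "Q18", "Q5")]   # Q9 is more useful than Q18 w.r.t. Q5
print(pairwise_examples(order, features))
```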
{
"text": "We use the four question ranking datasets described in Section 2 to evaluate the three similarity measures cos, mcs, and \u03c6 t , as well as the SVM ranking model. We report one set of results for each of the four word similarity measures wup, res, lin or jcn. Each question similarity measure is evaluated in terms of its accuracy on the set of ordered pairs, and the performance is averaged between the two annotators for the Simple and Complex datasets. If Q i \u227b Q j |Q r is a relation specified in the annotation, we consider the tuple Q i , Q j , Q r correctly classified if and only if u(Q i , Q r ) > u(Q j , Q r ), where u is the question similarity measure. We used the SVM light 2 implementation of ranking SVMs, with a cubic kernel and the standard parameters. The SVM ranking model was trained and tested using 10-fold cross-validation, and the overall accuracy was computed by averaging over the 10 folds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
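The evaluation criterion described above reduces to a simple count over annotated tuples; a sketch, with invented utility scores and question ids:

```python
# A sketch of the evaluation: a tuple (Q_i, Q_j, Q_r) with annotation
# Q_i > Q_j | Q_r counts as correct iff u(Q_i, Q_r) > u(Q_j, Q_r).
# The utility scores below are invented for illustration.
def pair_accuracy(ordered_pairs, u):
    correct = sum(1 for qi, qj, qr in ordered_pairs
                  if u(qi, qr) > u(qj, qr))
    return correct / len(ordered_pairs)

scores = {("Q9", "Q5"): 0.8, ("Q18", "Q5"): 0.3,
          ("Q12", "Q5"): 0.6, ("Q20", "Q5"): 0.7}
u = lambda qi, qr: scores[(qi, qr)]
pairs = [("Q9", "Q18", "Q5"),   # correctly ordered by u
         ("Q12", "Q20", "Q5")]  # u gets this one wrong
print(pair_accuracy(pairs, u))  # 0.5
```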
{
"text": "We used the NLTK 3 implementation of the four similarity measures wup, res, lin or jcn. The idf values for each word were computed from frequency counts over the entire Wikipedia. For each question, the focus is identified automatically by an SVM tagger trained on a separate corpus of 2,000 questions manually annotated with focus information (Bunescu and Huang, 2010a) . The SVM tagger uses a combination of lexico-syntactic features and a quadratic kernel to achieve a 93.5% accuracy in a 10-fold cross validation evaluation on the 2,000 questions. The head-modifier dependencies were derived automatically from the syntactic parse tree using the head finding rules from (Collins, 1999) . The syntactic tree is obtained using Spear 4 , a syntactic parser which comes pre-trained on an additional treebank of questions. The main verb of a question is identified deterministically using a breadth first traversal of the dependency tree.",
"cite_spans": [
{
"start": 344,
"end": 370,
"text": "(Bunescu and Huang, 2010a)",
"ref_id": "BIBREF4"
},
{
"start": 674,
"end": 689,
"text": "(Collins, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "The overall accuracy results presented in Table 4 show that the SVM ranking model obtains by far the best performance on both datasets, a substantial 10% higher than cos, which is the best performing unsupervised method. The random baseline -assigning a random similarity value to each pair of questionsresults in 50% accuracy. Even though its use of word senses was expected to lead to superior results, mcs does not perform better than cos on this dataset. Our implementation of mcs did however perform better than cos on the Microsoft paraphrase corpus (Dolan et al., 2004) . One possible reason for this behavior is that mcs seems to be less resilient than cos to differences in question length. Whereas the Microsoft paraphrase corpus was specifically designed such that \"the length of the shorter of the two sentences, in words, is at least 66% that of the longer\" (Dolan and Brockett, 2005) , the question ranking datasets place no constraints on the lengths of the ",
"cite_spans": [
{
"start": 556,
"end": 576,
"text": "(Dolan et al., 2004)",
"ref_id": "BIBREF8"
},
{
"start": 871,
"end": 897,
"text": "(Dolan and Brockett, 2005)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 42,
"end": 49,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Experimental Evaluation",
"sec_num": "5"
},
{
"text": "all questions. However, even though by themselves the meaning aware mcs and the structure-and-meaning aware \u03c6 t do not outperform the bag-of-words cos, they do help in increasing the performance of the SVM ranking model, as can be inferred from the corresponding columns in Table 5 . The table shows the results of ablation experiments in which all but one type of features are used. The results indicate that all types of features are useful, with significant contributions being brought especially by cos and the focus related features \u03c6 f,t . The measures investigated in this paper are all compositional and reduce the similarity computations to word level. The following question patterns illustrate the need to design more complex similarity measures that take into account the context of every word in the question: If we take Q 32 as reference question, the fact that the distance between Los Angeles and Anaheim is smaller than the distance between Vista and Anaheim leads the ranking system to rank Q 33 as more useful than Q 34 with respect to Q 32 , which is the expected result. The preposition \"around\" from the city context in the first pattern is a good indicator that proximity relations are relevant in this case. When the same three cities are used for instantiating the other two patterns, it can be seen that the proximity relations are no longer as relevant for judging the relative usefulness of questions.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 281,
"text": "Table 5",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": null
},
{
"text": "We plan to integrate context dependent word similarity measures into a more robust question utility function. We also plan to make the dependency tree matching more flexible in order to account for paraphrase patterns that may differ in their syntactic structure. The questions that are posted on community QA sites often contain spelling or grammatical errors. Consequently, we will work on interfacing the question ranking system with a separate module aimed at fixing orthographic and grammatical errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Future Work",
"sec_num": "6"
},
{
"text": "The question rephrasing subtask has spawned a diverse set of approaches. (Hermjakob et al., 2002) derive a set of phrasal patterns for question reformulation by generalizing surface patterns acquired automatically from a large corpus of web documents. The focus of the work in (Tomuro, 2003) is on deriving reformulation patterns for the interrogative part of a question. In (Jeon et al., 2005) , word translation probabilities are trained on pairs of semantically similar questions that are automatically extracted from an FAQ archive, and then used in a language model that retrieves question reformulations. (Jijkoun and de Rijke, 2005) describe an FAQ question retrieval system in which weighted combinations of similarity functions corresponding to questions, existing answers, FAQ titles and pages are computed using a vector space model. (Zhao et al., 2007) exploit the Encarta logs to automatically extract clusters containing question paraphrases and further train a perceptron to recognize question paraphrases inside each cluster based on a combination of lexical, syntactic and semantic similarity features. More recently, (Bernhard and Gurevych, 2008) evaluated various string similarity measures and vector space based similarity measures on the task of retrieving question paraphrases from the WikiAnswers repository. The aim of the question search task presented in (Duan et al., 2008) is to return questions that are semantically equivalent or close to the queried question, and is therefore similar to our question ranking task. Their approach is evaluated on a dataset in which questions are categorized either as relevant or irrelevant. Our formulation of question ranking is more general, and in particular subsumes the annotation of binary question categories such as relevant vs. irrelevant, or paraphrases vs. non-paraphrases. 
Moreover, we are able to exploit the annotated utility relations as supervision in a learning for ranking approach, whereas (Duan et al., 2008) use the annotated dataset to tune the 3 parameters of a mostly unsupervised approach. The question ranking task was first formulated in (Bunescu and Huang, 2010b) , where an initial version of the dataset was also described. In this paper, we introduce 4 versions of the dataset, a more general meaning and structure aware similarity measure, and a supervised model for ranking that substantially outperforms the previously proposed utility measures.",
"cite_spans": [
{
"start": 73,
"end": 97,
"text": "(Hermjakob et al., 2002)",
"ref_id": "BIBREF10"
},
{
"start": 277,
"end": 291,
"text": "(Tomuro, 2003)",
"ref_id": "BIBREF20"
},
{
"start": 375,
"end": 394,
"text": "(Jeon et al., 2005)",
"ref_id": "BIBREF11"
},
{
"start": 611,
"end": 639,
"text": "(Jijkoun and de Rijke, 2005)",
"ref_id": "BIBREF13"
},
{
"start": 845,
"end": 864,
"text": "(Zhao et al., 2007)",
"ref_id": "BIBREF22"
},
{
"start": 1135,
"end": 1164,
"text": "(Bernhard and Gurevych, 2008)",
"ref_id": "BIBREF3"
},
{
"start": 1382,
"end": 1401,
"text": "(Duan et al., 2008)",
"ref_id": "BIBREF9"
},
{
"start": 1975,
"end": 1994,
"text": "(Duan et al., 2008)",
"ref_id": "BIBREF9"
},
{
"start": 2131,
"end": 2157,
"text": "(Bunescu and Huang, 2010b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "7"
},
{
"text": "We presented a supervised learning approach to the question ranking task in which previously known questions are ordered based on their relative utility with respect to a new, reference question. We created four versions of a dataset of 60 groups of questions 5 , each annotated with a partial order relation reflecting the relative utility of questions inside each group. An SVM ranking model was trained 5 The dataset will be made publicly available. on the dataset and evaluated together with a set of simpler, unsupervised question-to-question similarity models. Experimental results demonstrate the importance of using structure and meaning aware features when computing the relative usefulness of questions.",
"cite_spans": [
{
"start": 406,
"end": 407,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
},
{
"text": "svmlight.joachims.org 3 www.nltk.org 4 www.surdeanu.name/mihai/spear",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "\u03c6 f (Q i , Q r ) = the semantic similarity between focus words",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u03c6 f (Q i , Q r ) = the semantic similarity between focus words, as described in Section 4.1.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Q r ) = the semantic similarity between main verbs",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u03c6 v (Q i , Q r ) = the semantic similarity between main verbs, as described in Section 4.2.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modern Information Retrieval",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Baeza",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Yates",
"suffix": ""
},
{
"first": "Berthier",
"middle": [],
"last": "Ribeiro-Neto",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press, New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Answering learners' questions by retrieving question paraphrases from social Q&A sites",
"authors": [
{
"first": "Delphine",
"middle": [],
"last": "Bernhard",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2008,
"venue": "EANL '08: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications",
"volume": "",
"issue": "",
"pages": "44--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delphine Bernhard and Iryna Gurevych. 2008. Answer- ing learners' questions by retrieving question para- phrases from social Q&A sites. In EANL '08: Pro- ceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications, pages 44- 52, Morristown, NJ, USA. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Towards a general model of answer typing: Question focus identification",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Yunfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of The 11th International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "231--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Yunfeng Huang. 2010a. Towards a general model of answer typing: Question focus iden- tification. In Proceedings of The 11th International Conference on Intelligent Text Processing and Com- putational Linguistics (CICLing 2010), RCS Volume, pages 231-242.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A utilitydriven approach to question ranking in social QA",
"authors": [
{
"first": "Razvan",
"middle": [],
"last": "Bunescu",
"suffix": ""
},
{
"first": "Yunfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of The 23rd International Conference on Computational Linguistics (COLING 2010)",
"volume": "",
"issue": "",
"pages": "125--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Razvan Bunescu and Yunfeng Huang. 2010b. A utility- driven approach to question ranking in social QA. In Proceedings of The 23rd International Conference on Computational Linguistics (COLING 2010), pages 125-133.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Head-driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatically constructing a corpus of sentential paraphrases",
"authors": [
{
"first": "B",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "William B. Dolan and Chris Brockett. 2005. Automat- ically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), pages 9-16.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised construction of large paraphrase corpora: Exploiting assively parallel news sources",
"authors": [
{
"first": "Bill",
"middle": [],
"last": "Dolan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of The 20th International Conference on Computational Linguistics (COLING'04)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting assively parallel news sources. In Proceed- ings of The 20th International Conference on Compu- tational Linguistics (COLING'04), page 350.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Searching questions by identifying question topic and question focus",
"authors": [
{
"first": "Huizhong",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Yunbo",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "156--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huizhong Duan, Yunbo Cao, Chin-Yew Lin, and Yong Yu. 2008. Searching questions by identifying question topic and question focus. In Proceedings of ACL-08: HLT, pages 156-164, Columbus, Ohio, June.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Natural language based reformulation resource and web exploitation for question answering",
"authors": [
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Abdessamad",
"middle": [],
"last": "Echihabi",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of TREC-2002",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural language based reformulation resource and web exploitation for question answering. In Proceedings of TREC-2002.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Finding similar questions in large question and answer archives",
"authors": [
{
"first": "W",
"middle": [
"Bruce"
],
"last": "Jiwoon Jeon",
"suffix": ""
},
{
"first": "Joon Ho",
"middle": [],
"last": "Croft",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM'05)",
"volume": "",
"issue": "",
"pages": "84--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and an- swer archives. In Proceedings of the 14th ACM in- ternational conference on Information and knowledge management (CIKM'05), pages 84-90, New York, NY, USA. ACM.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semantic similarity based on corpus statistics and lexical taxonomy",
"authors": [
{
"first": "J",
"middle": [
"J"
],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [
"W"
],
"last": "Conrath",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the International Conference on Research in Computational Linguistics",
"volume": "",
"issue": "",
"pages": "19--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.J. Jiang and D.W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Re- search in Computational Linguistics, pages 19-33.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Retrieving answers from frequently asked questions pages on the Web",
"authors": [
{
"first": "Valentin",
"middle": [],
"last": "Jijkoun",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Maarten De Rijke",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM'05)",
"volume": "",
"issue": "",
"pages": "76--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Valentin Jijkoun and Maarten de Rijke. 2005. Retrieving answers from frequently asked questions pages on the Web. In Proceedings of the 14th ACM international conference on Information and knowledge manage- ment (CIKM'05), pages 76-83, New York, NY, USA. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Optimizing search engines using clickthrough data",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2002)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Joachims. 2002. Optimizing search engines us- ing clickthrough data. In Proceedings of the Eighth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining (KDD-2002), Ed- monton, Canada.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "An information-theoretic definition of similarity",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Fifteenth International Conference on Machine Learning (ICML '98)",
"volume": "",
"issue": "",
"pages": "296--304",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. An information-theoretic defini- tion of similarity. In Proceedings of the Fifteenth In- ternational Conference on Machine Learning (ICML '98), pages 296-304, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Corpus-based and knowledge-based measures of text semantic similarity",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Courtney",
"middle": [],
"last": "Corley",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st national conference on Artificial intelligence (AAAI'06)",
"volume": "",
"issue": "",
"pages": "775--780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea, Courtney Corley, and Carlo Strappar- ava. 2006. Corpus-based and knowledge-based mea- sures of text semantic similarity. In Proceedings of the 21st national conference on Artificial intelligence (AAAI'06), pages 775-780. AAAI Press.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Performance issues and error analysis in an open-domain question answering system",
"authors": [
{
"first": "Dan",
"middle": [
"I"
],
"last": "Moldovan",
"suffix": ""
},
{
"first": "Marius",
"middle": [],
"last": "Pasca",
"suffix": ""
},
{
"first": "Sanda",
"middle": [
"M"
],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan I. Moldovan, Marius Pasca, Sanda M. Harabagiu, and Mihai Surdeanu. 2002. Performance issues and error analysis in an open-domain question answering system. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 33-40, Philadelphia, PA, July.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Open-domain questionanswering",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Prager",
"suffix": ""
}
],
"year": 2006,
"venue": "Foundations and Trends in Information Retrieval",
"volume": "1",
"issue": "2",
"pages": "91--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John M. Prager. 2006. Open-domain question- answering. Foundations and Trends in Information Retrieval, 1(2):91-231.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "IJCAI'95: Proceedings of the 14th international joint conference on Artificial intelligence",
"volume": "",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to eval- uate semantic similarity in a taxonomy. In IJCAI'95: Proceedings of the 14th international joint conference on Artificial intelligence, pages 448-453, San Fran- cisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Interrogative reformulation patterns and acquisition of question paraphrases",
"authors": [
{
"first": "Noriko",
"middle": [],
"last": "Tomuro",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Second International Workshop on Paraphrasing",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noriko Tomuro. 2003. Interrogative reformulation pat- terns and acquisition of question paraphrases. In Pro- ceedings of the Second International Workshop on Paraphrasing, pages 33-40, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Verbs semantics and lexical selection",
"authors": [
{
"first": "Zhibiao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "133--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhibiao Wu and Martha Palmer. 1994. Verbs seman- tics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computational Lin- guistics, pages 133-138, Morristown, NJ, USA. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning question paraphrases for QA from Encarta logs",
"authors": [
{
"first": "Shiqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 20th international joint conference on Artifical intelligence (IJCAI'07)",
"volume": "",
"issue": "",
"pages": "1795--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shiqi Zhao, Ming Zhou, and Ting Liu. 2007. Learn- ing question paraphrases for QA from Encarta logs. In Proceedings of the 20th international joint conference on Artifical intelligence (IJCAI'07), pages 1795-1800, San Francisco, CA, USA. Morgan Kaufmann Publish- ers Inc.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "What should I feed my turtle? What do I feed my pet turtle?"
},
"FIGREF1": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "What kind of fish should I feed my turtle? Q 3 What do you feed a turtle that is the size of a quarter? What kind of food should I feed a turtle dove?"
},
"FIGREF2": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "The 20 categories represented in the dataset."
},
"FIGREF3": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "Matched dependency trees."
},
"FIGREF4": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "SVM ranking optimization problem."
},
"TABREF0": {
"content": "<table><tr><td>USEFUL QUESTIONS (U )</td></tr><tr><td>Q 8 Does anyone know a good art summer camp to go to in FL?</td></tr><tr><td>Q 9 Are there any good artsy camps for girls in FL?</td></tr><tr><td>Q 10 What are some summer camps for like singing in Florida?</td></tr><tr><td>Q 11 What is a good cooking summer camp in FL?</td></tr><tr><td>Q 12 Do you know of any summer camps in Tampa, FL?</td></tr><tr><td>Q 13 What is a good summer camp in Sarasota FL for a 12 year old?</td></tr><tr><td>Q 14 Can you please help me find a surfing summer camp for beginners in Treasure Coast, FL?</td></tr><tr><td>Q 15 Are there any acting summer camps and/or workshops in the Orlando, FL area?</td></tr><tr><td>Q 16 Does anyone know any volleyball camps in Miramar, FL?</td></tr><tr><td>Q 17 Does anyone know about any cool science camps in Miami?</td></tr><tr><td>Q 18 What's a good summer camp you've ever been to?</td></tr><tr><td>NEUTRAL QUESTIONS (N )</td></tr><tr><td>Q 19 What's a good summer camp in Canada?</td></tr><tr><td>Q 20 What's the summer like in Florida?</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "What's a nice summer camp to go to in Florida? PARAPHRASING QUESTIONS (P ) Q 6 What camps are good for a vacation during the summer in FL? Q 7 What summer camps in FL do you recommend?"
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "A question group."
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Dataset statistics."
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "21 Where can I buy a hydrangea? Q 22 Where can I buy a cactus? Q 23 Where can I buy an iPad?"
},
"TABREF7": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF8": {
"content": "<table><tr><td>Question</td><td>wup</td><td/><td>res</td><td/><td>lin</td><td/><td>jcn</td><td/><td/></tr><tr><td>Dataset</td><td>cos mcs</td><td>\u03c6 t</td><td>mcs</td><td>\u03c6 t</td><td>mcs</td><td>\u03c6 t</td><td>mcs</td><td>\u03c6 t</td><td>SVM</td></tr><tr><td>Simple</td><td colspan=\"9\">73.7 69.1 69.4 71.3 71.8 70.8 69.8 71.9 71.7 82.1</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Complex 72.6 64.1 69.6 66.0 71.5 66.9 69.1 69.4 71.0 82.5"
},
"TABREF9": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Pairwise accuracy results."
},
"TABREF11": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Ablation results."
},
"TABREF12": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Where can I find a job around City? P 2 What are some famous people from City? P 3 What is the population of City? Below are three instantiations of the first question pattern: Q 32 Where can I find a job around Anaheim, CA? Q 33 Where can I find a job around Los Angeles? Q 34 Where can I find a job around Vista, CA?"
}
}
}
}