| { |
| "paper_id": "C10-1015", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:55:00.623427Z" |
| }, |
| "title": "A Utility-Driven Approach to Question Ranking in Social QA", |
| "authors": [ |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Bunescu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "EECS Ohio University", |
| "location": {} |
| }, |
| "email": "bunescu@ohio.edu" |
| }, |
| { |
| "first": "Yunfeng", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We generalize the task of finding question paraphrases in a question repository to a novel formulation in which known questions are ranked based on their utility to a new, reference question. We manually annotate a dataset of 60 groups of questions with a partial order relation reflecting the relative utility of questions inside each group, and use it to evaluate meaning and structure aware utility functions. Experimental evaluation demonstrates the importance of using structural information in estimating the relative usefulness of questions, holding the promise of increased usability for social QA sites.", |
| "pdf_parse": { |
| "paper_id": "C10-1015", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We generalize the task of finding question paraphrases in a question repository to a novel formulation in which known questions are ranked based on their utility to a new, reference question. We manually annotate a dataset of 60 groups of questions with a partial order relation reflecting the relative utility of questions inside each group, and use it to evaluate meaning and structure aware utility functions. Experimental evaluation demonstrates the importance of using structural information in estimating the relative usefulness of questions, holding the promise of increased usability for social QA sites.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Open domain Question Answering (QA) is one of the most complex and challenging tasks in natural language processing. While building on ideas from Information Retrieval (IR), question answering is generally seen as a more difficult task due to constraints on both the input representation (natural language questions vs. keywordbased queries) and the form of the output (focused answers vs. entire documents). Recently, community-driven QA sites such as Yahoo! Answers and WikiAnswers have established a new approach to question answering in which the burden of dealing with the inherent complexity of open domain QA is shifted from the computer system to volunteer contributors. The computer is no longer required to perform a deep linguistic analysis of questions and generate corresponding answers, and instead acts as a mediator be-tween users submitting questions and volunteers providing the answers. In most implementations of community-driven QA, the mediator system has a well defined strategy for enticing volunteers to post high quality answers on the website. In general, the overall objective is to minimize the response time and maximize the accuracy of the answers, measures that are highly correlated with user satisfaction. For any submitted question, one useful strategy is to search the QA repository for similar questions that have already been answered, and provide the corresponding ranked list of answers, if such a question is found. The success of this approach depends on the definition and implementation of the question-to-question similarity function. In the simplest solution, the system searches for previously answered questions based on exact string matching with the reference question. Alternatively, sites such as WikiAnswers allow the users to mark questions they think are rephrasings (\"alternate wordings\", or paraphrases) of existing questions. 
These question clusters are then taken into account when performing exact string matching, therefore increasing the likelihood of finding previously answered questions that are semantically equivalent to the reference question. Like the original question answering task, the solution to question rephrasing is also based on volunteer contributions. In order to lessen the amount of work required from the contributors, an alternative solution is to build a system that automatically finds rephrasings of questions, especially since question rephrasing seems to be computationally less demanding than question answering. The question rephrasing subtask has spawned a diverse set of approaches. (Hermjakob et al., 2002) derive a set of phrasal patterns for question reformulation by generalizing surface patterns acquired automatically from a large corpus of web documents. The focus of the work in (Tomuro, 2003) is on deriving reformulation patterns for the interrogative part of a question. In (Jeon et al., 2005) , word translation probabilities are trained on pairs of semantically similar questions that are automatically extracted from an FAQ archive, and then used in a language model that retrieves question reformulations. (Jijkoun and de Rijke, 2005) describe an FAQ question retrieval system in which weighted combinations of similarity functions corresponding to questions, existing answers, FAQ titles and pages are computed using a vector space model. (Zhao et al., 2007) exploit the Encarta logs to automatically extract clusters containing question paraphrases and further train a perceptron to recognize question paraphrases inside each cluster based on a combination of lexical, syntactic and semantic similarity features. More recently, (Bernhard and Gurevych, 2008) evaluated various string similarity measures and vector space based similarity measures on the task of retrieving question paraphrases from the WikiAnswers repository.", |
| "cite_spans": [ |
| { |
| "start": 2577, |
| "end": 2602, |
| "text": "(Herm-jakob et al., 2002)", |
| "ref_id": null |
| }, |
| { |
| "start": 2782, |
| "end": 2796, |
| "text": "(Tomuro, 2003)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 2880, |
| "end": 2899, |
| "text": "(Jeon et al., 2005)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 3116, |
| "end": 3144, |
| "text": "(Jijkoun and de Rijke, 2005)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 3350, |
| "end": 3369, |
| "text": "(Zhao et al., 2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 3640, |
| "end": 3669, |
| "text": "(Bernhard and Gurevych, 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "According to previous work in this domain, a question is considered a rephrasing of a reference question Q 0 if it uses an alternate wording to express an identical information need. For example, Q 0 and Q 1 below may be considered rephrasings of each other, and consequently they are expected to have the same answer. Community-driven QA sites are bound to face situations in which paraphrasings of a new question cannot be found in the QA repository. We believe that computing a ranked list of existing questions that partially address the original information need could be useful to the user, at least until other users volunteer to give an exact answer to the original, unanswered reference question. For example, in the absence of any additional information about the reference question Q 0 , the expected answers to questions Q 2 and Q 3 above may be seen as partially overlapping in information content with the expected answer for the reference question. An answer to question Q 4 , on the other hand, is less likely to benefit the user, even though it has a significant lexical overlap with the reference question. In this paper, we propose a generalization of the question paraphrasing problem to a question ranking problem, in which questions are ranked in a partial order based on the relative information overlap between their expected answers and the expected answer of the reference question. The expectation in this approach is that the user who submits a reference question will find the answers of the highly ranked question to be more useful than the answers associated with the lower ranked questions. For the reference question Q 0 above, the system is expected to produce a partial order in which Q 1 is ranked higher than Q 2 , Q 3 and Q 4 , whereas Q 2 and Q 3 are ranked higher than Q 4 . In Section 2 we give further details on the question ranking task and describe a dataset of questions that have been manually annotated with partial order information. 
Section 3 presents a set of initial approaches to question ranking, followed by their experimental evaluation in Section 4. The paper ends with a discussion of future work and conclusions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In order to enable the evaluation of question ranking approaches, we created a dataset of 60 groups of questions. Each group consists of a reference question (e.g. Q 0 above) that is associated with a partially ordered set of questions (e.g. Q 1 to Q 4 above). and other online repositories that have a high cosine similarity with the reference question. Due to the significant lexical overlap between the questions, this is a rather difficult dataset, especially for ranking methods that rely exclusively on bagof-words measures. Inside each group, the questions are manually annotated with a partial order relation, according to their utility with respect to the reference question. We shall use the notation Q i \u227b Q j |Q r to encode the fact that question Q i is more useful than question Q j with respect to the reference question Q r . Similarly, Q i = Q j will be used to express the fact that questions Q i and Q j are reformulations of each other (the reformulation relation is independent of the reference question). The partial ordering among the questions Q 0 to Q 4 above can therefore be expressed concisely as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Q 0 = Q 1 , Q 1 \u227b Q 2 |Q 0 , Q 1 \u227b Q 3 |Q 0 , Q 2 \u227b Q 4 |Q 0 , Q 3 \u227b Q 4 |Q 0 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Note that we do not explicitly annotate the relation Q 1 \u227b Q 4 |Q 0 , since it can be inferred based on the transitivity of the more useful than relation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Q 1 \u227b Q 2 |Q 0 \u2227 Q 2 \u227b Q 4 |Q 0 \u21d2 Q 1 \u227b Q 4 |Q 0 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
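The inferred pairs can be computed mechanically from the explicitly annotated ones. A minimal sketch in Python (the integer pair encoding and the example set are illustrative, not part of the paper's tooling):

```python
def transitive_closure(pairs):
    """Close a set of (i, j) pairs, each meaning Qi is more useful than Qj
    with respect to the reference question, under transitivity."""
    closure = set(pairs)
    added = True
    while added:
        added = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    added = True
    return closure

# the explicitly annotated relations for Q0's group (indices stand for Q1..Q4)
annotated = {(1, 2), (1, 3), (2, 4), (3, 4)}
inferred = transitive_closure(annotated)  # adds (1, 4)
```

Only the missing pair (1, 4) is added here, mirroring the Q 1 ≻ Q 4 |Q 0 inference in the text.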
| { |
| "text": "Also note that no relation is specified between Q 2 and Q 3 , and similarly no relation can be inferred between these two questions. This reflects our belief that, in the absence of any additional information regarding the user or the \"turtle\" referenced in Q 0 , we cannot compare questions Q 2 and Q 3 in terms of their usefulness with respect to Q 0 . Table 1 shows another reference question Q 5 from our dataset, together with its annotated group of questions Q 6 to Q 20 . In order to make the annotation process easier and reproducible, we divide it into two levels of annotation. During the first annotation stage (L 1 ), each question group is partitioned manually into 3 subgroups of questions:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 355, |
| "end": 362, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 P is the set of paraphrasing questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 U is the set of useful questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 N is the set of neutral questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A question is deemed useful if its expected answer may overlap in information content with the expected answer of the reference question. The expected answer of a neutral question, on the other hand, should be irrelevant with respect to the reference question. Let Q r be the reference question, Q p \u2208 P a paraphrasing question, Q u \u2208 U a useful question, and Q n \u2208 N a neutral question. Then the following relations are assumed to hold among these questions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "1. Q p \u227b Q u |Q r : a paraphrasing question is more useful than a useful question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "2. Q u \u227b Q n |Q r : a useful question is more useful than a neutral question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We also assume that, by transitivity, the following ternary relations also hold:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Q p \u227b Q n |Q r , i.e. a", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "paraphrasing question is more useful than a neutral question. Furthermore, if Q p 1 , Q p 2 \u2208 P are two paraphrasing questions, this implies", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Q p 1 = Q p 2 |Q r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "For the vast majority of questions, the first annotation stage is straightforward and noncontroversial. In the second annotation stage (L 2 ), we perform a finer annotation of relations between questions in the middle group U. Table 1 shows two such relations (using indentation): Q 8 \u227b Q 9 |Q 5 and Q 8 \u227b Q 10 |Q 5 . Question Q 8 would have been a rephrasing of the reference question, were it not for the noun \"art\" modifying the focus noun phrase \"summer camp\". Therefore, the information content of the answer to Q 8 is strictly subsumed in the information content associated with the answer to Q 5 . Similarly, in Q 9 the focus noun phrase is further specialized through the prepositional phrase \"for girls\". Therefore, (an answer to) Q 9 is less useful to Q 5 than (an answer to) Q 8 , i.e. Q 8 \u227b Q 9 |Q 5 . Furthermore, the focus \"art summer camp\" in Q 8 conceptually subsumes the focus \"summer camps for singing\" in Q 10 , therefore Q 8 \u227b Q 10 |Q 5 . Table 2 below presents the following statistics on the annotated dataset: the number of reference questions (Q r ), the total number of paraphrasings (P), the total number of useful questions (U), the total number of neutral questions (N ), and the total number of more useful than ordered pairs encoded in the dataset, either explicitly or through transitivity, in the two annotation levels L 1 and L 2 . 60 177 847 427 7,378 7,639 Table 2 : Dataset statistics.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 227, |
| "end": 234, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 959, |
| "end": 966, |
| "text": "Table 2", |
| "ref_id": null |
| }, |
| { |
| "start": 1365, |
| "end": 1401, |
| "text": "60 177 847 427 7,378 7,639 Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Q r P U N L 1 L 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Partially Ordered Dataset for Question Ranking", |
| "sec_num": "2" |
| }, |
| { |
| "text": "An ideal question ranking method would take an arbitrary triplet of questions Q r , Q i and Q j as input, and output an ordering between Q i and Q j with respect to the reference question Q r , i.e. one of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Q i \u227b Q j |Q r , Q i = Q j |Q r , or Q j \u227b Q i |Q r .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "One approach is to design a usefulness function u(Q i , Q r ) that measures how useful question Q i is for the reference question Q r , and define the more useful than (\u227b) relation as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Q i \u227b Q j |Q r \u21d4 u(Q i , Q r ) > u(Q j , Q r )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "If we define I(Q) to be the information need associated with question Q, then u(Q i , Q r ) could be defined as a measure of the relative overlap between I(Q i ) and I(Q r ). Unfortunately, the information need is a concept that, in general, is defined only intensionally and therefore it is difficult to measure. For lack of an operational definition of the information need, we will approximate u(Q i , Q r ) directly as a measure of the similarity between Q i and Q r . The similarity between two questions can be seen as a special case of text-to-text similarity, consequently one possibility is to use a general text-to-text similarity function such as cosine similarity in the vector space model (Baeza-Yates and Ribeiro-Neto, 1999):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "cos(Q i , Q r ) = Q T i Q r Q i Q r", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Here, Q i and Q r denote the corresponding tf\u00d7idf vectors. As a measure of question-to-question similarity, cosine has two major drawbacks:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "1. As an exclusively lexical measure, it is oblivious to the meanings of words in each question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
| { |
| "text": "2. Questions are treated as bags-of-words, and thus important structural information is missed.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Ranking Methods", |
| "sec_num": "3" |
| }, |
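As a concrete baseline, the tf\u00d7idf cosine above can be sketched as follows (whitespace tokenization and a default idf of 1.0 for unseen words are simplifying assumptions, not the paper's exact setup):

```python
import math
from collections import Counter

def tfidf(question, idf):
    """Map each token to tf * idf (whitespace tokenization is an assumption)."""
    counts = Counter(question.lower().split())
    return {w: c * idf.get(w, 1.0) for w, c in counts.items()}

def cos_sim(qi, qr, idf):
    """Cosine similarity between the tf*idf vectors of two questions."""
    vi, vr = tfidf(qi, idf), tfidf(qr, idf)
    dot = sum(x * vr.get(w, 0.0) for w, x in vi.items())
    ni = math.sqrt(sum(x * x for x in vi.values()))
    nr = math.sqrt(sum(x * x for x in vr.values()))
    return dot / (ni * nr) if ni and nr else 0.0
```

Identical questions score 1.0 and lexically disjoint questions score 0.0, which is exactly the blindness to word meaning that the two drawbacks above describe.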
| { |
| "text": "The three questions below illustrate the first problem associated with cosine similarity. Q 22 and Q 23 have the same cosine similarity with Q 21 , they are therefore indistinguishable in terms of their usefulness to the reference question Q 21 , even though we expect Q 22 to be more useful than Q 23 (a place that sells hydrangea often sells other types of plants too, possibly including cacti). To alleviate the lexical chasm, we can redefine u(Q i , Q r ) to be the similarity measure proposed by (Mihalcea et al., 2006) as follows:", |
| "cite_spans": [ |
| { |
| "start": 501, |
| "end": 524, |
| "text": "(Mihalcea et al., 2006)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Q", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "mcs(Qi, Qr) = X w\u2208{Q i } (maxSim(w, Qr) * idf (w)) X w\u2208{Q i } idf (w) + X w\u2208{Qr } (maxSim(w, Qi) * idf (w)) X w\u2208{Qr } idf (w)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
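The mcs measure above can be sketched with a pluggable word-similarity lookup (the wsim table and the uniform default idf of 1.0 are illustrative assumptions; the normalization constant is dropped, as in the text):

```python
def mcs(qi, qr, idf, wsim):
    """Symmetric mcs score: each word takes its best semantic match in the
    other question, weighted by idf and normalized per side."""
    def max_sim(w, other):
        # exact matches score 1.0; otherwise fall back to the wsim table
        return max(1.0 if w == v else wsim.get((w, v), 0.0) for v in other)

    def side(a, b):
        num = sum(max_sim(w, b) * idf.get(w, 1.0) for w in a)
        den = sum(idf.get(w, 1.0) for w in a)
        return num / den

    a, b = qi.lower().split(), qr.lower().split()
    return side(a, b) + side(b, a)
```

With an empty wsim table this degenerates to weighted word overlap; a WordNet-based wsim (wup, res, lin, or jcn) recovers the measure described in the text.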
| { |
| "text": "Since scaling factors are immaterial for ranking, we have ignored the normalization constant contained in the original measure. For each word w \u2208 Q i , maxSim(w, Q r ) computes the maximum semantic similarity between w and any word w r \u2208 Q r . The similarity scores are then weighted by the corresponding idf's, and normalized. A similar score is computed for each word w \u2208 Q r . The score computed by maxSim depends on the actual function used to compute the word-to-word semantic similarity. In this paper, we evaluated four of the knowledge-based measures explored in (Mihalcea et al., 2006) : wup (Wu and Palmer, 1994) , res (Resnik, 1995) , lin (Lin, 1998) , and jcn (Jiang and Conrath, 1997) . Since all these measures are defined on pairs of WordNet concepts, their analogues on word pairs (w i , w r ) are computed by selecting pairs of WordNet synsets (c i , c r ) such that w i belongs to concept c i , w r belongs to concept c r , and (c i , c r ) maximizes the similarity function. The measure introduced in (Wu and Palmer, 1994) finds the least common subsumer (LCS) of the two input concepts in the WordNet hierarchy, and computes the ratio between its depth and the sum of the depths of the two concepts:", |
| "cite_spans": [ |
| { |
| "start": 571, |
| "end": 594, |
| "text": "(Mihalcea et al., 2006)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 601, |
| "end": 622, |
| "text": "(Wu and Palmer, 1994)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 629, |
| "end": 643, |
| "text": "(Resnik, 1995)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 650, |
| "end": 661, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 672, |
| "end": 697, |
| "text": "(Jiang and Conrath, 1997)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1020, |
| "end": 1041, |
| "text": "(Wu and Palmer, 1994)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "wup(ci, cr) = 2 * depth(lcs(ci, cr)) depth(ci) + depth(cr)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
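A toy illustration of wup (the IS-A hierarchy below is a hypothetical stand-in for WordNet; depth is counted from the root):

```python
# hypothetical toy IS-A taxonomy: concept -> parent (None marks the root)
PARENT = {"entity": None, "plant": "entity",
          "cactus": "plant", "hydrangea": "plant", "animal": "entity"}

def path_to_root(c):
    """Concepts from c up to the root, inclusive."""
    path = []
    while c is not None:
        path.append(c)
        c = PARENT[c]
    return path

def depth(c):
    return len(path_to_root(c))

def lcs(ci, cr):
    """Least common subsumer: the deepest ancestor shared by both concepts."""
    ancestors = set(path_to_root(cr))
    for node in path_to_root(ci):  # walks upward from ci, so deepest wins
        if node in ancestors:
            return node

def wup(ci, cr):
    return 2 * depth(lcs(ci, cr)) / (depth(ci) + depth(cr))
```

In this toy hierarchy wup("cactus", "hydrangea") = 2·2 / (3+3) = 2/3, and identical concepts score 1.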
| { |
| "text": "Resnik's measure is based on the Information Content (IC) of a concept c defined as the negative log probability \u2212 log P (c) of finding that concept in a large corpus:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "res(ci, cr) = IC(lcs(ci, cr))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Lin's similarity measure can be seen as a normalized version of Resnik's information content:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "lin(ci, cr) = 2 * IC(lcs(ci, cr)) IC(ci) + IC(cr)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Jiang & Conrath's measure is closely related to lin and is computed as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "jcn(ci, cr) = [IC(ci) + IC(cr) \u2212 2 * IC(lcs(ci, cr))] \u22121", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Meaning Aware Measures", |
| "sec_num": "3.1" |
| }, |
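The three IC-based measures can be sketched over hypothetical corpus counts (a concept's count is assumed to include its descendants, so probability grows toward the root; the lcs is passed in explicitly rather than computed from a taxonomy):

```python
import math

# hypothetical corpus counts; a concept's count includes its descendants
COUNT = {"entity": 1000, "plant": 200, "cactus": 20, "hydrangea": 30}
TOTAL = COUNT["entity"]

def ic(c):
    """Information content: negative log probability of the concept."""
    return -math.log(COUNT[c] / TOTAL)

def res(lcs):
    """Resnik: IC of the least common subsumer."""
    return ic(lcs)

def lin(ci, cr, lcs):
    """Lin: Resnik's IC normalized by the concepts' own IC."""
    return 2 * ic(lcs) / (ic(ci) + ic(cr))

def jcn(ci, cr, lcs):
    """Jiang & Conrath: inverse of the IC distance (infinite for identity)."""
    d = ic(ci) + ic(cr) - 2 * ic(lcs)
    return float("inf") if d == 0 else 1 / d
```

With "plant" as the lcs of "cactus" and "hydrangea", res is \u2212log(0.2), lin lands in (0, 1), and jcn diverges for identical concepts, matching the formulas above.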
| { |
| "text": "Cosine similarity, henceforth referred as cos, treats questions as bags-of-words. The metameasure proposed in (Mihalcea et al., 2006) , henceforth called mcs, treats questions as bagsof-concepts. Consequently, both cos and mcs may miss important structural information. If we consider the question Q 24 below as reference, question Q 26 will be deemed more useful than Q 25 when using cos or mcs because of the higher relative lexical and conceptual overlap with Q 24 . However, this is contrary to the actual ordering Q 25 \u227b Q 26 |Q 24 , which reflects that fact that Q 25 , which expects the same answer type as Q 24 , should be deemed more useful than Q 26 , which has a different answer type. The analysis above shows the importance of using the answer type when computing the similarity between two questions. However, instead of relying exclusively on a predefined hierarchy of answer types, we have decided to identify the question focus of a question, defined as the set of maximal noun phrases in the question that corefer with the expected answer. Focus nouns such as movies and songs provide more discriminative information than general answer types such as products. We use answer types only for questions such as Q 27 or Q 28 below that lack an explicit question focus. In such cases, an artificial question focus is created from the answer type (e.g. location for Q 27 , or method for Q 28 ) and added to the set of question words.", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 133, |
| "text": "(Mihalcea et al., 2006)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q 27 Where can I buy a good coffee maker?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q 28 How do I make a pizza?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Let qsim be a general bag-of-words question similarity measure (e.g. cos or mcs). Furthermore, let wsim by a generic word meaning similarity measure (e.g. wup, res, lin or jcn). The equation below describes a modification of qsim that makes it aware of the questions focus:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "qsim f (Qi, Qr) = wsim(fi, fr) * qsim(Qi \u2212{fi}, Qr \u2212{fr})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Here, Q i and Q r refer both to the questions and their sets of words, while f i and f r stand for the corresponding focus words. We define qsim to return 1 if one of its arguments is an empty set, i.e. qsim(\u2205, ) = qsim( , \u2205) = 1. The new similarity measure qsim f multiplies the semantic similarity between the two focus words with the bag-of-words similarity between the remaining words in the two questions. Consequently, the word \"movie\" in Q 26 will not be compared with the word \"movies\" in Q 24 , and therefore Q 26 will receive a lower utility score than Q 25 . In addition to the question focus, the main verb of a question can also provide key information in estimating question-to-question similarity. We define the main verb to be the content verb that is highest in the dependency tree of the question, e.g. buy for Q 27 , or make for Q 28 . If the question does not contain a content verb, the main verb is defined to be the highest verb in the dependency tree, as for example are in Q 24 to Q 26 . The utility of a question's main verb in judging its similarity to other questions can be seen more clearly in the questions below, where Q 29 is the reference:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q 29 How can I transfer music from iTunes to my iPod?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q 30 How can I upload music to my iPod?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Q 31 How can I play music in iTunes?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The fact that upload, as the main verb of Q 30 , is more semantically related to transfer (upload is a hyponym of transfer in WordNet) is essential in deciding that Q 30 \u227b Q 31 |Q 29 , i.e. Q 30 is more useful than Q 31 to Q 29 . Like the focus word, the main verb can be incorporated in the question similarity function as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "qsim f v (Qi, Qr) = wsim(fi, fr) * wsim(vi, vr) * qsim(Qi \u2212{fi, vi}, Qr \u2212{fr, vr})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The new measure qsim f v takes into account both the focus words and the main verbs when estimating the semantic similarity between questions. When decomposing the questions into focus words, main verbs and the remaining words, we have chosen to multiply the corresponding similarities instead of, for example, summing them. Consequently, a close to zero score in each of them would drive the entire similarity to zero. This reflects the belief that question similarity is sensitive to each component of a question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Structure Aware Measures", |
| "sec_num": "3.2" |
| }, |
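A sketch of qsim f v with stand-in components (the exact-match wsim and word-overlap qsim below are illustrative toys, not the paper's measures; the empty-rest convention qsim(\u2205, \u00b7) = 1 is preserved):

```python
def qsim_fv(qi, qr, fi, fr, vi, vr, wsim, qsim):
    """Focus- and verb-aware similarity: multiply the focus similarity,
    the main-verb similarity, and the bag similarity of the rest."""
    rest_i = [w for w in qi if w not in (fi, vi)]
    rest_r = [w for w in qr if w not in (fr, vr)]
    rest = 1.0 if not rest_i or not rest_r else qsim(rest_i, rest_r)
    return wsim(fi, fr) * wsim(vi, vr) * rest

# toy stand-ins (assumptions): near-exact word similarity, Jaccard bag overlap
wsim = lambda a, b: 1.0 if a == b else 0.5
qsim = lambda a, b: len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
```

Because the three factors are multiplied, a mismatched focus or main verb drags the whole score down even when the remaining words overlap heavily, which is the design choice argued for above.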
| { |
| "text": "We use the question ranking dataset described in Section 2 to evaluate the two similarity measures cos and mcs, as well as their structured versions cos f , cos f v , mcs f , and mcs f v . We report one set of results for each of the four word similarity measures wup, res, lin or jcn. Each question similarity measure is evaluated in terms of its accuracy on the set of ordered pairs for each of the two annotation levels described in Section 2. Thus, for the first annotation level (L 1 ) , we evaluate only over the set of relations defined across the three Question Word similarity (wsim) similarity ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "If Qi \u227b Qj | Qr is a relation specified in the annotation, we consider the tuple (Qi, Qj, Qr) correctly classified if and only if u(Qi, Qr) > u(Qj, Qr),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "where u is the question similarity measure (Section 3). For the second annotation level (L 2 ), we also consider the relations annotated between useful questions inside the group U.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
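The accuracy computation over annotated tuples can be sketched as follows. The utility function u below is a hypothetical token-overlap stand-in for the cos and mcs measures that are actually evaluated.

```python
# Sketch of the pairwise accuracy metric: an annotated relation Qi > Qj | Qr
# counts as correctly classified iff u(Qi, Qr) > u(Qj, Qr).

def u(q_a, q_b):
    """Toy utility: Jaccard overlap between lowercased token sets."""
    a, b = set(q_a.lower().split()), set(q_b.lower().split())
    return len(a & b) / len(a | b)

def ranking_accuracy(relations, utility):
    """relations: iterable of (Qi, Qj, Qr), meaning Qi should outrank Qj for Qr."""
    relations = list(relations)
    correct = sum(1 for qi, qj, qr in relations
                  if utility(qi, qr) > utility(qj, qr))
    return correct / len(relations)
```

Plugging in the actual cos/mcs measures and the annotated relation sets for L1 and L2 would yield accuracies of the kind reported in Table 3.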
| { |
| "text": "We used the NLTK implementation of the four word similarity measures wup, res, lin, and jcn. The idf value for each word was computed from frequency counts over all of Wikipedia. For each question, the focus word is identified automatically by an SVM tagger trained on a separate corpus of 2,000 questions manually annotated with focus information. The SVM tagger uses a combination of lexico-syntactic features and a quadratic kernel to achieve 93.5% accuracy in a 10-fold cross-validation evaluation on the 2,000 questions. The main verb of a question is identified deterministically using a breadth-first traversal of the dependency tree.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
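The deterministic main-verb step can be sketched as a breadth-first search over the dependency tree that returns the verb closest to the root. The tree encoding (a child map plus a POS tag per token) is a hypothetical simplification, not the paper's actual parser interface.

```python
from collections import deque

# Sketch: breadth-first traversal of a dependency tree, returning the first
# token whose POS tag marks it as a verb (i.e., the verb nearest the root).

def main_verb(root, children, pos):
    """root: head token; children: token -> list of dependents; pos: token -> tag."""
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if pos[node].startswith("VB"):
            return node
        queue.extend(children.get(node, []))
    return None
```

For a copular question whose syntactic head is a noun, the traversal still surfaces the auxiliary/copula as the main verb, since it is the first verb reached from the root.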
| { |
| "text": "The overall accuracy results presented in Table 3 show that using the focus word improves performance across all 8 combinations of question and word similarity measures. For the cosine similarity, the best-performing system uses the focus words and Resnik's similarity function to obtain a 3.4% increase in accuracy. For the meaning-aware similarity mcs, the best-performing system uses the focus words, the main verb, and Lin's word similarity to achieve a 4.1% increase in accuracy. The improvement from accounting for focus words is consistent, whereas adding the main verb improves performance only for mcs, and only by a small margin. The second level of annotation adds 261 relations to the dataset, some of them more difficult to annotate than the relations in the first level. Nevertheless, the performance either remains the same (somewhat expected, given the relatively small number of additional relations) or is marginally better. The random baseline, which assigns a random similarity value to each pair of questions, results in 50% accuracy. A somewhat unexpected result is that mcs does not perform better than cos on this dataset. After analyzing the results in more detail, we noticed that mcs seems to be less resilient than cos to variations in question length. The Microsoft paraphrase corpus was specifically designed such that \"the length of the shorter of the two sentences, in words, is at least 66% that of the longer\" (Dolan and Brockett, 2005), whereas in our dataset the two questions in a pair can have significantly different lengths.", |
| "cite_spans": [ |
| { |
| "start": 1498, |
| "end": 1524, |
| "text": "(Dolan and Brockett, 2005)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 49, |
| "text": "Table 3", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The questions in each of the 60 groups have a high degree of lexical overlap, which makes the dataset especially difficult. In this context, we believe the results are encouraging. We expect to obtain further improvements in accuracy by allowing relations between all the words in a question to influence the overall similarity measure. For example, question Q19 has the same focus word as the reference question Q5 (repeated below), yet the difference between the prepositional modifiers of the focus word makes it a neutral question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Q5: What's a good summer camp to go to in FL? Q19: What's a good summer camp in Canada?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Some of the questions in our dataset illustrate the need to design a word similarity function specifically tailored to reflect how words change the relative usefulness of a question. In the set of questions below, to decide that Q33 and Q34 are more useful than Q36 for the reference question Q32, an ideal question ranker needs to know that the \"Mayflower Hotel\" and the \"Queensboro Bridge\" are in the proximity of \"Midtown Manhattan\", and that proximity relations are relevant when asking for directions. A coarse measure of proximity can be obtained for the pair (\"Manhattan\", \"Queensboro Bridge\") by following the meronymy links connecting the two entities in WordNet. However, a different strategy needs to be devised for entities such as \"Mayflower Hotel\", \"JFK\", or \"La Guardia\", which are not covered in WordNet. Finally, to realize why question Q35 is useful, one needs to know that, once directions for getting by train from location X to location Y are known, it normally suffices to reverse the list of stops to obtain directions for getting from Y back to X.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Evaluation", |
| "sec_num": "4" |
| }, |
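One way to operationalize the proximity idea is a short path search over part-of links. The toy graph below is purely illustrative (it does not reflect WordNet's actual meronym entries), and the distance threshold for "nearby" is an assumption.

```python
from collections import deque

# Sketch: coarse proximity between locations as the hop count over part-of
# (meronymy) links. The PART_OF table is an illustrative toy, not WordNet data.

PART_OF = {
    "Queensboro Bridge": "Manhattan",
    "Midtown Manhattan": "Manhattan",
    "Manhattan": "New York City",
    "JFK": "New York City",
}

def meronym_distance(a, b, part_of, max_hops=4):
    """Undirected hop count between a and b in the part-of graph, or None."""
    adj = {}  # build an undirected adjacency list from the part-of pairs
    for child, parent in part_of.items():
        adj.setdefault(child, set()).add(parent)
        adj.setdefault(parent, set()).add(child)
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        if d < max_hops:
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return None
```

Entities absent from the graph (e.g., "Mayflower Hotel") come back as unreachable, which is exactly why a different strategy is needed for names not covered in WordNet.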
| { |
| "text": "We plan to integrate the entire dependency structure of the question in the overall similarity measure, possibly by defining kernels between questions in a maximum margin model for ranking.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We also plan to extend the word similarity functions to better reflect the types of relations that are relevant when measuring question utility, such as proximity relations between locations. Furthermore, we intend to take advantage of databases of interrogative paraphrases and paraphrase patterns that were created in previous research on question reformulation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We presented a novel question ranking task in which previously known questions are ordered based on their relative utility with respect to a new, reference question. We created a dataset of 60 groups of questions annotated with a partial order relation reflecting the relative utility of questions inside each group, and used it to evaluate the ranking performance of several meaning and structure aware utility functions. Experimental results demonstrate the importance of using structural information in judging the relative usefulness of questions. We believe that this new perspective on ranking questions has the potential to significantly improve the usability of social QA sites.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "http://www.nltk.org", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Our implementation of mcs did perform better than cos on the Microsoft dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The dataset will be made publicly available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We would like to thank the anonymous reviewers for their helpful suggestions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Modern Information Retrieval", |
| "authors": [ |
| { |
| "first": "Ricardo", |
| "middle": [], |
| "last": "Baeza-Yates", |
| "suffix": "" |
| }, |
| { |
| "first": "Berthier", |
| "middle": [], |
| "last": "Ribeiro-Neto", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baeza-Yates, Ricardo and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press, New York.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Answering learners' questions by retrieving question paraphrases from social Q&A sites", |
| "authors": [ |
| { |
| "first": "Delphine", |
| "middle": [], |
| "last": "Bernhard", |
| "suffix": "" |
| }, |
| { |
| "first": "Iryna", |
| "middle": [], |
| "last": "Gurevych", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "EANL '08: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "44--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernhard, Delphine and Iryna Gurevych. 2008. An- swering learners' questions by retrieving question paraphrases from social Q&A sites. In EANL '08: Proceedings of the Third Workshop on Innovative Use of NLP for Building Educational Applications, pages 44-52, Morristown, NJ, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Automatically constructing a corpus of sentential paraphrases", |
| "authors": [ |
| { |
| "first": "William", |
| "middle": [ |
| "B" |
| ], |
| "last": "Dolan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Brockett", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the Third International Workshop on Paraphrasing (IWP2005)", |
| "volume": "", |
| "issue": "", |
| "pages": "9--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dolan, William B. and Chris Brockett. 2005. Auto- matically constructing a corpus of sentential para- phrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), pages 9-16.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Natural language based reformulation resource and web exploitation for question answering", |
| "authors": [ |
| { |
| "first": "Ulf", |
| "middle": [], |
| "last": "Hermjakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Abdessamad", |
| "middle": [], |
| "last": "Echihabi", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of TREC-2002", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hermjakob, Ulf, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural language based reformula- tion resource and web exploitation for question an- swering. In Proceedings of TREC-2002.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Finding similar questions in large question and answer archives", |
| "authors": [ |
| { |
| "first": "Jiwoon", |
| "middle": [], |
| "last": "Jeon", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "Bruce" |
| ], |
| "last": "Croft", |
| "suffix": "" |
| }, |
| { |
| "first": "Joon Ho", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "84--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeon, Jiwoon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and an- swer archives. In Proceedings of the 14th ACM in- ternational conference on Information and knowl- edge management (CIKM'05), pages 84-90, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Semantic similarity based on corpus statistics and lexical taxonomy", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "J" |
| ], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "W" |
| ], |
| "last": "Conrath", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Proceedings of the International Conference on Research in Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "19--33", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiang, J.J. and D.W. Conrath. 1997. Semantic similar- ity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, pages 19- 33.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Retrieving answers from frequently asked questions pages on the Web", |
| "authors": [ |
| { |
| "first": "Valentin", |
| "middle": [], |
| "last": "Jijkoun", |
| "suffix": "" |
| }, |
| { |
| "first": "Maarten", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM'05)", |
| "volume": "", |
| "issue": "", |
| "pages": "76--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jijkoun, Valentin and Maarten de Rijke. 2005. Re- trieving answers from frequently asked questions pages on the Web. In Proceedings of the 14th ACM international conference on Information and knowl- edge management (CIKM'05), pages 76-83, New York, NY, USA. ACM.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "An information-theoretic definition of similarity", |
| "authors": [ |
| { |
| "first": "Dekang", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Fifteenth International Conference on Machine Learning (ICML '98)", |
| "volume": "", |
| "issue": "", |
| "pages": "296--304", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, Dekang. 1998. An information-theoretic def- inition of similarity. In Proceedings of the Fif- teenth International Conference on Machine Learn- ing (ICML '98), pages 296-304, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Corpus-based and knowledge-based measures of text semantic similarity", |
| "authors": [ |
| { |
| "first": "Rada", |
| "middle": [], |
| "last": "Mihalcea", |
| "suffix": "" |
| }, |
| { |
| "first": "Courtney", |
| "middle": [], |
| "last": "Corley", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlo", |
| "middle": [], |
| "last": "Strapparava", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st national conference on Artificial intelligence (AAAI'06)", |
| "volume": "", |
| "issue": "", |
| "pages": "775--780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mihalcea, Rada, Courtney Corley, and Carlo Strappa- rava. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In Proceed- ings of the 21st national conference on Artificial in- telligence (AAAI'06), pages 775-780. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Using information content to evaluate semantic similarity in a taxonomy", |
| "authors": [ |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "IJ-CAI'95: Proceedings of the 14th international joint conference on Artificial intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "448--453", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Resnik, Philip. 1995. Using information content to evaluate semantic similarity in a taxonomy. In IJ- CAI'95: Proceedings of the 14th international joint conference on Artificial intelligence, pages 448- 453, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Interrogative reformulation patterns and acquisition of question paraphrases", |
| "authors": [ |
| { |
| "first": "Noriko", |
| "middle": [], |
| "last": "Tomuro", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Second International Workshop on Paraphrasing", |
| "volume": "", |
| "issue": "", |
| "pages": "33--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomuro, Noriko. 2003. Interrogative reformulation patterns and acquisition of question paraphrases. In Proceedings of the Second International Workshop on Paraphrasing, pages 33-40, Morristown, NJ, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Verbs semantics and lexical selection", |
| "authors": [ |
| { |
| "first": "Zhibiao", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proceedings of the 32nd annual meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "133--138", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, Zhibiao and Martha Palmer. 1994. Verbs se- mantics and lexical selection. In Proceedings of the 32nd annual meeting on Association for Computa- tional Linguistics, pages 133-138, Morristown, NJ, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Learning question paraphrases for QA from Encarta logs", |
| "authors": [ |
| { |
| "first": "Shiqi", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 20th international joint conference on Artifical intelligence (IJCAI'07)", |
| "volume": "", |
| "issue": "", |
| "pages": "1795--1800", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhao, Shiqi, Ming Zhou, and Ting Liu. 2007. Learn- ing question paraphrases for QA from Encarta logs. In Proceedings of the 20th international joint con- ference on Artifical intelligence (IJCAI'07), pages 1795-1800, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "What should I feed my turtle? Q 1 What do I feed my pet turtle?" |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td>The 60 reference questions have been</td></tr><tr><td>selected to represent a diverse set of question cat-</td></tr><tr><td>egories from Yahoo! Answers. For each refer-</td></tr><tr><td>ence questions, its corresponding partially ordered</td></tr><tr><td>set is created from questions in Yahoo! Answers</td></tr></table>", |
| "text": "What camps are good for a vacation during the summer in FL? Q 7 What summer camps in FL do you recommend? USEFUL QUESTIONS (U ) Q 8 Does anyone know a good art summer camp to go to in FL? Q 9 Are there any good artsy camps for girls in FL? Q 10 What are some summer camps for like singing in Florida? Q 11 What is a good cooking summer camp in FL? Q 12 Do you know of any summer camps in Tampa, FL? Q 13 What is a good summer camp in Sarasota FL for a 12 year old? Q 14 Can you please help me find a surfing summer camp for beginners in Treasure Coast, FL? Q 15 Are there any acting summer camps and/or workshops in the Orlando, FL area? Q 16 Does anyone know any volleyball camps in Miramar, FL? Q 17 Does anyone know about any cool science camps in Miami? Q 18 What's a good summer camp you've ever been to? NEUTRAL QUESTIONS (N ) Q 19 What's a good summer camp in Canada? Q 20 What's the summer like in Florida?", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "text": "24 What are some good thriller movies? Q 25 What are some thriller movies with happy ending?", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "text": "Accuracy results, with and without meaning and structure information.sets R, U, and N . If Q i \u227b Q j |Q r is a relation specified in the annotation, we consider the tuple Q i , Q j , Q r correctly classified if and only if u(", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "text": "Q 32 What is the best way to get to Midtown Manhattan from JFK? Q 33 What's the best way from JFK to Mayflower Hotel? Q 34 What's the best way from JFK to Queensboro Bridge? Q 35 How do I get from Manhattan to JFK airport by train? Q 36 What is the best way to get to LaGuardia from JFK?", |
| "html": null, |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |