| { |
| "paper_id": "P05-1027", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:37:22.735622Z" |
| }, |
| "title": "Question Answering as Question-Biased Term Extraction: A New Approach toward Multilingual QA", |
| "authors": [ |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "ATR Spoken Language Communication Research Laboratories", |
| "institution": "", |
| "location": { |
| "addrLine": "2-2-2 Hikaridai, Seika-cho, Soraku-gun", |
| "postCode": "619-0288", |
| "settlement": "Kyoto", |
| "country": "Japan" |
| } |
| }, |
| "email": "yutaka.sasaki@atr.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper regards Question Answering (QA) as Question-Biased Term Extraction (QBTE). This new QBTE approach liberates QA systems from the heavy burden imposed by question types (or answer types). In conventional approaches, a QA system analyzes a given question and determines the question type, and then it selects answers from among answer candidates that match the question type. Consequently, the output of a QA system is restricted by the design of the question types. The QBTE directly extracts answers as terms biased by the question. To confirm the feasibility of our QBTE approach, we conducted experiments on the CRL QA Data based on 10-fold cross validation, using Maximum Entropy Models (MEMs) as an ML technique. Experimental results showed that the trained system achieved 0.36 in MRR and 0.47 in Top5 accuracy.", |
| "pdf_parse": { |
| "paper_id": "P05-1027", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper regards Question Answering (QA) as Question-Biased Term Extraction (QBTE). This new QBTE approach liberates QA systems from the heavy burden imposed by question types (or answer types). In conventional approaches, a QA system analyzes a given question and determines the question type, and then it selects answers from among answer candidates that match the question type. Consequently, the output of a QA system is restricted by the design of the question types. The QBTE directly extracts answers as terms biased by the question. To confirm the feasibility of our QBTE approach, we conducted experiments on the CRL QA Data based on 10-fold cross validation, using Maximum Entropy Models (MEMs) as an ML technique. Experimental results showed that the trained system achieved 0.36 in MRR and 0.47 in Top5 accuracy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The conventional Question Answering (QA) architecture is a cascade of the following building blocks:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Question Analyzer analyzes a question sentence and identifies the question types (or answer types). Document Retriever retrieves documents related to the question from a large-scale document set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Answer Candidate Extractor extracts answer candidates that match the question types from the retrieved documents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Answer Selector ranks the answer candidates according to the syntactic and semantic conformity of each answer with the question and its context in the document.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Typically, question types consist of named entities, e.g., PERSON, DATE, and ORGANIZATION, numerical expressions, e.g., LENGTH, WEIGHT, SPEED, and class names, e.g., FLOWER, BIRD, and FOOD. The question type is also used for selecting answer candidates. For example, if the question type of a given question is PERSON, the answer candidate extractor lists only person names that are tagged as the named entity PERSON.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The conventional QA architecture has a drawback in that the question-type system restricts the range of questions that can be answered by the system. It is thus problematic for QA system developers to carefully design and build an answer candidate extractor that works well in conjunction with the question-type system. This problem is particularly difficult when the task is to develop a multilingual QA system to handle languages that are unfamiliar to the developer. Developing high-quality tools that can extract named entities, numerical expressions, and class names for each foreign language is very costly and time-consuming.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Recently, some pioneering studies have investigated approaches to automatically construct QA components from scratch by applying machine learning techniques to training data (Ittycheriah et al., 2001a) (Ittycheriah et al., 2001b) (Ng et al., 2001) (Pasca and Harabagiu) (Suzuki et al., 2002) (Zukerman and Horvitz, 2001) (Sasaki et al., 2004). These approaches still suffer from the problem of preparing an adequate amount of training data specifically designed for a particular QA system because each QA system uses its own question-type system. It is very typical in the course of system development to redesign the question-type system in order to improve system performance. This inevitably leads to revision of a large-scale training dataset, which requires a heavy workload. For example, assume that you have to develop a Chinese or Greek QA system and have 10,000 question-answer pairs. You have to manually classify the questions according to your own question-type system. In addition, you have to annotate the tags of the question types in large-scale Chinese or Greek documents. If you wanted to redesign the question type ORGANIZATION into three categories, COMPANY, SCHOOL, and OTHER ORGANIZATION, then the ORGANIZATION tags in the annotated document set would need to be manually revisited and revised.", |
| "cite_spans": [ |
| { |
| "start": 174, |
| "end": 201, |
| "text": "(Ittycheriah et al., 2001a)", |
| "ref_id": null |
| }, |
| { |
| "start": 202, |
| "end": 228, |
| "text": "(Ittycheriah et al., 2001b", |
| "ref_id": null |
| }, |
| { |
| "start": 229, |
| "end": 246, |
| "text": ")(Ng et al., 2001", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 271, |
| "end": 292, |
| "text": "(Suzuki et al., 2002)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 293, |
| "end": 320, |
| "text": "(Zukerman and Horvitz, 2001", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 323, |
| "end": 344, |
| "text": "(Sasaki et al., 2004)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To solve this problem, this paper regards Question Answering as Question-Biased Term Extraction (QBTE). This new QBTE approach liberates QA systems from the heavy burden imposed by question types.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Since directly extracting answers using only features of questions, correct answers, and their contexts in documents, without relying on question types, is a challenging as well as a very complex and sensitive problem, we have to investigate the feasibility of this approach: how well can answer candidates be extracted, and how well can they be ranked?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In response, this paper employs the machine learning technique Maximum Entropy Models (MEMs) to extract answers to a question from documents based on question features, document features, and their combined features. Experimental results show the performance of a QA system that applies MEMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Document Set: Japanese newspaper articles of The Mainichi Newspaper published in 1995.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Question/Answer Set: We used the CRL QA Data (Sekine et al., 2002). This dataset comprises 2,000 Japanese questions with correct answers as well as question types and IDs of articles that contain the answers. Each question is categorized as one of 115 hierarchically classified question types.", |
| "cite_spans": [ |
| { |
| "start": 46, |
| "end": 67, |
| "text": "(Sekine et al., 2002)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The document set is used not only in the training phase but also in the execution phase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Although the CRL QA Data contains question types, information about question types is not used for training. This is because more than 60% of the question types have fewer than 10 example questions (Table 1). This means it is very unlikely that we can train a QA system that can handle this 60% due to data sparseness. Only for the purpose of analyzing experimental results in this paper do we refer to the question types of the dataset.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 208, |
| "end": 216, |
| "text": "(Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training Data", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "This section briefly introduces the machine learning technique Maximum Entropy Models and describes how to apply MEMs to QA tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning with Maximum Entropy Models", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Let X be a set of input symbols and Y be a set of class labels. A sample (x, y) is a pair of input x = {x_1, ..., x_m} (x_i \u2208 X) and output y \u2208 Y.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "The Maximum Entropy Principle (Berger et al., 1996) is to find a model p* = argmax_{p \u2208 C} H(p), which means a probability model p(y|x) that maximizes entropy H(p).", |
| "cite_spans": [ |
| { |
| "start": 30, |
| "end": 51, |
| "text": "(Berger et al., 1996)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "Given data (x^(1), y^(1)), ..., (x^(n), y^(n)), let \u222a_k (x^(k) \u00d7 {y^(k)}) = {\u27e8x\u0303_1, \u1ef9_1\u27e9, ..., \u27e8x\u0303_i, \u1ef9_i\u27e9, ..., \u27e8x\u0303_m, \u1ef9_m\u27e9}. This means that we enumerate all pairs of an input symbol and a label and represent them as \u27e8x\u0303_i, \u1ef9_i\u27e9 using index i (1 \u2264 i \u2264 m).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "In this paper, feature function f_i is defined as follows: f_i(x, y) = 1 if x\u0303_i \u2208 x and y = \u1ef9_i, and f_i(x, y) = 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "We use all combinations of input symbols in x and class labels for features (or the feature function) of MEMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
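The enumeration of (input symbol, label) pairs into binary feature functions can be sketched as follows; this is a toy illustration under my own naming (`make_feature_functions` and the sample feature strings are not from the paper):

```python
def make_feature_functions(samples):
    """Enumerate every (input symbol, label) pair seen in the data.

    Pair i defines the binary feature function of Section 2.2.1:
    f_i(x, y) = 1 iff symbol_i is in x and y equals label_i.
    """
    pairs = sorted({(sym, y) for x, y in samples for sym in x})

    def f(i, x, y):
        sym, lab = pairs[i]
        return 1 if sym in x and y == lab else 0

    return pairs, f
```

With IOB-labeled word samples, every observed symbol-label combination becomes one indicator feature, mirroring "all combinations of input symbols in x and class labels."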
| { |
| "text": "With Lagrangian \u03bb = (\u03bb_1, ..., \u03bb_m), the dual function of H is \u03a8(\u03bb) = \u2212\u03a3_x p\u0303(x) log Z_\u03bb(x) + \u03a3_i \u03bb_i p\u0303(f_i), where Z_\u03bb(x) = \u03a3_y exp(\u03a3_i \u03bb_i f_i(x, y)), and p\u0303(x) and p\u0303(f_i) denote the empirical distributions of x and f_i in the training data. The dual optimization problem \u03bb* = argmax_\u03bb \u03a8(\u03bb) can be efficiently solved as an optimization problem without constraints. As a result, probabilistic model p* = p_{\u03bb*} is obtained as p_{\u03bb*}(y|x) = (1/Z_\u03bb(x)) exp(\u03a3_i \u03bb_i f_i(x, y)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Maximum Entropy Models", |
| "sec_num": "2.2.1" |
| }, |
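Given such indicator features, the resulting model p_{λ*}(y|x) is an ordinary log-linear (softmax) model. A minimal sketch, with toy weights and the `pairs` representation from the feature definition above treated as given (the function name is mine):

```python
import math

def p_lambda(lam, pairs, x, y, labels):
    """p_lambda(y|x) = exp(sum_i lam_i f_i(x, y)) / Z_lambda(x),
    where f_i is the indicator defined by the (symbol, label) list `pairs`."""
    def score(label):
        # sum of weights of features that fire for (x, label)
        return sum(l for l, (sym, lab) in zip(lam, pairs)
                   if sym in x and lab == label)

    z = sum(math.exp(score(lab)) for lab in labels)  # Z_lambda(x)
    return math.exp(score(y)) / z
```

The probabilities over all labels sum to one, as required of a conditional model.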
| { |
| "text": "Question analysis is a classification problem that classifies questions into different question types. Answer candidate extraction is also a classification problem that classifies words into answer types (i.e., question types), such as PERSON, DATE, and AWARD. Answer selection is likewise a classification problem that classifies answer candidates as positive or negative. Therefore, we can apply machine learning techniques to generate classifiers that work as components of a QA system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applying MEMs to QA", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "In the QBTE approach, these three components, i.e., question analysis, answer candidate extraction, and answer selection, are integrated into one classifier.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applying MEMs to QA", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "To successfully carry out this goal, we have to extract features that reflect properties of correct answers of a question in the context of articles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Applying MEMs to QA", |
| "sec_num": "2.2.2" |
| }, |
| { |
| "text": "This section presents a framework, QBTE Model 1, to construct a QA system from question-answer pairs based on the QBTE Approach. When a user gives a question, the framework finds answers to the question in the following two steps. Document Retrieval retrieves the top N articles or paragraphs from a large-scale corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
| { |
| "text": "QBTE creates input data by combining the question features and document features, evaluates the input data, and outputs the top M answers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Since this paper focuses on QBTE, we use a simple idf-based method for document retrieval.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
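The paper states only that a simple idf method is used, so the exact scoring is an assumption here. One plausible minimal sketch scores each document by the summed idf of the question words it contains:

```python
import math
from collections import Counter

def idf_retrieve(question_words, docs, top_n=1):
    """Rank token-list documents by summed idf of shared question words.

    NOTE: this scoring function is an assumption; the paper says only
    that 'a simple idf method' is used for document retrieval.
    """
    n_docs = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency

    def idf(w):
        return math.log(n_docs / df[w]) if df[w] else 0.0

    ranked = sorted(docs,
                    key=lambda d: -sum(idf(w) for w in set(question_words) & set(d)))
    return ranked[:top_n]
```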
| { |
| "text": "Let w_i be words and w_1, w_2, ..., w_m be a document. Question Answering in the QBTE Model 1 involves directly classifying each word w_i in the document as an answer word or a non-answer word. That is, given input x^(i) for w_i, its class label is selected from among {I, O, B} as follows: I: if the word is in the middle of the answer word sequence;", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
| { |
| "text": "O: if the word is not in the answer word sequence; B: if the word is the start word of the answer word sequence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The class labeling system in our experiment is IOB2 (Sang, 2000), which is a variation of IOB (Ramshaw and Marcus, 1995). Input x^(i) of each word is defined as described below.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 64, |
| "text": "(Sang, 2000)", |
| "ref_id": null |
| }, |
| { |
| "start": 95, |
| "end": 121, |
| "text": "(Ramshaw and Marcus, 1995)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "QBTE Model 1", |
| "sec_num": "3" |
| }, |
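For the training phase, the correct answer must be projected onto the document as IOB2 labels. A sketch of that annotation step (my own helper, assuming the answer is matched as an exact word sequence):

```python
def iob2_labels(doc_words, answer_words):
    """Label doc_words in IOB2: B at the start of each exact occurrence of
    answer_words, I inside it, O everywhere else."""
    labels = ["O"] * len(doc_words)
    n = len(answer_words)
    i = 0
    while i <= len(doc_words) - n:
        if doc_words[i:i + n] == answer_words:
            labels[i] = "B"
            for j in range(i + 1, i + n):
                labels[j] = "I"
            i += n  # skip past the matched span
        else:
            i += 1
    return labels
```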
| { |
| "text": "This paper employs three groups of features as features of input data:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Question Feature Set (QF);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Document Feature Set (DF);", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 Combined Feature Set (CF), i.e., combinations of question and document features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A Question Feature Set (QF) is a set of features extracted only from the question sentence. This feature set belongs to the question sentence as a whole.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Feature Set (QF)", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "The following are elements of a Question Feature Set: qw: an enumeration of the word n-grams (1 \u2264 n \u2264 N) in the question; qm1: POS1 of words in the question; qm2: POS2 of words in the question; qm3: POS3 of words in the question; qm4: POS4 of words in the question. POS1-POS4 indicate parts-of-speech (POS) of the IPA POS tag set generated by the Japanese morphological analyzer ChaSen. For example, \"Tokyo\" is analyzed as POS1 = noun, POS2 = proper noun, POS3 = location, and POS4 = general. This paper used up to 4-grams for qw.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Question Feature Set (QF)", |
| "sec_num": "3.1.1" |
| }, |
| { |
| "text": "Document Feature Set (DF) is a feature set extracted only from a document. Using only DF corresponds to unbiased Term Extraction (TE).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Feature Set (DF)", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "For each word w_i, the following features are extracted: dw-k, ..., dw+0, ..., dw+k: the k preceding and following words of the word w_i, e.g., {dw-1: w_{i-1}, dw+0: w_i, dw+1: w_{i+1}} if k = 1; dm1-k, ..., dm1+0, ..., dm1+k: POS1 of the k preceding and following words of the word w_i; dm2-k, ..., dm2+0, ..., dm2+k: POS2 of the k preceding and following words of the word w_i; dm3-k, ..., dm3+0, ..., dm3+k: POS3 of the k preceding and following words of the word w_i; dm4-k, ..., dm4+0, ..., dm4+k: POS4 of the k preceding and following words of the word w_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Feature Set (DF)", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "In this paper, k is set to 3 so that the window size is 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Document Feature Set (DF)", |
| "sec_num": "3.1.2" |
| }, |
| { |
| "text": "Combined Feature Set (CF) contains features created by combining question features and document features. QBTE Model 1 employs CF. For each word w_i, the following features are created: cw-k, ..., cw+0, ..., cw+k: matching results (true/false) between each of the dw-k, ..., dw+k features and any qw feature, e.g., cw-1:true if dw-1:President and qw:President; cm1-k, ..., cm1+0, ..., cm1+k: matching results (true/false) between each of the dm1-k, ..., dm1+k features and any POS1 in the qm1 features; cm2-k, ..., cm2+0, ..., cm2+k: matching results (true/false) between each of the dm2-k, ..., dm2+k features and any POS2 in the qm2 features; cm3-k, ..., cm3+0, ..., cm3+k: matching results (true/false) between each of the dm3-k, ..., dm3+k features and any POS3 in the qm3 features; cm4-k, ..., cm4+0, ..., cm4+k: matching results (true/false) between each of the dm4-k, ..., dm4+k features and any POS4 in the qm4 features; cq-k, ..., cq+0, ..., cq+k: combinations of each of the dw-k, ..., dw+k features and the qw features, e.g., cq-1:President&Who is a combination of dw-1:President and qw:Who.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combined Feature Set (CF)", |
| "sec_num": "3.1.3" |
| }, |
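The word-match part of CF (the cw-k, ..., cw+k features) can be sketched as follows; the helper name and the use of None for out-of-document window positions are my own choices:

```python
def combined_features(doc_window, question_words, k=3):
    """cw features of Section 3.1.3: for each position in the (2k+1)-word
    window around w_i, record whether that document word also occurs in
    the question. Positions outside the document are passed as None."""
    feats = {}
    for offset, word in zip(range(-k, k + 1), doc_window):
        # feature names follow the paper's convention: cw-3 .. cw+0 .. cw+3
        feats[f"cw{offset:+d}"] = word is not None and word in question_words
    return feats
```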
| { |
| "text": "The training phase estimates a probabilistic model from training data (x^(1), y^(1)), ..., (x^(n), y^(n)) generated from the CRL QA Data. The execution phase evaluates the probability of y^(i) given input x^(i) using the probabilistic model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Execution", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "1. Given question q, correct answer a, and document d. 2. Annotate the answer a in d and label each word of d with I, O, or B. 3. For each word w_i of d, create input data x^(i) by extracting features. 4. Estimate a probabilistic model from (x^(1), y^(1)), ..., (x^(n), y^(n)) using Maximum Entropy Models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training Phase", |
| "sec_num": null |
| }, |
| { |
| "text": "The execution phase extracts answers from retrieved documents as Term Extraction, biased by the question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Annotate", |
| "sec_num": "2." |
| }, |
| { |
| "text": "1. Given question q and paragraph d.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Execution Phase", |
| "sec_num": null |
| }, |
| { |
| "text": "For w_i of d = w_1, ..., w_m, create input data x^(i) by extracting features. 4. For each y^(j) \u2208 Y, compute p_{\u03bb*}(y^(j)|x^(i)),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For w", |
| "sec_num": "3." |
| }, |
| { |
| "text": "which is the probability of y^(j) given x^(i).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "For w", |
| "sec_num": "3." |
| }, |
| { |
| "text": "For each x^(i), the y^(j) with the highest probability is selected as the label of w_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "6. Extract, from the labeled word sequence of d, word sequences that start with a word labeled B followed by words labeled I.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "7. Rank the top M answers according to the probability of the first word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
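Step 6 above, collecting B/I spans from the labeled word sequence, can be sketched as:

```python
def extract_answers(words, labels):
    """Collect answer candidates from an IOB2-labeled sequence: each span
    starts at a B label and extends over the I labels that follow it."""
    spans, current = [], []
    for w, t in zip(words, labels):
        if t == "B":
            if current:                      # close the previous span
                spans.append(" ".join(current))
            current = [w]
        elif t == "I" and current:
            current.append(w)
        else:                                # O, or a stray I with no open span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans
```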
| { |
| "text": "This approach is designed to extract only the most highly probable answers. However, pinpointing only the answers is not an easy task. To select the top five answers, it is necessary to loosen the condition for extracting answers. Therefore, in the execution phase, we give label O to a word only if its probability exceeds 99%; otherwise we give the second most probable label.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
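The 99% relaxation rule can be written directly; `relaxed_label` is my naming, not the paper's:

```python
def relaxed_label(probs):
    """Output O only when its probability exceeds 0.99; otherwise fall back
    to the second most probable label (the Section 3.2 relaxation).
    `probs` maps each label in {I, O, B} to its probability."""
    ranked = sorted(probs, key=probs.get, reverse=True)
    if ranked[0] == "O" and probs["O"] <= 0.99:
        return ranked[1]  # second most probable label
    return ranked[0]
```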
| { |
| "text": "As a further relaxation, word sequences that include B inside the sequence are also extracted as answers. This is because our preliminary experiments indicated that it is very rare for two answer candidates to be adjacent in Question-Biased Term Extraction, unlike in an ordinary Term Extraction task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "5.", |
| "sec_num": null |
| }, |
| { |
| "text": "We conducted 10-fold cross validation using the CRL QA Data. The output is evaluated using the Top5 score and MRR.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Top5 Score shows the rate at which at least one correct answer is included in the top 5 answers. MRR (Mean Reciprocal Rank) is the average reciprocal rank (1/n) of the highest rank n of a correct answer for each question.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
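The two metrics can be sketched as follows (assuming, for illustration, that gold answers are given as a set per question):

```python
def top5_and_mrr(ranked_answers_per_q, correct_per_q):
    """Top5: fraction of questions with a correct answer in the first five
    ranks. MRR: mean of 1/n for the highest rank n of a correct answer,
    counting 0 when no returned answer is correct."""
    top5 = mrr = 0.0
    for ranked, gold in zip(ranked_answers_per_q, correct_per_q):
        for n, ans in enumerate(ranked, start=1):
            if ans in gold:
                mrr += 1.0 / n
                if n <= 5:
                    top5 += 1
                break  # only the highest-ranked correct answer counts
    q = len(ranked_answers_per_q)
    return top5 / q, mrr / q
```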
| { |
| "text": "Judgment of whether an answer is correct is done by both automatic and manual evaluation. Automatic evaluation consists of exact matching and partial matching. Partial matching is useful for absorbing variation in the extraction range. A partial match is judged correct if a system's answer completely includes the correct answer or the correct answer completely includes a system's answer. Table 2 presents the experimental results. The results show that a QA system can be built by using our QBTE approach. The manually evaluated performance scored MRR=0.36 and Top5=0.47. However, manual evaluation is costly and time-consuming, so we use automatic evaluation results, i.e., exact matching results and partial matching results, as pseudo lower and upper bounds of the performance. Interestingly, the manual evaluation results of MRR and Top5 are nearly equal to the average between exact and partial evaluation.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 391, |
| "end": 398, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "To confirm that QBTE ranks potential answers higher, we varied the number of paragraphs retrieved from a large corpus as N = 1, 3, 5, and 10. Table 3 shows the results. Whereas the performances of Term Extraction (TE) and Term Extraction with question features (TE+QF) significantly degraded, the performance of QBTE (CF) did not severely degrade with a larger number of retrieved paragraphs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 159, |
| "end": 166, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experimental Results", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our approach needs no question-type system, and it still achieved 0.36 in MRR and 0.47 in Top5. This performance is comparable to the results of SAIQA-II (Sasaki et al., 2004) (MRR=0.4, Top5=0.55), whose question analysis, answer candidate extraction, and answer selection modules were independently built from a QA dataset and an NE dataset limited to eight named entities, such as PERSON and LOCATION. Since the QA dataset is not publicly available, it is not possible to directly compare the experimental results; however, we believe that the performance of the QBTE Model 1 is comparable to that of the conventional approaches, even though it does not depend on question types, named entities, or class names. Most of the partial answers were judged correct in manual evaluation. For example, for \"How many times bigger ...?\", \"two times\" is the correct answer, but \"two\" was also judged correct. Suppose that \"John Kerry\" is a prepared correct answer in the CRL QA Data. In this case, \"Senator John Kerry\" would also be correct. Such additions and omissions occur because our approach is not restricted to particular extraction units, such as named entities or class names.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 196, |
| "text": "(Sasaki et al., 2004) (MRR=0.4, Top5=0.55)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The performance of QBTE was affected little by the larger number of retrieved paragraphs, whereas the performances of TE and TE + QF significantly degraded. This indicates that QBTE Model 1 is not mere Term Extraction with document retrieval but Term Extraction appropriately biased by questions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our experiments used no information about question types given in the CRL QA Data because we are seeking a universal method that can be used for any QA dataset. Beyond this main goal, as a reference, the Appendix shows our experimental results classified by question type, even though question types were not used in the training phase. The automatic evaluation results are given as Top5 (T5) and MRR for exact matching, and as Top5 (T5') and MRR' for partial matching. It is interesting that minor question types, e.g., SEA and WEAPON, for which there was only one training question each, were correctly answered.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We also conducted an additional experiment, as a reference, on training data that included the question types defined in the CRL QA Data; the question type of each question was added to the qw feature. The performance of QBTE from the first-ranked paragraph showed no difference from that of the experiments shown in Table 2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 311, |
| "end": 318, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "There are two previous studies on integrating QA components into one using machine learning/statistical NLP techniques. Echihabi et al. (Echihabi et al., 2003) used Noisy-Channel Models to construct a QA system. In their approach, the span of Term Extraction is not learned from a dataset but selected from among answer candidates, e.g., named entities and noun phrases, generated by a decoder. Lita et al. (Lita and Carbonell, 2004) share our motivation to build a QA system solely from question-answer pairs, without depending on question types. Their method finds clusters of questions and defines how to answer the questions in each cluster. However, their approach finds snippets, i.e., short passages that include answers, rather than the exact answers produced by Term Extraction.", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 159, |
| "text": "Echihabi et al. (Echihabi et al., 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 388, |
| "end": 426, |
| "text": "Lita et al. (Lita and Carbonell, 2004)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "This paper described a novel approach to extracting answers to a question using probabilistic models constructed solely from question-answer pairs. This approach requires no question type system, no named entity extractor, and no class name extractor. To the best of our knowledge, no previous study has regarded Question Answering as Question-Biased Term Extraction. As a feasibility study, we built a QA system using Maximum Entropy Models on a 2000-question/answer dataset. The results were evaluated by 10-fold cross validation, which showed a performance of 0.36 in MRR and 0.47 in Top5. Since this approach relies on a morphological analyzer, applying the QBTE Model 1 to QA tasks in other languages is our future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Presently, National Institute of Information and Communications Technology (NICT), Japan. 2 A machine learning approach to hierarchical question analysis was reported in (Suzuki et al., 2003), but training and maintaining an answer extractor for question types of fine granularity is not an easy task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In this paper, M is set to 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported by a contract with the National Institute of Information and Communications Technology (NICT) of Japan entitled, \"A study of speech dialogue translation technology based on a large corpus\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgment", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Maximum Entropy Approach to Natural Language Processing", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berger", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J Della" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computational Linguistics", |
| "volume": "22", |
| "issue": "", |
| "pages": "39--71", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra: A Maximum Entropy Approach to Natural Language Processing, Computational Linguistics, Vol. 22, No. 1, pp. 39-71 (1996).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "A Noisy-Channel Approach to Question Answering", |
| "authors": [ |
| { |
| "first": "Abdessamad", |
| "middle": [], |
| "last": "Echihabi", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of ACL-2003", |
| "volume": "", |
| "issue": "", |
| "pages": "16--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abdessamad Echihabi and Daniel Marcu: A Noisy-Channel Approach to Question Answering, Proc. of ACL-2003, pp. 16-23 (2003).", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Question Answering Using Maximum-Entropy Components", |
| "authors": [ |
| { |
| "first": "Abraham", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of NAACL-2001", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, and Adwait Ratnaparkhi: Question Answering Using Maximum-Entropy Components, Proc. of NAACL-2001 (2001).", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "IBM's Statistical Question Answering System -TREC-10", |
| "authors": [ |
| { |
| "first": "Abraham", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Franz", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Adwait", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of TREC-10", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abraham Ittycheriah, Martin Franz, Wei-Jing Zhu, and Adwait Ratnaparkhi: IBM's Statistical Question Answering System -TREC-10, Proc. of TREC-10 (2001).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Instance-Based Question Answering: A Data-Driven Approach", |
| "authors": [ |
| { |
| "first": "Lucian", |
| "middle": [], |
| "last": "Vlad", |
| "suffix": "" |
| }, |
| { |
| "first": "Lita", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proc. of EMNLP-2004", |
| "volume": "", |
| "issue": "", |
| "pages": "396--403", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lucian Vlad Lita and Jaime Carbonell: Instance-Based Question Answering: A Data-Driven Approach, Proc. of EMNLP-2004, pp. 396-403 (2004).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Question Answering Using a Large Text Database: A Machine Learning Approach", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hwee", |
| "suffix": "" |
| }, |
| { |
| "first": "Jennifer", |
| "middle": [ |
| "L P" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiyuan", |
| "middle": [], |
| "last": "Kwan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of EMNLP-2001", |
| "volume": "", |
| "issue": "", |
| "pages": "67--73", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hwee T. Ng, Jennifer L. P. Kwan, and Yiyuan Xia: Question Answering Using a Large Text Database: A Machine Learning Approach, Proc. of EMNLP-2001, pp. 67-73 (2001).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "High Performance Question/Answering", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Marisu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Pasca", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Sanda", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of SIGIR-2001", |
| "volume": "", |
| "issue": "", |
| "pages": "366--374", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marius A. Pasca and Sanda M. Harabagiu: High Performance Question/Answering, Proc. of SIGIR-2001, pp. 366-374 (2001).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Text Chunking using Transformation-Based Learning", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Lance", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. of WVLC-95", |
| "volume": "", |
| "issue": "", |
| "pages": "82--94", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lance A. Ramshaw and Mitchell P. Marcus: Text Chunking using Transformation-Based Learning, Proc. of WVLC-95, pp. 82-94 (1995).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Noun Phrase Recognition by System Combination", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Erik", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proc. of NAACL-2000", |
| "volume": "", |
| "issue": "", |
| "pages": "55--55", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang: Noun Phrase Recognition by System Combination, Proc. of NAACL-2000, pp. 55-55 (2000).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "SAIQA-II: A Trainable Japanese QA System with SVM", |
| "authors": [ |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Kouji", |
| "middle": [], |
| "last": "Kokuryou", |
| "suffix": "" |
| }, |
| { |
| "first": "Tsutomu", |
| "middle": [], |
| "last": "Hirao", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideto", |
| "middle": [], |
| "last": "Kazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Eisaku", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IPSJ Journal", |
| "volume": "45", |
| "issue": "", |
| "pages": "635--646", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yutaka Sasaki, Hideki Isozaki, Jun Suzuki, Kouji Kokuryou, Tsutomu Hirao, Hideto Kazawa, and Eisaku Maeda: SAIQA-II: A Trainable Japanese QA System with SVM, IPSJ Journal, Vol. 45, No. 2, pp. 635-646 (2004). (in Japanese)", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "NYU/CRL QA system, QAC question analysis and CRL QA data", |
| "authors": [ |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Sekine", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyoshi", |
| "middle": [], |
| "last": "Sudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Shinyama", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Working Notes of NTCIR Workshop", |
| "volume": "3", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satoshi Sekine, Kiyoshi Sudo, Yusuke Shinyama, Chikashi Nobata, Kiyotaka Uchimoto, and Hitoshi Isahara: NYU/CRL QA system, QAC question analysis and CRL QA data, in Working Notes of NTCIR Workshop 3 (2002).", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "SVM Answer Selection for Open-Domain Question Answering", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Eisaku", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of Coling-2002", |
| "volume": "", |
| "issue": "", |
| "pages": "974--980", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun Suzuki, Yutaka Sasaki, and Eisaku Maeda: SVM Answer Selection for Open-Domain Question Answering, Proc. of Coling-2002, pp. 974-980 (2002).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Directed Acyclic Graph Kernel", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hirotoshi", |
| "middle": [], |
| "last": "Taira", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Sasaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Eisaku", |
| "middle": [], |
| "last": "Maeda", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of ACL 2003 Workshop on Multilingual Summarization and Question Answering -Machine Learning and Beyond", |
| "volume": "", |
| "issue": "", |
| "pages": "61--68", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun Suzuki, Hirotoshi Taira, Yutaka Sasaki, and Eisaku Maeda: Directed Acyclic Graph Kernel, Proc. of ACL 2003 Workshop on Multilingual Summarization and Question Answering -Machine Learning and Beyond, pp. 61-68, Sapporo (2003).", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Using Machine Learning Techniques to Interpret WH-Questions", |
| "authors": [ |
| { |
| "first": "Ingrid", |
| "middle": [], |
| "last": "Zukerman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Horvitz", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proc. of ACL-2001", |
| "volume": "", |
| "issue": "", |
| "pages": "547--554", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ingrid Zukerman and Eric Horvitz: Using Machine Learning Techniques to Interpret WH-Questions, Proc. of ACL-2001, Toulouse, France, pp. 547-554 (2001).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "\u27e8A\u27e9 and \u27e8/A\u27e9 right before and after answer a in d. 3. Morphologically analyze d. 4. For d = w_1, ..., \u27e8A\u27e9, w_j, ..., w_k, \u27e8/A\u27e9, ..., w_m, extract features as x^(1), ..., x^(m). 5. Class label y^(i) = B if w_i follows \u27e8A\u27e9, y^(i) = I if w_i is inside \u27e8A\u27e9 and \u27e8/A\u27e9, and y^(i) = O otherwise.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "text": "Number of Questions in Question Types of CRL QA Data", |
| "content": "<table><tr><td># of Questions</td><td># of Question Types</td><td>Example</td></tr><tr><td>1-9</td><td>74</td><td>AWARD, CRIME, OFFENSE</td></tr><tr><td>10-50</td><td>32</td><td>PERCENT, N PRODUCT, YEAR PERIOD</td></tr><tr><td>51-100</td><td>6</td><td>COUNTRY, COMPANY, GROUP</td></tr><tr><td>100-300</td><td>3</td><td>PERSON, DATE, MONEY</td></tr><tr><td>Total</td><td>115</td><td></td></tr></table>" |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "text": "Main Results with 10-fold Cross Validation", |
| "content": "<table><tr><td></td><td colspan=\"5\">Correct Answer Rank</td><td>MRR</td><td>Top5</td></tr><tr><td></td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td></td><td></td></tr><tr><td>Exact match</td><td>453</td><td>139</td><td>68</td><td>35</td><td>19</td><td>0.28</td><td>0.36</td></tr><tr><td>Partial match</td><td>684</td><td>222</td><td>126</td><td>80</td><td>48</td><td>0.43</td><td>0.58</td></tr><tr><td>Ave.</td><td></td><td></td><td></td><td></td><td></td><td>0.355</td><td>0.47</td></tr><tr><td>Manual evaluation</td><td>578</td><td>188</td><td>86</td><td>55</td><td>34</td><td>0.36</td><td>0.47</td></tr></table>" |
| }, |
| "TABREF3": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "text": "Answer Extraction from Top N documents", |
| "content": "<table><tr><td>Feature set</td><td colspan=\"2\">Top N paragraphs Match</td><td>1</td><td>Correct Answer Rank 2 3 4</td><td>5</td><td>MRR Top5</td></tr><tr><td>TE (DF)</td><td>1 3 5 10</td><td colspan=\"4\">Exact Partial 207 186 155 153 121 102 109 80 71 62 Exact 65 63 55 53 43 Partial 120 131 112 108 94 Exact 51 38 38 36 36 Partial 99 80 89 81 75 Exact 29 17 19 22 18 Partial 59 38 35 49 46</td><td>0.11 0.21 0.07 0.13 0.05 0.10 0.03 0.07</td><td>0.21 0.41 0.14 0.28 0.10 0.21 0.07 0.14</td></tr><tr><td>TE (DF) + QF</td><td>1 3 5 10</td><td colspan=\"4\">Exact Partial 207 198 175 126 140 120 105 94 63 80 Exact 65 68 52 58 57 Partial 119 117 111 122 106 Exact 44 57 41 35 31 Partial 91 104 71 82 63 Exact 28 42 30 28 26 Partial 57 68 57 56 45</td><td>0.12 0.23 0.21 0.42 0.07 0.15 0.13 0.29 0.05 0.10 0.10 0.21 0.04 0.08 0.07 0.14</td></tr><tr><td>QBTE (CF)</td><td>1 3 5 10</td><td colspan=\"4\">Exact Partial 684 222 126 453 139 68 Exact 403 156 92 Partial 539 296 145 105 35 80 52 Exact 381 153 92 59 Partial 542 291 164 122 102 19 48 43 92 50 Exact 348 128 92 65 57 Partial 481 257 173 124 102</td><td>0.28 0.43 0.27 0.42 0.26 0.40 0.24 0.36</td><td>0.36 0.58 0.37 0.62 0.37 0.61 0.35 0.57</td></tr></table>" |
| } |
| } |
| } |
| } |