{
"paper_id": "O07-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:08:35.007711Z"
},
"title": "Question Analysis and Answer Passage Retrieval for Opinion Question Answering Systems",
"authors": [
{
"first": "Lun-Wei",
"middle": [],
"last": "Ku",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": "lwku@nlg.csie.ntu.edu.tw"
},
{
"first": "Yu-Ting",
"middle": [],
"last": "Liang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": ""
},
{
"first": "Hsin-Hsi",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National Taiwan University",
"location": {}
},
"email": "hhchen@csie.ntu.edu.tw"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Question answering systems provide an elegant way for people to access an underlying knowledge base. Humans are not only interested in factual questions but also interested in opinions. This paper deals with question analysis and answer passage retrieval in opinion QA systems. For question analysis, six opinion question types are defined. A two-layered framework utilizing two question type classifiers is proposed. Algorithms for these two classifiers are described. The performance achieves 87.8% in general question classification and 92.5% in opinion question classification. The question focus is detected to form a query for the information retrieval system and the question polarity is detected to retain relevant sentences which have the same polarity as the question. For answer passage retrieval, three components are introduced. Relevant sentences retrieved are further identified whether the focus (Focus Detection) is in a scope of opinion (Opinion Scope Identification) or not, and if yes, whether the polarity of the scope matches with the polarity of the question (Polarity Detection). The best model achieves an F-measure of 40.59% using partial match at the level of meaningful unit. With relevance issues removed, the F-measure of the best model boosts up to 84.96%.",
"pdf_parse": {
"paper_id": "O07-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "Question answering systems provide an elegant way for people to access an underlying knowledge base. Humans are not only interested in factual questions but also interested in opinions. This paper deals with question analysis and answer passage retrieval in opinion QA systems. For question analysis, six opinion question types are defined. A two-layered framework utilizing two question type classifiers is proposed. Algorithms for these two classifiers are described. The performance achieves 87.8% in general question classification and 92.5% in opinion question classification. The question focus is detected to form a query for the information retrieval system and the question polarity is detected to retain relevant sentences which have the same polarity as the question. For answer passage retrieval, three components are introduced. Relevant sentences retrieved are further identified whether the focus (Focus Detection) is in a scope of opinion (Opinion Scope Identification) or not, and if yes, whether the polarity of the scope matches with the polarity of the question (Polarity Detection). The best model achieves an F-measure of 40.59% using partial match at the level of meaningful unit. With relevance issues removed, the F-measure of the best model boosts up to 84.96%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Most of the state-of-the-art Question Answering (QA) systems serve the needs of answering factual questions such as \"When was James Dean born?\" and \"Who won the Nobel Peace Prize in 1991?\". In addition to facts, people would also like to know about others' opinions, thoughts, and feelings toward some specific topics, groups, and events. Opinion questions (e.g. \"How do Americans consider the US-Iraq war?\" and \"What are the public's opinions on human cloning?\") revealing answers about people's opinions have long as well as complex answers which tend to scatter across different documents. Traditional QA approaches are not effective enough to retrieve answers for opinion questions as they have been for factual questions (Stoyanov et al., 2005) . Hence, an opinion QA system is essential and urgent.",
"cite_spans": [
{
"start": 726,
"end": 749,
"text": "(Stoyanov et al., 2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the research on QA systems has been developed for factual questions, and the association of subjective information with question answering has not yet been much studied. As for subjective information, Wiebe (2000) proposed a method to identify strong clues of subjectivity on adjectives. presented a subjectivity classifier using lists of subjective nouns learned by bootstrapping algorithms. proposed a bootstrapping process to learn linguistically rich extraction patterns for subjective expressions. Kim and Hovy (2004) presented a system to determine word sentiments and combined sentiments within a sentence. Pang, Lee, and Vaithyanathan (2002) classified documents not by the topic, but by the overall sentiment, and then determined the polarity of a review. Wiebe et al. (2002) proposed a method for opinion summarization. Wilson et al. (2005) presented a phrase-level sentiment analysis to automatically identify the contextual polarity. Ku et al. (2006) proposed a method to automatically mine and organize opinions from heterogeneous information sources.",
"cite_spans": [
{
"start": 209,
"end": 221,
"text": "Wiebe (2000)",
"ref_id": "BIBREF11"
},
{
"start": 511,
"end": 530,
"text": "Kim and Hovy (2004)",
"ref_id": "BIBREF1"
},
{
"start": 622,
"end": 657,
"text": "Pang, Lee, and Vaithyanathan (2002)",
"ref_id": "BIBREF5"
},
{
"start": 773,
"end": 792,
"text": "Wiebe et al. (2002)",
"ref_id": "BIBREF12"
},
{
"start": 838,
"end": 858,
"text": "Wilson et al. (2005)",
"ref_id": "BIBREF13"
},
{
"start": 954,
"end": 970,
"text": "Ku et al. (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some research has gone from opinion analysis in texts toward that in QA systems. Cardie et al. (2003) took advantage of opinion summarization to support Multi-Perspective Question Answering (MPQA) system which aims to extract opinion-oriented information of a question. Yu and Hatzivassiloglou (2003) separated opinions from facts, at both the document and sentence levels. They intended to cluster opinion sentences from the same perspective together and summarize them as answers to opinion questions. Kim and Hovy (2005) identified opinion holders, which are frequently asked in opinion questions.",
"cite_spans": [
{
"start": 81,
"end": 101,
"text": "Cardie et al. (2003)",
"ref_id": "BIBREF0"
},
{
"start": 270,
"end": 300,
"text": "Yu and Hatzivassiloglou (2003)",
"ref_id": "BIBREF14"
},
{
"start": 504,
"end": 523,
"text": "Kim and Hovy (2005)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper deals with two major problems in opinion QA systems: question analysis and answer passage retrieval. Several issues, including how to separate opinion questions from factual ones, how to define question types for opinion questions, how to correctly classify opinion questions into corresponding types, how to present answers for different types of opinion questions, and how to retrieve answer passages for opinion questions, are discussed. Note that the unit of a passage is a sentence in this paper, though a passage can sometimes refer to more sentences, such as a paragraph.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 An Opinion QA Framework Figure 1 is a framework of the opinion QA system. The question is initially submitted into a part of speech tagger (POS Tagger), and then the question is analyzed in three aspects: the question focus, the question polarity, and the opinion question type. The former two attributes are further applied in answer passage retrieval. The question focus is the query for an information retrieval (IR) system to retrieve relevant sentences. The question polarity is utilized to screen out relevant sentences with different polarities to the question. With answer passages retrieved, answer extraction extracts text spans as answers according to the opinion question types, and outputs answers to the user. ",
"cite_spans": [],
"ref_spans": [
{
"start": 26,
"end": 34,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
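The following minimal Python sketch illustrates the data flow of Figure 1; the components (pos_tag, analyze, retrieve, extract) are caller-supplied placeholders standing in for the authors' modules, not their implementation.

```python
# Minimal sketch of the pipeline in Figure 1. Component internals are assumptions.
from dataclasses import dataclass

@dataclass
class QuestionAnalysis:
    focus: list          # content words forming the IR query
    polarity: int        # +1 positive, 0 neutral, -1 negative
    qtype: str           # one of HD, TG, AT, RS, MJ, YN

def answer(question, pos_tag, analyze, retrieve, extract):
    tagged = pos_tag(question)                   # POS Tagger
    qa = analyze(tagged)                         # focus, polarity, question type
    sentences = retrieve(" OR ".join(qa.focus))  # IR over the knowledge base
    # screen out sentences whose polarity differs from the question's
    kept = [s for s in sentences
            if qa.polarity == 0 or s.polarity == qa.polarity]
    return extract(kept, qa.qtype)               # answer extraction by question type
```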
{
"text": "The experimental corpus comes from four sources, i.e. TREC 1 , NTCIR 2 , the Internet Polls, and OPQ. TREC and NTCIR are two of three major information retrieval evaluation forums in the world. Their evaluation tracks are in natural language processing and information retrieval domains such as large-scale information retrieval, question answering, genomics, cross language processing, and many new hot research topics. We collect 500 factual questions from the main task of QA Track in TREC-11. These English questions are translated into Chinese for experiments. A total of 1,577 factual questions are obtained from the developing question set of the CLQA task in NTCIR-5. Questions from public opinion polls in three public media websites, Chinatimes, Era, and TVBS, are crawled. OPQ is developed for this research, and it contains both factual and opinion questions. To construct the question corpus OPQ, annotators are given titles and descriptions of six opinion topics selected from NTCIR-2 and NTCIR-3. Annotators freely ask any three factual questions and seven opinion questions for each topic. Duplicated questions are dropped and a total of 1,011 questions are collected. Within these 1,011 questions in OPQ, 304 are factual questions and the other 707 are opinion questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "Overall, we collect 2,443 factual questions and 1,289 opinion questions from four different sources. A total of 3,732 questions are gathered for our experiments, as shown in There are some challenging issues in extracting answers automatically by opinion QA systems. We categorize these challenges (indexed by numbers and enclosed by parentheses as follows) in question analysis into on holders, on opinions and on concepts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "On holders, (1) to automatically identify named entities expressing opinions is imperative. (2) Grouping opinion holders is another issue. Answers to the question, \"How do Americans feel about the affair of the U.S. president Clinton?\", consist of opinions from any American. To answer questions like \"What kind of people support the abolishment of the Joint College Entrance Examination?\", QA systems have to find people having opinions toward the examination and (3) classify them into correct category, such as students, teachers, scholars, parents, and so forth.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "On opinions, (4) knowing whether questions themselves contain subjective information and deciding their opinion polarities is necessary. The question \"Who disagrees with the idea of surrogate mothers?\" points out a negative attitude and the answer to this question is expected to be a list of persons or organizations that have negative opinions toward the idea of surrogate mothers. Another issue is (5) whether the comparison and the summarization of positive and negative opinions are required. In the question \"Is using the civil ID card more advantageous or disadvantageous?\", opinions expressing advantages and disadvantages have to be contrasted and scored to represent answers as \"More advantageous\" or \"More disadvantageous\" with evidence listed to users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "On concepts, it is essential (6) to understand the concepts of opinions and perform the expansion on concepts to extract correct answers. In the question \"Is civil ID card secure?\" it is vital to know the definition and expansion of being secure. Keeping public's privacy, ensuring system's security, and protecting fingerprints' obtainment are possible security points. For (7) the concept of targets, the idea is the same as the concept of opinions except that it is about targets. For instance, the question \"What do Taiwanese think about the substitute program of Joint College Entrance Examination?\" necessitates the comprehension of what the substitute program is or the alias of this program, and then the system can seek for text spans which hold opinions towards it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "Among the 707 opinion questions from OPQ corpus, answers of 160 opinion questions are found in the NTCIR corpus. These 160 opinion questions are analyzed based on the above seven challenges. Table 2 lists the number of questions (#Q) with respect to the number of challenges (#C). #C 1 2 3 4 5 6 7 Total #Q 19 47 39 30 13 12 0 160 A total of 60 questions are selected for further annotation based on their challenges. Sentences are annotated as whether they are opinions (Opinion), whether they are relevant to the topic (Rel2T), whether they are relevant to the question (Rel2Q), and whether they contain answers (AnswerQ). If sentences are annotated as relevant to the question, annotators further annotate the text spans which contribute answers to the question (CorrectMU).",
"cite_spans": [],
"ref_spans": [
{
"start": 191,
"end": 198,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Corpus Preparation",
"sec_num": "3"
},
{
"text": "A two-layered classification, i.e. with Q-Classifier and OPQ-Classifier, is proposed. Q-Classifier separates opinion questions from factual ones, and OPQ-Classifier tells types of opinion questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two-layered Question Classification",
"sec_num": "4"
},
{
"text": "According to opinion questions themselves and their corresponding answers, we define six opinion question types as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(1) Holder (HD) Definition: Asking who the expresser of the specific opinion is. Example: Who supports the civil ID card? Answer: Entities and the corresponding evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(2) Target (TG) Definition: Asking whom the holder's attitude is toward. Example: Who does the public think should be responsible for the airplane crash? Answer: Entities and the corresponding evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(3) Attitude (AT) Definition: Asking what the attitude of a holder to a specific target is. Example: How do people feel about the affair of the U.S. President Clinton? Answer: Question-related opinions, separated into support, neutral, and non-support categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(4) Reason (RS) Definition: Asking the reasons of an explicit or an implicit holder's attitude to a specific target. Example: Why do people think better not to have the college entrance exam? Answer: Reasons for taking the stand specified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(5) Majority (MJ) Definition: Asking which option, listed or not listed, is the majority. Example: If the government tries to carry out the usage of the civil ID card, will its reputation get better or worse? Answer: The majority of support, neutral and non-support evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "(6) Yes/No (YN) Definition: Asking whether their statements are correct. Example: Is the airplane crash caused by management problems? Answer: The stronger opinion, i.e. yes or no.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of Opinion Questions",
"sec_num": "4.1"
},
{
"text": "Q-Classifier distinguishes opinion questions from factual questions. We use See5 (Quinlan, 2000) to train Q-Classifier. Seven features are employed. The feature pretype (PTY) denotes types in factual QA systems such as SELECTION, YESNO, METHOD, REASON, PERSON, LOCATION, PERSONDEF, DATE, QUANTITY, DEFINITION, OBJECT, and MISC and extracted by a conventional QA system (reference removed for blind review). For example, the value of pretype in \"Who is Tom Cruise married to?\" is PERSON.",
"cite_spans": [
{
"start": 81,
"end": 96,
"text": "(Quinlan, 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Q-Classifier",
"sec_num": "4.2"
},
{
"text": "The other six features are operator (OPR), positive (POS), negative (NEG), totalow (TOW), totalscore (TSR), and maxscore (MSR). A public available sentiment dictionary (Ku et al., 2006) , which contains 2,655 positive opinion keywords, 7,767 negative opinion keywords, and 150 opinion operators, is used to tell if there are any positive (negative) opinion keywords and operators in questions. Each opinion keyword has a score expressing the degree of tendency. The feature operator (OPR) includes words of actions for expressing opinions. For example, say, think, and believe can be hints for extracting opinions. A total of 151 operators are manually collected. The feature totalow (TOW) is the total number of opinion operators, positive opinion keywords, and negative opinion keywords in a question. The feature totalscore (TSR) is the overall opinion score of the whole question, while the feature maxscore (MSR) is the absolute maximum opinion score of opinion keywords in a question.",
"cite_spans": [
{
"start": 168,
"end": 185,
"text": "(Ku et al., 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Q-Classifier",
"sec_num": "4.2"
},
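A sketch of how the six dictionary-based features could be computed from a tokenized question; the dictionary format (keyword sets plus a per-word score table) is an assumption about the sentiment dictionary, not its documented structure.

```python
# Sketch of the Q-Classifier feature vector; dictionary structures are assumptions.
def question_features(tokens, pretype, operators, pos_words, neg_words, score):
    opr = sum(t in operators for t in tokens)         # operator (OPR)
    pos = sum(t in pos_words for t in tokens)         # positive (POS)
    neg = sum(t in neg_words for t in tokens)         # negative (NEG)
    tow = opr + pos + neg                             # totalow (TOW)
    tsr = sum(score.get(t, 0.0) for t in tokens)      # totalscore (TSR)
    msr = max((abs(score.get(t, 0.0)) for t in tokens), default=0.0)  # maxscore (MSR)
    return {"PTY": pretype, "OPR": opr, "POS": pos,
            "NEG": neg, "TOW": tow, "TSR": tsr, "MSR": msr}
```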
{
"text": "Section 3 mentions that 2,443 factual questions and 1,289 opinion questions from four different sources are collected. To keep the quantities of factual and opinion questions balanced, 1,289 factual questions are randomly selected from 2,443 questions and a total of 2,578 questions are employed. We adopt See5 to generate the decision tree based on different combinations of features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q-Classifier",
"sec_num": "4.2"
},
{
"text": "With a 10-fold cross-validation, See5 outputs the resulting decision trees for each 10 folds, and a summary with the mean of error rates produced by these 10 folds. The features pretype (PTY) and totalow (TOW) perform best in reducing errors when used alone. They also cannot be ignored since the error rate increases more when they are excluded. The feature totalow shows that if a question contains more opinion keywords, it is more possible to be an opinion question. After all features are considered together, the best performance is 87.8%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Q-Classifier",
"sec_num": "4.2"
},
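A sketch of the ablation procedure, using scikit-learn's decision tree with 10-fold cross-validation as a stand-in for See5 (an assumption; the paper uses See5/C5.0):

```python
# Feature ablation with 10-fold cross-validation; scikit-learn's tree stands in for See5.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def ablation_error_rates(X, y, feature_names):
    """Mean 10-fold error rate with each feature alone, and with all but that feature."""
    out = {}
    for i, name in enumerate(feature_names):
        only = 1 - cross_val_score(DecisionTreeClassifier(), X[:, [i]], y, cv=10).mean()
        rest = 1 - cross_val_score(DecisionTreeClassifier(), np.delete(X, i, axis=1),
                                   y, cv=10).mean()
        out[name] = (only, rest)   # ("only with", "all but") error rates
    return out
```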
{
"text": "OPQ-Classifier categorizes opinion questions into the corresponding opinion question types. We first examine if there is any specific patterns in the question. If yes, then the rule for the pattern is applied. Otherwise, a scoring function is applied.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "The heuristic rules are listed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "(1) The pattern \"A-not-A\": Yes/No 2End with question words: Yes/No 3\"Who\" + opinion operator: Holder 4\"Who\" + passive tense: Target (5) pretype (PTY) is Reason: Reason (6) pretype (PTY) is Selection: Majority A scoring function deals with those questions which cannot be classified by the above patterns. Unigrams, bigrams and trigrams in training questions are selected as feature candidates. These feature candidates are separated into two types. A topic dependent feature is only meaningful in questions of some topics, while general features may appear in questions of all kinds of topics. If a feature is topic dependent (e.g. human cloning and Clinton), it is dropped from the feature set. Only general features (e.g. is or is not, whether, and reason) are kept. Finally a set of features is obtained from the training questions. Then the discriminate power of these features is calculated as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
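A sketch of the rule stage; the pattern tests has_a_not_a and is_passive are crude illustrative stand-ins for the Chinese-specific patterns above, not the paper's actual matchers.

```python
# Rule stage of the OPQ-Classifier; pattern tests are simplified assumptions.
import re

def has_a_not_a(q):
    # crude "A-not-A" test, e.g. 是不是 / 會不會 (an assumption, not the paper's pattern)
    return re.search(r"(.)不\1", q) is not None

def is_passive(q):
    # crude passive-voice test via the Chinese passive marker 被 (an assumption)
    return "被" in q

def rule_type(q, toks, pretype, operators, question_words):
    if has_a_not_a(q):                                     # (1) "A-not-A": Yes/No
        return "YN"
    if toks and toks[-1] in question_words:                # (2) ends with a question word
        return "YN"
    if "誰" in toks and any(t in operators for t in toks): # (3) "who" + operator: Holder
        return "HD"
    if "誰" in toks and is_passive(q):                     # (4) "who" + passive: Target
        return "TG"
    if pretype == "REASON":                                # (5) pretype Reason
        return "RS"
    if pretype == "SELECTION":                             # (6) pretype Selection
        return "MJ"
    return None   # fall through to the scoring function
```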
{
"text": "First, the observation probability of a feature i in the question type j is defined in Formula (1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") ( ) , ( ) , ( j NumQ j i NumQ j i P o =",
"eq_num": "(1)"
}
],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "where i is the index of the feature, j is the index of the question type, and NumQ represents the number of questions. The observation probability shows how often a feature is observed in each type. It is then normalized by Formula (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 = = 6 1 ) , ( ) , ( ) , ( j o o o j i P j i P j i NP",
"eq_num": "(2)"
}
],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "Every feature has six normalized observation probabilities corresponding to the six types. With these probabilities, the score ScoreQ of a question can be calculated by Formula (3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 = = n i o j i NP j ScoreQ 1 ) , ( ) (",
"eq_num": "(3)"
}
],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
{
"text": "where n is the total number of features in question Q, and ScoreQ(j) represents the score of question Q as type j. Since there are six possible opinion question types, the six ScoreQ represent how possible the question Q belongs to each type. These six scores form the feature vector of the question Q for classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
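A direct transcription of Formulas (1) through (3) as a sketch; the training-data representation (pairs of a feature set and a type index) is an assumption.

```python
# Formulas (1)-(3): observation probability, per-feature normalization over the
# six types, and the per-type question score forming the six-dimensional vector.
from collections import defaultdict

def train_probs(questions):
    """questions: list of (feature_set, type_index in 0..5) training pairs."""
    num_q = defaultdict(int)      # NumQ(j)
    num_fq = defaultdict(int)     # NumQ(i, j)
    for feats, j in questions:
        num_q[j] += 1
        for i in feats:
            num_fq[(i, j)] += 1
    p_o = {k: num_fq[k] / num_q[k[1]] for k in num_fq}            # Formula (1)
    np_ = {}
    for i in {i for (i, _) in p_o}:
        z = sum(p_o.get((i, j), 0.0) for j in range(6))
        for j in range(6):
            np_[(i, j)] = p_o.get((i, j), 0.0) / z                # Formula (2)
    return np_

def score_question(feats, np_):
    # Formula (3): the six ScoreQ values are the question's feature vector
    return [sum(np_.get((i, j), 0.0) for i in feats) for j in range(6)]
```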
{
"text": "Training instances are used to find the centroid of each type. The Pearson correlation is adopted as the distance measure. The distances between the testing opinion questions and the six centroids are calculated to assign the opinion questions to the closest type. We use the OPQ corpus in Section 3 for the evaluation of the OPQ-Classifier. The opinion types of these opinion questions are manually given. Among the 707 opinion questions, answers of 160 opinion questions are found in the NTCIR corpus. They are used as the training data for an intensive analysis of both questions and answers. The rest 547 opinion questions are used as the testing data. The confusion matrix of the OPQ-Classifier is shown in Table 4 and 5. The average accuracy is 92.5%. There are fewer questions of target (TG) and majority (MJ) types, 8 and 13 in testing collection respectively. The unsatisfactory results of these two types may due to the lack of training questions. Figure 2 shows the framework of answer passage retrieval in an opinion QA system. The question focus supplied by the question analysis serves as the input to an Okapi IR system to retrieve relevant sentences from the knowledge base. Relevant sentences are further detected to identify whether the focus (Focus Detection) is in a scope of opinion text spans (Opinion Scope Identification) or not, and if yes, whether the polarity of the scope matches with the polarity of the question (Polarity Detection). The details are discussed in the following sections. ",
"cite_spans": [],
"ref_spans": [
{
"start": 712,
"end": 719,
"text": "Table 4",
"ref_id": null
},
{
"start": 958,
"end": 966,
"text": "Figure 2",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "OPQ-Classifier",
"sec_num": "4.3"
},
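A sketch of the nearest-centroid assignment with Pearson correlation; maximizing the correlation r is equivalent to minimizing the correlation distance 1 - r. This is an illustration, not the authors' code.

```python
# Nearest-centroid classification of the six-dimensional score vectors.
import numpy as np

def pearson(u, v):
    u = np.asarray(u, float) - np.mean(u)
    v = np.asarray(v, float) - np.mean(v)
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def classify(score_vector, centroids):
    """centroids: dict mapping each of the six types to its mean training vector."""
    return max(centroids, key=lambda t: pearson(score_vector, centroids[t]))
```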
{
"text": "The first stage of answer passage retrieval is to input the question focus as a query into an IR system to retrieve relevant sentences from the knowledge base. These retrieved sentences may contain answers for a question. A set of content words in one question is used to represent its focus. The following steps extract a set of content words as the question focus and formulate a query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Focus Extraction",
"sec_num": "5.1"
},
{
"text": "(1) Remove question marks. 2Remove question words. 3Remove opinion operators. 4Remove negation words. 5Name the remaining terms as focus. 6Use the Boolean OR operator to form a query.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Focus Extraction",
"sec_num": "5.1"
},
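A sketch of steps (1) through (6); the stop-lists QUESTION_WORDS and NEGATIONS are illustrative assumptions (the paper works with Chinese questions, so the real lists differ).

```python
# Question-focus extraction and Boolean-OR query formulation.
QUESTION_WORDS = {"who", "what", "why", "how", "whether"}  # assumed stop-list
NEGATIONS = {"not", "no", "never"}                          # assumed stop-list

def build_query(tokens, operators):
    """Apply steps (1)-(6): strip question marks, question words, opinion
    operators, and negation words; OR the remaining focus terms together."""
    focus = [t for t in tokens
             if t != "?"                          # (1) question marks
             and t.lower() not in QUESTION_WORDS  # (2) question words
             and t not in operators               # (3) opinion operators
             and t.lower() not in NEGATIONS]      # (4) negation words
    return " OR ".join(focus)                     # (5)-(6) focus terms, OR-ed
```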
{
"text": "Since question marks and question words are common in every question, they do not contribute to the retrieval of relevant sentences, and therefore are removed. Opinion operators and negation words are removed as well since they represent the question polarity instead of the question focus. Once we have the question focus, we use the Boolean OR operator rather than the AND operator to form a query. This is because we prefer the IR system to return sentences that have any relevancy to the question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Focus Extraction",
"sec_num": "5.1"
},
{
"text": "The polarity of the question is useful in opinion QA systems to filter out query-relevant sentences which have different polarities from the question. If the question polarity is positive, the sentences providing answers ought to be positive, and vice versa. The polarity detection algorithm is shown as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Polarity Detection",
"sec_num": "5.2"
},
{
"text": "( 1)Determine the polarity of the opinion operator. 1 is for positive, 0 is for neutral, and -1 is for negative. 2Negate the operator polarity if there is any negation word anterior to the operator. 3Determine the polarity of the question focus. 1 is for positive, 0 is for neutral, and -1 is for negative. (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Polarity Detection",
"sec_num": "5.2"
},
{
"text": "If one of the operator polarity and question focus is 0 (neutral), output the sign of the other; else output the sign of the product of the polarities of the opinion operator and the question focus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Polarity Detection",
"sec_num": "5.2"
},
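A transcription of the four-step algorithm as a sketch; the caller is assumed to have already located the operator and the focus and looked up their polarities.

```python
# Question-polarity algorithm, steps (1)-(4) above; polarities are in {-1, 0, 1}.
def question_polarity(op_pol, focus_pol, operator_negated=False):
    if operator_negated:               # step (2): negation word before the operator
        op_pol = -op_pol
    if op_pol == 0 or focus_pol == 0:  # step (4): one side neutral ->
        return op_pol or focus_pol     #   sign of the other (0 if both neutral)
    return op_pol * focus_pol          # otherwise: sign of the product

# "Who agrees with the abolishment of the exam?": op_pol = +1, focus_pol = -1 -> -1
```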
{
"text": "We regard the polarity of the question focus together with the polarity of the opinion operator because the opinion operator primarily shows the opinion tendency of the question and different polarities of the question focus can affect the polarity of the entire question. A positive opinion operator stands for a supportive attitude such as \"agree\", \"approve\", and \"support\". A neutral opinion operator stands for a neutral attitude such as \"state\", \"mention\", and \"indicate\". A negative opinion operator stands for a not-supportive attitude such as \"doubt\", \"disapprove\", and \"protest\". In the question \"Who approves the Joint College Entrance Examination?\", \"approve\" is a positive operator, and \"the Joint College Entrance Examination\" is a neutral question focus. The overall polarity of this question is positive, so the opinion QA system needs to retrieve sentences that contain a positive polarity to \"the Joint College Entrance Examination.\" In contrast, in the question \"Who agrees the abolishment of the Joint College Entrance Examination?\", the question focus \"the abolishment of the Joint College Entrance Examination\" becomes negative because of \"the abolishment\". Even though the operator is positive, opinion QA systems still have to look for sentences that contain negative opinions toward \"the Joint College Entrance Examination.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question Polarity Detection",
"sec_num": "5.2"
},
{
"text": "In Chinese, a sentence ending with a full stop may be composed of several sentence fragments sf separated by commas or semicolons as follows: \"sf 1 \uff0csf 2 \uff0csf 3 \uff0c\u2026\uff0csf n \u3002\". This paper (reference removed for blind review) shows that about 75% of Chinese sentences contain more than two sentence fragments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Scope Identification",
"sec_num": "5.3"
},
{
"text": "An opinion scope denotes a range expressing attitudes in a sentence. It may be a complete sentence, a sentence fragment, or a meaningful unit (MU) based on different criteria. It is very common that many concepts are expressed within one sentence in Chinese documents. Therefore to identify the complete concept, which is denoted as a meaningful unit, in sentences is necessary for the processing of relevant opinions. As mentioned, a Chinese sentence is composed of several sentence fragments, and one or more of them can form a meaningful unit, which expresses a complete concept. This paper (reference removed) employed linking elements (Li and Thompson, 1981) such as \"because\", \"when\", etc. to compose MUs from a sentence. In S (in Chinese), \"\u56e0\u6b64\" (thus) is a linking element which links sf 2 , sf 3 , and sf 4 together, and sf 2 is a subordinate clause of the operator \"\u8868\u793a\" (indicate) in sf 1 . Therefore, sf 1 , sf 2 , sf 3 , and sf 4 form a MU in this case.",
"cite_spans": [
{
"start": 640,
"end": 663,
"text": "(Li and Thompson, 1981)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Opinion Scope Identification",
"sec_num": "5.3"
},
{
"text": "sf 1 : \u9ec3\u5b97\uf914\u8868\u793a(indicate:operator)\uff0c sf 2 : \u767c\ufa08\u570b\u6c11 IC \u5361\u727d\u6d89\u5230\u57fa\u672c\u4eba\u6b0a\uff0c sf 3 : \u56e0\u6b64(thus:linking element)\uff0c sf 4 : \u5728\u6c7a\u7b56\u904e\u7a0b\u4e0a\u5fc5\u9808\u76f8\u7576\u56b4\u5bc6\uff0c sf 5 : \uf9b5\u5982\u65e5\u672c\u5c31\u672a\u767c\ufa08\u570b\u6c11\u8eab\u4efd\u8b49\u3002",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "S:",
"sec_num": null
},
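A simplified sketch of fragment splitting and MU grouping; the LINKING seed set is an assumption, and operator subordination (which links sf 1 to sf 2 in the example above) is not modeled.

```python
# Fragment splitting and meaningful-unit (MU) grouping, simplified.
import re

LINKING = ("因此", "因為", "所以", "當")   # assumed seed of linking elements

def fragments(sentence):
    """Split a Chinese sentence into fragments at commas and semicolons."""
    return [f for f in re.split("[，；]", sentence.rstrip("。")) if f]

def meaningful_units(frags):
    """Merge fragments around a linking element into one MU."""
    mus, link_next = [], False
    for f in frags:
        is_link = f.startswith(LINKING)
        if mus and (is_link or link_next):
            mus[-1] += "，" + f          # linked: extend the current MU
        else:
            mus.append(f)                # otherwise start a new MU
        link_next = is_link              # a linking element also pulls in what follows
    return mus
```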
{
"text": "The IR system takes a sentence as a retrieval unit and reports those sentences that are probably relevant to a given query. The focus detection aims to know which sentence fragments are useful to extract answer passages. Three criteria of focus detection, namely exact match, partial match, and lenient, are considered. In an extreme case (i.e. lenient), all the fragments in a retrieved sentence are regarded as relevant to the question focus. In another extreme case (i.e. exact match), only the fragment containing the complete question focus is regarded as relevant. In other words, exact match filters out the irrelevant fragments from the retrieved sentences. Partial match is weaker than exact match and is stronger than the lenient criterion. Those fragments which contain a part of the question focus are regarded as relevant. There are three criteria for focus detection and opinion scope identification, respectively, thus a total of 9 combinations are considered. For example, a combination of exact match and meaningful units means there is at least one focus in meaningful units. Similarly, a combination of partial match and sentence fragments indicates that there is at least one partial focus in sentence fragments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Focus Detection",
"sec_num": "5.4"
},
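The three criteria can be stated compactly as set tests over a tokenized scope; a sketch:

```python
# Focus-detection criteria over an opinion scope, as defined above.
def focus_match(scope_tokens, focus_terms, criterion):
    scope = set(scope_tokens)
    if criterion == "lenient":    # every retrieved fragment counts as relevant
        return True
    if criterion == "exact":      # scope must contain the complete focus
        return all(t in scope for t in focus_terms)
    if criterion == "partial":    # any part of the focus suffices
        return any(t in scope for t in focus_terms)
    raise ValueError(criterion)
```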
{
"text": "Given a combination of the above strategies, we have a set of opinion scopes relevant to the specific focus. Polarity detection tries to identify the scopes which have the same polarities as the question. How to determine the opinion polarity is an important issue. Two approaches are adopted. The opinion word approach employs a sentiment dictionary to detect if some words in this dictionary appear in a scope. The score of an opinion scope is the sum of the scores of these words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Detection",
"sec_num": "5.5"
},
{
"text": "People sometimes imply their feelings or beliefs toward a particular target or event by actions. For example, people may not say \"Objection!\" to disagree an event, but they may try to abolish or terminate it as possible as they could. On the other hand, people may not say \"I'm loving it!\" to show their delight to an event, but they may try to fight for it or legalize it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Detection",
"sec_num": "5.5"
},
{
"text": "In both circumstances, what people take in action expresses their opinions. Action words are those which indicate a person's willing of doing or not doing some behaviors. For example, carry out, seek, and follow are words showing willingness to do something, and we name these words as do's; substitute, stop, and boycott are words showing unwillingness to do something, and we name these words as don'ts. In the action word approach, we detect opinions in scopes with the help of a seed vocabulary of do's and don'ts, together with a sentiment dictionary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Polarity Detection",
"sec_num": "5.5"
},
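A sketch of scope scoring under both approaches; the unit weights for do's and don'ts are assumptions, as the paper does not specify how action words are weighted.

```python
# Scope polarity: opinion words only, or opinion words plus do's/don'ts.
def scope_score(tokens, senti, dos=frozenset(), donts=frozenset()):
    """senti: word -> signed score; dos/donts: action-word seed sets (assumed)."""
    score = sum(senti.get(t, 0.0) for t in tokens)     # opinion word approach
    score += sum(1.0 for t in tokens if t in dos)      # willingness: positive evidence
    score -= sum(1.0 for t in tokens if t in donts)    # unwillingness: negative evidence
    return score   # sign gives the scope polarity, matched against the question's
```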
{
"text": "The F-measure metric is used for evaluation for the answer passage retrieval. To answer an opinion question, all answer passages have to be retrieved for opinion polarity judgment. Therefore, the conventional evaluation metric that uses the precision and recall at a certain rank, e.g. top 10, may not be suitable for this task. Since all answer passages, sentence fragments and meaningful units which provide correct answers are already annotated in the testing bed, the F-measure metric can be applied without questions. Tables 6 and 7 show the F-measures of answer passage retrieval using the opinion word approach and the action word approach, respectively. In these two approaches, adopting meaningful units as opinion scopes is better than adopting sentences and sentence fragments. Considering both opinion and action words are better than opinion words only. The best F-measure 40.59% is achieved when meaningful units and partial match are used. Table 7 . F-Measure of Action Word Approach.",
"cite_spans": [],
"ref_spans": [
{
"start": 523,
"end": 537,
"text": "Tables 6 and 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Answer Passage Retrieval",
"sec_num": "5.6"
},
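A sketch of the set-based F-measure used here, computed over all retrieved versus gold answer units with no rank cutoff:

```python
# Set-based F-measure over retrieved vs. gold answer units.
def f_measure(retrieved, gold):
    retrieved, gold = set(retrieved), set(gold)
    tp = len(retrieved & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)
```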
{
"text": "The previous experiments were done on sentences reported by the Okapi IR system. These retrieved sentences are not all relevant to the questions. This section will discuss how the relevance affects answer passage retrieval. Recall that the experimental corpus is annotated with Rel2T (relevant or irrelevant to the topic), Rel2Q (relevant or irrelevant to the question), CorrectMU (text spans containing answers to the question). Assume meaningful units are taken as the opinion scope. Tables 8 and 9 show how relevance influences the performance of answer passage retrieval using the opinion word and action word approaches, respectively. Rel2T shows the performance of using answer passages relevant to the six topics, that is, the original relevant documents from NTCIR CLIR task. Rel2Q shows the performance of using answer passages relevant to the questions, while CorrectMU shows the performance of using correct opinion fragments, which are relevant to the question focus, to decide opinion polarities. Rel2T is similar to the relevant sentence retrieval, which was shown to be tough in TREC novelty track (Soboroff and Harman, 2003) . From Rel2T to Rel2Q and CorrectMU, the best strategy for matching the question focus switches from partial match to lenient. This is reasonable, since the contents of Rel2Q and CorrectMU are already relevant to the question focus. In Rel2Q, doing focus detection doesn't benefit or harm a lot (50.37% vs. 53.06%). It shows that the question focus will appear exactly or partially in the relevant sentences. However, focus detection lowers the performance in CorrectMU (72.84% vs. 84.96%). It tells that the question focus and the correct meaningful units may appear in different positions within the sentence. For example, the first meaningful unit talks about the question focus, while the third meaningful unit really answers the question but omits the question focus since it is mentioned earlier. From Rel2T to Rel2Q, the F-measure does not increase as much as that from Rel2Q to CorrectMU. This result shows that finding the correct fragments of passages to judge the opinion polarity is very crucial to answer passage retrieval. The Fmeasure of CorrectMU shows the performance of judging opinion polarities without the relevant issue. Using either the opinion word approach or the action word approach achieves an F-measure greater than 80%. As a whole, including action words is better than using opinion words only.",
"cite_spans": [
{
"start": 1113,
"end": 1140,
"text": "(Soboroff and Harman, 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 486,
"end": 500,
"text": "Tables 8 and 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments on Relevance Effects",
"sec_num": "5.7"
},
{
"text": "This paper proposes some important techniques for opinion question answering. For question classification, a two-layered framework including two classifiers is proposed. General questions are divided into factual and opinion questions, and then opinion questions themselves are classified into one of the six opinion question types defined in this paper. With both factual and opinion features for a decision tree model, the classifier achieves a precision rate of 87.8% for general question classification. With heuristic rules and the Pearson correlation coefficient as the distance measurement, the classifier achieves a precision rate of 92.5% for opinion question classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "For opinion answer passage retrieval, we concern not only the relevance but also the sentiment. Considering both opinion words and action words is better than considering opinion words only. Taking meaningful units as the opinion scope is better than taking sentences. Under the action word approach, the best model achieves an F-measure of 40.59% using partial match at the level of meaningful unit. With relevance issues removed, the Fmeasure of the best model boosts up to 84.96%. Understanding the meaning of the question focus is important for the relevance detection, but some foci are quite challenging in the experiments. Query expansion and concept ontology will be explored in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://trec.nist.gov/ 2 http://research.nii.ac.jp/ntcir/index-en.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Combining Low-Level and Summary Representations of Opinions for Multi-Perspective Question Answering",
"authors": [
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of AAAI Spring Symposium Workshop",
"volume": "",
"issue": "",
"pages": "20--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cardie, C., Wiebe, J., Wilson, T. and Litman, D. 2003. Combining Low-Level and Summary Representations of Opinions for Multi-Perspective Question Answering. In Proceedings of AAAI Spring Symposium Workshop, 20-27",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Determining the Sentiment of Opinions",
"authors": [
{
"first": "S.-M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20 th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1367--1373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, S.-M. and Hovy, E. 2004. Determining the Sentiment of Opinions. In Proceedings of the 20 th International Conference on Computational Linguistics, 1367-1373.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Identifying Opinion Holders for Question Answering in Opinion Texts",
"authors": [
{
"first": "S-M",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of AAAI-05 Workshop on Question Answering in Restricted Domains",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim, S-M and Hovy, E. 2005. Identifying Opinion Holders for Question Answering in Opinion Texts. In Proceedings of AAAI-05 Workshop on Question Answering in Restricted Domains.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Opinion Extraction, Summarization and Tracking in News and Blog Corpora",
"authors": [
{
"first": "L.-W",
"middle": [],
"last": "Ku",
"suffix": ""
},
{
"first": "Y.-T",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "H.-H",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AAAI-2006 Spring Symposium on Computational Approaches to Analyzing Weblogs",
"volume": "",
"issue": "",
"pages": "100--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ku, L.-W., Liang, Y.-T. and Chen, H.-H. 2006. Opinion Extraction, Summarization and Tracking in News and Blog Corpora. In Proceedings of AAAI-2006 Spring Symposium on Computational Approaches to Analyzing Weblogs, AAAI Technical Report, 100-107.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Mandarin Chinese: A Functional Reference Grammar",
"authors": [
{
"first": "C",
"middle": [
"N"
],
"last": "Li",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Thompson",
"suffix": ""
}
],
"year": 1981,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li, C.N. and Thompson, S.A. 1981. Mandarin Chinese: A Functional Reference Grammar, University of California Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Thumbs up? Sentiment Classification Using Machine Learning Techniques",
"authors": [
{
"first": "B",
"middle": [],
"last": "Pang",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Vaithyanathan",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "79--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pang, B., Lee, L. and Vaithyanathan, S. 2002. Thumbs up? Sentiment Classification Using Machine Learning Techniques. In Proceedings of the 2002 Conference on EMNLP, 79-86.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Data Mining Tools See5 and C5.0",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quinlan, J. R. 2000. Data Mining Tools See5 and C5.0. http://www.rulequest.com/see5- info.html",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning Extraction Patterns for Subjective Expressions",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on EMNLP",
"volume": "",
"issue": "",
"pages": "105--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, E. and Wiebe, J. 2003. Learning Extraction Patterns for Subjective Expressions. In Proceedings of the 2003 Conference on EMNLP, 105-112.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning Subjective Nouns Using Extraction Pattern Bootstrapping",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Seventh Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riloff, E., Wiebe, J. and Wilson, T. 2003. Learning Subjective Nouns Using Extraction Pattern Bootstrapping. In Proceedings of Seventh Conference on Natural Language Learning, 25-32.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Overview of the TREC 2003 novelty track",
"authors": [
{
"first": "I",
"middle": [],
"last": "Soboroff",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Harman",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Twelfth Text REtrieval Conference",
"volume": "",
"issue": "",
"pages": "38--53",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soboroff, I. and Harman, D. 2003. Overview of the TREC 2003 novelty track. In Proceedings of Twelfth Text REtrieval Conference, National Institute of Standards and Technology, 38-53.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Multi-Perspective Question Answering Using the OpQA Corpus",
"authors": [
{
"first": "V",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP 2005",
"volume": "",
"issue": "",
"pages": "923--930",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stoyanov, V., Cardie, C. and Wiebe, J. 2005. Multi-Perspective Question Answering Using the OpQA Corpus. In Proceedings of HLT/EMNLP 2005, 923-930.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning Subjective Adjectives from Corpora",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceeding of 17th National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "735--740",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, J. 2000. Learning Subjective Adjectives from Corpora. In Proceeding of 17th National Conference on Artificial Intelligence, 735-740.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "NRRC Summer Workshop on Multi-Perspective Question Answering. ARDA NRRC Summer",
"authors": [
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Breck",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Buckly",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Fraser",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Litman",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pierce",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wiebe, J., Breck, E., Buckly, C., Cardie, C., Davis, P., Fraser, B., Litman, D., Pierce, D., Riloff, E. and Wilson, T. 2002 NRRC Summer Workshop on Multi-Perspective Question Answering. ARDA NRRC Summer 2002 Workshop.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis",
"authors": [
{
"first": "T",
"middle": [],
"last": "Wilson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Wiebe",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hoffmann",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of HLT/EMNLP 2005",
"volume": "",
"issue": "",
"pages": "347--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wilson, T., Wiebe, J. and Hoffmann, P. 2005. Recognizing Contextual Polarity in Phrase- Level Sentiment Analysis. In Proceedings of HLT/EMNLP 2005, 347-354.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences",
"authors": [
{
"first": "H",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Hatzivassiloglou",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of HLT/EMNLP 2003",
"volume": "",
"issue": "",
"pages": "129--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu, H., and Hatzivassiloglou, V. 2003. Towards Answering Opinion Questions: Separating Facts from Opinions and Identifying the Polarity of Opinion Sentences. In Proceedings of HLT/EMNLP 2003, 129-136.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "An Opinion QA System Framework.",
"type_str": "figure",
"num": null,
"uris": null
},
"FIGREF3": {
"text": "Answer Passage Retrieval.",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF0": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>Q</td><td/><td/><td/></tr><tr><td>type</td><td colspan=\"3\">Factual Opinion Total</td></tr><tr><td>Corpus</td><td/><td/><td/></tr><tr><td>TREC</td><td>500</td><td>0</td><td>500</td></tr><tr><td>NTCIR</td><td>1,577</td><td>0</td><td>1,577</td></tr><tr><td>Polls</td><td>62</td><td>582</td><td>644</td></tr><tr><td>OPQ</td><td>304</td><td>707</td><td>1,011</td></tr><tr><td>Total</td><td>2,443</td><td>1,289</td><td>3,732</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF2": {
"text": "shows experimental results. Only with feature x shows the error rate of using one single feature, while with all but feature x shows the error rate of using all features except the specified feature. feature x PTY OPR POS NEG only with feature x 19.6 38.5 34.9 35.3 with all but feature x 16.3 12.7 13.7 12.2 feature x TOW TSR MSR ALL only with feature x 21.9 26.6 29.6 12.2 with all but feature x 14.8 12.4 12.8",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>Opinion Scope \u2192 Focus Detection \u2193</td><td>sentence</td><td>sentence fragment</td><td>meaningful unit</td></tr><tr><td>Exact Match</td><td colspan=\"2\">32.09% 36.06%</td><td>36.25%</td></tr><tr><td>Partial Match</td><td colspan=\"2\">27.32% 27.46%</td><td>33.09%</td></tr><tr><td>Lenient</td><td colspan=\"2\">19.91% 19.95%</td><td>25.05%</td></tr><tr><td>Table 6. F-Opinion Scope \u2192 Focus Detection \u2193</td><td>sentence</td><td>sentence fragment</td><td>meaningful unit</td></tr><tr><td>Exact Match</td><td colspan=\"2\">28.75% 30.20%</td><td>36.36%</td></tr><tr><td>Partial Match</td><td colspan=\"2\">32.83% 35.09%</td><td>40.59%</td></tr><tr><td>Lenient</td><td colspan=\"2\">27.15% 29.19%</td><td>32.87%</td></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>Rel Degree \u2192 Focus Detection \u2193</td><td>Rel2T</td><td colspan=\"2\">Rel2Q CorrectMU</td></tr><tr><td>Exact Match</td><td colspan=\"2\">36.69% 36.73%</td><td>50.43%</td></tr><tr><td>Partial Match</td><td colspan=\"2\">34.79% 47.15%</td><td>70.15%</td></tr><tr><td>Lenient</td><td colspan=\"2\">28.03% 48.35%</td><td>80.73%</td></tr><tr><td>Table 8. Rel Degree \u2192 Focus Detection \u2193</td><td>Rel2T</td><td colspan=\"2\">Rel2Q CorrectMU</td></tr><tr><td>Exact Match</td><td colspan=\"2\">36.88% 36.92%</td><td>48.99%</td></tr><tr><td>Partial Match</td><td colspan=\"2\">41.90% 50.37%</td><td>72.84%</td></tr><tr><td>Lenient</td><td colspan=\"2\">37.04% 53.06%</td><td>84.96%</td></tr><tr><td>Table 9.</td><td/><td/></tr></table>",
"html": null,
"num": null
}
}
}
}