| { |
| "paper_id": "O16-2002", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:05:07.270470Z" |
| }, |
| "title": "Linguistic Template Extraction for Recognizing Reader-Emotion", |
| "authors": [ |
| { |
| "first": "Yung-Chun", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Academia Sinica", |
| "location": { |
| "settlement": "Taipei", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "changyc@iis.sinica.edu.tw" |
| }, |
| { |
| "first": "Chun-Han", |
| "middle": [], |
| "last": "Chu", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Chien", |
| "middle": [], |
| "last": "Chin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Academia Sinica", |
| "location": { |
| "settlement": "Taipei", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Wen-Lian", |
| "middle": [], |
| "last": "Hsu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Academia Sinica", |
| "location": { |
| "settlement": "Taipei", |
| "country": "Taiwan" |
| } |
| }, |
| "email": "hsu@iis.sinica.edu.tw" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Previous studies on emotion classification mainly focus on the emotional state of the writer. By contrast, our research emphasizes emotion detection from the readers' perspective. The classification of documents into reader-emotion categories can be applied in several ways; one application is to retain only the documents that trigger desired emotions, enabling users to retrieve documents that contain relevant contents while instilling the proper emotions. However, current information retrieval (IR) systems lack the ability to discern emotions within texts, and the detection of reader emotion has yet to achieve comparable performance. Moreover, previous machine learning-based approaches generally use statistical models that are not in a human-readable form. Thus, it is difficult to pinpoint the reason for recognition failures and understand the types of emotions that articles inspire in their readers. In this paper, we propose a flexible emotion template-based approach (TBA) for reader-emotion detection that simulates this process in a human-perceptive manner. TBA is a highly automated process that incorporates various knowledge sources to learn emotion templates from raw text that characterize an emotion and are comprehensible to humans. Generated templates are adopted to predict the reader's emotion through an alignment-based matching algorithm that allows an emotion template to be partially matched through a statistical scoring scheme. Experimental results demonstrate that our approach can effectively detect readers' emotions by exploiting the syntactic structures and semantic associations in the context, while", |
| "pdf_parse": { |
| "paper_id": "O16-2002", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Previous studies on emotion classification mainly focus on the emotional state of the writer. By contrast, our research emphasizes emotion detection from the readers' perspective. The classification of documents into reader-emotion categories can be applied in several ways; one application is to retain only the documents that trigger desired emotions, enabling users to retrieve documents that contain relevant contents while instilling the proper emotions. However, current information retrieval (IR) systems lack the ability to discern emotions within texts, and the detection of reader emotion has yet to achieve comparable performance. Moreover, previous machine learning-based approaches generally use statistical models that are not in a human-readable form. Thus, it is difficult to pinpoint the reason for recognition failures and understand the types of emotions that articles inspire in their readers. In this paper, we propose a flexible emotion template-based approach (TBA) for reader-emotion detection that simulates this process in a human-perceptive manner. TBA is a highly automated process that incorporates various knowledge sources to learn emotion templates from raw text that characterize an emotion and are comprehensible to humans. Generated templates are adopted to predict the reader's emotion through an alignment-based matching algorithm that allows an emotion template to be partially matched through a statistical scoring scheme. Experimental results demonstrate that our approach can effectively detect readers' emotions by exploiting the syntactic structures and semantic associations in the context, while", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "With the rapid growth of the Internet, the web has become a powerful medium for disseminating information. People can easily share their daily experiences and opinions anytime and anywhere on social media such as blogs, Twitter, and Facebook. Sentiment analysis has therefore gained increasing interest in recent years, with academia and business corporations attempting to analyse and predict public trends by mining opinions, i.e. subjective statements that reflect people's sentiments or perceptions about topics (Pang et al., 2002). Moreover, human feelings can be quickly collected through emotion detection (Quan & Ren, 2009; Das & Bandyopadhyay, 2009). While previous research on emotions mainly focused on detecting the emotions that the authors of documents were expressing, it is worth noting that reader emotions, in some respects, differ from those of the authors and may be even more complex (Lin et al., 2008; Tang & Chen, 2012). Consider a news article, for instance: while a journalist objectively reports rising oil prices without expressing his or her emotion in the text, a reader could still feel angry or negative. On the other hand, an infamous politician's blog entry describing his miserable day may not cause opposing readers to feel the same way. While the author of an article may directly express his or her emotions through sentiment words within the text, a reader's emotion possesses a more complex nature, as even general words can evoke different types of reader emotions depending on the reader's personal experience and knowledge (Lin et al., 2007).", |
| "cite_spans": [ |
| { |
| "start": 545, |
| "end": 564, |
| "text": "(Pang et al., 2002)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 643, |
| "end": 661, |
| "text": "(Quan & Ren, 2009;", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 662, |
| "end": 688, |
| "text": "Das & Bandyopadhyay, 2009)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 948, |
| "end": 966, |
| "text": "(Lin et al., 2008;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 967, |
| "end": 985, |
| "text": "Tang & Chen, 2012)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 1619, |
| "end": 1637, |
| "text": "(Lin et al., 2007)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Instead of detecting the writer's emotion, which has already been investigated extensively in previous studies (Zhang & Liu, 2010; Mukherjee & Liu, 2010; Si et al., 2013), this paper aims to uncover the emotions of readers triggered by a document. Such research holds great potential for novel applications. For instance, an enterprise with business intelligence capable of identifying the emotional effect a document has on its readers can provide services that retain only the documents that evoke the desired emotions, enabling users to retrieve documents with relevant contents while being instilled with the proper emotions. As a result, users benefit by obtaining opportunities and advantages in the competitive market in a more efficient manner. However, current information retrieval systems lack the ability to discern emotion within texts, and reader-emotion detection has yet to achieve comparable performance (Lin et al., 2007). Machine learning-based approaches are widely used in sentiment analysis and emotion detection research. These approaches can usually generate accurate classifiers that assign a category label to each document at much lower labour cost. Nevertheless, the statistical models used by these classifiers are generally not in a human-readable form. Thus, it is difficult to pinpoint the reason for recognition failures and understand which exact reader emotion is triggered.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 126, |
| "text": "(Zhang & Liu, 2010;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 127, |
| "end": 149, |
| "text": "Mukherjee & Liu, 2010;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 150, |
| "end": 166, |
| "text": "Si et al., 2013)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 969, |
| "end": 987, |
| "text": "(Lin et al., 2007)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In light of this rationale, we propose a flexible template-based approach (TBA) for reader-emotion detection that simulates this process in a human-perceptive manner. TBA is a highly automated process that integrates various types of knowledge to generate discriminative linguistic patterns for document representation. These patterns can be regarded as the essential knowledge humans use to understand different kinds of emotions. Furthermore, TBA recognizes the reader-emotions of documents using an alignment-based algorithm that allows an emotion template to be partially matched through a statistical scoring scheme. Our experiments demonstrate that TBA achieves higher performance than other well-known text categorization methods and the state-of-the-art reader-emotion detection method.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The remainder of this paper is organized as follows. The next section contains a review of related work on reader-emotion detection approaches. We introduce the proposed emotion template-based approach for reader-emotion detection in Section 3, and its evaluation is described in Section 4. Finally, we provide some concluding remarks and potential future avenues of research in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "Textual articles are one of the most common ways for people to convey their feelings. Identifying the essential factors that affect emotion transition is important for human language understanding. With the rapid growth of computer-mediated communication applications, such as social websites and micro-blogs, research on emotion classification has recently been attracting increasing attention from enterprises for business intelligence (Chen et al., 2010; Purver & Battersby, 2012). In general, a single text may possess two types of emotions: writer-emotion and reader-emotion. Research on writer-emotion investigates the emotion expressed by the writer when writing the text. Pang et al. (2002) designed an algorithm to classify movie reviews into positive and negative emotions. Mishne (2005), and Yang and Chen (2006) used emoticons as tags to train SVM (Cortes & Vapnik, 1995) classifiers at the document and sentence level, respectively. In their studies, emoticons were taken as mood or emotion tags, and textual keywords were considered as features. Wu et al. (2006) proposed a sentence-level emotion recognition method using dialogs as their corpus, in which \"Happy\", \"Unhappy\", or \"Neutral\" was assigned to each sentence as its emotion category. Yang et al. (2006) adopted Thayer's model (1989) to classify music emotions, where each music segment can be classified into four classes of moods. As for sentiment analysis research, Read (2005) used emoticons in newsgroup articles to extract relevant instances for training polarity classifiers.", |
| "cite_spans": [ |
| { |
| "start": 445, |
| "end": 464, |
| "text": "(Chen et al., 2010;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 465, |
| "end": 490, |
| "text": "Purver & Battersby, 2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 692, |
| "end": 710, |
| "text": "Pang et al. (2002)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 796, |
| "end": 809, |
| "text": "Mishne (2005)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 816, |
| "end": 836, |
| "text": "Yang and Chen (2006)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 873, |
| "end": 896, |
| "text": "(Cortes & Vapnik, 1995)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1072, |
| "end": 1088, |
| "text": "Wu et al. (2006)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 1270, |
| "end": 1288, |
| "text": "Yang et al. (2006)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1297, |
| "end": 1318, |
| "text": "Thayer's model (1989)", |
| "ref_id": null |
| }, |
| { |
| "start": 1448, |
| "end": 1459, |
| "text": "Read (2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Nevertheless, research on reader-emotion concerns the emotions experienced by a reader after reading a text. As the writer and readers may view the same text from different perspectives, they do not always share the same emotion. With the recent increase in the popularity of the Internet, certain news websites, such as Yahoo! Kimo News, incorporate Web 2.0 technologies that allow readers to express their emotions toward news articles. Classifying emotions from the readers' point of view is a challenging task, and research on this topic is relatively sparse compared to that considering the writers' perspective. Lin et al. (2007) first described the task of reader-emotion classification on news articles and classified Yahoo! News articles into 8 emotion classes (e.g. happy, angry, or depressing) from the readers' perspective. They combined bigrams, words, metadata, and word emotion categories to train a classifier for determining the reader-emotions toward news. Yang et al. (2009) automatically annotated reader-emotions on a writer-emotion corpus with a reader-emotion classifier, and studied the interactions between writers and readers with the resulting writer-reader-emotion corpus.", |
| "cite_spans": [ |
| { |
| "start": 756, |
| "end": 773, |
| "text": "Lin et al. (2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1113, |
| "end": 1131, |
| "text": "Yang et al. (2009)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "Our approach differs from existing reader-emotion detection approaches in a number of aspects. First, we propose an emotion template-based approach that mimics the perceptual behaviour of humans in understanding text. Second, the generated emotion templates are human-readable and can serve as the domain knowledge required for detecting reader-emotion. They therefore help elucidate, in a more comprehensive manner, how articles trigger certain types of emotions in their readers. Finally, in addition to syntactic features, TBA further considers the surrounding context and semantic associations to efficiently recognize reader-emotions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2." |
| }, |
| { |
| "text": "In this paper, we present a template-based approach (TBA) for detecting the reader-emotion of documents. We model reader-emotion detection as a classification problem, and define the task as follows. Let W = {w_1, w_2, \u2026, w_k} be a set of words, D = {d_1, d_2, \u2026, d_m} be a set of documents, and E = {e_1, e_2, \u2026, e_n} be a set of reader-emotions. Each document d is a set of words such that d \u2286 W. The goal of this task is to decide the most appropriate reader-emotion e_i for a document d_j, although one or more emotions can be associated with a document. Our proposed method is different in that we take advantage of multiple knowledge sources and implement an algorithm to automatically generate templates that represent discriminative patterns in documents. Our system mainly consists of three components: Crucial Element Labelling (CEL), Emotion Template Generation (ETG), and Emotion Template Matching (ETM), as shown in Figure 1. The CEL first uses prior knowledge to mark the semantic classes of words in the corpus. Then the ETG collects frequently co-occurring elements and generates templates for each emotion. These templates are stored in an emotion-dependent knowledge base to provide domain-specific knowledge for emotion detection. During the detection process, an article is first labelled by the CEL as well. Subsequently, the ETM applies an alignment-based algorithm that utilizes our knowledge base to calculate the similarity between each emotion and the article in order to determine the article's main emotion. Details of these components are described in the following sections.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 994, |
| "end": 1002, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Reader-emotion Template from Raw Text", |
| "sec_num": "3." |
| }, |
| { |
| "text": "TBA attempts to simulate the human perception of an emotion through the recognition of crucial elements. In this work, we capture crucial elements within documents by adopting a three-layer labelling approach that utilizes various knowledge sources, such as lexical dictionaries and Wikipedia, to induce template elements. First of all, since keywords within a reader-emotion are often considered important information, we use the log likelihood ratio (LLR) (Manning & Sch\u00fctze, 1999), an effective feature selection method, to learn a set of reader-emotion-specific keywords. Given a training dataset, LLR employs Equation (1) to calculate the likelihood of the assumption that the occurrence of a word w in reader-emotion e_i is not random. In (1), e_i denotes the specific reader-emotion in the training dataset; N(e_i) and N(\u00ace_i) are the numbers of on-emotion and off-emotion documents, respectively. N(w^e_i), denoted as k, is the number of on-emotion documents containing w; the number of off-emotion documents containing w, N(w^\u00ace_i), is denoted as l. Altogether, the formula expresses the ratio of two likelihood functions. To simplify the formula, we also define m as the number of on-emotion documents without word w (that is, m = N(e_i) - k), and n as that of off-emotion documents (n = N(\u00ace_i) - l). The probabilities p(w), p(w|e_i), and p(w|\u00ace_i) are estimated using maximum likelihood estimation. A word with a large LLR value is closely associated with the reader-emotion. We rank the words in the training dataset by their LLR values and select the top 200 to compile an emotion keyword list.", |
| "cite_spans": [ |
| { |
| "start": 462, |
| "end": 487, |
| "text": "(Manning & Sch\u00fctze, 1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crucial Element Labelling (CEL)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "LLR(w, e_i) = 2 log [ p(w|e_i)^k (1 - p(w|e_i))^m p(w|\u00ace_i)^l (1 - p(w|\u00ace_i))^n / ( p(w)^{k+l} (1 - p(w))^{m+n} ) ]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Crucial Element Labelling (CEL)", |
| "sec_num": "3.1" |
| }, |
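As a concrete illustration of Equation (1), the LLR score can be computed directly from the four document counts k, l, m, and n defined above. This is a minimal sketch of ours, not the authors' code; the function and variable names are hypothetical.

```python
import math

def llr(k, l, m, n):
    """Log likelihood ratio of Equation (1): how non-random is the
    association between word w and reader-emotion e_i?
    k = on-emotion docs containing w,  l = off-emotion docs containing w,
    m = on-emotion docs without w,     n = off-emotion docs without w."""
    def log_lik(p, successes, failures):
        # Binomial log-likelihood; clamp p away from 0 and 1 to keep log finite.
        eps = 1e-12
        p = min(max(p, eps), 1 - eps)
        return successes * math.log(p) + failures * math.log(1 - p)

    p_w = (k + l) / (k + l + m + n)  # MLE of p(w)
    p_on = k / (k + m)               # MLE of p(w|e_i)
    p_off = l / (l + n)              # MLE of p(w|~e_i)
    # Ratio of the two likelihood functions, on a 2*log scale.
    return 2 * (log_lik(p_on, k, m) + log_lik(p_off, l, n)
                - log_lik(p_w, k + l, m + n))
```

Ranking the vocabulary by this score and keeping the top 200 words would then yield the emotion keyword list described in the text. A word occurring at the same rate on- and off-emotion scores 0; a strongly emotion-associated word scores high.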
| { |
| "text": "(1) Next, we aim to recognize named entities (NEs) from text to facilitate document comprehension and improve the performance of identifying topics (Bashaddadh & Mohd, 2011). Our labelling algorithm uses a string-matching scheme to single out the keywords (if they exist in the sequences); therefore, word segmentation is not required during preprocessing. However, exact matching of NEs may overlook many template elements since it ignores semantic context. To remedy this problem, this paper adopts a novel structure to construct the NE ontology (NEO) for labelling crucial elements, based on the levels of organization mentioned in (Lee et al. 2005) and (Wang et al. 2010). Figure 2 depicts the architecture of the NE ontology, which includes an emotion layer, a semantic layer, and an instance layer. There are eight types of emotions in the emotion layer, namely \"Angry\", \"Worried\", \"Boring\", \"Happy\", \"Odd\", \"Depressing\", \"Informative\", and \"Warm\". Moreover, each semantic class in the semantic layer denotes a general semantic meaning of named entities that can be aggregated from many emotions, including \"\u653f\u6cbb\u4eba\u7269 (Politician)\", \"\u75be\u75c5 (Disease)\", and others. The instance layer represents 6323 named entities extracted by the Stanford NER from documents across the eight emotions. In order to minimize the labour cost of instance generalization, we utilize Wikipedia to semi-automatically label NEs with their semantic classes as a form of generalization. Only NE labels for persons, places, and organizations are taken into consideration, and Wikipedia's category tags are used to label the NEs recognized by the Stanford NER. We select the category tag with which the most topic paths are associated as the main semantic label for NEs in documents. A topic path is the classification hierarchy of a certain category; it can be considered a traversal from general categories to more specific ones. A category name with more associated topic paths is considered more suitable to represent an NE because of its appropriate scope of semantic coverage. For example, a query \"\u6b50\u5df4\u99ac (Obama)\" to Wikipedia returns a page titled \"\u5df4\u62c9\u514b\u2022\u6b50\u5df4\u99ac (Barack Obama)\". Within this page, there are a number of category tags such as \"\u6c11\u4e3b\u9ee8 (Democratic Party)\" and \"\u7f8e\u570b\u7e3d\u7d71 (Presidents of the United States)\", which contain three and seven topic paths, respectively. Since \"\u7f8e\u570b\u7e3d\u7d71 (Presidents of the United States)\" has more topic paths, our system labels \"\u5df4\u62c9\u514b\u2022\u6b50\u5df4\u99ac (Barack Obama)\" with the tag \"[\u7f8e\u570b\u7e3d\u7d71 (Presidents of the United States)]\". If an NE term is not covered by Wikipedia's category tags, domain experts further annotate it with its corresponding semantic class for generalization. Each instance in the instance layer can connect to multiple semantic classes according to the generalized relations. For example, the named entity \"\u55ac\u4e39 (Jordan)\" can be generalized to \"\u570b\u5bb6 (country)\" and \"\u4eba\u540d (people)\". In this manner, we can transform plain NEs into a more general class and increase the coverage of each label.", |
| "cite_spans": [ |
| { |
| "start": 148, |
| "end": 173, |
| "text": "(Bashaddadh & Mohd, 2011)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 622, |
| "end": 639, |
| "text": "(Lee et al. 2005)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 644, |
| "end": 662, |
| "text": "(Wang et al. 2010)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 665, |
| "end": 673, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Crucial Element Labelling (CEL)", |
| "sec_num": "3.1" |
| }, |
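The topic-path heuristic above can be sketched as follows. The category names and path counts mirror the Obama example in the text but are illustrative stand-ins, not real Wikipedia data.

```python
def pick_semantic_label(category_topic_paths):
    """Select the Wikipedia category tag with the most associated topic
    paths as an NE's main semantic label (Section 3.1 heuristic sketch)."""
    return max(category_topic_paths, key=category_topic_paths.get)

# Hypothetical counts for the page "Barack Obama": the tag with seven
# topic paths beats the tag with three, so it becomes the semantic label.
tags = {"Democratic Party": 3, "Presidents of the United States": 7}
label = pick_semantic_label(tags)
```

With these counts, `label` is "Presidents of the United States", matching the worked example in the paragraph above.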
| { |
| "text": "Finally, to incorporate even richer semantic context into our semantic templates, we use Extended HowNet (Chen et al., 2005), an extension of HowNet (Dong et al. 2010). Extended HowNet contains a structured representation of knowledge and semantics. It connects approximately 90,000 words in the CKIP Chinese Lexical Knowledge Base and HowNet, and includes additional highly frequent words that are specific to Traditional Chinese. It also provides a distinct formulation for each word to better fit its semantic representation, and defines function and content words separately. A total of four basic semantic classes are applied, namely object, act, attribute, and value. Moreover, in comparison to HowNet, EHowNet possesses a layered definition scheme and complex relationship formulation, and uses simpler concepts as the basic elements when defining a more complex concept or relationship. To illustrate the content of EHowNet, take the definition of \"\u8840\u764c (leukemia)\" in Definition 1 as an example. In the most compact sense, leukemia is a type of cancer that originates from a disorder of blood (cells); therefore, EHowNet presents the phenomenon and its occurring position for the term in the Simple Definition. To further detail its meaning, EHowNet also explains both \"cancer\" and \"blood\" in the Expanded Definition. We can see that EHowNet not only contains the semantic representation of a word, but also its relations to other words or entities. This enables us to combine or dissect the meaning of words by using their semantic components. Following the method in (Shih et al. 2012), we extracted the main definition of each word as its semantic class label. We exploit the taxonomies in EHowNet, which include lexical categories, synonyms, and semantic relations between different words or sets of words. With the resources stated above, the CEL can transform words in the original documents into their corresponding semantic labels. Our research uses the clause as the unit for semantic labelling. To illustrate the process of CEL, consider the clause C_n = \"\u5e0c\u62c9\u854a\u9806\u5229\u4ee3\u8868\u6c11\u4e3b\u9ee8\u8d0f\u5f97\u7f8e\u570b\u7e3d\u7d71\u9078\u8209 (Hillary Clinton represents the Democratic Party and won the Presidential election in the U.S.)\", as shown in Figure 3. First, \"\u5e0c\u62c9\u854a (Hillary Clinton)\" is found in the keyword dictionary and tagged. Then NEs like \"\u6c11\u4e3b\u9ee8 (Democratic Party)\" and \"\u7e3d\u7d71\u9078\u8209 (Presidential elections)\" are recognized and tagged as \"{\u653f\u9ee8 (Party)}\" and \"{\u7e3d\u7d71\u9078\u8209 (Presidential elections)}\". Subsequently, other terms such as \"\u4ee3\u8868 (represent)\" and \"\u8d0f\u5f97 (won)\" are labelled with their corresponding EHowNet senses if they exist. Finally, we obtain the sequence \"[\u5e0c\u62c9\u854a]:{\u4ee3\u8868}:{\u653f\u9ee8}:{\u5f97\u5230}:{\u570b\u5bb6}:{\u7e3d\u7d71\u9078\u8209} ([Hillary Clinton]:{represent}:{party}:{got}:{country}:{Presidential elections})\", where domain keywords (square-bracketed slots) are matched verbatim and semantic classes (curly-bracketed slots) match all words that belong to their specific class. Since the annotation knowledge bases are processed in a specific-to-general order, the proportion of overlapping annotations is relatively low. When matched keywords overlap, the longest keyword is retained. The labelling process not only effectively prevents errors caused by Chinese word segmentation, but also groups synonyms together using semantic labels. This enables us to generate distinctive and prominent semantic templates in the next stage.", |
| "cite_spans": [ |
| { |
| "start": 108, |
| "end": 127, |
| "text": "(Chen et al., 2005)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 166, |
| "end": 184, |
| "text": "(Dong et al. 2010)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 2281, |
| "end": 2289, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Crucial Element Labelling (CEL)", |
| "sec_num": "3.1" |
| }, |
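The CEL matching behaviour described above (verbatim keyword slots in square brackets, semantic-class slots in curly brackets, longest match wins on overlap, no prior word segmentation) can be sketched as follows. The dictionaries and the English toy clause are hypothetical stand-ins for the paper's Chinese resources.

```python
def label_clause(clause, keywords, semantic_classes):
    """Greedy longest-match labelling sketch: emotion keywords are matched
    verbatim and wrapped in [..]; NE/EHowNet semantic classes are wrapped
    in {..}. Pure string matching, so no word segmentation is needed."""
    labels = []
    i = 0
    while i < len(clause):
        # Try the longest dictionary entry starting at position i.
        best = None
        for term in sorted(keywords | set(semantic_classes), key=len, reverse=True):
            if clause.startswith(term, i):
                best = term
                break
        if best is None:
            i += 1  # character carries no label; skip it
            continue
        if best in keywords:
            labels.append("[%s]" % best)                     # verbatim keyword slot
        else:
            labels.append("{%s}" % semantic_classes[best])   # semantic-class slot
        i += len(best)
    return ":".join(labels)

# Toy clause standing in for the Hillary Clinton example of Figure 3.
out = label_clause("AliceWonTheElection",
                   keywords={"Alice"},
                   semantic_classes={"TheElection": "election", "Won": "got"})
```

Here `out` is "[Alice]:{got}:{election}", mirroring the "[keyword]:{class}:{class}" sequences the CEL produces.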
| { |
| "text": "In our framework, a reader-emotion is represented by a set of semantic templates, which are sequences of crucial elements (semantic classes and keywords). The emotion template generation (ETG) process aims at automatically generating N representative templates from the sequences of crucial elements in the documents. These representative (or dominating) templates can be used as background knowledge for each reader-emotion when recognizing documents. More importantly, the representative templates can be easily understood by humans. To illustrate, consider the emotion \"Happy\" and one of the automatically generated semantic templates, \"{\u904b\u52d5\u54e1 player}:{\u5f97\u5230 get}:[\u51a0\u8ecd championship]\". We can think of various semantically similar sentences that are covered by this semantic template, e.g., \"\u67ef\u91cc\u5e36\u9818\u52c7\u58eb\u8d0f\u5f97\u4e86 NBA \u7e3d\u51a0\u8ecd\u8cfd (Stephen Curry led the Warriors to win the NBA championship)\" or \"\u8cbb\u5fb7\u52d2\u64ca\u6557\u5b89\u8fea\u2022\u7a46\u96f7\u7372\u5f97\u6eab\u666e\u6566\u51a0\u8ecd (Roger Federer defeated Andy Murray and won the Wimbledon championship)\". Likewise, a similar template \"{\u904b\u52d5\u54e1 player}:{\u5f97\u5230 get}:[\u5206\u6578 score]\" is capable of representing sentences like \"\u5e03\u840a\u6069\u5728\u6700\u7d42\u8cfd\u8d0f\u5f97\u516d\u5341\u5206 (Kobe Bryant won sixty points in his final game)\". This sort of human-interpretable knowledge cannot be easily obtained from ordinary machine learning models. The ETG process is described as follows. We formulate reader-emotion template generation as a frequent pattern mining problem. Based on the co-occurrence of crucial elements, we construct a crucial element graph (CE graph) to describe the strength of the relations between them. Since crucial elements are of an ordered nature, the graph is directed and can be built with association rules. In order to avoid generating templates of insufficient length, we empirically set the minimum support of a crucial element to 100 and the minimum confidence to 0.5 in our association rules. This is because we observed that the rank-frequency distribution of semantic classes follows Zipf's law (Manning & Sch\u00fctze, 1999), as does the normalized frequency of semantic templates. Crucial elements of low frequency usually carry semantics that are irrelevant to the emotion. Hence, for each reader-emotion, we selected the most frequent crucial elements whose accumulated frequencies reached 80% of the total crucial-element frequency count in the reader-emotion documents. Thus, an association rule can be represented as (2):", |
| "cite_spans": [ |
| { |
| "start": 1951, |
| "end": 1976, |
| "text": "(Manning & Sch\u00fctze, 1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emotion Template Generation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "(2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emotion Template Generation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where support_min = 100 and confidence_min = 0.5. Figure 4 illustrates a CE graph. In this graph, vertices (CE_x) represent crucial elements, and edges represent the co-occurrence of two elements, CE_i and CE_j, where CE_i precedes CE_j. The number on an edge denotes the confidence of the two connecting vertices.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 45, |
| "end": 53, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Emotion Template Generation", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "confidence(CE_i \u21d2 CE_j) = P(CE_j | CE_i) = support(CE_i \u222a CE_j) / support(CE_i) (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Emotion Template Generation", |
| "sec_num": "3.2" |
| }, |
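The confidence filtering of Eq. (2) over ordered element pairs might look like the sketch below. The thresholds here are toy values standing in for the paper's support >= 100 and confidence >= 0.5, and the data is invented:

```python
from collections import Counter

def build_ce_graph(sequences, min_support=2, min_confidence=0.5):
    """Directed CE graph: keep an edge CE_i -> CE_j when the pair's
    support meets min_support and its confidence, computed as
    support(CE_i, CE_j) / support(CE_i) per Eq. (2), meets
    min_confidence."""
    elem_support = Counter()
    pair_support = Counter()
    for seq in sequences:
        elem_support.update(seq)
        # ordered co-occurrence of adjacent crucial elements
        pair_support.update(zip(seq, seq[1:]))
    graph = {}
    for (ce_i, ce_j), sup in pair_support.items():
        if sup < min_support:
            continue
        conf = sup / elem_support[ce_i]
        if conf >= min_confidence:
            graph[(ce_i, ce_j)] = conf
    return graph

g = build_ce_graph([["player", "get", "championship"],
                    ["player", "get", "score"],
                    ["player", "get", "championship"]])
```

The edge weights of the resulting graph are exactly the confidence values drawn on the edges of Figure 4.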
| { |
| "text": "After constructing all CE graphs, we then generate emotion templates by applying random walk theory (Lov\u00e1sz, 1993) to search for high-frequency and representative elements for each reader-emotion. Let a CE graph G be defined as G=(V,E) (|V|=p, |E|=k); a random walk process then consists of a series of random selections on the graph. Every edge (CE_n, CE_m) has its own weight M_nm, which denotes the probability that a crucial element CE_n is followed by another element CE_m. For each element, the sum of the weights to all neighboring elements N(CE_n) is defined as (3), and the whole graph's probability matrix is defined as (4). As a result, the random walk process forms a Markov chain. According to (Li et al., 2010) , the cover time of a random walk process on a normal graph is", |
| "cite_spans": [ |
| { |
| "start": 104, |
| "end": 118, |
| "text": "(Lov\u00e1sz, 1993)", |
| "ref_id": null |
| }, |
| { |
| "start": 716, |
| "end": 733, |
| "text": "(Li et al., 2010)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4. A CE graph for template generation.", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2200 CE_n, C(CE_n) \u2264 4k^2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4. A CE graph for template generation.", |
| "sec_num": null |
| }, |
| { |
| "text": "We can conclude that using random walks to find frequent patterns on CE graphs helps us capture even low-probability combinations and shortens the processing time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4. A CE graph for template generation.", |
| "sec_num": null |
| }, |
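A weighted random walk over the CE graph, as described above, can be sketched as follows. This is an illustrative simplification, not the authors' algorithm: the walk count, length cap, and graph are invented, and each successor is chosen with probability proportional to its edge confidence:

```python
import random
from collections import Counter

def random_walk_templates(graph, n_walks=200, max_len=6, top_n=3, seed=7):
    """Sample crucial-element sequences by walking a directed CE graph
    whose edge weights act as transition probabilities, then return the
    most frequently generated sequences as candidate templates."""
    rng = random.Random(seed)
    succ = {}  # node -> ([successor nodes], [edge weights])
    for (a, b), w in graph.items():
        succ.setdefault(a, ([], []))[0].append(b)
        succ[a][1].append(w)
    starts = sorted(succ)
    counts = Counter()
    for _ in range(n_walks):
        node = rng.choice(starts)
        walk = [node]
        while node in succ and len(walk) < max_len:
            # weighted choice: high-confidence edges are followed more often
            node = rng.choices(succ[node][0], weights=succ[node][1])[0]
            walk.append(node)
        counts[tuple(walk)] += 1
    return [t for t, _ in counts.most_common(top_n)]

g = {("player", "get"): 1.0,
     ("get", "championship"): 0.7,
     ("get", "score"): 0.3}
templates = random_walk_templates(g)
```

Frequent walks correspond to the high-frequency, representative element combinations the paper keeps as templates.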
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2200 CE_n: \u2211_{CE_m \u2208 N(CE_n)} M_{nm} = 1 (3); Pr[X_{t+1} = CE_m | X_t = CE_n, X_{t-1}, \u2026, X_0] = Pr[X_{t+1} = CE_m | X_t = CE_n] = M_{nm}", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Figure 4. A CE graph for template generation.", |
| "sec_num": null |
| }, |
| { |
| "text": "Although the random walk process can help us generate templates from frequent patterns in CE graphs, it can also create some redundancy. Hence, a merging procedure is required to eliminate redundant results by retaining only the templates with the longest length and highest coverage, and disposing of those that are completely covered by another template. For example, the template [\u6b50\u5df4\u99ac Obama ]:{\u653f\u9ee8 party }:{\u7e3d\u7d71\u9078\u8209 presidential election } is completely covered by the template [\u6b50\u5df4\u99ac Obama ]:{\u4ee3\u8868 represent }:{\u653f\u9ee8 party }:{\u5f97\u5230 get }:{\u570b\u5bb6 country }:{\u7e3d\u7d71\u9078\u8209 presidential election }. Thus, the former template is removed. Moreover, the reduction of the crucial element space provided by template selection is critical. It allows the execution of more sophisticated text classification algorithms, which leads to improved results. Those algorithms cannot be executed on the original crucial element space because their execution time would be excessively high, making them impractical (Ricardo & Berthier, 2011) . Therefore, selecting crucial elements closely associated with an emotion improves the performance of reader-emotion detection. We use the log likelihood ratio (LLR) again to differentiate crucial elements for each emotion. Given a training dataset comprising different reader-emotions, the LLR calculates the likelihood of the occurrence of a crucial element in an emotion. A crucial element with a large LLR value is considered closely associated with the emotion. Lastly, we rank the crucial elements in the training dataset by the sum of their LLR values and retain the top 100 for each reader-emotion.", |
| "cite_spans": [ |
| { |
| "start": 871, |
| "end": 897, |
| "text": "(Ricardo & Berthier, 2011)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Figure 4. A CE graph for template generation.", |
| "sec_num": null |
| }, |
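The merging step above, which drops any template completely covered by a longer one, can be sketched like this. It treats "covered" as being an ordered subsequence, matching the Obama example in the text; the template tuples below are illustrative English stand-ins:

```python
def is_subsequence(short, long_):
    """True when every element of `short` appears in `long_` in order
    (not necessarily contiguously)."""
    it = iter(long_)
    return all(x in it for x in short)  # `in` advances the iterator

def merge_templates(templates):
    """Keep only templates not completely covered by a longer one,
    scanning from longest to shortest."""
    kept = []
    for t in sorted(templates, key=len, reverse=True):
        if not any(is_subsequence(t, k) for k in kept):
            kept.append(t)
    return kept

tmpl = [("Obama", "party", "presidential-election"),
        ("Obama", "represent", "party", "get", "country",
         "presidential-election"),
        ("player", "get", "score")]
merged = merge_templates(tmpl)
```

The three-element Obama template is discarded because the six-element one subsumes it, while the unrelated sports template survives.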
| { |
| "text": "We believe the human perception of an emotion is obtained by recognizing important events or semantic contents that rapidly evoke an emotional response. For instance, when an article simultaneously contains strongly correlated words such as \"Japan (country)\", \"Earthquake (disaster)\", and \"Tsunami (disaster)\", it is natural to conclude that the article has a much higher chance of eliciting emotions like depressed and worried rather than happy and warm. TBA uses an alignment algorithm to measure the similarity of templates, since alignment enables a single template to match multiple semantically similar expressions with appropriate scores. During matching, a document is first labelled with crucial elements. Afterwards, an alignment-based algorithm (Needleman & Wunsch, 1970 ) is applied to determine to what degree a semantic template fits a document. For each clause within a given document d_j, we first label crucial elements cs = {ce_1, \u2026, ce_n}, followed by the matching procedure that compares all sequences of crucial elements in d_j to all emotion templates ET = {et_1, \u2026, et_j} in each emotion, and calculates the sum of scores for each emotion. The emotion e_i with the highest sum of scores defined in (5) is considered the winner. If an element is not matched, the score of an insertion or deletion is calculated. An insertion (IS), defined as a label that is present in the article but not in the template, is scored by the inverse entropy of the crucial element (7), which can be thought of as the uniqueness or generality of the label. On the other hand, a deletion (DS), representing a label that exists only in the template and not in the article, is computed from the log frequency of the element as (8). Both types contribute negative scores to the sum. The matching algorithm is then applied to determine the reader-emotion of an article by comparing the sequence of crucial elements C = {c_1, c_2, \u2026, c_n} in each clause of the article to every template F = {C_1, C_2, \u2026, C_m} in each emotion. An illustration of the matching process of a sequence of semantic classes against an emotion template is shown in Figure 5 . A match between the two sequences is given a positive score obtained from the LLR score of the semantic class in the emotion. Finally, the sum of scores of each emotion is computed, and the emotion with the highest score is considered the winner. Through this method, each individual crucial element label is given a different weight according to its characteristics. Thus, the order of these labels is not the only determining factor in matching. The detailed algorithm is described in Algorithm 1. 4. Experiment", |
| "cite_spans": [ |
| { |
| "start": 761, |
| "end": 786, |
| "text": "(Needleman & Wunsch, 1970", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 2155, |
| "end": 2163, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Emotion Template Matching", |
| "sec_num": "3.3" |
| }, |
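The alignment scoring described above (match = LLR score, insertion penalized by inverse entropy, deletion by log frequency) can be sketched as a Needleman-Wunsch-style dynamic program. This is a simplified reading of the scheme, not the authors' exact scoring functions, and the `llr`, `entropy`, and `freq` tables are toy stand-ins for the corpus statistics:

```python
import math

def align_score(doc_seq, template, llr, entropy, freq):
    """Global alignment of a clause's crucial-element sequence against
    an emotion template. A match adds the element's LLR score; an
    insertion (label only in the document) subtracts the inverse
    entropy of the label; a deletion (label only in the template)
    subtracts the log frequency of the element."""
    n, m = len(doc_seq), len(template)
    # dp[i][j]: best score aligning doc_seq[:i] with template[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] - 1.0 / entropy[doc_seq[i - 1]]
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] - math.log(freq[template[j - 1]])
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = (dp[i - 1][j - 1] + llr[doc_seq[i - 1]]
                     if doc_seq[i - 1] == template[j - 1]
                     else float("-inf"))  # only identical labels match
            insert = dp[i - 1][j] - 1.0 / entropy[doc_seq[i - 1]]
            delete = dp[i][j - 1] - math.log(freq[template[j - 1]])
            dp[i][j] = max(match, insert, delete)
    return dp[n][m]

llr = {"player": 5.0, "get": 3.0, "championship": 8.0}
entropy = {"player": 2.0, "get": 1.5, "championship": 2.5, "coach": 1.2}
freq = {"player": 100, "get": 80, "championship": 40}
score = align_score(["player", "coach", "get", "championship"],
                    ["player", "get", "championship"], llr, entropy, freq)
```

The extra "coach" label in the document costs only a small insertion penalty, which is what lets a template partially match semantically similar clauses.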
| { |
| "text": "To the best of our knowledge, there is no publicly available corpus for reader-emotion detection. Therefore, we compiled a data corpus for performance evaluation, as shown in Table 1 . The corpus contains news articles spanning 2012 to 2014 collected from Yahoo News 2 . It is an independent common resource for performance evaluation in reader-emotion research (e.g., Lin et al. (2007) ), since it has a special feature that allows a reader of a news article to select from eight emotions the one that represents how the reader feels after reading the article, i.e., \"Angry\", \"Worried\", \"Boring\", \"Happy\", \"Odd\", \"Depressing\", \"Warm\", and \"Informative\". To ensure the quality of the corpus, only articles with a clear statistical distinction between the highest emotion vote and the others, determined by a t-test at a 95% confidence level, were retained. Finally, a total of 47,285 articles were retained from the original 68,026 articles, and they were divided into a training set of 11,681 articles and a testing set of 35,604 articles. A comprehensive performance comparison of TBA with other methods is provided. The first is an emotion keyword-based model trained with an SVM to demonstrate the effect of our keyword extraction approach (denoted as KW-SVM). Another is a probabilistic graphical model which uses the LDA model as document representation to train an SVM to classify documents as either emotion relevant or irrelevant (Blei et al., 2003) (denoted as LDA-SVM). The last is a state-of-the-art reader-emotion recognition method which combines various feature sets including bigrams, words, metadata, and emotion category words (Lin et al., 2007) (denoted as CF-SVM). To serve as a standard for comparison, we also included the results of Naive Bayes (McCallum & Nigam, 1998) as a baseline (denoted as NB). Details of the implementations of these methods are as follows. We employed CKIP 3 for Chinese word segmentation. The dictionary required by Na\u00efve Bayes and LDA-SVM is constructed by removing stop words according to the Chinese stop word list provided by Zou et al. (2006) , and retaining tokens that make up 90% of the accumulated frequency. In other words, the dictionary covers up to 90% of the tokens in the corpus. For unseen events, we use Laplace smoothing in Na\u00efve Bayes, and an LDA toolkit 4 is used to perform the detection for LDA-SVM. Regarding CF-SVM, the words output by the segmentation tool were used. The news reporter, news category, location of the news event, time (hour of publication), and news agency were used as metadata features. The extracted emotion keywords were used as the emotion category words, since the emotion categories of Yahoo! Kimo Blog were not provided. To evaluate the effectiveness of these systems, we adopted the accuracy measures used by Lin et al. (2007) . We used macro-average and micro-average to compute the average performance. These measures are defined based on a contingency table of predictions for a target emotion E_k. The accuracy A(E_k), macro-average A_M, and micro-average A_\u03bc are defined as follows:", |
| "cite_spans": [ |
| { |
| "start": 390, |
| "end": 407, |
| "text": "Lin et al. (2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1512, |
| "end": 1531, |
| "text": "(Blei et al., 2003)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1722, |
| "end": 1740, |
| "text": "(Lin et al., 2007)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1845, |
| "end": 1869, |
| "text": "(McCallum & Nigam, 1998)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 2154, |
| "end": 2171, |
| "text": "Zou et al. (2006)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 2921, |
| "end": 2938, |
| "text": "Lin et al. (2007)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 175, |
| "end": 182, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "A(E_k) = (TP(E_k) + TN(E_k)) / (TP(E_k) + FP(E_k) + TN(E_k) + FN(E_k)) (9); A_M = (1/m) \u2211_{k=1}^{m} A(E_k) (10); A_\u03bc = \u2211_{k=1}^{m} (TP(E_k) + TN(E_k)) / \u2211_{k=1}^{m} (TP(E_k) + FP(E_k) + TN(E_k) + FN(E_k)) (11)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where TP(E_k) is the number of test documents correctly classified to the emotion E_k, FP(E_k) is the number of test documents incorrectly classified to the emotion, FN(E_k) is the number of test documents wrongly rejected, and TN(E_k) is the number of test documents correctly rejected.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
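The accuracy measures of Eqs. (9)-(11) can be computed directly from per-emotion contingency counts. The sketch below uses invented toy counts; only the formulas follow the paper:

```python
def accuracy(tp, fp, tn, fn):
    """Per-emotion accuracy A(E_k), Eq. (9)."""
    return (tp + tn) / (tp + fp + tn + fn)

def macro_micro(confusions):
    """Macro-average, Eq. (10): unweighted mean of per-emotion
    accuracies. Micro-average, Eq. (11): counts pooled over all
    emotions before dividing. `confusions` maps an emotion name to
    its (TP, FP, TN, FN) counts."""
    accs = [accuracy(*c) for c in confusions.values()]
    macro = sum(accs) / len(accs)
    num = sum(tp + tn for tp, fp, tn, fn in confusions.values())
    den = sum(sum(c) for c in confusions.values())
    return macro, num / den

conf = {"Happy": (40, 10, 40, 10),   # toy counts, not from the paper
        "Angry": (15, 10, 15, 10)}
macro, micro = macro_micro(conf)
```

The two averages differ whenever the emotions have unequal numbers of test documents: macro weights every emotion equally, while micro favors the larger classes.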
| { |
| "text": "The performances of the emotion detection systems are listed in Table 2 . As a baseline, the Na\u00efve Bayes classifier is a keyword statistics-based system that achieves only mediocre performance. Since it considers only surface word weightings, it has difficulty representing inter-word relations. The overall accuracy of the Na\u00efve Bayes classifier is 36.84%, with the emotion \"Warm\" achieving only 15% accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 67, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In contrast, LDA-SVM includes both keyword and long-distance relations, and greatly outperforms Na\u00efve Bayes with an overall accuracy of 76.1%. It even achieved the highest accuracy among all five methods for the emotions \"Worried\" and \"Odd\", at 92.83% and 85.40%, respectively. As we can see, KW-SVM also brings substantial proficiency in detecting the emotions, with 77.70% overall accuracy. This indicates that reader-emotion can be recognized effectively by using only the LLR scores of keywords, since the likelihood of a word occurring in a certain emotion is not random. Words with a larger LLR value are considered closely associated with the emotion (Manning & Sch\u00fctze, 1999) . TBA further improves the basic keyword-based method with rich context and semantic information, thus achieving the best overall accuracy of 84.72%. It is worth noting that CF-SVM achieved a satisfactory accuracy of around 80% across all emotions. This is because the combined lexicon feature sets (i.e., character bigrams, word dictionary, and emotion keywords) of CF-SVM improved the classification accuracy. In addition, the metadata of the articles are also associated with reader-emotion. For instance, we found that many sports-related news articles evoke the \"Happy\" emotion. In particular, 45% of all \"Happy\" instances belong to the news category sports. We also observed that an instance with the news category sports has a 31% chance of having the true class \"Happy\". Hence, the high accuracy of the \"Happy\" emotion can be the result of people's general enthusiasm for sports rather than of a particular event. On top of that, TBA can generate distinct semantic templates to capture alternations of similar combinations to achieve the optimal outcome. For instance, a semantic template generated by our proposed system, \"{\u570b\u5bb6 country }:[\u767c\u751f occur ]:[\u5730\u9707 earthquake ]:{\u52ab\u96e3 disaster }\", belongs to the emotion \"Depressing\". It is perceivable that this template relays information about disastrous earthquakes that occurred in a certain country, and such news often induces negative emotions among readers. This example demonstrates that the automatically generated semantic templates are comprehensible for humans and can be utilized to effectively detect reader-emotion. Nevertheless, our system could not surpass LDA-SVM for the emotion \"Worried\". This may be attributed to the inadequate quality of the semantic templates generated for this emotion. We examined some of the templates within this emotion and found that they mostly contain very general semantic classes, such as \"{\u6a5f\u69cb institution }:{\u7d44\u7e54 organization }:{\u653f\u9ee8 party }:{\u5be6\u73fe realize }:{\u7a0b\u5ea6 degree }:{\u5ff5\u982d thought }\", thereby reducing accuracy. Apart from the \"Worried\" emotion, we were able to identify distinctive semantic templates for the other emotions. For instance, the template \"[\u5a66\u5973 women ]:{\u6551\u52a9 help }:[\u5c0f\u5b69 child ]:{\u7576\u4f5c treat }:{\u4eba human }:[\u8a8d\u70ba consider ]\" was generated for the emotion \"Warm\", and it is reasonable that news about a woman helping a child would evoke a warm feeling in readers' minds. The ability to generate such emotion-specific templates is considered the main reason that TBA outperforms the other systems.", |
| "cite_spans": [ |
| { |
| "start": 680, |
| "end": 705, |
| "text": "(Manning & Sch\u00fctze, 1999)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Finally, we examine the top keywords in each emotion category as an analysis of news trends, listed in Table 3. As stated in the previous section, we observe that keywords related to \"Happy\" are mostly terms about sports, such as team names (e.g., \"\u71b1\u706b Miami Heat \" and \"\u7d05\u896a Boston Red Sox \") or player names (e.g., \"\u9673\u5049\u6bb7 Wei-Yin Chen \", a pitcher for the Baltimore Orioles). Similar findings have been made before as well (Chen et al. 2008) . This is certainly a good indicator of the performance of a sports team over a specified period of time. On the other hand, \"Angry\"-related keywords consist largely of political parties or public issues. For instance, the most noticeable word, \"\u7f8e\u725b United States beef \", indicates the heated dispute over the policy on importing beef from the United States to Taiwan, which has been an issue in Taiwan-U.S. relations and led to domestic political unrest. Numerous political terms showed up in the top list too, such as \"\u570b\u6c11\u9ee8 Kuomintang \", \"\u7acb\u6cd5\u9662 legislature \", and \"\u7acb\u59d4 legislator \". This highlights that the extracted emotion keywords are highly correlated with reader-emotion; thus, tagging them in the emotion templates helps our method discriminate reader-emotions.", |
| "cite_spans": [ |
| { |
| "start": 431, |
| "end": 449, |
| "text": "(Chen et al. 2008)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As for the \"Depressing\" category, the keywords mostly relate to social events that involve severe weather or casualties. The phrase \"\u5927\u70b3 Da Bing \" refers to a Taiwanese performer who died in 2012 (around the time of our data retrieval) due to drug abuse. Sports players who temporarily suffer from low performance may also show up in this category due to readers' compassion. Theoretically, if we had sufficient patterns containing negation terms, we would be able to generate representative templates that raise the scores of negative emotions. However, in our current dataset, the relatively low portion of negative cases affects little more than the score of the positive emotion, hence limiting the system performance. We acknowledge that this is an important task that should be studied in future work. Lastly, and not surprisingly, the \"Warm\" category mostly contains words associated with social care or volunteers providing charity or assistance to socially vulnerable groups, while economic news dominates the \"Informative\" emotion. To summarize, the proposed TBA integrates the syntactic, semantic, and context information in text to identify reader-emotions, and achieves the best performance among the compared methods. It also demonstrates the capability of our approach to integrate statistical and knowledge-based models. Notably, in contrast to the models used by previous machine learning-based methods, which are generally not human-understandable, the generated templates can be regarded as fundamental knowledge for each emotion and are comprehensible to the human mind.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "With the rapid growth of computer-mediated communication applications, research on emotion classification has recently been attracting increasing attention from enterprises for business intelligence. Reader-emotion recognition concerns the emotion experienced by a reader after reading a text, and it holds the potential to be applied in fields that differ from writer-emotion detection applications. For instance, by integrating reader-emotion into information retrieval, users would be able to retrieve documents that contain relevant content and at the same time produce desired feelings. In addition, reader-emotion detection can assist writers in foreseeing how their work will influence readers emotionally. In this research, we presented a flexible template-based approach (TBA) for detecting reader-emotion that simulates the process of human perception. By capturing the most prominent and representative patterns within an emotion, TBA allows us to effectively recognize the reader-emotion of a text. Our experimental results demonstrate that TBA achieves higher performance than other well-known methods of reader-emotion detection. In the future, we plan to refine TBA and apply it to other natural language processing applications. Further studies can also be done on combining statistical models with different components of our system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "5." |
| }, |
| { |
| "text": "http://nlp.stanford.edu/software/CRF-NER.shtml", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://tw.news.yahoo.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://ckipsvr.iis.sinica.edu.tw/ 4 http://nlp.stanford.edu/software/tmt/tmt-0.4/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported by the Ministry of Science and Technology of Taiwan under grant MOST 103-3111-Y-001-027.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Topic detection and tracking interface with named entities approach", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [ |
| "M A" |
| ], |
| "last": "Bashaddadh", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Mohd", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of International Conference on Semantic Technology and Information Retrieval", |
| "volume": "", |
| "issue": "", |
| "pages": "215--219", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bashaddadh, O.M.A. & Mohd, M. (2011). Topic detection and tracking interface with named entities approach. In Proceedings of International Conference on Semantic Technology and Information Retrieval, 215-219.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Latent dirichlet allocation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "M" |
| ], |
| "last": "Blei", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "3", |
| "issue": "", |
| "pages": "993--1022", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Blei, D.M., Ng, A.Y., & Jordan, M.I. (2003). Latent dirichlet allocation. Journal of Machine Learning Research, 3, 993-1022.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Extended-HowNet: A representational framework for concepts", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "L" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Shih", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of OntoLex -Ontologies and Lexical Resources IJCNLP-05 Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, K.J., Huang, S.L., Shih, Y.Y., & Chen, Y.J. (2005). Extended-HowNet: A representational framework for concepts. In Proceedings of OntoLex -Ontologies and Lexical Resources IJCNLP-05 Workshop, 1-6.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Emotion cause detection with linguistic constructions", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "179--187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen, Y., Lee, S., Li, S., & Huang, C. (2010). Emotion cause detection with linguistic constructions. In Proceedings of the 21st International Conference on Computational Linguistics, 179-187.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Support vector networks", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cortes", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Machine Learning", |
| "volume": "20", |
| "issue": "", |
| "pages": "1--25", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cortes, C. & Vapnik, V. (1995). Support vector networks. Machine Learning, 20, 1-25.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Word to Sentence Level Emotion Tagging for Bengali Blogs", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Bandyopadhyay", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "149--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Das, D. & Bandyopadhyay, S. (2009). Word to Sentence Level Emotion Tagging for Bengali Blogs. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics, 149-152.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Hownet and its computation of meaning", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [ |
| "D" |
| ], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Dong", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hao", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "53--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dong, Z.D., Dong, Q., & Hao, C.L. (2010). Hownet and its computation of meaning. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, 53-56.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Computers and intractability: A Guide to the Theory of NP-Completeness", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "R" |
| ], |
| "last": "Garey", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [ |
| "S" |
| ], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Garey, M.R. & Johnson, D.S. (1979). Computers and intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman & Co. New York, NY, USA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Approximation algorithms for combinatorial problems", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "S" |
| ], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 1974, |
| "venue": "Journal of Computer and System Sciences", |
| "volume": "9", |
| "issue": "3", |
| "pages": "256--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johnson, D.S. (1974). Approximation algorithms for combinatorial problems. Journal of Computer and System Sciences, 9(3), 256-278.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Using Emotions to Reduce Dependency in Machine Learning Techniques for Sentiment Classification", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Read", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics Student Research Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "43--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Read, J. (2005). Using Emotions to Reduce Dependency in Machine Learning Techniques for Sentiment Classification. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics Student Research Workshop, 43-48.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Random walks on graphs: A survey", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lov\u00e1sz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Combinatorics: Paul Erd\u0151s is Eighty", |
| "volume": "2", |
| "issue": "1", |
| "pages": "1--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lov\u00e1sz, L. (1993). Random walks on graphs: A survey. Combinatorics: Paul Erd\u0151s is Eighty, 2(1), 1-46.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A fuzzy ontology and its application to news summarization", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "S" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [ |
| "W" |
| ], |
| "last": "Jian", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "K" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "IEEE Transactions on Systems, Man, and Cybernetics", |
| "volume": "35", |
| "issue": "5", |
| "pages": "859--880", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lee, C.S., Jian, Z.W., & Huang, L.K. (2005). A fuzzy ontology and its application to news summarization. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 35(5), 859-880.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "What Emotions Do News Articles Trigger in Their Readers?", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 30th Annual International ACM SIGIR Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "23--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, H.Y., Yang, C.H., & Chen, H.H. (2007). What Emotions Do News Articles Trigger in Their Readers? In Proceedings of the 30th Annual International ACM SIGIR Conference, 23-27.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Emotion Classification of Online News Articles from the Reader's Perspective", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "H" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of International Conference on Web Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "220--226", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lin, K.H., Yang, C.H., & Chen, H.H. (2008). Emotion Classification of Online News Articles from the Reader's Perspective. In Proceedings of International Conference on Web Intelligence, 220-226.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Foundations of Statistical Natural Language Processing", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sch\u00fctze", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manning, C.D. & Sch\u00fctze, H. (1999). Foundations of Statistical Natural Language Processing. MIT Press.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A comparison of event models for Na\u00efve Bayes text classification", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Nigam", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of AAAI/ICML-98 Workshop on Learning for Text Categorization", |
| "volume": "", |
| "issue": "", |
| "pages": "41--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "McCallum, A. & Nigam, K. (1998). A comparison of event models for Na\u00efve Bayes text classification. In Proceedings of AAAI/ICML-98 Workshop on Learning for Text Categorization, 41-48.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Experiments with mood classification in blog posts", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Mishne", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 1st Workshop on Stylistic Analysis of Text for Information Access", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mishne, G. (2005). Experiments with mood classification in blog posts. In Proceedings of the 1st Workshop on Stylistic Analysis of Text for Information Access.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Improving Gender Classification of Blog Authors", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "207--217", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mukherjee, A. & Liu, B. (2010). Improving Gender Classification of Blog Authors. In Proceedings of Conference on Empirical Methods in Natural Language Processing, 207-217.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Thumbs up?: sentiment classification using machine learning techniques", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vaithyanathan", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing", |
| "volume": "", |
| "issue": "", |
| "pages": "79--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pang, B., Lee, L., & Vaithyanathan, S. (2002). Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing, 79-86.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Using WordNet-based Context Vectors to Estimate the Semantic Relatedness of Concepts", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Patwardhan", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Peterson", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patwardhan, S. & Peterson, T. (2003). Using WordNet-based Context Vectors to Estimate the Semantic Relatedness of Concepts.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Experimenting with distant supervision for emotion classification", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Purver", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Battersby", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "482--491", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Purver, M. & Battersby, S. (2012). Experimenting with distant supervision for emotion classification. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, 482-491.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Construction of a Blog Emotion Corpus for Chinese Emotional Expression Analysis", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Quan", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1446--1454", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quan, C. & Ren, F. (2009). Construction of a Blog Emotion Corpus for Chinese Emotional Expression Analysis. In Proceedings of Conference on Empirical Methods in Natural Language Processing, 1446-1454.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Modern Information Retrieval: The Concepts and Technology Behind Search", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Baeza-Yates", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Ribeiro-Neto", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Baeza-Yates, R. & Ribeiro-Neto, B. (2011). Modern Information Retrieval: The Concepts and Technology Behind Search. New York: Addison Wesley.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Term-weighting approaches in automatic text retrieval", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Salton", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Information Processing and Management", |
| "volume": "24", |
| "issue": "5", |
| "pages": "513--523", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Salton, G. & Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Information Processing and Management, 24(5), 513-523.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Validating Contradiction in Texts Using Online Co-Mention Pattern Checking", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "W" |
| ], |
| "last": "Shih", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "W" |
| ], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [ |
| "H" |
| ], |
| "last": "Tsai", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "L" |
| ], |
| "last": "Hsu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "11", |
| "issue": "4", |
| "pages": "", |
| "other_ids": { |
| "DOI": [ |
| "10.1145/2382593.2382599" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shih, C.W., Lee, C.W., Tsai, T.H., & Hsu, W.L. (2012). Validating Contradiction in Texts Using Online Co-Mention Pattern Checking. ACM Transactions on Asian Language Information Processing, 11(4), 17. doi: 10.1145/2382593.2382599", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Exploiting Topic based Twitter Sentiment for Stock Prediction", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Si", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Q", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "24--29", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Si, J., Mukherjee, A., Liu, B., Li, Q., Li, H., & Deng, X. (2013). Exploiting Topic based Twitter Sentiment for Stock Prediction. In Proceedings of The 51st Annual Meeting of the Association for Computational Linguistics, 24-29.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Mining Sentiment Words from Microblogs for Predicting Writer-Reader-emotion Transition", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "J" |
| ], |
| "last": "Tang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of 8th International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "1226--1229", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tang, Y.J. & Chen, H.H. (2012). Mining Sentiment Words from Microblogs for Predicting Writer-Reader-emotion Transition. In Proceedings of 8th International Conference on Language Resources and Evaluation, 1226-1229.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "The Biopsychology of Mood and Arousal", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [ |
| "E" |
| ], |
| "last": "Thayer", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thayer, R.E. (1989). The Biopsychology of Mood and Arousal. Oxford University Press.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Ontology-based multi-agents for intelligent healthcare applications", |
| "authors": [], |
| "year": null, |
| "venue": "Journal of Ambient Intelligence and Humanized Computing", |
| "volume": "1", |
| "issue": "2", |
| "pages": "111--131", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ontology-based multi-agents for intelligent healthcare applications. Journal of Ambient Intelligence and Humanized Computing, 1(2), 111-131.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Emotion recognition from text using semantic labels and separable mixture models", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "C" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "ACM Transactions on Asian Language Information Processing", |
| "volume": "5", |
| "issue": "2", |
| "pages": "165--183", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, C.H., Chuang, Z.J., & Lin, Y.C. (2006). Emotion recognition from text using semantic labels and separable mixture models. ACM Transactions on Asian Language Information Processing, 5(2), 165-183.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Writer Meets Reader: Emotion Analysis of Social Media from both the Writer's and Reader's Perspectives", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of International Conference on Web Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "287--290", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang, C.H., Lin, H.Y., & Chen, H.H. (2009). Writer Meets Reader: Emotion Analysis of Social Media from both the Writer's and Reader's Perspectives. In Proceedings of International Conference on Web Intelligence, 287-290.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Building emotion lexicon from Weblog corpora", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [ |
| "H Y" |
| ], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of 2007 IEEE/WIC/ACM International Conference on Web Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "275--278", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang, C.H., Lin, K.H.Y., & Chen, H.H. (2007). Building emotion lexicon from Weblog corpora. In Proceedings of 2007 IEEE/WIC/ACM International Conference on Web Intelligence, 275-278.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Music emotion classification: A fuzzy approach", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [ |
| "H" |
| ], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "C" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "H" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 14th Annual ACM International Conference on Multimedia", |
| "volume": "", |
| "issue": "", |
| "pages": "81--84", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang, Y.H., Liu, C.C., & Chen, H.H. (2006). Music emotion classification: A fuzzy approach. In Proceedings of the 14th Annual ACM International Conference on Multimedia, 81-84.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Extracting and Ranking Product Features in Opinion Documents", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1462--1470", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhang, L. & Liu, B. (2010). Extracting and Ranking Product Features in Opinion Documents. In Proceedings of the 23rd International Conference on Computational Linguistics, 1462-1470.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Automatic construction of Chinese stop word list", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Zou", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "L" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [ |
| "S" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 5th WSEAS International Conference on Applied Computer Science", |
| "volume": "", |
| "issue": "", |
| "pages": "1010--1015", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zou, F., Wang, F.L., Deng, X., Han, S., & Wang, L.S. (2006). Automatic construction of Chinese stop word list. In Proceedings of the 5th WSEAS International Conference on Applied Computer Science, 1010-1015.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Architecture of our system." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Architecture of named entity ontology." |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "{disease|\u75be\u75c5:position={BodyFluid|\u9ad4\u6db2:telic={transport|\u9001:patient={gas| \u6c23:predication={respire|\u547c\u5438:patient={~}}},instrument={~}}},qualification={serious|\u56b4\u91cd}}" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Crucial element labelling process." |
| }, |
| "FIGREF4": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "et_n\u2022st_k and cs_m\u2022ce_l represent the k-th slot of et_n and the l-th crucial element of cs_m, respectively. Scoring of the matched and unmatched components in semantic templates is as follows: if et_n\u2022st_k and cs_m\u2022ce_l are identical, we add a matched score (MS) obtained from the LLR value of ce_l if it matches a keyword. Otherwise, the score is determined by multiplying the frequency of the crucial element in emotion e_i by a normalizing factor 100 as in (6). On the contrary, if" |
| }, |
| "FIGREF5": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Illustration of an emotion template matching process." |
| }, |
| "TABREF1": { |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td/><td colspan=\"4\">Angry Worried Boring Happy</td><td>Odd</td><td colspan=\"3\">Depressing Warm Informative</td></tr><tr><td>#Training</td><td>2,001</td><td>261</td><td>1,473</td><td>2,001</td><td>1,536</td><td>1,573</td><td>835</td><td>2,001</td></tr><tr><td>#Test</td><td>4,326</td><td>261</td><td>1,473</td><td>7,334</td><td>1,526</td><td>1,573</td><td>835</td><td>18,266</td></tr><tr><td>#Total</td><td>6,327</td><td>522</td><td>2,946</td><td>9,345</td><td>3,062</td><td>3,146</td><td>1,670</td><td>20,267</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "", |
| "html": null, |
| "content": "<table><tr><td>Topic</td><td/><td/><td>Accuracy (%)</td><td/><td/></tr><tr><td/><td>NB</td><td>LDA-SVM</td><td>KW-SVM</td><td>CF-SVM</td><td>TBA</td></tr><tr><td>Angry</td><td>47.00</td><td>74.21</td><td>79.21</td><td>83.71</td><td>83.92</td></tr><tr><td>Worried</td><td>69.56</td><td>92.83</td><td>81.96</td><td>87.50</td><td>80.12</td></tr><tr><td>Boring</td><td>75.67</td><td>76.21</td><td>84.34</td><td>87.52</td><td>87.88</td></tr><tr><td>Depressing</td><td>73.76</td><td>81.43</td><td>85.00</td><td>87.70</td><td>90.13</td></tr><tr><td>Happy</td><td>37.90</td><td>67.59</td><td>80.97</td><td>86.27</td><td>89.50</td></tr><tr><td>Warm</td><td>15.09</td><td>87.09</td><td>79.59</td><td>85.83</td><td>88.56</td></tr><tr><td>Odd</td><td>73.90</td><td>85.40</td><td>77.05</td><td>84.25</td><td>85.32</td></tr><tr><td>Informative</td><td>20.60</td><td>44.02</td><td>74.74</td><td>83.59</td><td>82.10</td></tr><tr><td>A M</td><td>51.69</td><td>76.10</td><td>80.36</td><td>85.80</td><td>85.94</td></tr><tr><td>A \u03bc</td><td>34.52</td><td>58.68</td><td>77.68</td><td>84.61</td><td>84.72</td></tr></table>", |
| "type_str": "table", |
| "num": null |
| }, |
| "TABREF3": { |
| "text": "Keywords discovered by the TBA. Angry: \u7f8e\u725b American beef, \u7acb\u59d4 Legislator, \u7acb\u6cd5\u9662 Legislative Yuan, \u570b\u6c11\u9ee8 Kuomintang Party, \u653f\u5e9c Government, \u8b49\u6240\u7a05 Stock exchange income tax, \u4e2d\u6cb9 CPC, \u7e3d\u7d71 President, \u99ac\u82f1\u4e5d Ma Ying-jeou, \u9ee8\u5718 Political party. Worried: \u98b1\u98a8 Typhoon, \u6c23\u8c61\u5c40 Central Weather Bureau, \u75c5\u6bd2 Virus, \u611f\u67d3 Infection, \u8c6a\u96e8 Rain storm, \u5730\u9707 Earthquake, \u571f\u77f3\u6d41 Mudslide, \u9918\u9707 Aftershock, \u75be\u75c5 Disease, \u75c5\u4f8b Patient record. Boring: \u8607\u8c9e\u660c Su Jen Chang, \u9673\u6c34\u6241 Chen Shui-bian, \u540d\u5634 Critics, \u7ae0\u5b50\u6021 Zhang Ziyi, \u9673\u6587\u831c Chen Wen-chien, \u5b8b\u6b63\u5b87 Sung Jen-yu, \u7bc0\u76ee TV show, \u5433\u5b97\u61b2 Jacky Wu, \u85dd\u4eba Celebrity, \u5973\u661f Female celebrity. Depressing: \u5927\u70b3 Da Bing, \u6c11\u9032\u9ee8 Democratic Progressive Party, \u907a\u9ad4 Remain, \u9001\u91ab To hospital, \u6eba\u6c34 Drown, \u6025\u6551 Emergency medical service, \u4e0d\u6cbb Dead, \u5931\u8e64 Missing, \u8c6a\u96e8 Rain storm, \u66fe\u96c5\u59ae Tseng Yani. Happy: \u9673\u5049\u6bb7 Chen Wei-yin, \u5b89\u6253 Strike, \u6bd4\u8cfd Game, \u71b1\u706b Heat, \u7d05\u896a Red Sox, \u5b8f\u9054\u96fb HTC, \u6bb7\u4ed4 Yin, \u96f7\u9706 Thunder, \u592a\u7a7a\u4eba Astros, \u51a0\u8ecd Championship. Warm: \u5b69\u5b50 Children, \u5bb6\u6276 Fund for family, \u5abd\u5abd Mother, \u7236\u89aa Father, \u5fd7\u5de5 Volunteer, \u95dc\u61f7 Care, \u57fa\u91d1\u6703 Foundation, \u884c\u5584 Charity, \u5e6b\u52a9 Assistance, \u5f31\u52e2 Social vulnerable. Odd: \u7537\u5b50 Man, \u8b66\u65b9 Police, \u5973\u5b50 Woman, \u53f0\u7063 Taiwan, \u767c\u73fe Discover, \u7adf\u7136 Surprisingly, \u76e3\u8996\u5668 Security camera, \u7db2\u53cb Internet user, \u7acb\u59d4 Legislator, \u7f8e\u725b American beef. Informative: \u5de5\u5546\u6642\u5831 Commercial Times, \u5e02\u5834 Market, \u5831\u5c0e Report, \u71df\u6536 Revenue, \u6210\u9577 Growth, \u9700\u6c42 Necessity, \u91d1\u878d Financial, \u7522\u54c1 Product, \u81ea\u7531\u6642\u5831 Liberty Times, \u6295\u8cc7 Investment", |
| "html": null, |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null |
| } |
| } |
| } |
| } |