| { |
| "paper_id": "S16-1035", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:27:10.817891Z" |
| }, |
| "title": "PotTS at SemEval-2016 Task 4: Sentiment Analysis of Twitter Using Character-level Convolutional Neural Networks", |
| "authors": [ |
| { |
| "first": "Uladzimir", |
| "middle": [], |
| "last": "Sidarenka", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "sidarenk@uni-potsdam.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents an alternative approach to polarity and intensity classification of sentiments in microblogs. In contrast to previous works, which either relied on carefully designed hand-crafted feature sets or automatically derived neural embeddings for words, our method harnesses character embeddings as its main input units. We obtain task-specific vector representations of characters by training a deep multi-layer convolutional neural network on the labeled dataset provided to the participants of the SemEval-2016 Shared Task 4 (Sentiment Analysis in Twitter; Nakov et al., 2016b) and subsequently evaluate our classifiers on subtasks B (two-way polarity classification) and C (joint five-way prediction of polarity and intensity) of this competition. Our first system, which uses three manifold convolution sets followed by four non-linear layers, ranks 16 in the former track; while our second network, which consists of a single convolutional filter set followed by a highway layer and three non-linearities with linear mappings in-between, attains the 10-th place on subtask C. 1", |
| "pdf_parse": { |
| "paper_id": "S16-1035", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents an alternative approach to polarity and intensity classification of sentiments in microblogs. In contrast to previous works, which either relied on carefully designed hand-crafted feature sets or automatically derived neural embeddings for words, our method harnesses character embeddings as its main input units. We obtain task-specific vector representations of characters by training a deep multi-layer convolutional neural network on the labeled dataset provided to the participants of the SemEval-2016 Shared Task 4 (Sentiment Analysis in Twitter; Nakov et al., 2016b) and subsequently evaluate our classifiers on subtasks B (two-way polarity classification) and C (joint five-way prediction of polarity and intensity) of this competition. Our first system, which uses three manifold convolution sets followed by four non-linear layers, ranks 16 in the former track; while our second network, which consists of a single convolutional filter set followed by a highway layer and three non-linearities with linear mappings in-between, attains the 10-th place on subtask C. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sentiment analysis (SA) -a field of knowledge which deals with the analysis of people's opinions, sentiments, evaluations, appraisals, attitudes, and emotions towards particular entities mentioned in discourse (Liu, 2012) -is commonly considered to be one of the most challenging, competitive, but at 1 The source code of our implementation is freely available online at https://github.com/WladimirSidorenko/ SemEval-2016/ the same time utmost necessary areas of research in modern computational linguistics.", |
| "cite_spans": [ |
| { |
| "start": 210, |
| "end": 221, |
| "text": "(Liu, 2012)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 301, |
| "end": 302, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Unfortunately, despite numerous notable advances in recent years (e.g., Breck et al., 2007; Yessenalina and Cardie, 2011; Socher et al., 2011) , many of the challenges in the opinion mining field, such as domain adaptation or analysis of noisy texts, still pose considerable difficulties to researchers. In this respect, rapidly evaluating and comparing different approaches to solving these problems in a controlled environment -like the one provided for the SemEval task (Nakov et al., 2016b ) -is of crucial importance for finding the best possible way of mastering them.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 91, |
| "text": "Breck et al., 2007;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 92, |
| "end": 121, |
| "text": "Yessenalina and Cardie, 2011;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 122, |
| "end": 142, |
| "text": "Socher et al., 2011)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 473, |
| "end": 493, |
| "text": "(Nakov et al., 2016b", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We also pursue this goal in the present paper by investigating whether one of the newest machine learning trends -the use of deep neural networks (DNNs) with small receptive fields -would be a viable solution for improving state-of-the-art results in sentiment analysis of Twitter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "After a brief summary of related work in Section 2, we present the architectures of our networks and describe the training procedure we used for them in Section 3. Since we applied two different DNN topologies to subtasks B and C, we make a crosscomparison of both systems and evaluate the role of the preprocessing steps in the next-to-last section. Finally, in Section 5, we draw conclusions from our experiments and make further suggestions for future research.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Since its presumably first official mention by Nasukawa and Yi in 2003 (cf. Liu, 2012) , sentiment analysis has constantly attracted the attention of re-searchers. Though earlier works on opinion mining were primarily concerned with the analysis of narratives (Wiebe, 1994) or newspaper articles (Wiebe et al., 2003) , the explosive emergence of social media (SM) services in the mid-2000s has brought about a dramatic focus change in this field.", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 59, |
| "text": "Nasukawa and", |
| "ref_id": null |
| }, |
| { |
| "start": 60, |
| "end": 86, |
| "text": "Yi in 2003 (cf. Liu, 2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 260, |
| "end": 273, |
| "text": "(Wiebe, 1994)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 296, |
| "end": 316, |
| "text": "(Wiebe et al., 2003)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A particularly important role in this regard was played by Twitter -a popular microblogging service first introduced by Jack Dorsey in 2006 (Dorsey, 2006) . The sudden availability of huge amounts of data combined with the presence of all possible social and national groups on this stream rapidly gave rise to a plethora of scientific studies. Notable examples of these were the works conducted by Go et al. (2009) and Pak and Paroubek (2010) , who obtained their corpora using distant supervision and subsequently trained several classifiers on these data; Kouloumpis et al. (2011) , who trained an AdaBoost system on the Edinburgh Twitter corpus 2 ; and Agarwal et al. (2011) , who proposed tree-kernel methods for doing message-level sentiment analysis of tweets.", |
| "cite_spans": [ |
| { |
| "start": 140, |
| "end": 154, |
| "text": "(Dorsey, 2006)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 399, |
| "end": 415, |
| "text": "Go et al. (2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 420, |
| "end": 443, |
| "text": "Pak and Paroubek (2010)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 559, |
| "end": 583, |
| "text": "Kouloumpis et al. (2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 657, |
| "end": 678, |
| "text": "Agarwal et al. (2011)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Eventually, with the introduction of the SemEval corpus (Nakov et al., 2013) , a great deal of automatic systems and resources have appeared on the scene. Though most of these systems typically rely on traditional supervised classification methods, such as SVM (Mohammad et al., 2013; Becker et al., 2013) or logistic regression (Hamdan et al., 2015; Plotnikova et al., 2015) , in recent years, the deep learning (DL) tsunami (Manning, 2015) has also started hitting the shores of this \"battlefield\".", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 76, |
| "text": "(Nakov et al., 2013)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 261, |
| "end": 284, |
| "text": "(Mohammad et al., 2013;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 285, |
| "end": 305, |
| "text": "Becker et al., 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 329, |
| "end": 350, |
| "text": "(Hamdan et al., 2015;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 351, |
| "end": 375, |
| "text": "Plotnikova et al., 2015)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this paper we investigate whether one of the newest lines of research in DL -the use of characterlevel deep neural networks (charDNNs) -would be a perspective way for addressing the sentiment analysis task on Twitter as well.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Introduced by Sutskever et al. (2011) , char-DNNs have already proved their efficiency for a variety of NLP applications, including part-of-speech tagging (dos Santos and Zadrozny, 2014), named-entity recognition (dos Santos and Guimar\u00e3es, 2015), and general language modeling (Kim et al., 2015; J\u00f3zefowicz et al., 2016) . We hypothesized that the reduced feature sparsity of this approach, its lower susceptibility to informal spellings, and the shift of 2 http://demeter.inf.ed.ac.uk the main discriminative classification power from input units to transformation layers would make it suitable for doing opinion mining on Twitter as well.", |
| "cite_spans": [ |
| { |
| "start": 14, |
| "end": 37, |
| "text": "Sutskever et al. (2011)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 277, |
| "end": 295, |
| "text": "(Kim et al., 2015;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 296, |
| "end": 320, |
| "text": "J\u00f3zefowicz et al., 2016)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To test our conjectures, we downloaded the training and development data provided by the organizers of the SemEval-2016 Task 4 (Sentiment Analysis in Twitter; Nakov et al., 2016b) . Due to dynamic changes of this content, we were only able to retrieve a total of 5,178 messages for subtasks B and D (two-way polarity classification) and 7,335 microblogs for subtasks C and E (joint five-way prediction of polarity and intensity).", |
| "cite_spans": [ |
| { |
| "start": 159, |
| "end": 179, |
| "text": "Nakov et al., 2016b)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We deliberately refused to do any heavy-weight NLP preprocessing of these data to check whether the applied DL method alone would suffice to get acceptable results. In order to facilitate the training and reduce the variance of the learned weights though, we applied a shallow normalization of the input by lower-casing messages' strings and filtering out stop words before passing the tweets to the classifiers.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "As stop words we considered all auxiliary verbs (e.g., be, have, do) and auxiliary parts of speech (prepositions, articles, particles, and conjunctions) up to a few exceptions -we kept the negations and words that potentially could inverse the polarity of opinions, e.g., without, but, and however. Furthermore, we also removed hyperlinks, digits, retweets, @-mentions, common temporal expressions, and mentions of tweets' topics, since all of these elements were a priori guaranteed to be objective. An example of such preprocessed microblog is provided below: EXAMPLE 3.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Original: Going to MetLife tomorrow but not to see the boys is a weird feeling Normalized: but not see boys weird feeling", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Method", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We then defined a multi-layer deep convolutional network for subtasks B and D as follows: At the initial step, we map the input characters to their appropriate embeddings, obtaining an input matrix E \u2208 R n\u00d7m , where n stands for the length of the input instance, and m denotes the dimensionality of the embedding space (specifically, we use m = 32).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Next, three sets of convolutional filters -positive (+), negative (\u2212), and shifter (x) convolutions -are applied to the input matrix E. Each of these sets in turn consists of three subsets: one subset with 4 filters of width 3, another subset comprising 8 filters of width 4, and, finally, a third subset having 12 filters of width 5. 3 Each subset filter F forms a matrix R w\u00d7m with the number of rows w corresponding to the filter width and the number of columns m being equal to the embedding dimensionality as above. A subset of filters S p w for p \u2208 {+, \u2212, x} is then naturally represented as a tensor R c\u00d7w\u00d7m , where c is the number of filters with the given width w.", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 336, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We apply the usual convolution operation with max-pooling over time for each filter, getting an output vector v S p w \u2208 R c for each subset. All output vectors v S p * of the same subset are then concatenated into one vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "v S p = [ v S p 3 , v S p 4 , v S p 5 ] of size 4 + 8 + 12 = 24.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The results of the three sets are subsequently joined using the following equation:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "v conv = sig( v S + \u2212 v S \u2212 ) tanh( v S x ),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where v S + , v S \u2212 , and v S x mean the output vectors for the positive, negative, and shifter sets respectively, and denotes the Hadamard product.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The motivation behind this choice of unification function is that we first want to obtain the difference between the positive and negative predictions (thus", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "v S + \u2212 v S \u2212 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ", then map this difference to the range [0, 1] (therefore the sigmoid), and finally either inverse or dampen these results depending on the output of the shifter layer, whose values are guaranteed to be in the range [\u22121, 1] thanks to tanh. Since we simultaneously apply competing convolutions to the same input, we call this layer \"adversarial\" as all of its components have different opinions regarding the final outcome.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "After obtaining v conv , we consecutively use three non-linear transformations (linear rectification, hy-perbolic tangent, and sigmoid function) with linear modifications in-between:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "v relu = relu( v conv \u2022 M relu + b relu ), v tanh = tanh( v relu \u2022 M tanh + b tanh ), v sig = sig( v tanh \u2022 M sig + b sig ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In this equation, M relu , M tanh , and M sig \u2208 R 24\u00d724 stand for the linear transform matrices, and b relu , b tanh , b sig \u2208 R 24 represent the usual bias terms. With this combination, we hope to first prune unreliable input signals by using a hard rectifying linear unit (Jarrett et al., 2009) and then gain more discriminative power by successively applying tanh and sig, thus funneling the input to increasingly smaller ranges: [\u22121, 1] in the case of tanh, and [0, 1] in the case of sigmoid.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 296, |
| "text": "(Jarrett et al., 2009)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "At the last stage, after applying a binomial dropout mask with p = 0.5 to the v sig vector (Srivastava et al., 2014), we compute the final prediction as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y = 1, if sig( v sig \u2022 M pred + b pred ) \u2265 0.5 0, otherwise,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where M pred \u2208 R 24\u00d72 and b pred \u2208 R 2 stand for the transformation matrix and bias term respectively, and the summation runs over the two elements of the resulting R 2 vector.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To train our classifier, we normally define the cost function as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "cost = i y i * (1 \u2212 y i ) + (1 \u2212 y i ) * y i ,", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "where y i denotes the gold category of the i-th training instance and y i stands for its predicted class, and optimize this function using RMSProp (Tieleman and Hinton, 2012).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adversarial Convolutional Networks (Subtasks B and D)", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A slightly different model was used for subtasks C and E: In contrast to the previous two-way classification network, we only use one set of convolutions with 4 filters of width 3, 16 filters of width 4, and 24 filters of width 5, and the number of dimensions of the resulting v conv vector being equal to 44 instead of 24.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "After normally computing and max-pooling the convolutions, we pass the output convolution vector through a highway layer (Srivastava et al., 2015) in addition to using relu, i.e.:", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 146, |
| "text": "(Srivastava et al., 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "v hwtrans = sig( v conv \u2022 M hwtrans + b hwtrans ), v hwcarry = v conv (1 \u2212 v hwtrans ), v relu = relu( v conv \u2022 M conv + b conv ), v relu = sig( v relu v hwtrans + v hwcarry ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The rest of the network is organized the same way as in the previous model, up to the final layer. Since this task involves multivariate classification, instead of computing the sigmoid of the sum as in Equation 1, we obtain a softmax vector v \u03c3 \u2208 R 5 and consider the argmax value of this vector as the predicted class:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "v \u03c3 = \u03c3( v sig \u2022 M \u03c3 + b \u03c3 ) y = argmax( v \u03c3 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The corresponding cost function is appropriately defined as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "cost = i \u2212 ln v \u03c3 [y i ]+ 2 * p p 2 + 3 * (y i \u2212y i ) 2 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where v \u03c3 [y i ] means the probability value for the gold class in the v \u03c3 vector, 2 and 3 are constants (we use 2 = 1e \u22125 and 3 = 3e \u22124 ), p's denote the training parameters of the model, and (y i \u2212 y i ) 2 stand for the squared difference between the numerical values of the predicted and gold classes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this task, we opted for the L 2 regularization instead of using dropout, since we found it working slightly better on the development set, though the differences between the two methods were not very big, and the derivative computation with dropout was significantly faster.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Highway Convolutional Networks (Subtasks C and E)", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Because initialization has a crucial impact on the results of deep learning approaches (Sutskever et al., 2011), we did not rely on purely random weights but used the uniform He method (He et al., 2015) for initially setting the embeddings, convolutional filters, and bias terms instead. The inter-layer transformations were set to orthogonal matrices to ensure their full rank.", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 202, |
| "text": "(He et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Additionally, to guarantee that each preceding network stage came maximally prepared and provided best possible output to its successors, after adding each new intermediate layer, we temporarily short-circuited it to the final output node(s) and pretrained this abridged network for 5 epochs, removing the short-circuit connections afterwards. The final training then took 50 epochs with each epoch lasting for 35 iterations over the provided training data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Since our models appeared to be very susceptible to imbalanced classes, we subsampled the training data by getting min(1.1 * n min , n c ) samples for each distinct gold category c, where n min is the number of instances of the rarest class in the corpus, and n c denotes the number of training examples belonging to the c-th class. This subset was resampled anew for each subsequent training epoch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Finally, to steer our networks towards recognizing correct features, we randomly added additional training instances from two established sentiment lexica: Subjectivity Clues (Wilson et al., 2005) and NRC Hashtag Affirmative Context Sentiment Lexicon (Kiritchenko et al., 2014) . To that end, we drew n binary random numbers for each polarity class in the corpus from a binomial distribution B(n, 0.1), where n stands for the total size of the generated training set, and added a uniformly chosen term from either lexica whenever the sampled value was equal to one. In the same way, we randomly (with the probability B(m, 0.15), where m means the number of matches) replaced occurrences of terms from the lexica in the training tweets with other uniformly drawn lexicon items.", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 196, |
| "text": "(Wilson et al., 2005)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 251, |
| "end": 277, |
| "text": "(Kiritchenko et al., 2014)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Initialization and Training", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To train our final model, we used both training and development data provided by the organizers, setting aside 15 percent of the samples drawn in each epoch for evaluation and using the remaining 85 percent for optimizing the networks' weights.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We obtained the final classifier by choosing the network state that produced the best task-specific score on the set-aside part of the corpus during the training. For this purpose, in each training iteration, we estimated the macroaveraged recall \u03c1 P N on the evaluation set for subtask B:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "\u03c1 P N = \u03c1 P os +\u03c1 N eg 2 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "and computed the macroaveraged mean absolute error measure M AE M (cf. Nakov et al., 2016a ) to select a model for track C :", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 90, |
| "text": "Nakov et al., 2016a", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "M AE M (h, T e) = 1 |C| |C| j=1 1 |T e j |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "x\u2208T e j |h(", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "x i ) \u2212 y i |", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The resulting models were then used in both classification and quantification subtasks of the SemEval competition, i.e., we used the adversarial network with the maximum \u03c1 P N score observed during the training to generate the output for tracks B and D and applied the highway classifier with the minimum achieved M AE M rate to get predictions for subtasks C and E. 4 The scores of the final evaluation on the official test set are shown in Table 1 . Since many of our parameter and design choices were made empirically by analyzing systems' errors at each development step, we decided to recheck whether these decisions were still optimal for the final configuration. To that end, we re-evaluated the effects of the preprocessing steps by temporarily switching off lower-casing and stop word filtering, and also estimated the impact of the network structure by applying the model architecture used for subtask B to the five-way prediction task, and vice versa using the highway network for the binary classification. The output layers, costs, and regularization functions of these two approaches were also swapped in these experiments when applied to different objectives.", |
| "cite_spans": [ |
| { |
| "start": 367, |
| "end": 368, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 442, |
| "end": 449, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Because re-running the complete training from scratch was relatively expensive (taking eight to ten hours on our machine), we reduced the number of training epochs by a factor of five, but tested each configuration thrice in order to overcome the random factors in the He initialization. The arithmetic mean and standard deviation (with N = 2) of these three outcomes for each setting are also provided in the table.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As can be seen from the results, running fewer training epochs does not notably harm the final prediction quality for the binary task. On the contrary, it might even lead to some improvements for the adversarial network. We explain this effect by the fact that the model selected during the shorter training had a lower score on the evaluation set than the network state chosen during 50 epochs. Nevertheless, 5,cs 61.34 \u00b11.24 1.3 \u00b10.05 Adversarial 1/5,sw 58.64 \u00b10.8 1.3 \u00b10.05 Adversarial 1/5 61.9 \u00b10.66 1.37 \u00b10.03 Adversarial 61.8 n/a Highway 1/5,cs 59.87 \u00b10.79 1.26 \u00b10.01 Highway 1/5,sw 60.35 \u00b11.5 1.23 \u00b10.05 Highway 1/5 62.05 \u00b10.75 1.3 \u00b10.04 Highway n/a 1.24 Table 1 : Results of the adversarial and highway networks with different preprocessing steps on Subtasks B and C. (\u2227 -higher is better; \u2228 -lower is better; 1/5 -using 1/5 of training epochs;", |
| "cite_spans": [ |
| { |
| "start": 410, |
| "end": 414, |
| "text": "5,cs", |
| "ref_id": null |
| }, |
| { |
| "start": 421, |
| "end": 426, |
| "text": "\u00b11.24", |
| "ref_id": null |
| }, |
| { |
| "start": 449, |
| "end": 455, |
| "text": "1/5,sw", |
| "ref_id": null |
| }, |
| { |
| "start": 462, |
| "end": 466, |
| "text": "\u00b10.8", |
| "ref_id": null |
| }, |
| { |
| "start": 619, |
| "end": 622, |
| "text": "1/5", |
| "ref_id": null |
| }, |
| { |
| "start": 629, |
| "end": 634, |
| "text": "\u00b10.75", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 662, |
| "end": 669, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Training Configuration \u03c1 P N \u2227 (Subtask B) M AE M \u2228 (Subtask C) Adversarial 1/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "cs -preserving the character case; sw -keeping stop words)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "despite its worse evaluation results, this first configuration was better able to fit the test data than the second system, which apparently overfitted the set-aside part of the corpus. Furthermore, we can also observe a mixed effect of the normalization on the two tasks: while keeping stop words and preserving the character case deteriorates the results for binary classification, abandoning any preprocessing steps turns out to be the more favorable solution for five-way prediction. The reasons for this different behavior are presumably twofold: a) the character case by itself might serve as a good indicator of sentiment intensity while being rather irrelevant to expressing its polarity, and b) the number of training instances per class might have become scarce as the number of possible gold classes in the corpus increased.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Finally, one can also see that the highway network performs slightly better on both subtasks (two- and five-way) than its adversarial counterpart when used with the shorter training. In this case, we assume that swapping the regularization and cost functions obscured the distinctions between the two networks at their initial layers, since, in our earlier experiments, we did observe better results for the two-way classification with the adversarial structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Unfortunately, despite the seemingly sound theoretical assumptions set forth at the beginning, relying on character embeddings as input did not pay off in practice: our adversarial system ranked only fourth to last on subtask B, and the highway network attained the second-to-last place in track C. This outcome, however, could hardly have been foreseen without trying out these approaches first.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In order to perform a retrospective error analysis, we computed the correlation coefficients between the character n-grams occurring in the training data and their gold classes, also comparing these figures with the corresponding numbers obtained on the test set. The results of this comparison are shown in Table 2. As can be seen from the table, the most reliable classification cues that could have been learned during training are very specific to their respective topics -- in particular, Trump and Turkey appear to be strongly negatively biased terms. This effect becomes even more evident as the length of the character n-grams increases. The reason why we did not prefilter these substrings during preprocessing is that the respective topics of these messages were specified as donald trump and erdogan, whereas we only removed exact topic matches from the tweets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 305, |
| "end": 312, |
| "text": "Table 2", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
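The retrospective analysis described above can be sketched as a Pearson correlation between the presence of a character n-gram and a binarized gold label. The toy tweets, the binary labels, and the helper names `char_ngrams` and `pearson` are illustrative assumptions, not the paper's data or code.

```python
# Hedged sketch of correlating character n-gram presence with a
# binarized gold class (1 = negative).  The tiny corpus and the
# n-gram "trump" are purely illustrative.
from math import sqrt

def char_ngrams(text, n):
    """All character n-grams of the given length, as a set."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

tweets = ["trump is awful", "love this movie", "trump again", "great day"]
labels = [1, 0, 1, 0]                      # 1 = negative gold class
present = [1 if "trump" in char_ngrams(t, 5) else 0 for t in tweets]
r = pearson(present, labels)
```

On this toy sample the n-gram co-occurs perfectly with the negative class, illustrating how a topic word can masquerade as a reliable sentiment cue.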
| { |
| "text": "Given this evident topic susceptibility, one possible way to improve our results would be to include more training data. Applying ensemble approaches, as the top-scoring systems did this year, could also be a promising direction. We would, however, advise the reader against further experimenting with network architectures (at least when training on the original SemEval dataset only), since both the recursive (RNTN; Socher et al., 2012) and recurrent (LSTM; Hochreiter and Schmidhuber, 1997) variants of neural classifiers performed worse in our experiments than the feed-forward structure we described.", |
| "cite_spans": [ |
| { |
| "start": 445, |
| "end": 472, |
| "text": "(RNTN, Socher et al., 2012)", |
| "ref_id": null |
| }, |
| { |
| "start": 496, |
| "end": 536, |
| "text": "(LSTM, Hochreiter and Schmidhuber, 1997)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "By simultaneously applying multiple filter sets of different widths to the same input, we hoped to improve the precision-recall trade-off, obtaining more accurate outputs from wider filters while reducing their sparsity with narrower kernels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the official aggregating scripts to generate the results for the quantification tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Sentiment Analysis of Twitter Data", |
| "authors": [ |
| { |
| "first": "Apoorv", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Boyi", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilia", |
| "middle": [], |
| "last": "Vovsha", |
| "suffix": "" |
| }, |
| { |
| "first": "Owen", |
| "middle": [], |
| "last": "Rambow", |
| "suffix": "" |
| }, |
| { |
| "first": "Rebecca", |
| "middle": [], |
| "last": "Passonneau", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Workshop on Languages in Social Media, LSM '11", |
| "volume": "", |
| "issue": "", |
| "pages": "30--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Apoorv Agarwal, Boyi Xie, Ilia Vovsha, Owen Rambow, and Rebecca Passonneau. 2011. Sentiment Analysis of Twitter Data. In Proceedings of the Workshop on Languages in Social Media, LSM '11, pages 30-38, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "AVAYA: Sentiment Analysis on Twitter with Self-Training and Polarity Lexicon Expansion", |
| "authors": [ |
| { |
| "first": "Lee", |
| "middle": [], |
| "last": "Becker", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Erhart", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Skiba", |
| "suffix": "" |
| }, |
| { |
| "first": "Valentine", |
| "middle": [], |
| "last": "Matula", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "333--340", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lee Becker, George Erhart, David Skiba, and Valentine Matula. 2013. AVAYA: Sentiment Analysis on Twitter with Self-Training and Polarity Lexicon Expansion. In Second Joint Conference on Lexical and Compu- tational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 333-340, Atlanta, Georgia, USA, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Identifying expressions of opinion in context", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Breck", |
| "suffix": "" |
| }, |
| { |
| "first": "Yejin", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "2683--2688", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In Manuela M. Veloso, editor, IJCAI 2007, Proceedings of the 20th International Joint Conference on Artificial Intelligence, Hyderabad, India, January 6-12, 2007, pages 2683-2688.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "just setting up my twttr", |
| "authors": [ |
| { |
| "first": "Jack", |
| "middle": [], |
| "last": "Dorsey", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jack Dorsey. 2006. just setting up my twttr.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Boosting named entity recognition with neural character embeddings", |
| "authors": [], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C\u00edcero Nogueira dos Santos and Victor Guimar\u00e3es. 2015. Boosting named entity recognition with neural charac- ter embeddings. CoRR, abs/1505.05008.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Learning character-level representations for part-ofspeech tagging", |
| "authors": [ |
| { |
| "first": "C\u00edcero", |
| "middle": [], |
| "last": "Nogueira", |
| "suffix": "" |
| }, |
| { |
| "first": "Bianca", |
| "middle": [], |
| "last": "Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zadrozny", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 31th International Conference on Machine Learning", |
| "volume": "32", |
| "issue": "", |
| "pages": "1818--1826", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C\u00edcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of- speech tagging. In Proceedings of the 31th Interna- tional Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, volume 32 of JMLR Proceedings, pages 1818-1826. JMLR.org.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Twitter Sentiment Classification using Distant Supervision", |
| "authors": [ |
| { |
| "first": "Alec", |
| "middle": [], |
| "last": "Go", |
| "suffix": "" |
| }, |
| { |
| "first": "Richa", |
| "middle": [], |
| "last": "Bhayani", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter Sentiment Classification using Distant Supervision. Technical report, pages 1-6.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Lsislif: Feature Extraction and Label Weighting for Sentiment Analysis in Twitter", |
| "authors": [ |
| { |
| "first": "Hussam", |
| "middle": [], |
| "last": "Hamdan", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrice", |
| "middle": [], |
| "last": "Bellot", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederic", |
| "middle": [], |
| "last": "Bechet", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "568--573", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hussam Hamdan, Patrice Bellot, and Frederic Bechet. 2015. Lsislif: Feature Extraction and Label Weight- ing for Sentiment Analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Eval- uation (SemEval 2015), pages 568-573, Denver, Col- orado, June. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. CoRR, abs/1502.01852.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural Computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "What is the best multistage architecture for object recognition?", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Jarrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc'aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "IEEE 12th International Conference on Computer Vision", |
| "volume": "", |
| "issue": "", |
| "pages": "2146--2153", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ran- zato, and Yann LeCun. 2009. What is the best multi- stage architecture for object recognition? In IEEE 12th International Conference on Computer Vision, ICCV 2009, Kyoto, Japan, September 27 -October 4, 2009, pages 2146-2153. IEEE.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Exploring the limits of language modeling", |
| "authors": [ |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "J\u00f3zefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rafal J\u00f3zefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. 2016. Exploring the limits of language modeling. CoRR, abs/1602.02410.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Character-aware neural language models", |
| "authors": [ |
| { |
| "first": "Yoon", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Sontag", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoon Kim, Yacine Jernite, David Sontag, and Alexan- der M. Rush. 2015. Character-aware neural language models. CoRR, abs/1508.06615.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Sentiment analysis of short informal texts", |
| "authors": [ |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "J. Artif. Intell. Res. (JAIR)", |
| "volume": "50", |
| "issue": "", |
| "pages": "723--762", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Svetlana Kiritchenko, Xiaodan Zhu, and Saif M. Moham- mad. 2014. Sentiment analysis of short informal texts. J. Artif. Intell. Res. (JAIR), 50:723-762.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Twitter Sentiment Analysis: The Good the Bad and the OMG! In", |
| "authors": [ |
| { |
| "first": "Efthymios", |
| "middle": [], |
| "last": "Kouloumpis", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Johanna", |
| "middle": [ |
| "D" |
| ], |
| "last": "Moore", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Fifth International Conference on Weblogs and Social Media", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna D. Moore. 2011. Twitter Sentiment Analysis: The Good the Bad and the OMG! In Lada A. Adamic, Ricardo A. Baeza-Yates, and Scott Counts, editors, Proceedings of the Fifth International Conference on Weblogs and Social Media, Barcelona, Catalonia, Spain, July 17- 21, 2011. The AAAI Press.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Sentiment Analysis and Opinion Mining", |
| "authors": [ |
| { |
| "first": "Bing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Synthesis Lectures on Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Min- ing. Synthesis Lectures on Human Language Tech- nologies. Morgan & Claypool Publishers.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Computational linguistics and deep learning", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "701--707", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning. 2015. Computational linguis- tics and deep learning. Computational Linguistics, 41(4):701-707.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "NRC-Canada: Building the Stateof-the-Art in Sentiment Analysis of Tweets", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Saif", |
| "suffix": "" |
| }, |
| { |
| "first": "Svetlana", |
| "middle": [], |
| "last": "Mohammad", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodan", |
| "middle": [], |
| "last": "Kiritchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiao- dan Zhu. 2013. NRC-Canada: Building the State- of-the-Art in Sentiment Analysis of Tweets. CoRR, abs/1308.6242.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "SemEval-2013 Task 2: Sentiment Analysis in Twitter", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Zornitsa", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
| "volume": "2", |
| "issue": "", |
| "pages": "312--320", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 Task 2: Sentiment Analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Se- mantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA, June. Association for Compu- tational Linguistics.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Evaluation measures for the SemEval-2016 task 4: \"sentiment analysis in Twitter", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. 2016a. Evaluation measures for the SemEval-2016 task 4: \"sentiment analysis in Twitter\".", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "SemEval-2016 task 4: Sentiment analysis in Twitter", |
| "authors": [ |
| { |
| "first": "Preslav", |
| "middle": [], |
| "last": "Nakov", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Ritter", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Rosenthal", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabrizio", |
| "middle": [], |
| "last": "Sebastiani", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016b. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Eval- uation (SemEval 2016), San Diego, California, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Twitter as a Corpus for Sentiment Analysis and Opinion Mining", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Pak", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Paroubek", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the International Conference on Language Resources and Evaluation, LREC 2010", |
| "volume": "", |
| "issue": "", |
| "pages": "17--23", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twitter as a Corpus for Sentiment Analysis and Opinion Mining. In Nicoletta Calzolari, Khalid Choukri, Bente Mae- gaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, Proceed- ings of the International Conference on Language Re- sources and Evaluation, LREC 2010, 17-23 May 2010, Valletta, Malta. European Language Resources Asso- ciation.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "KLUEless: Polarity Classification and Association", |
| "authors": [ |
| { |
| "first": "Nataliia", |
| "middle": [], |
| "last": "Plotnikova", |
| "suffix": "" |
| }, |
| { |
| "first": "Micha", |
| "middle": [], |
| "last": "Kohl", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Volkert", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Evert", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Lerner", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "619--625", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nataliia Plotnikova, Micha Kohl, Kevin Volkert, Stefan Evert, Andreas Lerner, Natalie Dykes, and Heiko Er- mer. 2015. KLUEless: Polarity Classification and Association. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 619-625, Denver, Colorado, June. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Semi-supervised recursive autoencoders for predicting sentiment distributions", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [ |
| "H" |
| ], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "151--161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Jeffrey Pennington, Eric H. Huang, An- drew Y. Ng, and Christopher D. Manning. 2011. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the 2011 Conference on Empirical Methods in Natural Lan- guage Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 151-161. ACL.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Semantic compositionality through recursive matrix-vector spaces", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Brody", |
| "middle": [], |
| "last": "Huval", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1201--1211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositional- ity through recursive matrix-vector spaces. In Jun'ichi Tsujii, James Henderson, and Marius Pasca, editors, Proceedings of the 2012 Joint Conference on Empir- ical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP- CoNLL 2012, July 12-14, 2012, Jeju Island, Korea, pages 1201-1211. ACL.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Dropout: a simple way to prevent neural networks from overfitting", |
| "authors": [ |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "15", |
| "issue": "1", |
| "pages": "1929--1958", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, 15(1):1929-1958.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Generating text with recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Martens", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Omnipress. T. Tieleman and G. Hinton. 2012. Lecture 6.5-RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1017--1024", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, James Martens, and Geoffrey E. Hinton. 2011. Generating text with recurrent neural networks. In Lise Getoor and Tobias Scheffer, editors, Proceed- ings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 -July 2, 2011, pages 1017-1024. Omnipress. T. Tieleman and G. Hinton. 2012. Lecture 6.5- RmsProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Recognizing and organizing opinions expressed in the world press", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Breck", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Buckley", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruce", |
| "middle": [], |
| "last": "Fraser", |
| "suffix": "" |
| }, |
| { |
| "first": "Diane", |
| "middle": [ |
| "J" |
| ], |
| "last": "Litman", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "R" |
| ], |
| "last": "Pierce", |
| "suffix": "" |
| }, |
| { |
| "first": "Ellen", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "S" |
| ], |
| "last": "Day", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "T" |
| ], |
| "last": "Maybury", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "New Directions in Question Answering, Papers from 2003 AAAI Spring Symposium", |
| "volume": "", |
| "issue": "", |
| "pages": "12--19", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe, Eric Breck, Chris Buckley, Claire Cardie, Paul Davis, Bruce Fraser, Diane J. Litman, David R. Pierce, Ellen Riloff, Theresa Wilson, David S. Day, and Mark T. Maybury. 2003. Recognizing and organizing opinions expressed in the world press. In Mark T. Maybury, editor, New Directions in Question Answering, Papers from 2003 AAAI Spring Symposium, Stanford University, Stanford, CA, USA, pages 12-19. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Tracking point of view in narrative", |
| "authors": [ |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Computational Linguistics", |
| "volume": "20", |
| "issue": "2", |
| "pages": "233--287", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janyce Wiebe. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233-287.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Recognizing contextual polarity in phrase-level sentiment analysis", |
| "authors": [ |
| { |
| "first": "Theresa", |
| "middle": [], |
| "last": "Wilson", |
| "suffix": "" |
| }, |
| { |
| "first": "Janyce", |
| "middle": [], |
| "last": "Wiebe", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In HLT/EMNLP 2005, Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference, 6-8 October 2005, Vancouver, British Columbia, Canada. The Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Compositional matrix-space models for sentiment analysis", |
| "authors": [ |
| { |
| "first": "Ainur", |
| "middle": [], |
| "last": "Yessenalina", |
| "suffix": "" |
| }, |
| { |
| "first": "Claire", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "2011", |
| "issue": "", |
| "pages": "172--182", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ainur Yessenalina and Claire Cardie. 2011. Compositional matrix-space models for sentiment analysis. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP 2011, 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 172-182.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "(a) Adversarial network used for subtasks B and D. (b) Highway network used for subtasks C and E." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Network architectures." |
| }, |
| "TABREF0": { |
| "content": "<table/>", |
| "text": "Top-10 character n-grams from the training data and their correlation coefficients with the negative class on the training (\u03c1train) and test sets (\u03c1test) of subtask B.", |
| "type_str": "table", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |