| { |
| "paper_id": "S18-1004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:44:52.198134Z" |
| }, |
| "title": "T\u00fcbingen-Oslo at SemEval-2018 Task 2: SVMs perform better than RNNs at Emoji Prediction", |
| "authors": [ |
| { |
| "first": "\u00c7a\u011fr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of T\u00fcbingen", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "ccoltekin@sfs.uni-tuebingen.de" |
| }, |
| { |
| "first": "Taraka", |
| "middle": [], |
| "last": "Rama", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Oslo", |
| "location": { |
| "country": "Norway" |
| } |
| }, |
| "email": "tarakark@ifi.uio.no" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper describes our participation in the SemEval-2018 task Multilingual Emoji Prediction. We participated in both English and Spanish subtasks, experimenting with support vector machines (SVMs) and recurrent neural networks. Our SVM classifier obtained the top rank in both subtasks with macro-averaged F1measures of 35.99 % for English and 22.36 % for Spanish data sets. Similar to a few earlier attempts, the results with neural networks were not on par with linear SVMs.", |
| "pdf_parse": { |
| "paper_id": "S18-1004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper describes our participation in the SemEval-2018 task Multilingual Emoji Prediction. We participated in both English and Spanish subtasks, experimenting with support vector machines (SVMs) and recurrent neural networks. Our SVM classifier obtained the top rank in both subtasks with macro-averaged F1measures of 35.99 % for English and 22.36 % for Spanish data sets. Similar to a few earlier attempts, the results with neural networks were not on par with linear SVMs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Emojis are graphical symbols that represent an idea or emotion. The use of emojis has become popular over the last decade, particularly in informal communication in the social media. Their popularity kindled a recent interest in investigating many aspects of emojis, including their interaction with natural language (e.g., Barbieri et al., 2016 Barbieri et al., , 2017 Felbo et al., 2017; Kralj Novak et al., 2015) . Although the emojis are presumably languageindependent, their use typically goes together with linguistic text. In this context, the SemEval 2018 task 2, Multilingual Emoji Prediction (Barbieri et al., 2018) , aims predicting the emoji from the surrounding micro-blogging (Twitter) text for English and Spanish.", |
| "cite_spans": [ |
| { |
| "start": 324, |
| "end": 345, |
| "text": "Barbieri et al., 2016", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 346, |
| "end": 369, |
| "text": "Barbieri et al., , 2017", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 370, |
| "end": 389, |
| "text": "Felbo et al., 2017;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 390, |
| "end": 415, |
| "text": "Kralj Novak et al., 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 602, |
| "end": 625, |
| "text": "(Barbieri et al., 2018)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The task at hand is to predict a label, an emoji, from a short text that it accompanies. This is essentially a text/document classification problem, and shares many aspects of other text classification problems such as topic classification, sentiment analysis, language identification and authorship attribution -just to name a few. Although each of these problems have some task-specific aspects, the same models can be used for all of them. In this study, we experiment with and compare two well-known methods: support vector machines (SVMs) with bag of word/character n-gram features and recurrent neural networks (RNNs) with word and character sequences as input. The methods and implementations are similar to our earlier attempts in other text classification tasks (\u00c7\u00f6ltekin and Rama, 2016; . 1 In the remainder of this paper, we describe our methods and experiments, present and discuss our results.", |
| "cite_spans": [ |
| { |
| "start": 771, |
| "end": 796, |
| "text": "(\u00c7\u00f6ltekin and Rama, 2016;", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We participated in both subtasks using the same architectures. However, we trained and tuned the model parameters on each data set separately. The training set for the competition consisted of 500 000 tweets for English and 100 000 tweets for Spanish subtask. The data sets contained most frequent 20 emojis for English and 19 emojis for Spanish. Joining late to the party, our training set consisted of 485 151 English tweets, and 97 765 Spanish tweets, since about 3 % of the tweets were not available by the time we crawled them. As presented in Figure 1 , the label distribution is similar and quite skewed for both languages. We included pre-processing steps of case normalization and discarding low-frequency features as part of our hyperparameter optimization. In all our experiments, we use only the data supplied by the organizers. We did not use any external sources (e.g., pre-trained word embeddings), nor did we perform any further linguistic processing (e.g., POS tagging, or parsing). The test size for English and Spanish is 50 000 and 10 000 respectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 549, |
| "end": 557, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Results", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The best results obtained in the shared task are based on multi-class (one-vs-rest) linear support vector machines (SVM). We use 'bag of n-grams' as features, combining both character n-grams and word n-grams of different sizes, weighted by sublinear TF-IDF scaling applied globally to all ngrams (character and word n-grams with varying sizes). Although we also experimented with logistic regression and random forests using the same feature set, the results were consistently inferior to the SVMs. Therefore, we will not discuss the results of logistic regression and random forests. The models discussed in this section were implemented with scikit-learn package (Pedregosa et al., 2011) using liblinear back end (Fan et al., 2008) .", |
| "cite_spans": [ |
| { |
| "start": 666, |
| "end": 690, |
| "text": "(Pedregosa et al., 2011)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 716, |
| "end": 734, |
| "text": "(Fan et al., 2008)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Support Vector Machines", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We optimized the models for best macro F1score on each language data set through a grid search using 5-fold cross validation. The hyperparameters considered during optimization were maximum character/word n-gram size, case normalization, minimum document frequency threshold for excluding low-frequency features, and SVM margin (or regularization) parameter 'C'. Although there has been other parameter settings with competitive scores, we used maximum character n-grams size of 6, maximum word n-gram size of 4, minimum document frequency threshold of 2, SVM parameter C of 0.10, and we case normalized only word (not character) n-grams. Our submitted system achieved 36.55 precision, 36.22 recall and 35.99 F1-score on the English test set, and 23.49 precision, 22.80 recall and 22.36 F1score on the Spanish test set. These figures were about 1 % lower than the figures we obtained in 5-fold cross validation results on the training data. larger n-gram sizes increase the performance, the gains from higher n-gram values are rather small. The effects of other hyperparameters are smaller. In general, however, excluding features based on frequency seems to hurt the performance. Case normalization is useful if applied to word n-grams, but its effects are often negative if it is applied to both character and word n-grams. The optimum regularization parameter 'C' is stable over both languages and different training sizes.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Support Vector Machines", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Gaining popularity relatively recently, neural models are another common approach to text classification. Fully-connected networks are computationally impractical. However, convolutional networks (CNNs) and recurrent neural networks (RNNs) offer reasonably efficient computation, as well as better modeling of sequences. RNNs, particularly gated RNNs, have been used in many diverse natural language processing tasks successfully, and text classification is not an exception. Our neural model includes two bidirectional RNN components: one taking a sequence of words as input and another taking a sequence of characters as input. The recurrent components of the network builds two representations for the text (one based on characters, the other based on words), the representations are concatenated and passed to a fully connected softmax layer that assigns an emoji to the document based on the RNN representations. Since the tweets are relatively short, we did not truncate the input documents. For both character and word inputs, we used embedding layers before the RNN layers. All neural network experiments were implemented with Tensorflow (Abadi et al., 2015) using Keras API (Chollet et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 1146, |
| "end": 1166, |
| "text": "(Abadi et al., 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1183, |
| "end": 1205, |
| "text": "(Chollet et al., 2015)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Although the history/context is not a parameter for recurrent networks, the architecture has many hyperparameters. We optimized the hyperparameters of the architecture through a random search for the embedding size of both characters and words, the hidden representation size of the RNN cells, the dropout parameter for each component of the network, frequency threshold for excluding features, RNN architecture, GRU (Cho et al., 2014) or LSTM (Hochreiter and Schmidhuber, 1997) , and case normalization. For the RNN models, we used a random training-validation split (80 %-20 % for Spanish, and 90 %-10 % for English) during hyperparameter tuning. We used early stopping based on macro F1-measure, and picked the epoch with the best F1-measure for each hyperparameter setting. Besides these parameters -used for systematic random search -we also experimented with deeper architectures, both by stacking RNNs and by multiple fully-connected layers. Deeper networks, however, yielded worse results.", |
| "cite_spans": [ |
| { |
| "start": 417, |
| "end": 435, |
| "text": "(Cho et al., 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 444, |
| "end": 478, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We obtained F1-scores of 33.02 % for English data and 17.98 % for the Spanish data on the (randomly split) development set. For both subtasks, we submitted results with the hyperparameter setting that worked best on the English data set (although it yielded a slightly lower F1-score than the best one obtained for Spanish). For both languages, the RNN results submitted used a model with embedding layers of size 32 (for characters) and 128 (for words). In the case of bidirectional GRU networks we used hidden units of sizes 32 and 128 for character and word input, respectively, minimum frequency threshold of 4 for characters and 1 for words, dropout parameter of 0.50 at the embedding layers and 0.10 at the RNN layers, and no case normalization.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Networks", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The performance with different training set sizes is an important consideration in model choice. Furthermore, since the training set sizes for the two languages in present study are different, it is also be a plausible explanation for the fact that substantially lower performance of both models on the Spanish task. To shed light into these two issues, we present incremental results on (only) the English data set. In this experiment, we randomly set aside 10 % of the English training data for testing, we split the remaining 90 % into 10 splits, and train both systems by starting with one of splits, and incrementally adding another one in each iteration. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Effect of training set size", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In this paper, we described our submitted systems at SemEval-2018 Task 2 on Multilingual Emoji Prediction. Besides providing details on our systems, this paper also intends to provide a comparison between two text classification methods: RNNs and linear SVMs. The comparison is motivated by the fact that, despite their popularity and argued superiority, we and others found linear models, particularly SVMs, yield better results than (deep) neural models in a series of other text classification tasks (e.g., \u00c7\u00f6ltekin and Rama, 2016; Medvedeva et al., 2017) . One plausible explanation is the fact that neural networks typically require more data to train. Indeed, the previous shared tasks cited above often provided modest-size training sets, mainly due to the cost of labeling. Emoji classification task has an advantage in this respect as the labeling is relatively cheap compared to many other text classification tasks. As a result, at least for English, the shared task included a rather large training set. However, our current findings also indicate that the linear SVMs still perform better than the RNN counterparts. Although the results presented in Figure 3 indicate that more data is, indeed, helpful for RNNs, the performance gap in favor of SVMs persists. Another interesting (but expected due to model complexity) observation in Figure 3 is that the RNNs also exhibit larger variation, especially with smaller data sizes.", |
| "cite_spans": [ |
| { |
| "start": 510, |
| "end": 534, |
| "text": "\u00c7\u00f6ltekin and Rama, 2016;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 535, |
| "end": 558, |
| "text": "Medvedeva et al., 2017)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1163, |
| "end": 1171, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 1347, |
| "end": 1355, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and conclusions", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Our findings seem to contradict with the majority of recent NLP literature, where RNNs are often claimed to be superior to linear models, and emoji classification is not an exception (e.g., Barbieri et al., 2017) . Part of this impression comes from the fact that, in most studies, the linear baselines used in comparison are simple bag-of-words models. As words in a text are not independent, simple bag-of-words is deemed to fail. The simple addition of word n-gram features, however, circumvents this problem to a large extent, enabling the linear models to capture some local dependencies. RNNs, however, still have a potential advantage since they can, at least in theory, capture longrange dependencies as well. However, it seems either local dependencies are enough in many text classification tasks, or the data sets are (still) small for RNNs to generalize over useful long-range dependencies. Furthermore, character n-gram features are also useful, particularly for morphologically rich languages, as they also capture information present in sub-word units. Although including many overlapping character and word n-gram features result in large feature vectors, the sparse implementations of these models are computationally feasible and easy to tune -often more than corresponding deep neural network models.", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 212, |
| "text": "Barbieri et al., 2017)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and conclusions", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A curious finding from our experiments is that despite the language-agnostic nature of our methods, both models yielded a rather large performance difference (13.63 % F1-measure on the test set) between English and Spanish. The possible explanation based on training set size is not sup-ported by the experiments presented in Section 2.3. Figure 3 shows that, at about the training set size of Spanish data (100 000 instances), one can obtain about 32 % F1-score on the English data set, which is substantially higher than the best test and development set results we obtained using the full training data for Spanish (22.36 % and 23.59 % respectively). Hence, the difference is likely to be either due to differences between the languages, or due to some inherent confusability of the emojis in the Spanish data set. The confusion matrices in Figure 4 indicate higher majority class bias for Spanish. More experiments are needed for a better understanding of the differences.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 339, |
| "end": 347, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 844, |
| "end": 852, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Discussion and conclusions", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Past research has found that ensemble methods that combine multiple classifiers yield better performance compared to each individual classifier (Malmasi and Dras, 2015) . Besides the differences in the learning algorithms, the models we compare in this work exploit rather different types of information. Hence, a combination of classifiers may result in better performance. Even though we did not experiment with ensemble methods in this work, the number of test instances that were predicted correctly by one of the models (but not by both) was 17.28 % and 19.95 % for English and the Spanish data respectively, indicating a promising upper bound for an ensemble approach.", |
| "cite_spans": [ |
| { |
| "start": 144, |
| "end": 168, |
| "text": "(Malmasi and Dras, 2015)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future directions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Although we did not use any external resources in this task, another potential source of improvement is to use external information (e.g., embeddings or cluster labels) extracted from large unlabeled texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Future directions", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The source code of our implementation is available at https://github.com/coltekin/emoji2018.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "TensorFlow: Large-scale machine learning on heterogeneous systems", |
| "authors": [ |
| { |
| "first": "Mart\u00edn", |
| "middle": [], |
| "last": "Abadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Agarwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Paul", |
| "middle": [], |
| "last": "Barham", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Brevdo", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Craig", |
| "middle": [], |
| "last": "Citro", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Davis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthieu", |
| "middle": [], |
| "last": "Devin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjay", |
| "middle": [], |
| "last": "Ghemawat", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [], |
| "last": "Goodfellow", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Harp", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Irving", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Isard", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangqing", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Rafal", |
| "middle": [], |
| "last": "Jozefowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Manjunath", |
| "middle": [], |
| "last": "Kudlur", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Levenberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mart\u00edn Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Cor- rado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man\u00e9, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Tal- war, Paul Tucker, Vincent Vanhoucke, Vijay Vasude- van, Fernanda Vi\u00e9gas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xi- aoqiang Zheng. 2015. TensorFlow: Large-scale ma- chine learning on heterogeneous systems. Software available from tensorflow.org.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Are emojis predictable?", |
| "authors": [ |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Barbieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "105--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francesco Barbieri, Miguel Ballesteros, and Horacio Saggion. 2017. Are emojis predictable? In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 105-111, Valencia, Spain. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "SemEval-2018 Task 2: Multilingual Emoji Prediction", |
| "authors": [ |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Ronzano", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Espinosa-Anke", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Valerio", |
| "middle": [], |
| "last": "Basile", |
| "suffix": "" |
| }, |
| { |
| "first": "Viviana", |
| "middle": [], |
| "last": "Patti", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francesco Ronzano, Luis Espinosa-Anke, Miguel Ballesteros, Valerio Basile, Viviana Patti, and Horacio Saggion. 2018. SemEval-2018 Task 2: Multilingual Emoji Prediction. In Proceedings of the 12th International Workshop on Semantic Eval- uation (SemEval-2018), New Orleans, LA, United States. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "What does this emoji mean? a vector space skip-gram model for twitter emojis", |
| "authors": [ |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Barbieri", |
| "suffix": "" |
| }, |
| { |
| "first": "Francesco", |
| "middle": [], |
| "last": "Ronzano", |
| "suffix": "" |
| }, |
| { |
| "first": "Horacio", |
| "middle": [], |
| "last": "Saggion", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francesco Barbieri, Francesco Ronzano, and Horacio Saggion. 2016. What does this emoji mean? a vector space skip-gram model for twitter emojis. In Pro- ceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Asso- ciation (ELRA).", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "On the properties of neural machine translation: Encoder-decoder approaches", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "103--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder ap- proaches. In Proceedings of SSST-8, Eighth Work- shop on Syntax, Semantics and Structure in Statisti- cal Translation, pages 103-111.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Discriminating similar languages with linear SVMs and neural networks", |
| "authors": [ |
| { |
| "first": "\u00c7a\u011fr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "" |
| }, |
| { |
| "first": "Taraka", |
| "middle": [], |
| "last": "Rama", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)", |
| "volume": "", |
| "issue": "", |
| "pages": "15--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\u00c7a\u011fr\u0131 \u00c7\u00f6ltekin and Taraka Rama. 2016. Discriminat- ing similar languages with linear SVMs and neural networks. In Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pages 15-24, Osaka, Japan.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "T\u00fcbingen system in VarDial 2017 shared task: experiments with language identification and cross-lingual parsing", |
| "authors": [ |
| { |
| "first": "\u00c7a\u011fr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "" |
| }, |
| { |
| "first": "Taraka", |
| "middle": [], |
| "last": "Rama", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects", |
| "volume": "", |
| "issue": "", |
| "pages": "146--155", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\u00c7a\u011fr\u0131 \u00c7\u00f6ltekin and Taraka Rama. 2017. T\u00fcbingen system in VarDial 2017 shared task: experiments with language identification and cross-lingual pars- ing. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (Var- Dial), pages 146-155, Valencia, Spain. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "LIBLINEAR: A library for large linear classification", |
| "authors": [ |
| { |
| "first": "Kai-Wei", |
| "middle": [], |
| "last": "Rong-En Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Cho-Jui", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiang-Rui", |
| "middle": [], |
| "last": "Hsieh", |
| "suffix": "" |
| }, |
| { |
| "first": "Chih-Jen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "9", |
| "issue": "", |
| "pages": "1871--1874", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang- Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871-1874.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", |
| "authors": [ |
| { |
| "first": "Bjarke", |
| "middle": [], |
| "last": "Felbo", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Mislove", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Iyad", |
| "middle": [], |
| "last": "Rahwan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sune", |
| "middle": [], |
| "last": "Lehmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1615--1625", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1615-1625, Copenhagen, Denmark. As- sociation for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Sentiment of emojis", |
| "authors": [ |
| { |
| "first": "Petra", |
| "middle": [ |
| "Kralj" |
| ], |
| "last": "Novak", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasmina", |
| "middle": [], |
| "last": "Smailovi\u0107", |
| "suffix": "" |
| }, |
| { |
| "first": "Borut", |
| "middle": [], |
| "last": "Sluban", |
| "suffix": "" |
| }, |
| { |
| "first": "Igor", |
| "middle": [], |
| "last": "Mozeti\u010d", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "PLOS ONE", |
| "volume": "10", |
| "issue": "12", |
| "pages": "1--22", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Petra Kralj Novak, Jasmina Smailovi\u0107, Borut Sluban, and Igor Mozeti\u010d. 2015. Sentiment of emojis. PLOS ONE, 10(12):1-22.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Language identification using classifier ensembles", |
| "authors": [ |
| { |
| "first": "Shervin", |
| "middle": [], |
| "last": "Malmasi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dras", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects", |
| "volume": "", |
| "issue": "", |
| "pages": "35--43", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shervin Malmasi and Mark Dras. 2015. Language identification using classifier ensembles. In Proceedings of the Joint Workshop on Language Technology for Closely Related Languages, Varieties and Dialects, pages 35-43.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "When sparse traditional models outperform dense neural networks: the curious case of discriminating between similar languages", |
| "authors": [ |
| { |
| "first": "Maria", |
| "middle": [], |
| "last": "Medvedeva", |
| "suffix": "" |
| }, |
| { |
| "first": "Martin", |
| "middle": [], |
| "last": "Kroon", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial)", |
| "volume": "", |
| "issue": "", |
| "pages": "156--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Maria Medvedeva, Martin Kroon, and Barbara Plank. 2017. When sparse traditional models outperform dense neural networks: the curious case of discriminating between similar languages. In Proceedings of the Fourth Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial), pages 156-163, Valencia, Spain. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Scikit-learn: Machine learning in Python", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pedregosa", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Varoquaux", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Gramfort", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Michel", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Thirion", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Grisel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Blondel", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Prettenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Dubourg", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Vanderplas", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Passos", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cournapeau", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Brucher", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perrot", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Duchesnay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "12", |
| "issue": "", |
| "pages": "2825--2830", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Fewer features perform well at native language identification task", |
| "authors": [ |
| { |
| "first": "Taraka", |
| "middle": [], |
| "last": "Rama", |
| "suffix": "" |
| }, |
| { |
| "first": "\u00c7a\u011fr\u0131", |
| "middle": [], |
| "last": "\u00c7\u00f6ltekin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "255--260", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taraka Rama and \u00c7a\u011fr\u0131 \u00c7\u00f6ltekin. 2017. Fewer features perform well at native language identification task. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 255-260, Copenhagen, Denmark. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Label distribution in both data sets. Ratio of each label is plotted against its rank. Note that the emojis sharing the same rank are not necessarily identical in both languages." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The effect of maximum character and word n-gram size combinations on F1-measure. Darker shades indicate higher F1-measure. For all results, n-grams from size 1 up to the indicated number are included as features." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Learning curve for the SVM and RNN models on the English training set. The error bars indicate maximum and minimum values in 10 trials." |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "F1-score against training set size for both SVM and RNN models." |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Confusion matrices for both data sets. The labels are sorted by frequency." |
| } |
| } |
| } |
| } |