{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:19:22.881220Z"
},
"title": "IIITG-ADBU at SemEval-2020 Task 9: SVM for Sentiment Analysis of English-Hindi Code-Mixed Text",
"authors": [
{
"first": "Arup",
"middle": [],
"last": "Baruah",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "arup.baruah@gmail.com"
},
{
"first": "Kaushik",
"middle": [
"Amar"
],
"last": "Das",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "kaushikamardas@gmail.com"
},
{
"first": "Ferdous",
"middle": [
"Ahmed"
],
"last": "Barbhuiya",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Guwahati",
"location": {
"country": "India"
}
},
"email": "ferdous@iiitg.ac.in"
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": "",
"affiliation": {
"laboratory": "Accenture Technology Labs",
"institution": "",
"location": {
"settlement": "Bangalore"
}
},
"email": "kuntal.dey@accenture.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present the results that the team IIITG-ADBU (codalab username 'abaruah') obtained in the SentiMix task (Task 9) of the International Workshop on Semantic Evaluation 2020 (SemEval 2020). This task required the detection of sentiment in code-mixed Hindi-English tweets. Broadly, we performed two sets of experiments for this task. The first experiment was performed using the multilingual BERT classifier and the second set of experiments was performed using SVM classifiers. The character-based SVM classifier obtained the best F1 score of 0.678 in the test set with a rank of 21 among 62 participants. The performance of the multilingual BERT classifier was quite comparable with the SVM classifier on the development set. However, on the test set it obtained an F1 score of 0.342. *",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present the results that the team IIITG-ADBU (codalab username 'abaruah') obtained in the SentiMix task (Task 9) of the International Workshop on Semantic Evaluation 2020 (SemEval 2020). This task required the detection of sentiment in code-mixed Hindi-English tweets. Broadly, we performed two sets of experiments for this task. The first experiment was performed using the multilingual BERT classifier and the second set of experiments was performed using SVM classifiers. The character-based SVM classifier obtained the best F1 score of 0.678 in the test set with a rank of 21 among 62 participants. The performance of the multilingual BERT classifier was quite comparable with the SVM classifier on the development set. However, on the test set it obtained an F1 score of 0.342. *",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sentiment analysis has been defined as the computational study of opinions, sentiments, and emotions expressed in the text (Liu, 2010) . In its basic form, sentiment analysis is used to determine the polarity of a given text where the polarity may be negative, neutral, and positive. Thus, sentiment analysis can be viewed as a text classification problem.",
"cite_spans": [
{
"start": 123,
"end": 134,
"text": "(Liu, 2010)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the advent of social media platforms, the use of non-standard language has increased. Nowa-days, it is very common to use emoticons, mentions, acronyms, and ungrammatical sentences while communicating in social media. All these factors make the traditional tools used for natural language processing fail on social media text. Another new style of communication in social media is the use of code-mixed text. Code-mixing means the mixing of words from more than one language in the same sentence or between sentences. Code mixing makes the task of sentiment analysis more challenging.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The objective of SentiMix (Task 9), organized as part of the International Workshop on Semantic Evaluation 2020 (SemEval 2020), is to detect the sentiment of code-mixed tweets (Patwa et al., 2020) . This task was a three-way classification problem with the labels being negative, neutral, and positive. The task was held for both Hindi-English and Spanish-English code-mixed tweets. We participated in this task for the Hindi-English language. In this task, we experimented with SVM and multilingual BERT classifiers. Joshi et al. (2010) performed the first work on the detection of sentiment analysis of Hindi text. In their work, unigram and bigram based SVM classifiers were used. As another approach, the Hindi text was machine translated to English and the translated text was then classified using a classifier trained on English text. This work also led to the creation of Hindi-SentiWordNet sentiment lexicon. The lexicon was also used to perform a lexicon-based sentiment analysis. The classifier trained on the Hindi text performed the best with an accuracy of 78.14%. Sharma et al. (2015) used a lexicon-based approach to determine the Table 4 : Combined sentiment of code-mixed Hindi-English text. The language of each word was first determined. The Hindi words written using Roman scripts were then transliterated into Devanagari script. The sentiment of each word was then determined through a lookup of the lexicons -Hindi SentiWordNet, Opinion Lexicon, and AFINN. The sentiment of the text was then determined based on the count of positive and negative words. Joshi et al. (2016) also worked on detecting sentiment on code-mixed Hindi-English text. A sub-word level LSTM was used in their study. The sub-word level representations were generated by performing a convolution operation on 128-dimensional character embeddings. The sub-word level LSTM was found to perform better than character based LSTM.",
"cite_spans": [
{
"start": 176,
"end": 196,
"text": "(Patwa et al., 2020)",
"ref_id": "BIBREF6"
},
{
"start": 518,
"end": 537,
"text": "Joshi et al. (2010)",
"ref_id": "BIBREF2"
},
{
"start": 1079,
"end": 1099,
"text": "Sharma et al. (2015)",
"ref_id": "BIBREF8"
},
{
"start": 1577,
"end": 1596,
"text": "Joshi et al. (2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 1147,
"end": 1154,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The Hindi-English data set for this task was provided in the CoNLL format. For each tweet, the following information was provided: (1) the id for the tweet, (2) its label (negative, neutral, or positive), and (3) the language id of each token (HIN for Hindi, ENG for English, and O if neither Hindi nor English). In our experiments, we did not make use of the language id information provided in the data set. Tables 1 to 4 show the statistics of the trial, train, development, and the combined data sets respectively. The combined data set was obtained by combining the trial, train, and development data sets. The duplicate entries were removed from the combined data set. The combined data set was used in our experiments. As can be seen from the tables, the data sets were quite balanced.",
"cite_spans": [],
"ref_spans": [
{
"start": 410,
"end": 423,
"text": "Tables 1 to 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "3"
},
{
"text": "In our work, before performing tokenization, the text was converted to lower case. This conversion to lower-case was performed through the BERT tokenizer and the TFIDF vectorizer. In one of the experiments, the URLs were removed from the text. Emoticons, hashtags, and mentions were not removed from the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocessing",
"sec_num": "4.1"
},
{
"text": "BERT (Devlin et al., 2019 ) is a bi-directional model based on the transformer architecture. The transformer architecture is an architecture based solely on attention mechanism (Vaswani et al., 2017) . The transformer architecture overcomes the inherent sequential nature of Recurrent Neural Networks (RNN) and hence they are more conducive for parallelization.",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF0"
},
{
"start": 177,
"end": 199,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "4.2.1"
},
{
"text": "Multilingual BERT is BERT trained for multilingual tasks. It was trained on monolingual Wikipedia articles of 104 different languages. It is intended to enable multilingual BERT fine-tuned in one language to make predictions for another language. In our study, we used the multilingual BERT model having 12 layers and 12 heads 1 . This model generates a 768-dimensional vector for each word. We used the 768-dimensional vector of the Extract layer as the representation of the tweet. Our classification layer consisted of a single Dense layer. The dense layer consisted of 3 units and the softmax activation function was used. The loss function used was sparse categorical crossentropy. The Adam optimizer with a learning rate of 2e-5 was used for training the model. The model was trained for 15 epochs. Early stopping with patience of 5 was used and Sparse categorical accuracy was monitored for early stopping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multilingual BERT",
"sec_num": "4.2.1"
},
{
"text": "We also used the Support Vector Machine (SVM) model for this task. The SVM implementation provided by Scikit-learn library (Pedregosa et al., 2011) was used in our experiments. The SVM model was trained using TF-IDF features of word and character n-grams. TF-IDF of n-grams in a document is calculated by multiplying the term frequency (Luhn, 1958) of the n-gram in the document with the inverse document frequency (Jones, 2004) of the n-gram. The term frequency of an n-gram in a document is the count of the number of times the n-gram appears in the document. The count may be normalized by dividing the count with the total number of n-grams in the document. The inverse document frequency of an n-gram t is calculated as log(N/N t ), where N is the total number of documents and N t is the number of documents in which the n-gram t appears. Word n-grams of size 1 to 3 and character n-grams of size 1 to 6 were used in our study. The linear kernel was used for the classifier and hyperparameter C was set to 1.0. The hyperparameter C is the regularization parameter. Larger values for C leads to a narrower margin and less misclassified instances. However, C should be set to a smaller value to reduce overfitting. We experimented using the SVM model on both the uncleaned data (SVM Run 1) and on the cleaned data where the URLs were removed from the text (SVM Run 2). Table 5 shows the results of our classifier obtained on the development set. As was mentioned in section 3, we combined the trial, train, and dev data sets. 20% of this combined data set was used as the development set. Run 2 of the SVM classifier was on a cleaned data set where the URLs were removed. The uncleaned data set was used for BERT and run 1 of the SVM classifier. As can be seen from the table, the best score was obtained by the SVM classifier when the cleaned data set was used. 
SVM trained on the character n-grams performed better than those trained on word n-grams or a combination of character and word n-grams. Tables 6 to 8 show the confusion matrices for our classifiers on the development set. As can be seen, the character n-gram based SVM classifier's strength was its ability to predict the neutral class. The word n-gram based classifier predicted the positive class better. Whereas the classifier trained using the combination of character and word n-gram features predicted the negative category better. While comparing BERT with SVM, it can be seen that the BERT classifier predicted the positive category better than SVM. However, it did not predict the negative and neutral classes well. Table 9 shows the scores our classifier obtained on the official run. The SVM classifier trained on the cleaned data using character n-gram features was our best performing classifier. It obtained F1 score of 0.678 and obtained the 21 st rank out of 62 participants. The BERT classifier's performance on the development set was quite comparable to the SVM classifiers. However, on the test data set, the BERT classifier did not perform well and obtained an F1 score of only 0.342.",
"cite_spans": [
{
"start": 123,
"end": 147,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF7"
},
{
"start": 415,
"end": 428,
"text": "(Jones, 2004)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 1373,
"end": 1380,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 2004,
"end": 2017,
"text": "Tables 6 to 8",
"ref_id": "TABREF5"
},
{
"start": 2593,
"end": 2600,
"text": "Table 9",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "SVM",
"sec_num": "4.2.2"
},
{
"text": "BERT has been a very successful model in many of the natural language processing tasks. In our study, we used multilingual BERT for the detection of sentiment in code-mixed Hindi-English text. Its performance on the development set was comparable with the SVM classifier. However, it produced an F1 score of only 0.342 in the test data. The SVM classifier trained on character n-gram was our best performing classifier on the test set with an F1 score of 0.678.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/google-research/bert",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sp\u00e4rck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Documentation",
"volume": "60",
"issue": "5",
"pages": "493--502",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karen Sp\u00e4rck Jones. 2004. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 60(5):493-502.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A fall-back strategy for sentiment analysis in hindi: a case study",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Balamurali",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 8th International Conference On Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Balamurali A R, and Pushpak Bhattacharyya. 2010. A fall-back strategy for sentiment analysis in hindi: a case study. In Proceedings of the 8th International Conference On Natural Language Processing (ICON).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Towards sub-word level compositions for sentiment analysis of hindi-english code mixed text",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ameya",
"middle": [],
"last": "Prabhu",
"suffix": ""
},
{
"first": "Manish",
"middle": [],
"last": "Shrivastava",
"suffix": ""
},
{
"first": "Vasudeva",
"middle": [],
"last": "Varma",
"suffix": ""
}
],
"year": 2016,
"venue": "COLING 2016, 26th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers",
"volume": "",
"issue": "",
"pages": "2482--2491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Ameya Prabhu, Manish Shrivastava, and Vasudeva Varma. 2016. Towards sub-word level compo- sitions for sentiment analysis of hindi-english code mixed text. In Nicoletta Calzolari, Yuji Matsumoto, and Rashmi Prasad, editors, COLING 2016, 26th International Conference on Computational Linguistics, Proceed- ings of the Conference: Technical Papers, December 11-16, 2016, Osaka, Japan, pages 2482-2491. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentiment analysis and subjectivity",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2010,
"venue": "Handbook of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "627--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2010. Sentiment analysis and subjectivity. In Nitin Indurkhya and Fred J. Damerau, editors, Handbook of Natural Language Processing, Second Edition, pages 627-666. Chapman and Hall/CRC.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The automatic creation of literature abstracts",
"authors": [
{
"first": "Hans",
"middle": [],
"last": "Peter Luhn",
"suffix": ""
}
],
"year": 1958,
"venue": "IBM J. Res. Dev",
"volume": "2",
"issue": "2",
"pages": "159--165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM J. Res. Dev., 2(2):159-165.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets",
"authors": [
{
"first": "Parth",
"middle": [],
"last": "Patwa",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [],
"last": "Aguilar",
"suffix": ""
},
{
"first": "Sudipta",
"middle": [],
"last": "Kar",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Pandey",
"suffix": ""
},
{
"first": "Pykl",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "Bj\u00f6rn",
"middle": [],
"last": "Gamb\u00e4ck",
"suffix": ""
},
{
"first": "Tanmoy",
"middle": [],
"last": "Chakraborty",
"suffix": ""
},
{
"first": "Thamar",
"middle": [],
"last": "Solorio",
"suffix": ""
},
{
"first": "Amitava",
"middle": [],
"last": "Das",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parth Patwa, Gustavo Aguilar, Sudipta Kar, Suraj Pandey, Srinivas PYKL, Bj\u00f6rn Gamb\u00e4ck, Tanmoy Chakraborty, Thamar Solorio, and Amitava Das. 2020. Semeval-2020 task 9: Overview of sentiment analysis of code-mixed tweets. In Proceedings of the 14th International Workshop on Semantic Evaluation (SemEval-2020), Barcelona, Spain, December. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Scikit-learn: Machine learning in python",
"authors": [
{
"first": "Fabian",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "Ga\u00ebl",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "Bertrand",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "Mathieu",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "Ron",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "Jake",
"middle": [],
"last": "VanderPlas",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Math- ieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake VanderPlas, Alexandre Passos, David Cour- napeau, Matthieu Brucher, Matthieu Perrot, and Edouard Duchesnay. 2011. Scikit-learn: Machine learning in python. J. Mach. Learn. Res., 12:2825-2830.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text normalization of code mix and sentiment analysis",
"authors": [
{
"first": "Shashank",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Rakesh Chandra Balabantaray ;",
"middle": [],
"last": "Srinivas",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Sabu",
"suffix": ""
},
{
"first": "Michal",
"middle": [],
"last": "Thampi",
"suffix": ""
},
{
"first": "Oge",
"middle": [],
"last": "Wozniak",
"suffix": ""
},
{
"first": "Dilip",
"middle": [],
"last": "Marques",
"suffix": ""
},
{
"first": "Sartaj",
"middle": [],
"last": "Krishnaswamy",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Sahni",
"suffix": ""
},
{
"first": "Hideyuki",
"middle": [],
"last": "Callegari",
"suffix": ""
},
{
"first": "Zoran",
"middle": [
"S"
],
"last": "Takagi",
"suffix": ""
},
{
"first": "Vinod",
"middle": [
"M"
],
"last": "Bojkovic",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Neeli",
"suffix": ""
},
{
"first": "Jose",
"middle": [
"M Alcaraz"
],
"last": "Prasad",
"suffix": ""
},
{
"first": "Joal",
"middle": [],
"last": "Calero",
"suffix": ""
},
{
"first": "Xinyu",
"middle": [],
"last": "Rodrigues",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Que",
"suffix": ""
}
],
"year": 2015,
"venue": "2015 International Conference on Advances in Computing, Communications and Informatics",
"volume": "",
"issue": "",
"pages": "1468--1473",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashank Sharma, PYKL Srinivas, and Rakesh Chandra Balabantaray. 2015. Text normalization of code mix and sentiment analysis. In Jaime Lloret Mauri, Sabu M. Thampi, Michal Wozniak, Oge Marques, Dilip Krish- naswamy, Sartaj Sahni, Christian Callegari, Hideyuki Takagi, Zoran S. Bojkovic, Vinod M., Neeli R. Prasad, Jose M. Alcaraz Calero, Joal Rodrigues, Xinyu Que, Natarajan Meghanathan, Ravi Sandhu, and Edward Au, editors, 2015 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2015, Kochi, India, August 10-13, 2015, pages 1468-1473. IEEE.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan. N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>Negative</td><td>890</td></tr><tr><td/><td>(30%)</td></tr><tr><td>Neutral</td><td>1128</td></tr><tr><td/><td>(38%)</td></tr><tr><td>Positive</td><td>982</td></tr><tr><td/><td>(32%)</td></tr><tr><td>Total</td><td>3000</td></tr><tr><td>: Train</td><td/></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"text": "",
"type_str": "table",
"content": "<table><tr><td colspan=\"2\">Negative 4992</td></tr><tr><td/><td>(29%)</td></tr><tr><td>Neutral</td><td>6392</td></tr><tr><td/><td>(38%)</td></tr><tr><td>Positive</td><td>5616</td></tr><tr><td/><td>(33%)</td></tr><tr><td>Total</td><td>17000</td></tr><tr><td>: Development</td><td/></tr></table>",
"html": null,
"num": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td/><td/><td/><td colspan=\"3\">: Dev Set Results</td><td/><td/><td/></tr><tr><td/><td colspan=\"9\">SVM1 (Char n-gram) SVM1 (Word n-gram) SVM1 (Char + Word n-gram)</td></tr><tr><td/><td colspan=\"2\">Pred Pred</td><td>Pred</td><td colspan=\"2\">Pred Pred</td><td>Pred</td><td colspan=\"2\">Pred Pred</td><td>Pred</td></tr><tr><td/><td colspan=\"2\">NEG NEU</td><td>POS</td><td colspan=\"2\">NEG NEU</td><td>POS</td><td colspan=\"2\">NEG NEU</td><td>POS</td></tr><tr><td colspan=\"2\">True NEG 631</td><td>300</td><td>56</td><td>623</td><td>310</td><td>54</td><td>652</td><td>270</td><td>65</td></tr><tr><td colspan=\"2\">True NEU 252</td><td>763</td><td>272</td><td>274</td><td>713</td><td>300</td><td>260</td><td>722</td><td>305</td></tr><tr><td>True POS</td><td>74</td><td>342</td><td>710</td><td>91</td><td>309</td><td>726</td><td>93</td><td>320</td><td>713</td></tr></table>",
"html": null,
"num": null
},
"TABREF5": {
"text": "Confusion Matrix for SVM Run 1 on Dev Set",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
},
"TABREF7": {
"text": "Confusion Matrix for SVM Run 2 on Dev Set",
"type_str": "table",
"content": "<table><tr><td/><td>BERT</td><td/></tr><tr><td colspan=\"3\">Pred Pred Pred</td></tr><tr><td colspan=\"3\">NEG NEU POS</td></tr><tr><td>True NEG 600</td><td>304</td><td>83</td></tr><tr><td>True NEU 247</td><td>699</td><td>341</td></tr><tr><td>True POS 99</td><td>261</td><td>766</td></tr></table>",
"html": null,
"num": null
},
"TABREF8": {
"text": "",
"type_str": "table",
"content": "<table><tr><td/><td colspan=\"4\">SVM1 SVM2 BERT Best System</td></tr><tr><td>F1</td><td>0.674</td><td>0.678</td><td>0.342</td><td>0.75</td></tr><tr><td colspan=\"2\">Rank -</td><td>21/62</td><td>-</td><td>1/62</td></tr><tr><td>: Confusion Matrix for BERT on Dev</td><td/><td/><td/><td/></tr><tr><td>Set</td><td/><td/><td/><td/></tr></table>",
"html": null,
"num": null
},
"TABREF9": {
"text": "Official Results on Test Set",
"type_str": "table",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}