| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T01:06:10.433568Z" |
| }, |
| "title": "DFKI SLT at GermEval 2021: Multilingual Pre-training and Data Augmentation for the Classification of Toxicity in Social Media Comments", |
| "authors": [ |
| { |
| "first": "Remi", |
| "middle": [], |
| "last": "Calizzano", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "DFKI GmbH Berlin", |
| "location": { |
| "country": "Germany" |
| } |
| }, |
| "email": "remi.calizzano@dfki.de" |
| }, |
| { |
| "first": "Malte", |
| "middle": [], |
| "last": "Ostendorff", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "DFKI GmbH", |
| "location": { |
| "settlement": "Berlin", |
| "country": "Germany" |
| } |
| }, |
| "email": "malte.ostendorff@dfki.de" |
| }, |
| { |
| "first": "Georg", |
| "middle": [], |
| "last": "Rehm", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "DFKI GmbH", |
| "location": { |
| "settlement": "Berlin", |
| "country": "Germany" |
| } |
| }, |
| "email": "georg.rehm@dfki.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We present our submission to the first subtask of GermEval 2021 (classification of German Facebook comments as toxic or not). Binary sequence classification is a standard NLP task with known state-of-the-art methods. Therefore, we focus on data preparation using two different techniques: task-specific pre-training and data augmentation. First, we pre-train multilingual transformers (XLM-RoBERTa and mT5) on 12 hatespeech detection datasets in nine different languages. In terms of F1, we notice an improvement of 10% on average when using task-specific pre-training. Second, we perform data augmentation by labelling unlabelled comments, taken from Facebook, to increase the size of the training dataset by 79%. Models trained on the augmented training dataset obtain on average a +0.0282 (+5%) higher F1 score than models trained on the original training dataset. Finally, the combination of the two techniques allows us to obtain an F1 score of 0.6899 with XLM-RoBERTa and 0.6859 with mT5. The code of the project is available at: https://github.com/airKlizz/germeval2021toxic.",
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We present our submission to the first subtask of GermEval 2021 (classification of German Facebook comments as toxic or not). Binary sequence classification is a standard NLP task with known state-of-the-art methods. Therefore, we focus on data preparation using two different techniques: task-specific pre-training and data augmentation. First, we pre-train multilingual transformers (XLM-RoBERTa and mT5) on 12 hatespeech detection datasets in nine different languages. In terms of F1, we notice an improvement of 10% on average when using task-specific pre-training. Second, we perform data augmentation by labelling unlabelled comments, taken from Facebook, to increase the size of the training dataset by 79%. Models trained on the augmented training dataset obtain on average a +0.0282 (+5%) higher F1 score than models trained on the original training dataset. Finally, the combination of the two techniques allows us to obtain an F1 score of 0.6899 with XLM-RoBERTa and 0.6859 with mT5. The code of the project is available at: https://github.com/airKlizz/germeval2021toxic.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Toxicity classification, or, more generally, hatespeech detection, has become a highly important topic due to the explosion of social media use. The automation of this task is a challenge for the NLP field, with an increasing amount of research on this subject (Aluru et al., 2020; Corazza et al., 2020) . The GermEval series has already looked into various aspects related to the detection of German language hatespeech with two shared tasks on offensive language identification (Wiegand et al., 2018; Stru\u00df et al., 2019) . The first subtask of GermEval 2021 follows in these footsteps with the classification of toxic comments.",
| "cite_spans": [ |
| { |
| "start": 260, |
| "end": 279, |
| "text": "Aluru et al., 2020;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 280, |
| "end": 301, |
| "text": "Corazza et al., 2020)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 478, |
| "end": 500, |
| "text": "(Wiegand et al., 2018;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 501, |
| "end": 520, |
| "text": "Stru\u00df et al., 2019)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We want to take advantage of the proliferation of hatespeech datasets for various languages created in the last couple of years. Additionally, a number of multilingual language models have been published in the meantime (Conneau et al., 2020; Xue et al., 2021) with a high capacity for cross-lingual transfer. We use multilingual models and pre-train them on a multilingual dataset created out of 12 datasets for nine different languages on toxicity and hatespeech detection. We evaluate whether performing this type of pre-training on multilingual models can improve their performance. We assume that the cross-lingual transfer capacity of the multilingual models can be applied to task-specific pre-training and that this will improve final performance on the German-only dataset of the shared task. Furthermore, we perform data augmentation by labelling unlabelled data, retrieved from Facebook, using one of the multilingual models pre-trained and fine-tuned on the toxicity classification task. As the dataset of the shared task contains only 3244 examples, we hope that extending the number of training examples can improve the overall performance of the models.",
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 243, |
| "text": "(Conneau et al., 2020;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 244, |
| "end": 261, |
| "text": "Xue et al., 2021;", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In summary, our main contributions are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "\u2022 Comparison of the performance of two multilingual models (XLM-RoBERTa and mT5) against a German-specific language model (GBERT) on a German binary classification task, with and without task-specific pre-training for the multilingual models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 Evaluation of the models when using data augmentation to increase the size of the dataset used for fine-tuning.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "The rest of this article is structured as follows. Section 2 presents our methodology for task-specific pre-training and data augmentation. Section 3 introduces the task as well as the dataset and describes the models and training scenarios. Sections 4 and 5 present and discuss the results obtained in these training scenarios. Concluding remarks are provided in Section 6.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "2.1 Task-specific pre-training Toxicity or, more generally, hatespeech classification is an NLP task that is supported through multiple datasets in multiple languages. Although the specific task may differ from one dataset to another due to the type of content and annotations used, the features used to classify sequences are similar.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
"text": "Pre-training is a technique that often enables state-of-the-art performance in many NLP tasks (Sarlin et al., 2020) . Task-specific pre-training has proven effective at producing models that capture task-specific features and thus exhibit better performance.",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 115, |
| "text": "(Sarlin et al., 2020)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We want to profit from the many existing hatespeech classification datasets by using these datasets to perform task-specific pre-training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
"text": "We adapt task-specific pre-training to toxicity classification by taking 12 toxicity or hatespeech classification datasets and training language models on these datasets before fine-tuning them on the dataset of the shared task (Table 1). Our task-specific pre-training dataset is composed of a total of 105,142 examples in nine different languages.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 228, |
| "end": 236, |
| "text": "(Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
"text": "To take advantage of this task-specific multilingual pre-training, we work with multilingual models. Indeed, these models have already demonstrated their ability to transfer what they have learned in one language to other languages (Hu et al., 2020) . In this work, the models will be fine-tuned on the dataset of the shared task, which is in German only; however, we assume that the multilingual models can benefit from the task-specific pre-training.",
| "cite_spans": [ |
| { |
| "start": 234, |
| "end": 251, |
| "text": "(Hu et al., 2020)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "2" |
| }, |
| { |
"text": "In addition to the task-specific pre-training, we increase the size of the shared task dataset using data labelling. We use our best performing model, fine-tuned on the toxicity classification task of the shared task, to label unlabelled Facebook comments we collected from German political talk shows. Table 1: List of all the datasets used for the task-specific pre-training with the number of examples and the languages (code ISO 639-3) for each dataset: [?] (2017), 1,528, eng; Wiegand et al. (2018), 5,009, deu; Mandl et al. (2019), 14,336, eng/deu/hin; Ousidhoum et al. (2019), 13,014, ara/eng/fra; de Gibert et al. (2018), 10,944, eng; Davidson et al. (2017), 24,783, eng; Alfina et al. (2017), 713, ind; Ross et al. (2016), 469, deu; Mulki et al. (2019), 5,846, apc; Nascimento et al. (2019), 7,672, por; Ibrohim and Budi (2019), 13,169, ind. In total, we collected 5563 Facebook comments added",
| "cite_spans": [ |
| { |
| "start": 370, |
| "end": 391, |
| "text": "Wiegand et al. (2018)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 402, |
| "end": 421, |
| "text": "Mandl et al. (2019)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 443, |
| "end": 466, |
| "text": "Ousidhoum et al. (2019)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 520, |
| "end": 542, |
| "text": "Davidson et al. (2017)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 580, |
| "end": 598, |
| "text": "Ross et al. (2016)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 697, |
| "end": 704, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data augmentation", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "to posts from the pages of ZDF heute 1, Panorama 2, Maischberger 3, and hart aber fair 4. mT5 performs better than XLM-RoBERTa on the final toxicity classification task when simply using task-specific pre-training and fine-tuning; therefore, we use mT5 to compute the probability that a comment is toxic. We only keep the comments classified as toxic or non-toxic with a probability larger than 0.8. Figure 1 shows examples of comments with their toxicity probabilities. This way, we label 2044 comments, which we add to the original shared task dataset. Table 2 compares the original dataset with the one we created, as well as with the augmented dataset, i.e., the combination of the original dataset and the one we created using data augmentation.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 413, |
| "end": 421, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 567, |
| "end": 574, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data augmentation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "3 Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data augmentation", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The first subtask of GermEval 2021 is the classification of Facebook comments from German political talk shows with regard to their toxicity. Figure 2 shows two examples. Risch et al. (2021) provide a detailed description of the dataset. We split the original dataset into a train and an evaluation portion to be able to evaluate our models during training. We use 80% of the original dataset for training and 20% for the evaluation, for which we use precision, recall, and macro-average F1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 142, |
| "end": 150, |
| "text": "Figure 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Task and dataset", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Hat vermutlich auch \u00fcberhaupt nichts mit Merkels Desastr\u00f6ser Politik zu tun (0.8790); Frage: Wenn die Tage k\u00fcrzer werden, das Gehalt aber gleich bleibt, reicht es dann l\u00e4nger? (0.0541); Die Haus\u00e4rzte bekommen Astra nicht verimpft und die Impfzentren bleiben halb leer. Impfturbo? (0.5627); Na was sind die B\u00fcrger erst entt\u00e4uscht von euch allen samt dem Gremium.... (0.6742)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Toxicity probability", |
| "sec_num": null |
| }, |
| { |
"text": "Figure 1: Samples of comments collected on Facebook posts from German political talk shows with their toxicity probability. We only keep the comments classified as toxic or non-toxic with a probability larger than 0.8. Table 2: Comparison of the original shared task dataset, the dataset created using data augmentation, and the augmented dataset, i.e., the combination of the other two datasets.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 224, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "0.6742", |
| "sec_num": null |
| }, |
| { |
"text": "The task-specific pre-training is based on a multilingual dataset (Section 2.1). We picked two multilingual Transformer models: XLM-RoBERTa and mT5. In addition, we compare the multilingual models with GBERT, a German Transformer-based language model, which we evaluate with our data augmentation method.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "GBERT GBERT (Chan et al., 2020) is a German language model using the same architecture as BERT (Devlin et al., 2019) . GBERT is an encoder-only Transformer model. It was trained using masked language modeling with whole word masking, i.e., masking all of the tokens that make up a word. The pre-training corpus consists of German texts from Wikipedia, Common Crawl (Ortiz Su\u00e1rez et al., 2019), OPUS (Tiedemann, 2012), and Open Legal Data (Ostendorff et al., 2020) . GBERT outperforms the state-of-the-art on the GermEval 2018 hatespeech detection task and the GermEval 2014 NER task (Chan et al., 2020) . We use the GBERT Base version.",
| "cite_spans": [ |
| { |
| "start": 12, |
| "end": 30, |
| "text": "(Chan et al., 2020", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 96, |
| "end": 117, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 456, |
| "end": 481, |
| "text": "(Ostendorff et al., 2020)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 601, |
| "end": 620, |
| "text": "(Chan et al., 2020)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "XLM-RoBERTa XLM-RoBERTa (Conneau et al., 2020) is the multilingual version of RoBERTa (Liu et al., 2019) . It was trained on the Common Crawl corpus in 100 languages using masked language modeling. We choose XLM-RoBERTa over Multilingual BERT 5 because XLM-RoBERTa outperforms Multilingual BERT on a variety of cross-lingual benchmarks (Conneau et al., 2020) . We use the Base version of XLM-RoBERTa. mT5 mT5 (Xue et al., 2021 ) is a multilingual variant of T5 (Raffel et al., 2020) covering 101 languages. It uses the same architecture as T5, an encoder-decoder Transformer model. Since mT5 is a text-to-text model, we transform the binary classification task into a text generation task: we train mT5 to generate \"neutral\" when the input corresponds to a non-toxic comment and \"toxic\" when it is toxic. We also add the task prefix \"speech review\" at the beginning of each input sequence. Like T5, mT5 comes in five sizes: Small, Base, Large, XL, and XXL. The XXL version of mT5 performs better than other multilingual models such as XLM-RoBERTa on many multilingual benchmarks; however, due to computational limits, we use the mT5 Base version, which produces results comparable to XLM-RoBERTa (Xue et al., 2021) . 5 https://github.com/google-research/bert/blob/master/multilingual.md",
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 46, |
| "text": "(Conneau et al., 2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 86, |
| "end": 104, |
| "text": "(Liu et al., 2019)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 413, |
| "end": 435, |
| "text": "(Conneau et al., 2020)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 486, |
| "end": 503, |
| "text": "(Xue et al., 2021", |
| "ref_id": null |
| }, |
| { |
| "start": 538, |
| "end": 559, |
| "text": "(Raffel et al., 2020)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 1283, |
| "end": 1301, |
| "text": "(Xue et al., 2021)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "To evaluate the benefit of the task-specific pretraining and data augmentation, we train the models in four different scenarios.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Fine-tuning only We first fine-tune the three models on the original dataset of the shared task. These models are used as baselines to evaluate the two methodologies we propose.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "With task-specific pre-training In this scenario, we pre-train mT5 and XLM-RoBERTa on the task-specific pre-training dataset (Section 2.1). Figure 2 samples (Comment, Toxicity): Die SPD, Verbrecher,die haben Angst vor den Wahlen in den neuen Bundesl\u00e4ndern,weg mit Euch. (1); Ich schmei\u00df mich weg... 800 Euro sollen f\u00fcr ein \"\"vern\u00fcnftiges\"\" Leben ausreichen? (0). The task-specific pre-training consists of training the models with the same objective as the fine-tuning task",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Figure 2: Two comments from the original GermEval21 shared task dataset with their toxicity labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "which is the classification of toxic comments. As the combination of those datasets is not balanced, we randomly remove non-toxic samples to arrive at the same number of toxic and non-toxic samples. Afterwards, we fine-tune the task-specific pre-trained models as in the first scenario.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "With data augmentation This scenario corresponds to the first one, except that we use the augmented dataset instead of the original shared task dataset. The augmented dataset combines the original and one additional dataset (Table 2).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 218, |
| "end": 227, |
| "text": "(Table 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "With task-specific pre-training and data augmentation This scenario combines the second and third scenarios. We fine-tune the task-specific pre-trained models on the augmented dataset. We use the HuggingFace Transformers library (Wolf et al., 2020) to train the models. GBERT and XLM-RoBERTa are trained using the hyperparameter search method 6 with Optuna as the optimization framework 7 , the maximization of the F1 metric as the objective, and the number of trials set to 10. As mT5 requires more training time, we do not use hyperparameter search for mT5 but fixed parameters that we found to be the best: a learning rate of 5e-5, a batch size of 16, and 3 training epochs. In the end, we select the best model with regard to the F1 score.",
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 247, |
| "text": "(Wolf et al., 2020)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 386, |
| "end": 387, |
| "text": "7", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "To deal with the imbalanced training dataset, we use class weights for GBERT and XLM-RoBERTa and oversample the dataset for mT5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training scenarios", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "We evaluate the models on the test dataset provided by the organizers of the shared task after the training phase and the submissions (see Table 3).",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 139, |
| "end": 146, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "First, adding task-specific pre-training and/or using data augmentation improves the results for both XLM-RoBERTa and mT5. Training with task-specific pre-training and data augmentation improves the F1 score by 0.0490 (+8%) for XLM-RoBERTa and by 0.0836 (+14%) for mT5. Table 3: F1, recall, and precision results of each model on the test dataset of the shared task for each training scenario. * models used for our submissions. Results differ slightly from the submissions because we retrained all the models for the paper. GBERT",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 276, |
| "end": 283, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "also produces slightly better results when using the augmented dataset for fine-tuning: the F1 score improves by 0.0066 (+1%). Second, for the models fine-tuned only on the original dataset, mT5 obtains the worst results with an F1 score of 0.6023, followed by XLM-RoBERTa with 0.6409, and GBERT with 0.6663. The ranking is the same for the models fine-tuned on the augmented dataset but with a smaller gap between scores: the F1 scores for mT5, XLM-RoBERTa, and GBERT are 0.6533, 0.6680, and 0.6729, respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "Third, despite mT5 performing worse than XLM-RoBERTa by 0.0386 when fine-tuned on the original dataset, the results of the two models with task-specific pre-training and data augmentation are very similar, with a difference in F1 scores of less than 0.1%. This correlates with the fact that the task-specific pre-training particularly improves the results of mT5, with an increase in F1 of 0.0776 (+13%) compared to an increase of 0.0376 (+6%) for XLM-RoBERTa.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "Overall, XLM-RoBERTa and mT5 with task-specific pre-training and data augmentation are the models that obtain the best F1 scores, with 0.6899 and 0.6859, respectively.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4" |
| }, |
| { |
"text": "In the two scenarios where only German data is used (Fine-tuning only and With data augmentation), GBERT performs better than XLM-RoBERTa and mT5. This is easily explained by the fact that GBERT was pre-trained only on German data, in contrast to mT5 and XLM-RoBERTa. However, the small difference in F1 scores on the augmented dataset (With data augmentation) implies that, with more data, multilingual models can perform as well as monolingual models. Additionally, the task-specific pre-training of mT5 and XLM-RoBERTa on a multilingual dataset not only compensates for their poorer performance on the German-only dataset but even allows them to perform better than GBERT: the multilingual models benefit from hatespeech classification datasets in other languages, which the German-only model cannot exploit. It is also important to note that XLM-RoBERTa and mT5 use more recent architectures and/or pre-training methods than GBERT, which may partly explain why GBERT's results are worse than those of XLM-RoBERTa and mT5.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "Moreover, as noted in Section 4, XLM-RoBERTa does not benefit from the task-specific pre-training as much as mT5. Our hypothesis is that, having fewer trainable parameters, XLM-RoBERTa (270M parameters) does not have as much capacity as mT5 (580M parameters) to benefit from all the examples on which the models are pre-trained. The number of parameters is an important aspect to take into consideration when doing pre-training in general, and we observe this again in our experiments with task-specific pre-training.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
"text": "We describe the methods used for our submissions to the GermEval 2021 toxic comment classification task. Specifically, we show that hatespeech detection datasets in other languages can be used to improve the performance of multilingual models through task-specific pre-training. With this method, the multilingual models (XLM-RoBERTa and mT5) perform even better than GBERT, a German-specific language model, by +0.0576 (+10%) on average in terms of F1. We also show that by increasing the shared task dataset through automatically labelling additional comments from Facebook, we are able to improve the results of the three models we evaluated (GBERT, XLM-RoBERTa, mT5) by 5% on average.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
"text": "We have shown that multilingual models can perform as well as or even better than monolingual models through task-specific multilingual pre-training. This particularly applies to tasks for which many datasets are available in languages other than that of the fine-tuning dataset, and where the fine-tuning dataset is relatively small (less than 10,000 samples), as is the case for the German toxic comment classification task.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In addition, multilingual models have some other advantages. First, in a production setting, it might not be feasible to deploy multiple monolingual models due to resource constraints. Replacing multiple monolingual models with a single multilingual model can be a solution. Second, multilingual models, due to their cross-lingual transfer capacity, can be used in a language other than the language of the training dataset. This allows the creation of models for languages for which obtaining training data can be difficult.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments co-located with KONVENS", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://www.facebook.com/ZDFheute/ 2 https://www.facebook.com/panorama.de 3 https://www.facebook.com/maischberger 4 https://www.facebook.com/hartaberfairARD", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
"text": "https://huggingface.co/transformers/main_classes/trainer.html#transformers.Trainer.hyperparameter_search 7 https://optuna.org",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research presented in this paper is funded by the German Federal Ministry of Education and Research (BMBF) through the project QURATOR (http://qurator.ai) (Unternehmen Region, Wachstumskern, grant no. 03WKDA1A). In addition, the authors would like to thank Melina Plakidis for her contribution regarding the data labelling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Hate speech detection in the indonesian language: A dataset and preliminary study", |
| "authors": [ |
| { |
| "first": "Rio", |
| "middle": [], |
| "last": "Ika Alfina", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohamad", |
| "middle": [ |
| "Ivan" |
| ], |
| "last": "Mulia", |
| "suffix": "" |
| }, |
| { |
| "first": "Yudo", |
| "middle": [], |
| "last": "Fanany", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ekanata", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 International Conference on Advanced Computer Science and Information Systems (ICACSIS)", |
| "volume": "", |
| "issue": "", |
| "pages": "233--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ika Alfina, Rio Mulia, Mohamad Ivan Fanany, and Yudo Ekanata. 2017. Hate speech detection in the indonesian language: A dataset and preliminary study. In 2017 International Conference on Ad- vanced Computer Science and Information Systems (ICACSIS), pages 233-238. IEEE.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Deep learning models for multilingual hate speech detection", |
| "authors": [ |
| { |
| "first": "Sai", |
| "middle": [ |
| "Saket" |
| ], |
| "last": "Aluru", |
| "suffix": "" |
| }, |
| { |
| "first": "Binny", |
| "middle": [], |
| "last": "Mathew", |
| "suffix": "" |
| }, |
| { |
| "first": "Punyajoy", |
| "middle": [], |
| "last": "Saha", |
| "suffix": "" |
| }, |
| { |
| "first": "Animesh", |
| "middle": [], |
| "last": "Mukherjee", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sai Saket Aluru, Binny Mathew, Punyajoy Saha, and Animesh Mukherjee. 2020. Deep learning models for multilingual hate speech detection. ArXiv, abs/2004.06465.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Automatic Classification of Abusive Language and Personal Attacks in Various Forms of Online Communication", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Bourgonje", |
| "suffix": "" |
| }, |
| { |
| "first": "Julian", |
| "middle": [ |
| "Moreno" |
| ], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Georg", |
| "middle": [], |
| "last": "Rehm", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Language Technologies for the Challenges of the Digital Age: 27th International Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "180--191", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Bourgonje, Julian Moreno Schneider, and Georg Rehm. 2018. Automatic Classification of Abusive Language and Personal Attacks in Various Forms of Online Communication. In Language Technologies for the Challenges of the Digital Age: 27th International Conference, GSCL 2017, Berlin, Germany, September 13-14, 2017, Proceedings, number 10713 in Lecture Notes in Artificial Intelligence (LNAI), pages 180-191, Cham, Switzerland. Gesellschaft f\u00fcr Sprachtechnologie und Computerlinguistik e.V., Springer.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "German's next language model", |
| "authors": [ |
| { |
| "first": "Branden", |
| "middle": [], |
| "last": "Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Schweter", |
| "suffix": "" |
| }, |
| { |
| "first": "Timo", |
| "middle": [], |
| "last": "M\u00f6ller", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Branden Chan, Stefan Schweter, and Timo M\u00f6ller. 2020. German's next language model.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "CONAN -COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech", |
| "authors": [ |
| { |
| "first": "Yi-Ling", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Elizaveta", |
| "middle": [], |
| "last": "Kuzmenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Serra", |
| "middle": [ |
| "Sinem" |
| ], |
| "last": "Tekiroglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Guerini", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2819--2829", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through nichesourcing: a multilingual dataset of responses to fight online hate speech. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2819-2829, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Unsupervised cross-lingual representation learning at scale", |
| "authors": [ |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Conneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kartikay", |
| "middle": [], |
| "last": "Khandelwal", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Vishrav", |
| "middle": [], |
| "last": "Chaudhary", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Wenzek", |
| "suffix": "" |
| }, |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Guzm\u00e1n", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Myle", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In ACL.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A multilingual evaluation for online hate speech detection", |
| "authors": [ |
| { |
| "first": "Michele", |
| "middle": [], |
| "last": "Corazza", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefano", |
| "middle": [], |
| "last": "Menini", |
| "suffix": "" |
| }, |
| { |
| "first": "Elena", |
| "middle": [], |
| "last": "Cabrio", |
| "suffix": "" |
| }, |
| { |
| "first": "Sara", |
| "middle": [], |
| "last": "Tonelli", |
| "suffix": "" |
| }, |
| { |
| "first": "Serena", |
| "middle": [], |
| "last": "Villata", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ACM Trans. Internet Technol", |
| "volume": "20", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michele Corazza, Stefano Menini, Elena Cabrio, Sara Tonelli, and Serena Villata. 2020. A multilingual evaluation for online hate speech detection. ACM Trans. Internet Technol., 20(2).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Automated hate speech detection and the problem of offensive language", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Davidson", |
| "suffix": "" |
| }, |
| { |
| "first": "Dana", |
| "middle": [], |
| "last": "Warmsley", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Macy", |
| "suffix": "" |
| }, |
| { |
| "first": "Ingmar", |
| "middle": [], |
| "last": "Weber", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17", |
| "volume": "", |
| "issue": "", |
| "pages": "512--515", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, pages 512-515.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Detecting online hate speech using context aware models", |
| "authors": [ |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruihong", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "260--266", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lei Gao and Ruihong Huang. 2017. Detecting online hate speech using context aware models. In Proceedings of the International Conference Recent Advances in Natural Language Processing, RANLP 2017, pages 260-266, Varna, Bulgaria. INCOMA Ltd.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Hate Speech Dataset from a White Supremacy Forum", |
| "authors": [ |
| { |
| "first": "Ona", |
| "middle": [], |
| "last": "De Gibert", |
| "suffix": "" |
| }, |
| { |
| "first": "Naiara", |
| "middle": [], |
| "last": "Perez", |
| "suffix": "" |
| }, |
| { |
| "first": "Aitor", |
| "middle": [], |
| "last": "Garc\u00eda-Pablos", |
| "suffix": "" |
| }, |
| { |
| "first": "Montse", |
| "middle": [], |
| "last": "Cuadros", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)", |
| "volume": "", |
| "issue": "", |
| "pages": "11--20", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ona de Gibert, Naiara Perez, Aitor Garc\u00eda-Pablos, and Montse Cuadros. 2018. Hate Speech Dataset from a White Supremacy Forum. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 11-20, Brussels, Belgium. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization", |
| "authors": [ |
| { |
| "first": "Junjie", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Siddhant", |
| "suffix": "" |
| }, |
| { |
| "first": "Graham", |
| "middle": [], |
| "last": "Neubig", |
| "suffix": "" |
| }, |
| { |
| "first": "Orhan", |
| "middle": [], |
| "last": "Firat", |
| "suffix": "" |
| }, |
| { |
| "first": "Melvin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Multi-label hate speech and abusive language detection in Indonesian twitter", |
| "authors": [ |
| { |
| "first": "Muhammad", |
| "middle": [], |
| "last": "Okky Ibrohim", |
| "suffix": "" |
| }, |
| { |
| "first": "Indra", |
| "middle": [], |
| "last": "Budi", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Third Workshop on Abusive Language Online", |
| "volume": "", |
| "issue": "", |
| "pages": "46--57", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Muhammad Okky Ibrohim and Indra Budi. 2019. Multi-label hate speech and abusive language detection in Indonesian twitter. In Proceedings of the Third Workshop on Abusive Language Online, pages 46-57, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Pre-training via paraphrasing", |
| "authors": [ |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Gargi", |
| "middle": [], |
| "last": "Ghosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Armen", |
| "middle": [], |
| "last": "Aghajanyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sida", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, and Luke Zettlemoyer. 2020. Pre-training via paraphrasing.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Task-specific objectives of pre-trained language models for dialogue adaptation", |
| "authors": [ |
| { |
| "first": "Junlong", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhuosheng", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hai", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiang", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ArXiv", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junlong Li, Zhuosheng Zhang, Hai Zhao, Xi Zhou, and Xiang Zhou. 2020. Task-specific objectives of pre-trained language models for dialogue adaptation. ArXiv, abs/2009.04984.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Multilingual denoising pretraining for neural machine translation", |
| "authors": [ |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiatao", |
| "middle": [], |
| "last": "Gu", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Xian", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Sergey", |
| "middle": [], |
| "last": "Edunov", |
| "suffix": "" |
| }, |
| { |
| "first": "Marjan", |
| "middle": [], |
| "last": "Ghazvininejad", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "8", |
| "issue": "", |
| "pages": "726--742", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, M. Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints", |
| "authors": [ |
| { |
| "first": "Yinhan", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Myle", |
| "middle": [], |
| "last": "Ott", |
| "suffix": "" |
| }, |
| { |
| "first": "Naman", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Jingfei", |
| "middle": [], |
| "last": "Du", |
| "suffix": "" |
| }, |
| { |
| "first": "Mandar", |
| "middle": [], |
| "last": "Joshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Danqi", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Lewis", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1907.11692" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv e-prints, page arXiv:1907.11692.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Mandl", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandip", |
| "middle": [], |
| "last": "Modha", |
| "suffix": "" |
| }, |
| { |
| "first": "Prasenjit", |
| "middle": [], |
| "last": "Majumder", |
| "suffix": "" |
| }, |
| { |
| "first": "Daksh", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohana", |
| "middle": [], |
| "last": "Dave", |
| "suffix": "" |
| }, |
| { |
| "first": "Chintak", |
| "middle": [], |
| "last": "Mandlia", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Patel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19", |
| "volume": "", |
| "issue": "", |
| "pages": "14--17", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Mandl, Sandip Modha, Prasenjit Majumder, Daksh Patel, Mohana Dave, Chintak Mandlia, and Aditya Patel. 2019. Overview of the hasoc track at fire 2019: Hate speech and offensive content identification in indo-european languages. In Proceedings of the 11th Forum for Information Retrieval Evaluation, FIRE '19, page 14-17, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "L-hsab: A levantine twitter dataset for hate speech and abusive language", |
| "authors": [ |
| { |
| "first": "Hala", |
| "middle": [], |
| "last": "Mulki", |
| "suffix": "" |
| }, |
| { |
| "first": "Hatem", |
| "middle": [], |
| "last": "Haddad", |
| "suffix": "" |
| }, |
| { |
| "first": "Chedi", |
| "middle": [], |
| "last": "Bechikh Ali", |
| "suffix": "" |
| }, |
| { |
| "first": "Halima", |
| "middle": [], |
| "last": "Alshabani", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the Third Workshop on Abusive Language Online", |
| "volume": "", |
| "issue": "", |
| "pages": "111--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hala Mulki, Hatem Haddad, Chedi Bechikh Ali, and Halima Alshabani. 2019. L-hsab: A levantine twitter dataset for hate speech and abusive language. In Proceedings of the Third Workshop on Abusive Language Online, pages 111-118.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Hate speech detection using brazilian imageboards", |
| "authors": [ |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Nascimento", |
| "suffix": "" |
| }, |
| { |
| "first": "Flavio", |
| "middle": [], |
| "last": "Carvalho", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandre", |
| "middle": [ |
| "Martins" |
| ], |
| "last": "Da Cunha", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [ |
| "Roberto" |
| ], |
| "last": "Viana", |
| "suffix": "" |
| }, |
| { |
| "first": "Gustavo", |
| "middle": [ |
| "Paiva" |
| ], |
| "last": "Guedes", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 25th Brazillian Symposium on Multimedia and the Web, WebMedia '19", |
| "volume": "", |
| "issue": "", |
| "pages": "325--328", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabriel Nascimento, Flavio Carvalho, Alexandre Martins da Cunha, Carlos Roberto Viana, and Gustavo Paiva Guedes. 2019. Hate speech detection using brazilian imageboards. In Proceedings of the 25th Brazillian Symposium on Multimedia and the Web, WebMedia '19, page 325-328, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures", |
| "authors": [ |
| { |
| "first": "Pedro Javier Ortiz", |
| "middle": [], |
| "last": "Su\u00e1rez", |
| "suffix": "" |
| }, |
| { |
| "first": "Beno\u00eet", |
| "middle": [], |
| "last": "Sagot", |
| "suffix": "" |
| }, |
| { |
| "first": "Laurent", |
| "middle": [], |
| "last": "Romary", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Beno\u00eet Sagot, and Laurent Romary. 2019. Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. In 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-7), Cardiff, United Kingdom. Leibniz-Institut f\u00fcr Deutsche Sprache.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Towards an open platform for legal information", |
| "authors": [ |
| { |
| "first": "Malte", |
| "middle": [], |
| "last": "Ostendorff", |
| "suffix": "" |
| }, |
| { |
| "first": "Till", |
| "middle": [], |
| "last": "Blume", |
| "suffix": "" |
| }, |
| { |
| "first": "Saskia", |
| "middle": [], |
| "last": "Ostendorff", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, JCDL '20", |
| "volume": "", |
| "issue": "", |
| "pages": "385--388", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Malte Ostendorff, Till Blume, and Saskia Ostendorff. 2020. Towards an open platform for legal information. In Proceedings of the ACM/IEEE Joint Conference on Digital Libraries in 2020, JCDL '20, page 385-388, New York, NY, USA. Association for Computing Machinery.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Multilingual and multi-aspect hate speech analysis", |
| "authors": [ |
| { |
| "first": "Nedjma", |
| "middle": [], |
| "last": "Ousidhoum", |
| "suffix": "" |
| }, |
| { |
| "first": "Zizheng", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongming", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yangqiu", |
| "middle": [], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Dit-Yan", |
| "middle": [], |
| "last": "Yeung", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multilingual and multi-aspect hate speech analysis. In Proceedings of EMNLP. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Exploring the limits of transfer learning with a unified text-totext transformer", |
| "authors": [ |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharan", |
| "middle": [], |
| "last": "Narang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Matena", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanqi", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [ |
| "J" |
| ], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "21", |
| "issue": "140", |
| "pages": "1--67", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Overview of the GermEval 2021 shared task on the identification of toxic, engaging, and fact-claiming comments", |
| "authors": [ |
| { |
| "first": "Julian", |
| "middle": [], |
| "last": "Risch", |
| "suffix": "" |
| }, |
| { |
| "first": "Anke", |
| "middle": [], |
| "last": "Stoll", |
| "suffix": "" |
| }, |
| { |
| "first": "Lena", |
| "middle": [], |
| "last": "Wilms", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wiegand", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments colocated with KONVENS", |
| "volume": "", |
| "issue": "", |
| "pages": "1--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julian Risch, Anke Stoll, Lena Wilms, and Michael Wiegand. 2021. Overview of the GermEval 2021 shared task on the identification of toxic, engaging, and fact-claiming comments. In Proceedings of the GermEval 2021 Shared Task on the Identification of Toxic, Engaging, and Fact-Claiming Comments co-located with KONVENS, pages 1-12.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis", |
| "authors": [ |
| { |
| "first": "Bj\u00f6rn", |
| "middle": [], |
| "last": "Ross", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Rist", |
| "suffix": "" |
| }, |
| { |
| "first": "Guillermo", |
| "middle": [], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "Cabrera", |
| "suffix": "" |
| }, |
| { |
| "first": "Nils", |
| "middle": [], |
| "last": "Kurowsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wojatzki", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of NLP4CMC III: 3rd Workshop on Natural Language Processing for Computer-Mediated Communication", |
| "volume": "17", |
| "issue": "", |
| "pages": "6--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bj\u00f6rn Ross, Michael Rist, Guillermo Carbonell, Benjamin Cabrera, Nils Kurowsky, and Michael Wojatzki. 2016. Measuring the Reliability of Hate Speech Annotations: The Case of the European Refugee Crisis. In Proceedings of NLP4CMC III: 3rd Workshop on Natural Language Processing for Computer-Mediated Communication, volume 17 of Bochumer Linguistische Arbeitsberichte, pages 6-9, Bochum.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Superglue: Learning feature matching with graph neural networks", |
| "authors": [ |
| { |
| "first": "Paul-Edouard", |
| "middle": [], |
| "last": "Sarlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Detone", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomasz", |
| "middle": [], |
| "last": "Malisiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Rabinovich", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the IEEE/CVF conference on computer vision and pattern recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "4938--4947", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paul-Edouard Sarlin, Daniel DeTone, Tomasz Malisiewicz, and Andrew Rabinovich. 2020. Superglue: Learning feature matching with graph neural networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4938-4947.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Towards the Automatic Classification of Offensive Language and Related Phenomena in German Tweets", |
| "authors": [ |
| { |
| "first": "Julian Moreno", |
| "middle": [], |
| "last": "Schneider", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Roller", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Bourgonje", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefanie", |
| "middle": [], |
| "last": "Hegele", |
| "suffix": "" |
| }, |
| { |
| "first": "Georg", |
| "middle": [], |
| "last": "Rehm", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the GermEval Workshop 2018 -Shared Task on the Identification of Offensive Language", |
| "volume": "", |
| "issue": "", |
| "pages": "95--103", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julian Moreno Schneider, Roland Roller, Peter Bourgonje, Stefanie Hegele, and Georg Rehm. 2018. Towards the Automatic Classification of Offensive Language and Related Phenomena in German Tweets. In Proceedings of the GermEval Workshop 2018 - Shared Task on the Identification of Offensive Language, pages 95-103, Vienna, Austria. 21 September 2018.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Overview of germeval task 2, 2019 shared task on the identification of offensive language", |
| "authors": [ |
| { |
| "first": "Julia", |
| "middle": [], |
| "last": "Stru\u00df", |
| "suffix": "" |
| }, |
| { |
| "first": "Melanie", |
| "middle": [], |
| "last": "Siegel", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Ruppenhofer", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wiegand", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Klenner", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Zurich Open Repository and Archive", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Julia Stru\u00df, Melanie Siegel, Josef Ruppenhofer, Michael Wiegand, and Manfred Klenner. 2019. Overview of germeval task 2, 2019 shared task on the identification of offensive language. In Zurich Open Repository and Archive.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Parallel data, tools and interfaces in OPUS", |
| "authors": [ |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "2214--2218", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J\u00f6rg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), pages 2214-2218, Istanbul, Turkey. European Languages Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Overview of the GermEval 2018 shared task on the identification of offensive language", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wiegand", |
| "suffix": "" |
| }, |
| { |
| "first": "Melanie", |
| "middle": [], |
| "last": "Siegel", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Ruppenhofer", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "14th Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Wiegand, Melanie Siegel, and Josef Ruppenhofer. 2018. Overview of the GermEval 2018 shared task on the identification of offensive language. In 14th Conference on Natural Language Processing (KONVENS 2018).", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Davison", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Shleifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "von Platen", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Plu", |
| "suffix": "" |
| }, |
| { |
| "first": "Canwen", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Teven", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Scao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Gugger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariama", |
| "middle": [], |
| "last": "Drame", |
| "suffix": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Lhoest", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "38--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "mT5: A massively multilingual pre-trained text-to-text transformer", |
| "authors": [ |
| { |
| "first": "Linting", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Roberts", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihir", |
| "middle": [], |
| "last": "Kale", |
| "suffix": "" |
| }, |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Siddhant", |
| "suffix": "" |
| }, |
| { |
| "first": "Aditya", |
| "middle": [], |
| "last": "Barua", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Raffel", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "483--498", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 483-498, Online. Association for Computational Linguistics.", |
| "links": null |
| } |
| }, |
| "ref_entries": {} |
| } |
| } |