| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:14:03.417057Z" |
| }, |
| "title": "AMU-EURANOVA at CASE 2021 Task 1: Assessing the stability of multilingual BERT", |
| "authors": [ |
| { |
| "first": "L\u00e9o", |
| "middle": [], |
| "last": "Bouscarrat", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "EURA NOVA", |
| "location": { |
| "settlement": "Marseille", |
| "country": "France" |
| } |
| }, |
| "email": "leo.bouscarrat@euranova.eu" |
| }, |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Bonnefoy", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "EURA NOVA", |
| "location": { |
| "settlement": "Marseille", |
| "country": "France" |
| } |
| }, |
| "email": "antoine.bonnefoy@euranova.eu" |
| }, |
| { |
| "first": "C\u00e9cile", |
| "middle": [], |
| "last": "Capponi", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CNRS", |
| "location": { |
| "settlement": "Marseille", |
| "region": "LIS", |
| "country": "France" |
| } |
| }, |
| "email": "cecile.capponi@lis-lab.fr" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Ramisch", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CNRS", |
| "location": { |
| "settlement": "Marseille", |
| "region": "LIS", |
| "country": "France" |
| } |
| }, |
| "email": "carlos.ramisch@lis-lab.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper explains our participation in task 1 of the CASE 2021 shared task. This task is about multilingual event extraction from news. We focused on sub-task 4, event information extraction. This sub-task has a small training dataset and we fine-tuned a multilingual BERT to solve this sub-task. We studied the instability problem on the dataset and tried to mitigate it.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper explains our participation in task 1 of the CASE 2021 shared task. This task is about multilingual event extraction from news. We focused on sub-task 4, event information extraction. This sub-task has a small training dataset and we fine-tuned a multilingual BERT to solve this sub-task. We studied the instability problem on the dataset and tried to mitigate it.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Event extraction is becoming more and more important as the number of online news increases. This task consists of extracting events from documents, especially news. An event is defined by a group of entities that give some information about the event. Therefore, the goal of this task is to extract, for each event, a group of entities that define the event, such as the place and time of the event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This task is related but still different from named entity recognition (NER) as the issue is to group the entities that are related to the same event, and differentiate those related to different events. This difference makes the task harder and also complicates the annotation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the case of this shared task, the type of events to extract is protests (H\u00fcrriyetoglu et al., 2021a,b) . This shared task is in the continuation of two previous shared tasks at CLEF 2019 (H\u00fcrriyetoglu et al., 2019) and AESPEN (H\u00fcrriyetoglu et al., 2020) . The first one deals with English event extraction with three sub-tasks: document classification, sentence classification, and event information extraction. The second focuses on event sentence coreference identification, whose goal is to group sentences related to the same events.", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 105, |
| "text": "(H\u00fcrriyetoglu et al., 2021a,b)", |
| "ref_id": null |
| }, |
| { |
| "start": 190, |
| "end": 217, |
| "text": "(H\u00fcrriyetoglu et al., 2019)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 229, |
| "end": 256, |
| "text": "(H\u00fcrriyetoglu et al., 2020)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This year, task 1 is composed of the four aforementioned tasks and adds another difficulty: multilinguality. This year's data is available in English, Spanish, and Portuguese. Thus, it is important to note that there is much more data in English than in the other languages. For the document classification sub-task, to test multilingual capabilities, Hindi is available on the testing set only.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We have mainly focused on the last sub-task (event information extraction), but we have also submitted results for the first and second sub-tasks (document and sentence classification). We used multilingual BERT (Devlin et al., 2019) , henceforth M-BERT, which is a model known to obtain near state-of-the-art results on many tasks. It is also supposed to work well for zero-or-few-shot learning on different languages (Pires et al., 2019) . We will see the results on these sub-tasks, especially for sub-task 4 where the training set available for Spanish and Portuguese is small.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 233, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 419, |
| "end": 439, |
| "text": "(Pires et al., 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Thus, one of the issues with transformer-based models such as M-BERT is the instability on small datasets (Dodge et al., 2020; Ruder, 2021) . The instability issue is the fact that by changing some random seeds before the learning phase but using the same architecture, data and hyper-parameters the results can have a great variance. We will look at some solutions to mitigate this issue, and how this issue is impacting our results for sub-task 4. 1", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 126, |
| "text": "(Dodge et al., 2020;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 127, |
| "end": 139, |
| "text": "Ruder, 2021)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Sub-tasks 1 and 2 can be seen as binary sequence classification, where the goal is to say if a given sequence is part of a specific class. In our case, a classifier must predict whether a document contains information about an event for sub-task 1 or if a sentence contains information about an event for sub-task 2. Document and sentence classification tasks, subtasks 1 and 2, are not our main research interest. Moreover, the datasets provided for these tasks are less interesting (reasonable amount of training data).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "On the other hand, sub-task 4 not only has less training data available but also requires more finegrained token-based prediction. The goal of subtask 4 is to extract event information from snippets that contain sentences speaking about the same event. H\u00fcrriyetoglu et al. (2019) have defined that an event has the following information classes (example in Figure 1 ):", |
| "cite_spans": [ |
| { |
| "start": 253, |
| "end": 279, |
| "text": "H\u00fcrriyetoglu et al. (2019)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 357, |
| "end": 365, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Time, which indicates when the protest took place,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Facility name, which indicates in which facility the protest took place,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Organizer, which indicates who organized the protest,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Participant, which indicates who participated in the protest,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Place, which indicates where the protest took place in a more general area than the facility (city, region, ...),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Target, which indicates against whom or what the protest took place,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "\u2022 Trigger, which is a specific word or group of words that indicate that a protest took place (examples: protested, attack, ...),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Thus, not all the snippets contain all the classes, and they can contain several times the same classes. Each information can be composed of one or several adjacent words. Each snippet contains information related to one and only one event.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As the data is already separated into groups of sentences related to the same event, our approach consists of considering a task of named entity recognition with the aforementioned classes. Multilingual BERT has already been used for multilingual named entity recognition and showed great results compared to state-of-the-art models (Hakala and Pyysalo, 2019) .", |
| "cite_spans": [ |
| { |
| "start": 333, |
| "end": 359, |
| "text": "(Hakala and Pyysalo, 2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The data is in BIO format (Ramshaw and Marcus, 1995) , where each word has a B tag or an I tag of a specific class or an O tag. The B tag means beginning and marks the beginning of a new entity. The tag I means inside, which has to be preceded by another I tag or a B tag, and marks that the word is inside an entity but not the first word of the entity. Finally, the O-tag means outside, which means the word is not part of an entity.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 52, |
| "text": "(Ramshaw and Marcus, 1995)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tasks and data", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Our model is based on pre-trained multilingual BERT (Devlin et al., 2019) . This model has been pretrained on multilingual Wikipedia texts. To balance the fact that the data is not equally distributed between all the languages the authors used exponential smoothed weighting to under-sample the most present languages and over-sample the rarest ones. This does not perfectly balance all the languages but it reduces the impact of low-resourced languages.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 73, |
| "text": "(Devlin et al., 2019)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System overview", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The authors of the M-BERT paper shared the weights of a pretrained model that we use to do finetuning. Fine-tuning a model consists of taking an already trained model on a specific task and using this model as a starting point of the training for the task of interest. This approach has reached stateof-the-arts in numerous tasks. In the case of M-BERT, the pre-training tasks are Masked Language Modeling (MLM) and Next Sentence Prediction (NSP).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System overview", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To be able to learn our task, we add a dense layer on top of the outputs of M-BERT and learn it during the fine-tuning. All our models are fine-tuning all the layers of M-BERT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System overview", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The implementation is the one from Hugging-Face's 'transformers' library (Wolf et al., 2020) . To train it on our data, the model is fine-tuned on each sub-task.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 92, |
| "text": "(Wolf et al., 2020)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System overview", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For sub-tasks 1 and 2, we approach these tasks as binary sequence classification, as the goal is to predict whether or not a document (sub-task 1) or sentence (sub-task 2) contains relevant information about a protest event. Thus the size of the output of the dense layer is 2. We then perform an argmax on these values to predict a class. We use the base parameters in HuggingFace's 'transformers' library. The loss is a cross-entropy, the learning rate is handled by an AdamW optimizer (Loshchilov and Hutter, 2019) and the activation function is a gelu (Hendrycks and Gimpel, 2016) . We use a dropout of 10% for the fully connected layers inside M-BERT and the attention probabilities.", |
| "cite_spans": [ |
| { |
| "start": 488, |
| "end": 517, |
| "text": "(Loshchilov and Hutter, 2019)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 556, |
| "end": 584, |
| "text": "(Hendrycks and Gimpel, 2016)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "One of the issues with M-BERT is the limited length of the input, as it can only take 512 tokens, which are tokenized words. M-BERT uses the wordpiece tokenizer (Wu et al., 2016) . A token is either a word if the tokenizer knows it, if it does not it will separate it into several sub-tokens which are known. For sub-task 1, as we are working with entire documents, it can be frequent that a document is longer than this limit and has to be broken down into several sub-documents. To retain contexts in each sub-documents we use an overlap of 150 tokens, which means between two sub-documents, they will have 150 tokens in common. Our method to output a class, in this case, is as follows:", |
| "cite_spans": [ |
| { |
| "start": 161, |
| "end": 178, |
| "text": "(Wu et al., 2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 tokenize a document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 if the tokenized document is longer than the 512-tokens limit, create different subdocuments with 150-tokens overlaps between each sub-document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 generate a prediction for each sub-document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 average all the predictions from subdocuments originated from the same document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 take the argmax of the final prediction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub task 1 and 2", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For sub-task 4, our approach is based on word classification where we predict a class for each word of the documents. One issue is that as words are tokenized and can be transformed into several sub-tokens we have to choose how to choose the prediction of a multitoken word. Our approach is to take the prediction of the first token composing a word as in Hakala and Pyysalo (2019) .", |
| "cite_spans": [ |
| { |
| "start": 356, |
| "end": 381, |
| "text": "Hakala and Pyysalo (2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We also have to deal with the input size as some documents are longer than the limit. In this case, we separate them into sub-documents with an overlap of 150. Our approach is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 tokenize a document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 if the tokenized document is longer than the 512-tokens limit, create different subdocuments with 150-tokens overlaps between each sub-document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 generate a prediction for each sub-document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 reconstruct the entire document: take the first and second sub-documents, average the prediction for the same tokens (from the overlap), keep the prediction for the others, then use the same process with the obtained document and the next sub-document. As the size of each sequence is 512 and the overlap is only 150, no tokens can be in more than 2 different sequences,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "\u2022 take the argmax of the final prediction for each word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We used a soft macro-F1 loss (Lipton et al., 2014) . This loss is closer than categorical cross-entropy on BIO labels to the metric used to evaluate systems in the shared task. The main issue with F1 is its non-differentiability, so it cannot be used as is but must be modified to become differentiable. The F1 score is based on precision and recall, which in turn are functions of the number of true positives, false positives, and false negatives. These quantities are usually defined as follows:", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 50, |
| "text": "(Lipton et al., 2014)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "tp = i\u2208tokens (pred(i) \u00d7 true(i)) f p = i\u2208tokens (pred(i) \u00d7 (1 \u2212 true(i))) f n = i\u2208tokens ((1 \u2212 pred(i)) \u00d7 true(i))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "With:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 tokens, the list of tokens in a document,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 true(i), 0 if the true label of the token i is of the negative class, 1 if the true label is of the positive class", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "\u2022 pred(i), 0 if the predicted label of the token i is of the negative class, 1 if the predicted label is of the positive class", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "As we use macro-F1 loss, we compute the F1 score for each class where the positive class is the current class and negative any other class, e.g. if the reference class is B-trigger, then true(i)=1 for B-trigger and true(i)=0 for all other classes when macro-averaging the F1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "We replace the binary function pred(i) by a function outputting the predicted probability of the token i to be of the positive class:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "sof t tp = i\u2208tokens (proba(i) \u00d7 true(i)) sof t f p = i\u2208tokens (proba(i) \u00d7 (1 \u2212 true(i))) sof t f n = i\u2208tokens ((1 \u2212 proba(i)) \u00d7 true(i))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "With proba(i) outputting the probability of the token i to be of the positive class, this probability is the predicted probability resulting from the softmax activation of the fine-tuning network.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "Then we compute, in a similar fashion as a normal F1, the precision and recall using the soft definitions of the true positive, false positive, and false negative. And finally we compute the F1 score with the given precision and recall. As a loss function is a criterion to be minimized whereas F1 is a score that we would like to maximize, the final loss is 1 \u2212 F 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Soft macro-F1 loss", |
| "sec_num": "3.2.1" |
| }, |
| { |
| "text": "A known problem of Transformers-based models is the training instability, especially with small datasets (Dodge et al., 2020; Ruder, 2021) . Dodge et al. (2020) explain that two elements that have much influence on the stability are the data order and the initialization of the prediction layer, both controlled by pseudo-random numbers generated from a seed. To study the impact of these two elements on the models' stability, we freeze all the randomness on the other parts of the models and change only two different random seeds:", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 125, |
| "text": "(Dodge et al., 2020;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 126, |
| "end": 138, |
| "text": "Ruder, 2021)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 141, |
| "end": 160, |
| "text": "Dodge et al. (2020)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 the data order, i.e. the different batches and their order. Between two runs the model will see the same data during each epoch but the batches will be different, as the batches are built beforehand and do not change between epochs,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 the initialization of the linear layer used to predict the output of the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Another recommendation to work with Transformers-based models and small data made by Mosbach et al. (2021) is to use smaller learning rates but compensating with more epochs. We have taken this into account during the hyper-parameter search.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 106, |
| "text": "Mosbach et al. (2021)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "Ruder (2021) recommend using behavioral finetuning to reduce fine-tuning instabilities. It is supposed to be especially helpful to have a better initialization of the final prediction layer. It has also already been used on named entity recognition tasks (Broscheit, 2019) and has shown that it has improved results for a task with a very small training dataset. Thus, to do so, we need a task with the same number of classes, but much larger training datasets. As we did not find such a task, we decided to fine-tune our model on at least the different languages we are working with, English, Spanish and Portuguese. We used named entity recognition datasets and kept only three classes in common in all the datasets: person, organization, and location. These three types of entities can be found in the shared task.", |
| "cite_spans": [ |
| { |
| "start": 255, |
| "end": 272, |
| "text": "(Broscheit, 2019)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "To perform this test, the training has been done like that:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 the first fine-tuning is done on the concatenation of NER datasets in different languages, once the training is finished we save all the weights of the model,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 we load the weights of the previous model, except for the weights of the final prediction layer which are randomized with a given seed,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "\u2022 we train the model on the dataset of the shared task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "4 Experimental setup", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recommendation for improved stability", |
| "sec_num": "3.2.2" |
| }, |
| { |
| "text": "The dataset of the shared task is based on articles from different newspapers in different languages. More information about this dataset can be found in (H\u00fcrriyetoglu et al., 2021a) For the final submissions of sub-tasks 1, 2, and 4 we divided the dataset given for training purposes into two parts with 80% for training and 20% for evaluation during the system training phase. We then predicted the data given for testing purposes during the shared task evaluation phase. The quantity of data for each sub-task and language can be found in Table 1 . We can note that the majority of Sub-task English Spanish Portuguese Sub-task 1 9,324 1,000 1,487 Sub-task 2 22,825 2,741 1,182 Sub-task 4 808 33 30 Table 1 : Number of elements for each sub-task for each language in the data given for training purposes. Documents for sub-task 1, sentences for sub-task 2, snippet (group of sentences about one event) for sub-task 4. the data is in English. Spanish and Portuguese are only a small part of the dataset. For all the experiments made on sub-task 4, we divided the dataset given for training purposes into three parts with 60% for training, 20% for evaluating and 20% for testing.", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 182, |
| "text": "(H\u00fcrriyetoglu et al., 2021a)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 542, |
| "end": 549, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 701, |
| "end": 708, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To be able to do our approach of behavioral finetuning, we needed some Named Entity Recognition datasets in English, Spanish and Portuguese. For English we used the CoNLL 2003 dataset (Tjong Kim Sang and De Meulder, 2003) , for Spanish the Spanish part of the CoNLL 2002 dataset (Tjong Kim Sang, 2002) and for Portuguese the HAREM dataset (Santos et al., 2006) . Each of these datasets had already three different splits for training, development and test. Information about their size can be found in Table 2 .", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 221, |
| "text": "(Tjong Kim Sang and De Meulder, 2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 279, |
| "end": 301, |
| "text": "(Tjong Kim Sang, 2002)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 339, |
| "end": 360, |
| "text": "(Santos et al., 2006)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 502, |
| "end": 509, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "The dataset for Portuguese is pretty small compared to the two others, but the impact of the size can be interesting to study.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "For sub-task 4, we did a hyper-parameter search to optimize the results. We used Ray Tune (Liaw et al., 2018) and the HyperOpt algorithm Bergstra et al. (2013) . We launched 30 different trainings, all the information about the search space and the hyper-parameters can be found in A.1. The goal is to optimize the macro-F1 on the evaluation set.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 109, |
| "text": "(Liaw et al., 2018)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 137, |
| "end": 159, |
| "text": "Bergstra et al. (2013)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyper-parameter search", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Our goal was to find a set of hyper-parameters that performs well to use always the same in the following experiments. We also wanted to evaluate the impacts of the hyper-parameters on the training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyper-parameter search", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For the first part of the behavioral fine-tuning, we trained an M-BERT model on the three NER datasets for one epoch. We only learn for one epoch for timing issues, as the learning on this datasets takes several hours. We then fine-tune the resulting models with the best set of hyper-parameters found with the hyper-parameter search.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Behavioral fine-tuning", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To study the stability of the model and the impact of behavioral fine-tuning, we ran 6 sets of experiments, with 20 runs in each set:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 normal fine-tuning with random data order and frozen initialization of final layer,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 normal fine-tuning with frozen data order and random initialization of final layer,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 normal fine-tuning with random data order and random initialization of final layer,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 behavioral fine-tuning with random data order and frozen initialization of final layer,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 behavioral fine-tuning with frozen data order and random initialization of final layer,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "\u2022 behavioral fine-tuning with random data order and random initialization of final layer. Once again, it is important to note that what we call behavioral fine-tuning differs from the behavioral fine-tuning proposed by Ruder (2021), as we reset the final layer: only the weights of the layers of M-BERT are modified.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "For each set of experiments we will look at the average of the macro-F1, as implemented in Nakayama (2018), and the standard deviation of the macro-F1 on the training dataset, on the evaluation dataset, and on three different test datasets, one per language. This allows us to assess the severity of the instability, whether our approach to behavioral fine-tuning helps mitigate it, and whether the results are similar across languages.", |
| "cite_spans": [ |
| { |
| "start": 91, |
| "end": 106, |
| "text": "Nakayama (2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We can also note that, in our implementation, the batches are not randomized: they are built once before the learning phase and do not change, neither in content nor in order of passage, between epochs. (Figure 2 caption: Top: the 30 experiments on sub-task 4 during the hyper-parameter search, as a function of the hyper-parameter values and the F1 on the evaluation set; each line represents an experiment and each column a specific hyper-parameter, except the last, which is the value of the metric. Bottom: the same plot with the worst results removed, giving a better view of the best results.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The results of the hyper-parameter search can be seen in Figure 2. In the top plots, which represent the 30 experiments, we can see that one specific hyper-parameter separates the worst results (in blue): the learning rate. In the red box on the top image, all the blue lines are at the bottom, which means these experiments had a small learning rate. The best results are obtained with a learning rate around 5e-05 (0.00005); values lower than 1e-06 seem to give bad results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 65, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Hyper-parameter search", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We can then focus on the bottom picture, with the same type of plot but with the worst results removed. Another hyper-parameter that seems to have an impact is the number of training epochs: 40 seems better than 20. We use a high number of epochs as recommended by Mosbach et al. (2021) to limit the instability. Beyond the learning rate and the number of epochs, it is hard to identify other impactful hyper-parameters.", |
| "cite_spans": [ |
| { |
| "start": 265, |
| "end": 286, |
| "text": "Mosbach et al. (2021)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyper-parameter search", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Finally, a set of hyper-parameters was selected from this search. For the stability experiments, the number of training epochs was reduced to 20 for speed. For the first part of the behavioral fine-tuning, the learning rate was set to 1e-05, as more data were available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hyper-parameter search", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "The results on the test dataset of each model after one epoch of training can be found in Table 5.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 97, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Behavioral fine-tuning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We could not compare to state-of-the-art NER models on these three datasets as we do not keep all the classes (classes such as MISC were removed Table 4: Score of our final submissions for each sub-task; in parentheses, the score achieved by the best-scoring team on each sub-task.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 145, |
| "end": 152, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Behavioral fine-tuning", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Test macro-F1: CoNLL 2003: 89.8, CoNLL 2002: 86.1, HAREM: 76.1. Table 5: Macro-F1 score of the NER task on the test split of each dataset used in behavioral fine-tuning after training the base M-BERT for 1 epoch.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 57, |
| "end": 64, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "before the learning phase). The metrics used on these datasets are not computed per class, so the comparison cannot be made. However, the results are already much better than what a random classifier would output, so the weights of the models should already be better than those of the base model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": null |
| }, |
| { |
| "text": "The results of the different sets of experiments can be found in Table 3. First, we can see that the difference between behavioral fine-tuning and normal fine-tuning is not large enough to say that one is better than the other. We can also note that the standard deviation is small for English, but not negligible for Spanish and Portuguese.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 72, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Stability", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The results of the final submissions can be found in Table 4. We can see that our results are lower than the best results, especially for sub-task 1, with a difference of 30 to 50 macro-F1 points depending on the language, whereas for sub-tasks 2 and 4 the difference is close to 10 macro-F1 points for all the languages.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 53, |
| "end": 60, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Final submission", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "6.1 Sub-task 1 and 2", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "As we can see in Table 4, our final results for sub-task 1 are much lower than the best results, but for sub-task 2 the difference is smaller. This is interesting, as the two tasks are quite similar, so we expected the difference between our results and the best results to be of the same magnitude. One explanation could be our approach to handling documents longer than the input of M-BERT: we chose to average over the sub-documents, but if one part of a document contains an event, the entire document does too. We might obtain better results by checking whether at least one sub-document is predicted to contain an event.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 24, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "It is also hard to compare with other models, as we chose to use a single model for all the languages and do not know the other teams' approaches.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For sub-task 4 we obtained interesting results for all the languages, even Spanish and Portuguese, as we were not sure this task could be learned in a supervised fashion with the amount of data available. In a further study, we could compare our results with those obtained by fine-tuning monolingual models, i.e., one model per language trained only on that language's data. This would show whether using a multilingual model instead of several monolingual models improves the results. We would not expect good results for Spanish and Portuguese, as their training datasets are quite limited. Our results seem to support the claim of (Pires et al., 2019) that M-BERT works well for few-shot learning on other languages.", |
| "cite_spans": [ |
| { |
| "start": 677, |
| "end": 697, |
| "text": "(Pires et al., 2019)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "The other question for sub-task 4 was about instability. In Table 3 we can see that the instability is much more pronounced for Spanish and Portuguese. This seems logical, as fewer data are available in Spanish and Portuguese than in English. The standard deviation for these two languages is large and can have a real impact on the final results; finding good seeds could help improve the results for Spanish and Portuguese.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 60, |
| "end": 67, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Furthermore, our approach to behavioral fine-tuning did not help reduce the instability. This was expected, since one source of instability is the initialization of the prediction layer, and in our approach the initialization of this layer remains random: we only fine-tune the weights of M-BERT. This does not seem to work, and it reinforces the advice of Ruder (2021) that behavioral fine-tuning is most useful for providing a good initialization of the final prediction layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Of the two sources of randomness we studied, data order seems the more impactful for English, where we have more data. Nonetheless, for Spanish and Portuguese, both sources have a large impact. In a further study, we could examine how the quantity of data helps decrease the impact of these sources of instability.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For the final submissions, the macro-F1 scores for English and Portuguese are below the average macro-F1 scores we observed during development. This could be due to unlucky random seeds or to the splits being different. We did not try to find the best-performing seeds for the final submissions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sub-task 4", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "Our code is available here: https://github.com/euranova/AMU-EURANOVA-CASE-2021", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank Damien Fourrure, Arnaud Jacques, Guillaume Stempfel and our anonymous reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| }, |
| { |
| "text": "A.1 Hyper-parameter search. The search space for our hyper-parameter search was: \u2022 Number of training epochs: value in [20, 25, 30, 40], \u2022 Weight decay: uniform distribution between 0.001 and 1, \u2022 Learning rate: value in [1e-5, 2e-5, 3e-5, 4e-5, 5e-5, 6e-5, 2e-7, 1e-7, 3e-7, 2e-8], \u2022 Adafactor: value in \"True\", \"False\", \u2022 Adam beta 1: uniform distribution between 0 and 1, \u2022 Adam beta 2: uniform distribution between 0 and 1, \u2022 Epsilon: value in [1e-8, 2e-8, 3e-8, 1e-9, 2e-9, 3e-10], \u2022 Maximum gradient norm: uniform distribution between 0 and 1. For the HyperOpt algorithm, we used two sets of hyper-parameters to help find a good subspace. We maximized the macro-F1 on the evaluation dataset and set the number of initial points before starting the algorithm to 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Appendix", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bergstra", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Yamins", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Cox", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "International conference on machine learning", |
| "volume": "", |
| "issue": "", |
| "pages": "115--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Bergstra, Daniel Yamins, and David Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International conference on machine learning, pages 115-123. PMLR.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Investigating entity knowledge in BERT with simple neural end-to-end entity linking", |
| "authors": [ |
| { |
| "first": "Samuel", |
| "middle": [], |
| "last": "Broscheit", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "677--685", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/K19-1063" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel Broscheit. 2019. Investigating entity knowledge in BERT with simple neural end-to-end entity linking. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 677-685, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Devlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenton", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Kristina", |
| "middle": [], |
| "last": "Toutanova", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "4171--4186", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", |
| "authors": [ |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Dodge", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Ilharco", |
| "suffix": "" |
| }, |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Farhadi", |
| "suffix": "" |
| }, |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:2002.06305" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping. arXiv preprint arXiv:2002.06305.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Biomedical named entity recognition with multilingual BERT", |
| "authors": [ |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Hakala", |
| "suffix": "" |
| }, |
| { |
| "first": "Sampo", |
| "middle": [], |
| "last": "Pyysalo", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of The 5th Workshop on BioNLP Open Shared Tasks", |
| "volume": "", |
| "issue": "", |
| "pages": "56--61", |
| "other_ids": { |
| "DOI": [ |
| "10.18653/v1/D19-5709" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kai Hakala and Sampo Pyysalo. 2019. Biomedical named entity recognition with multilingual BERT. In Proceedings of The 5th Workshop on BioNLP Open Shared Tasks, pages 56-61, Hong Kong, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Gaussian error linear units (gelus)", |
| "authors": [ |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Hendrycks", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.08415" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dan Hendrycks and Kevin Gimpel. 2016. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Multilingual protest news detection -shared task 1, case 2021", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "H\u00fcrriyetoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Osman", |
| "middle": [], |
| "last": "Mutlu", |
| "suffix": "" |
| }, |
| { |
| "first": "Erdem", |
| "middle": [], |
| "last": "Farhana Ferdousi Liza", |
| "suffix": "" |
| }, |
| { |
| "first": "Ritesh", |
| "middle": [], |
| "last": "Y\u00f6r\u00fck", |
| "suffix": "" |
| }, |
| { |
| "first": "Shyam", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ratan", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali H\u00fcrriyetoglu, Osman Mutlu, Farhana Ferdousi Liza, Erdem Y\u00f6r\u00fck, Ritesh Kumar, and Shyam Ratan. 2021a. Multilingual protest news detection - shared task 1, CASE 2021. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Challenges and applications of automated extraction of socio-political events from text (case 2021): Workshop and shared task report", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "H\u00fcrriyetoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hristo", |
| "middle": [], |
| "last": "Tanev", |
| "suffix": "" |
| }, |
| { |
| "first": "Vanni", |
| "middle": [], |
| "last": "Zavarella", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakub", |
| "middle": [], |
| "last": "Piskorski", |
| "suffix": "" |
| }, |
| { |
| "first": "Reyyan", |
| "middle": [], |
| "last": "Yeniterzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Erdem", |
| "middle": [], |
| "last": "Y\u00f6r\u00fck", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Sociopolitical Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali H\u00fcrriyetoglu, Hristo Tanev, Vanni Zavarella, Jakub Piskorski, Reyyan Yeniterzi, and Erdem Y\u00f6r\u00fck. 2021b. Challenges and applications of automated extraction of socio-political events from text (CASE 2021): Workshop and shared task report. In Proceedings of the 4th Workshop on Challenges and Applications of Automated Extraction of Socio-political Events from Text (CASE 2021), online. Association for Computational Linguistics (ACL).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Overview of clef 2019 lab protestnews: Extracting protests from news in a cross-context setting", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "H\u00fcrriyetoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Erdem", |
| "middle": [], |
| "last": "Y\u00f6r\u00fck", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Y\u00fcret", |
| "suffix": "" |
| }, |
| { |
| "first": "Burak", |
| "middle": [], |
| "last": "Agr\u0131 Yoltar", |
| "suffix": "" |
| }, |
| { |
| "first": "F\u0131rat", |
| "middle": [], |
| "last": "G\u00fcrel", |
| "suffix": "" |
| }, |
| { |
| "first": "Osman", |
| "middle": [], |
| "last": "Duru\u015fan", |
| "suffix": "" |
| }, |
| { |
| "first": "Arda", |
| "middle": [], |
| "last": "Mutlu", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Akdemir", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Experimental IR Meets Multilinguality, Multimodality, and Interaction", |
| "volume": "", |
| "issue": "", |
| "pages": "425--432", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali H\u00fcrriyetoglu, Erdem Y\u00f6r\u00fck, Deniz Y\u00fcret, \u00c7agr\u0131 Yoltar, Burak G\u00fcrel, F\u0131rat Duru\u015fan, Osman Mutlu, and Arda Akdemir. 2019. Overview of CLEF 2019 lab ProtestNews: Extracting protests from news in a cross-context setting. In Experimental IR Meets Multilinguality, Multimodality, and Interaction, pages 425-432, Cham. Springer International Publishing.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Automated extraction of socio-political events from news (AESPEN): Workshop and shared task report", |
| "authors": [ |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "H\u00fcrriyetoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Vanni", |
| "middle": [], |
| "last": "Zavarella", |
| "suffix": "" |
| }, |
| { |
| "first": "Hristo", |
| "middle": [], |
| "last": "Tanev", |
| "suffix": "" |
| }, |
| { |
| "first": "Erdem", |
| "middle": [], |
| "last": "Y\u00f6r\u00fck", |
| "suffix": "" |
| }, |
| { |
| "first": "Ali", |
| "middle": [], |
| "last": "Safaya", |
| "suffix": "" |
| }, |
| { |
| "first": "Osman", |
| "middle": [], |
| "last": "Mutlu", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020", |
| "volume": "", |
| "issue": "", |
| "pages": "1--6", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ali H\u00fcrriyetoglu, Vanni Zavarella, Hristo Tanev, Erdem Y\u00f6r\u00fck, Ali Safaya, and Osman Mutlu. 2020. Automated extraction of socio-political events from news (AESPEN): Workshop and shared task report. In Proceedings of the Workshop on Automated Extraction of Socio-political Events from News 2020, pages 1-6, Marseille, France. European Language Resources Association (ELRA).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Tune: A research platform for distributed model selection and training", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Liaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Nishihara", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Moritz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "E" |
| ], |
| "last": "Gonzalez", |
| "suffix": "" |
| }, |
| { |
| "first": "Ion", |
| "middle": [], |
| "last": "Stoica", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1807.05118" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. 2018. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Optimal thresholding of classifiers to maximize f1 measure", |
| "authors": [ |
| { |
| "first": "Charles", |
| "middle": [], |
| "last": "Zachary C Lipton", |
| "suffix": "" |
| }, |
| { |
| "first": "Balakrishnan", |
| "middle": [], |
| "last": "Elkan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Naryanaswamy", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", |
| "volume": "", |
| "issue": "", |
| "pages": "225--239", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zachary C Lipton, Charles Elkan, and Balakrishnan Naryanaswamy. 2014. Optimal thresholding of classifiers to maximize F1 measure. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 225-239. Springer.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Decoupled weight decay regularization", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Loshchilov", |
| "suffix": "" |
| }, |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Hutter", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "On the stability of fine-tuning {bert}: Misconceptions, explanations, and strong baselines", |
| "authors": [ |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Mosbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Maksym", |
| "middle": [], |
| "last": "Andriushchenko", |
| "suffix": "" |
| }, |
| { |
| "first": "Dietrich", |
| "middle": [], |
| "last": "Klakow", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marius Mosbach, Maksym Andriushchenko, and Dietrich Klakow. 2021. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. In International Conference on Learning Representations.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "seqeval: A python framework for sequence labeling evaluation", |
| "authors": [ |
| { |
| "first": "Hiroki", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "How multilingual is multilingual bert?", |
| "authors": [ |
| { |
| "first": "Telmo", |
| "middle": [], |
| "last": "Pires", |
| "suffix": "" |
| }, |
| { |
| "first": "Eva", |
| "middle": [], |
| "last": "Schlinger", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Garrette", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "4996--5001", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual BERT? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4996-5001.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Text chunking using transformation-based learning", |
| "authors": [ |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitch", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Third Workshop on Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lance Ramshaw and Mitch Marcus. 1995. Text chunking using transformation-based learning. In Third Workshop on Very Large Corpora.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Recent Advances in Language Model Fine-tuning", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Ruder", |
| "suffix": "" |
| } |
| ], |
| "year": 2021, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Ruder. 2021. Recent Advances in Language Model Fine-tuning. http://ruder.io/recent-advances-lm-fine-tuning.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "HAREM: An advanced NER evaluation contest for Portuguese", |
| "authors": [ |
| { |
| "first": "Diana", |
| "middle": [], |
| "last": "Santos", |
| "suffix": "" |
| }, |
| { |
| "first": "Nuno", |
| "middle": [], |
| "last": "Seco", |
| "suffix": "" |
| }, |
| { |
| "first": "Nuno", |
| "middle": [], |
| "last": "Cardoso", |
| "suffix": "" |
| }, |
| { |
| "first": "Rui", |
| "middle": [], |
| "last": "Vilela", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diana Santos, Nuno Seco, Nuno Cardoso, and Rui Vilela. 2006. HAREM: An advanced NER evaluation contest for Portuguese. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy, 22-28 May 2006.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F" |
| ], |
| "last": "Tjong Kim Sang", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "COLING-02: The 6th Conference on Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In COLING-02: The 6th Conference on Natural Language Learning 2002 (CoNLL-2002).", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F" |
| ], |
| "last": "Tjong Kim Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fien", |
| "middle": [], |
| "last": "De Meulder", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003", |
| "volume": "", |
| "issue": "", |
| "pages": "142--147", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Transformers: State-of-the-art natural language processing", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Wolf", |
| "suffix": "" |
| }, |
| { |
| "first": "Lysandre", |
| "middle": [], |
| "last": "Debut", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Sanh", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Chaumond", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Delangue", |
| "suffix": "" |
| }, |
| { |
| "first": "Anthony", |
| "middle": [], |
| "last": "Moi", |
| "suffix": "" |
| }, |
| { |
| "first": "Pierric", |
| "middle": [], |
| "last": "Cistac", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Rault", |
| "suffix": "" |
| }, |
| { |
| "first": "R\u00e9mi", |
| "middle": [], |
| "last": "Louf", |
| "suffix": "" |
| }, |
| { |
| "first": "Morgan", |
| "middle": [], |
| "last": "Funtowicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Davison", |
| "suffix": "" |
| }, |
| { |
| "first": "Sam", |
| "middle": [], |
| "last": "Shleifer", |
| "suffix": "" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "von Platen", |
| "suffix": "" |
| }, |
| { |
| "first": "Clara", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Julien", |
| "middle": [], |
| "last": "Plu", |
| "suffix": "" |
| }, |
| { |
| "first": "Canwen", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Teven", |
| "middle": [ |
| "Le" |
| ], |
| "last": "Scao", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylvain", |
| "middle": [], |
| "last": "Gugger", |
| "suffix": "" |
| }, |
| { |
| "first": "Mariama", |
| "middle": [], |
| "last": "Drame", |
| "suffix": "" |
| }, |
| { |
| "first": "Quentin", |
| "middle": [], |
| "last": "Lhoest", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rush", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "38--45", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R\u00e9mi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Google's neural machine translation system: Bridging the gap between human and machine translation", |
| "authors": [ |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Norouzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeff", |
| "middle": [], |
| "last": "Klingner", |
| "suffix": "" |
| }, |
| { |
| "first": "Apurva", |
| "middle": [], |
| "last": "Shah", |
| "suffix": "" |
| }, |
| { |
| "first": "Melvin", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaobing", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Gouws", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshikiyo", |
| "middle": [], |
| "last": "Kato", |
| "suffix": "" |
| }, |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideto", |
| "middle": [], |
| "last": "Kazawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Stevens", |
| "suffix": "" |
| }, |
| { |
| "first": "George", |
| "middle": [], |
| "last": "Kurian", |
| "suffix": "" |
| }, |
| { |
| "first": "Nishant", |
| "middle": [], |
| "last": "Patil", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Cliff", |
| "middle": [], |
| "last": "Young", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Riesa", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Rudnick", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Macduff", |
| "middle": [], |
| "last": "Hughes", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CoRR, abs/1609.08144", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, \u0141ukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "text": "Example of a snippet from sub-task 4.", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "text": "(Top) Parallel coordinates plot of the", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "text": "", |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Number of elements for each dataset used in the behavioral fine-tuning in each split.", |
| "content": "<table/>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| }, |
| "TABREF3": { |
| "text": "Average macro-F1 score, higher is better (standard deviation, lower is better) of the 20 experiments with the specified setup. N means normal fine-tuning and B behavioral fine-tuning. Data means data order and Init layer means initialization of the final layer. Rand means random, and fix refers to frozen.", |
| "content": "<table><tr><td>English</td><td>Spanish</td><td>Portuguese</td><td>Hindi</td></tr><tr><td colspan=\"4\">Sub-task 1 53.46 (84.55) 46.47 (77.27) 46.47 (84.00) 29.66 (78.77)</td></tr><tr><td colspan=\"4\">Sub-task 2 75.64 (85.32) 76.39 (88.61) 81.61 (88.47) /</td></tr><tr><td colspan=\"4\">Sub-task 4 69.96 (78.11) 56.64 (66.20) 61.87 (73.24) /</td></tr></table>", |
| "type_str": "table", |
| "num": null, |
| "html": null |
| } |
| } |
| } |
| } |