{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:35:47.800551Z"
},
"title": "BiTeM at WNUT 2020 Shared Task-1: Named Entity Recognition over Wet Lab Protocols using an Ensemble of Contextual Language Models",
"authors": [
{
"first": "Julien",
"middle": [],
"last": "Knafou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Jenny",
"middle": [],
"last": "Copara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Teodoro",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ruch",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Applied",
"location": {
"country": "Switzerland"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Recent improvements in machine-reading technologies have attracted much attention to automation problems and their possibilities. In this context, WNUT 2020 introduces a Named Entity Recognition (NER) task based on wet laboratory procedures. In this paper, we present a 3-step method based on deep neural language models that reported the best overall exact match F1-score (77.99%) of the competition. By fine-tuning 10 different pretrained language models 10 times each, this work shows the advantage of having more models in an ensemble based on a majority-vote strategy. On top of that, having 100 different models allowed us to analyse ensemble combinations, demonstrating the impact of having multiple pretrained models versus fine-tuning a single pretrained model multiple times.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Recent improvements in machine-reading technologies have attracted much attention to automation problems and their possibilities. In this context, WNUT 2020 introduces a Named Entity Recognition (NER) task based on wet laboratory procedures. In this paper, we present a 3-step method based on deep neural language models that reported the best overall exact match F1-score (77.99%) of the competition. By fine-tuning 10 different pretrained language models 10 times each, this work shows the advantage of having more models in an ensemble based on a majority-vote strategy. On top of that, having 100 different models allowed us to analyse ensemble combinations, demonstrating the impact of having multiple pretrained models versus fine-tuning a single pretrained model multiple times.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The last decades have seen both the amount and the complexity of biological experiments grow. Coupling this phenomenon with the improvement in machine-reading technologies seems to have led researchers to look for ways to automate wet laboratory procedures. Such technologies should allow reproducibility while reducing human errors in the process. However, as current protocols are usually written in natural language, a collection of wet laboratory protocols annotated with entities and relations would help assess current machine-reading performance in this specific setting (Kulkarni et al., 2018).",
"cite_spans": [
{
"start": 580,
"end": 603,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this context, WNUT (Workshop on Noisy User-generated Text 1 ) 2020 (Tabassum et al., 2020) proposes two tasks, a Named Entity Recognition (NER) task and a Relation Extraction (RE) task. In this paper, we present the 3-step method we used for the NER task. Our approach is essentially based on deep neural language models built on transformer-like architectures (Vaswani et al., 2017). First, we fine-tuned 10 different pretrained language models on the downstream task. Then, we generated 10 instances of each of those pretrained models, each time with a new random initialization of the last layer, namely the classifier. Finally, we used an ensemble strategy based on a majority vote. Our approach achieves an exact-match F1-score of 77.99%, which ranks first in the shared task.",
"cite_spans": [
{
"start": 70,
"end": 93,
"text": "(Tabassum et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 368,
"end": 390,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Deep learning approaches trained on large unstructured data have shown considerable success in NLP problems, including NER (Devlin et al., 2019; Liu et al., 2019; Lample et al., 2016; Beltagy et al., 2019; Jin et al., 2019). These models learn representations over the large data and reuse them in a supervised setting for a downstream task. For domain-specific tasks, models trained on large general text can be further trained on large domain-specific data and then adapted for a downstream task (Gururangan et al., 2020; Alsentzer et al., 2019), or models can be trained only on domain-specific data and then adapted for a specific task (Beltagy et al., 2019).",
"cite_spans": [
{
"start": 123,
"end": 144,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 145,
"end": 162,
"text": "Liu et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 163,
"end": 183,
"text": "Lample et al., 2016;",
"ref_id": "BIBREF11"
},
{
"start": 184,
"end": 205,
"text": "Beltagy et al., 2019;",
"ref_id": "BIBREF1"
},
{
"start": 206,
"end": 223,
"text": "Jin et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 522,
"end": 546,
"text": "Gururangan et al., 2020;",
"ref_id": "BIBREF7"
},
{
"start": 547,
"end": 570,
"text": "Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 666,
"end": 688,
"text": "(Beltagy et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "The data provided for this task is a subset of Kulkarni et al.'s corpus (Kulkarni et al., 2018). The dataset consists of 615 unique protocols annotated with 17 types of entities plus the Action type (an example is shown in Figure 1).",
"cite_spans": [
{
"start": 72,
"end": 95,
"text": "(Kulkarni et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 214,
"end": 223,
"text": "Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "The organizers provided a set of protocols for training, development, and test. They further released a final set of unlabelled protocols for testing during the competition (called test 2020). Table 1 shows how the dataset has been split into a training set, a development set, 2 a test set, and the competition test set (test 2020).",
"cite_spans": [],
"ref_spans": [
{
"start": 190,
"end": 198,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "In Table 2 , we see the distribution of all the entities by each subset. As we can see, we have 18 entities, and only two of them (Action and Reagent) represent about 50% of the annotations. The table also shows that the entities' proportions are fairly similar across all the subsets. Our NER models add a fully connected layer on top of the token representations. The models include BERT (cased) (Devlin et al., 2019), BioBERT (BERT trained on PubMed abstracts and PMC full-text articles), Bio+ClinicalBERT (BioBERT trained on notes in the MIMIC-III v1.4 database) (Alsentzer et al., 2019), PubMedBERT (Gu et al., 2020), RoBERTa (Liu et al., 2019), BioMed RoBERTa (Gururangan et al., 2020), and XLNet (Yang et al., 2019).",
"cite_spans": [
{
"start": 366,
"end": 387,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 537,
"end": 561,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 575,
"end": 592,
"text": "(Gu et al., 2020)",
"ref_id": null
},
{
"start": 603,
"end": 621,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 676,
"end": 695,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "Our method proceeded in 3 steps. First, we chose 10 different pretrained models and fine-tuned them on the downstream task. Then, using a voting strategy, we created ensemble models. Finally, we fine-tuned each model 9 more times, each time with a new random initialization of the fully connected layer, to see whether sampling ensemble models from this larger set would improve the results even further.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "In order to use transformers as a NER model, the only preprocessing we had to do was to break each protocol into sentences. Those sentences are the sequences that are fed into our model. As there were no overlapping entities in the text, we used a softmax function, which allowed us to classify each token into exactly one entity. As transformers usually use tokenizers that work on word pieces (or sub-tokens), we had to assign a dummy entity to each sub-token that was not at the start of a word. In such cases, at training time, we only assign the true entity to the first sub-token. This allowed us to rebuild the original text quite easily. Indeed, during prediction, a word gets the most probable entity label among all of its sub-tokens' predictions. In other words, the most probable entity label is assigned to all the sub-tokens of the word, and the sub-tokens are merged to rebuild the original word with the assigned label. Finally, in a given",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Transformers with a fully connected layer on top of the token representations",
"sec_num": "4.1"
},
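The sub-token labelling and merge-back scheme described in this paragraph can be sketched as follows. This is a minimal illustration, not the authors' code: the toy word-piece tokenizer and the `DUMMY` label name are our own assumptions.

```python
DUMMY = "X"  # dummy label for non-initial sub-tokens at training time (name is ours)

def toy_wordpiece(word):
    # Illustrative stand-in for a real word-piece tokenizer:
    # words longer than 4 characters are split into two pieces.
    return [word] if len(word) <= 4 else [word[:4], "##" + word[4:]]

def align_labels(words, labels):
    """Assign the true entity label to the first sub-token of each word
    and the dummy label to the remaining sub-tokens."""
    sub_tokens, sub_labels = [], []
    for word, label in zip(words, labels):
        pieces = toy_wordpiece(word)
        sub_tokens.extend(pieces)
        sub_labels.extend([label] + [DUMMY] * (len(pieces) - 1))
    return sub_tokens, sub_labels

def merge_predictions(words, sub_probs):
    """At prediction time, give each word the most probable entity label
    found among all of its sub-tokens' label probabilities."""
    merged, i = [], 0
    for word in words:
        n = len(toy_wordpiece(word))
        # flatten (label, probability) pairs over the word's sub-tokens
        pairs = [p for probs in sub_probs[i:i + n] for p in probs.items()]
        merged.append((word, max(pairs, key=lambda kv: kv[1])[0]))
        i += n
    return merged
```

For example, `align_labels(["Centrifuge", "the", "sample"], ["B-Action", "O", "B-Reagent"])` labels only the first piece of each split word with the true entity.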
{
"text": "Model / Corpus type / # Parameters: BERT (Devlin et al., 2019): General, base 110M, large 340M; BioBERT: Bio, 110M; Bio+ClinicalBERT (Alsentzer et al., 2019): Bio, 110M; PubMedBERT (Gu et al., 2020): Bio, 110M;",
"cite_spans": [
{
"start": 30,
"end": 51,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 115,
"end": 139,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 160,
"end": 177,
"text": "(Gu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": null
},
{
"text": "RoBERTa (Liu et al., 2019): General, base 110M, large 340M; BioMed RoBERTa (Gururangan et al., 2020): Bio, 110M;",
"cite_spans": [
{
"start": 8,
"end": 26,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 71,
"end": 96,
"text": "(Gururangan et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": null
},
{
"text": "XLNet (Yang et al., 2019): General, base 110M, large 340M. Table 3 : Pretrained models' features. sequence, if two adjacent words were given the same entity prediction, we would consider the two words as a passage related to that entity. Using the above setup, we fine-tuned 10 pretrained transformers for 10 epochs using the Adam optimizer (Kingma and Ba, 2014), a learning rate of 3e-5, a batch size of 24, and a maximum sequence length of 256 tokens. We used 1x T4 GPU for all base models and 2x T4 GPUs for the large ones. For a given model, it took on average roughly 16 minutes per epoch to train, thus about 2.67 hours for the 10 epochs. After each epoch, we predicted the development set, computed the F1-score, and saved the model whenever the score improved. Table 3 shows more information about all the pretrained models that we fine-tuned on the NER task. Indeed, 4 models out of 10 were trained on biomedical corpora, such as PubMed and/or BioMed, whereas the others were trained on general corpora, such as Wikipedia. Another key difference is the model type, which defines the way a given model has been trained. This includes the training task (e.g., MLM, next sentence prediction, etc.), the tokenizer algorithm, the optimizer, and more. We used 5 different BERT-based models, 3 RoBERTa-based, and 2 XLNet-based. For more details regarding the specifics of the architectures, please refer directly to their respective papers.",
"cite_spans": [
{
"start": 6,
"end": 25,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 55,
"end": 62,
"text": "Table 3",
"ref_id": null
},
{
"start": 776,
"end": 783,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Pretrained Models",
"sec_num": null
},
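The fine-tuning loop described above (10 epochs, Adam, learning rate 3e-5, batch size 24, max sequence length 256, checkpointing on development F1 improvement) can be sketched framework-independently. `train_one_epoch`, `evaluate_f1` and `save_checkpoint` are hypothetical stand-ins for the actual model code, which the paper does not detail:

```python
# Framework-independent sketch of the fine-tuning loop described above.
# Hyperparameters come from the paper; the three callables are stand-ins.
EPOCHS = 10
LEARNING_RATE = 3e-5
BATCH_SIZE = 24
MAX_SEQ_LEN = 256

def fine_tune(train_one_epoch, evaluate_f1, save_checkpoint):
    """Train for EPOCHS epochs; after each epoch, score the development
    set and checkpoint the model only when the F1-score improves."""
    best_f1 = float("-inf")
    history = []
    for epoch in range(EPOCHS):
        train_one_epoch(lr=LEARNING_RATE, batch_size=BATCH_SIZE,
                        max_seq_len=MAX_SEQ_LEN)
        dev_f1 = evaluate_f1()
        history.append(dev_f1)
        if dev_f1 > best_f1:
            best_f1 = dev_f1
            save_checkpoint(epoch, dev_f1)
    return best_f1, history
```

The "save only on improvement" policy means the checkpoint on disk always corresponds to the best development-set epoch seen so far.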
{
"text": "As implemented in (Copara et al., 2020b,a), our ensemble strategy is based on a majority vote. This means that, for a given ensemble composition, each composing model has the right to vote. In other words, for a given protocol and a given sequence, each model returns its predictions, which can be interpreted as passage/entity combinations. Once we collected all the models' predictions, we counted all the passage/entity combinations and validated only those that received a majority of the votes.",
"cite_spans": [
{
"start": 18,
"end": 42,
"text": "(Copara et al., 2020b,a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An ensemble based on a voting strategy",
"sec_num": "4.2"
},
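The voting scheme just described can be sketched in a few lines. This is our own simplification: we assume each model emits a set of (passage, entity) predictions and that "majority" means a strict majority of all voters.

```python
from collections import Counter

def majority_vote(model_predictions):
    """Keep only the passage/entity combinations predicted by a strict
    majority of the composing models."""
    n_models = len(model_predictions)
    # set() deduplicates within a model so each voter casts one vote per combination
    votes = Counter(pred for preds in model_predictions for pred in set(preds))
    return {pred for pred, count in votes.items() if count > n_models / 2}

# Three voters: only the combination backed by at least 2 of 3 survives.
a = {("magnetic rack", "Device"), ("50 mL", "Amount")}
b = {("magnetic rack", "Device"), ("50 mL", "Size")}
c = {("magnetic rack", "Device")}
```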
{
"text": "Once we had all the models trained and ready, we wondered whether we could improve performance by adding more voters. The idea is to repeat the first step, each time with a new random initialization of the fully connected layer. We ended up with 100 different models, corresponding to 10 different pretrained models fine-tuned 10 times each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "4.3"
},
{
"text": "With only a few models to choose from, we would have been able to evaluate all the possible model compositions; however, as choosing 50 models out of 100 yields about 10^29 possible ensembles, we had to sample ensemble compositions randomly. For each number of models taken into account in a given ensemble, we took a sample of 1000 combinations. This later allows us to show the distribution of our ensemble models' results and examine how it behaves in certain circumstances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "4.3"
},
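The combinatorial argument and the sampling step above can be checked and sketched directly; the function below is an illustration of the procedure, with a fixed seed added by us for reproducibility:

```python
import math
import random

# Exhaustive enumeration is infeasible: picking 50 of the 100 fine-tuned
# models already yields roughly 1e29 possible ensembles.
assert math.comb(100, 50) > 10**28

def sample_ensembles(models, size, n_samples=1000, seed=0):
    """Randomly draw n_samples ensemble compositions of a given size,
    sampling without replacement within each composition."""
    rng = random.Random(seed)
    return [rng.sample(models, size) for _ in range(n_samples)]
```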
{
"text": "The ensemble model we chose for the submission was the one that gave us the best F1-score on the test set. It is a composition of 14 models that were fine-tuned on the task. It contained the following pretrained models: 2\u00d7 BioBERT (BioBERT models with two different random initializations, or seeds), 2\u00d7 BioClinicalBERT (2 random seeds), 3\u00d7 PubMedBERT (3 random seeds), 2\u00d7 RoBERTa base (2 random seeds), 1\u00d7 RoBERTa large, 1\u00d7 BioMed RoBERTa, and 3\u00d7 XLNet large (3 random seeds).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sampling",
"sec_num": "4.3"
},
{
"text": "In Table 4 , we see the F1-scores of all the 10 models we fine-tuned, across all the 18 entities. The reported baseline is the CRF baseline 3 that was provided for the shared task. First, we can see that the ensemble model outperforms the baseline by far. When comparing all the models (ensemble apart), we also notice that PubMedBERT is quite consistent, as it often outperforms all the other models, including the ensemble, for a few entities, namely Mention, Seal, Temperature and pH. Additionally, when compared to its peers, it clearly shows the best micro and macro F1-scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5"
},
{
"text": "However, when looking at Speed, it seems that the transformer-based models we used are not able to do better than the baseline. A closer look at the errors is needed in order to see what caused such a difference with the baseline (see Section 5.4). When comparing the micro to the macro F1-score standard deviations across all the models, we can see that the macro F1-score standard deviations are systematically higher. This is probably due to the fact that some entities, namely Generic-Measure, Mention, Seal, Size and pH, which account for less than 1% of the test set each (see Table 2 ), have relatively high F1-score standard deviations. The same applies to Measure-Type, Numerical and Speed, which each make up less than 2% of the test set. This is in line with the results reported by Dodge et al. (2020), which show that results can vary a lot across seeds when a small amount of data is available. Indeed, as these entities are quite rare, a single misclassification can have a high impact on the macro F1-score. That being said, the micro F1-scores seem relatively stable across all the pretrained models.",
"cite_spans": [
{
"start": 821,
"end": 840,
"text": "Dodge et al. (2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 597,
"end": 605,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results and Discussions",
"sec_num": "5"
},
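The sensitivity of the macro F1-score to rare entities can be made concrete with toy counts (the numbers below are our own illustration, not the paper's): flipping a rare entity's single correct prediction into an error barely moves the micro F1-score but drags the macro F1-score down sharply.

```python
def f1(tp, fp, fn):
    # Per-entity F1 from true-positive, false-positive, false-negative counts.
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def micro_macro(counts):
    """counts maps entity -> (tp, fp, fn); returns (micro F1, macro F1).
    Micro pools counts over all entities; macro averages per-entity F1."""
    tp = sum(c[0] for c in counts.values())
    fp = sum(c[1] for c in counts.values())
    fn = sum(c[2] for c in counts.values())
    micro = f1(tp, fp, fn)
    macro = sum(f1(*c) for c in counts.values()) / len(counts)
    return micro, macro

# A frequent entity plus a rare one (illustrative counts): losing the rare
# entity's only correct prediction halves the macro F1 but not the micro F1.
before = {"Action": (90, 10, 10), "pH": (1, 0, 0)}
after = {"Action": (90, 10, 10), "pH": (0, 1, 1)}
```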
{
"text": "In this section, we analyse the results we observe when sampling different ensemble model compositions. These results are exclusively computed on the test set. The idea behind this experiment is to understand the behaviour of some metrics when adding more models. Figures 2 to 4 show the F1-score, recall, and precision distributions, respectively, with respect to the number of models taken into a given ensemble.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 300,
"text": "Figures 2 to 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "The first thing we notice from Figures 2 to 4 is that as the number of models taken into account in an ensemble grows, the variance of the metrics tends to become smaller and steadier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "When looking at Figure 3 and 4, we clearly see that an odd number of voters has a positive impact on the recall, while it appears to have a negative impact on the precision. It is unclear to us why this behaviour occurs; however, we think it could be linked to the majority rule we introduced in our voting strategy, where a majority is easier to reach with an odd number of voters. When looking closely at Figure 2 , it appears that this odd/even effect tends to cancel out as the number of voters increases, with even numbers of voters getting slightly better results.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 417,
"end": 425,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "In Figure 3 , there is clearly a positive slope that seems to flatten at the end, which means that the more models we have in our ensemble, the higher the recall we should expect. Conversely, this trend is not as clear for precision (Figure 4) , where there appears to be a positive relation with odd numbers of voters and a negative one with even numbers of voters, both of which seem to converge into a flat trend at the end. However, in both figures, as already mentioned, the variance of the respective metrics gets steadier and smaller as more models are added to the ensemble composition.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 237,
"end": 247,
"text": "(Figure 4)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "Figures 6 to 8 show the same metrics while isolating the effect of adding a new pretrained model versus the effect of adding an already selected pretrained model with a new random initialization of the fully connected layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "In order to understand the setting of this experiment, we first build a matrix (see Figure 5) where each column is a pretrained model and each row is a fine-tuned version of it. We then compare the performances of ensemble models based on combinations of columns to those of ensemble models based on combinations of rows. In Figures 6 to 8 , the x-axis represents the number of rows or columns taken into account.",
"cite_spans": [],
"ref_spans": [
{
"start": 84,
"end": 93,
"text": "Figure 5)",
"ref_id": null
},
{
"start": 329,
"end": 344,
"text": "Figures 6 to 8",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "For instance, the first two boxplots compute the metric distributions of ensembles taking either one row or one column; the following two boxplots take a combination of either two rows or two columns, and so on, up to combinations of 9 rows/columns.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "More precisely, the first pink boxplot composes an ensemble taking one column of models: all the BERT base models to begin with, then all the BERT large models, and so on until it computes the metrics for an ensemble composed of all the XLNet large models. Then, the second pink boxplot takes compositions of 2 columns: for example, it first computes an ensemble with all the BERT base models and all the BERT large models, then another with all the BERT base models and all the XLNet base models, and so on until it computes an ensemble containing all the XLNet base and XLNet large models. On the other hand, the blue boxplots compose ensembles with combinations of rows. This means that the first blue boxplot first computes an ensemble composed of the first row (BERT base 1 , BERT large 1 , . . . , XLNet base 1 , XLNet large 1 ), then of the second row (BERT base 2 , BERT large 2 , . . . , XLNet base 2 , XLNet large 2 ), and so on until it computes an ensemble with the last row (BERT base 10 , BERT large 10 , . . . , XLNet base 10 , XLNet large 10 ). In the same manner, the second blue boxplot computes ensembles composed of combinations of two rows: first, all the models in the first and second rows, then all the models in the first and third rows, and so on until it computes an ensemble composed of all the models of the last two rows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
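Under our reading of this setup, the column and row ensembles can be enumerated exhaustively with itertools, since combinations of a 10-element set are small; the grid and model identifiers below are illustrative, not the authors' code.

```python
from itertools import combinations

# models[i][j] = i-th fine-tuning run (row) of the j-th pretrained model
# (column): a 10x10 grid as described above, with made-up names.
N = 10
models = [[f"model_r{i}_c{j}" for j in range(N)] for i in range(N)]

def column_ensembles(k):
    """All ensembles built from k pretrained models (k columns, all rows)."""
    for cols in combinations(range(N), k):
        yield [models[i][j] for i in range(N) for j in cols]

def row_ensembles(k):
    """All ensembles built from k fine-tuning runs (k rows, all columns)."""
    for rows in combinations(range(N), k):
        yield [models[i][j] for i in rows for j in range(N)]
```

The worst case, k = 5, gives C(10, 5) = 252 ensembles per strategy, so no sampling is needed here.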
{
"text": "In this setting, as the maximum number of possible combinations of rows is C(10, 5) = 252, we were able to compute all the possible combinations instead of sampling them. As we have the same number of pretrained models as fine-tuned versions, we end up with the same number of possible combinations of ensembles. (Figure 6 : Micro F1-score distribution using ensembles composed of either 1 to 9 different pretrained models, each with 10 different fine-tunings, vs. 1 to 9 different fine-tunings using all the pretrained models.) That being said, for each number of models taken into account in an ensemble, this allows us to compare the pink boxplot with the blue one in a more convenient manner.",
"cite_spans": [],
"ref_spans": [
{
"start": 287,
"end": 295,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "It is worth noting that the more we increase the number of columns and rows present in an ensemble model, the more models they share. For example, at 9, the pink boxplot shows the distribution of the metrics for all the possible ensemble models containing 9 columns of models (90 models out of 100), while the blue boxplot shows the same metrics for 9 rows of models (also 90 models out of 100). At this point, it is expected to see both boxplots converging, as they both share 64 model predictions out of 90.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "Focusing on the left part of Figure 6 , we clearly see the benefits of using more pretrained models. First, it shows better results with an ensemble of only 10 different pretrained models. It also looks steadier, as the F1-score distribution is much narrower than that of the ensemble composed of multiple fine-tunings of the same pretrained model. When looking at Figure 7 , we see that the major difference between the two distributions is the variance of the recall distributions; indeed, taking different pretrained models tends to retrieve important passages more systematically. The trend of both selection strategies seems to be increasing; in other words, in both cases, the more models we add, the more important passages we retrieve.",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 6",
"ref_id": null
},
{
"start": 365,
"end": 373,
"text": "Figure 7",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "Finally, it is interesting to see in Figure 8 that the precision begins quite high and tends to decrease as we add more fine-tuned models. Conversely, when taking more pretrained models, the precision seems to have a positive relation to the number of models we use. As explained before, this relation is also due to the fact that the two ensemble selection strategies share more and more models.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 8",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "This analysis helped us understand a bit more about what was happening behind our majority-vote strategy; it would be interesting to take note of some of the observed behaviours and try to devise new strategies accordingly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble results analysis",
"sec_num": "5.1"
},
{
"text": "The official results in terms of precision, recall, and F1-score on the test 2020 set are shown in Table 5 . Each team was allowed to submit only one run. Our submitted run was based on the ensemble model described in Sections 4.2 and 4.3. Our BiTeM team achieved the highest precision score in both exact match and partial match evaluation, reaching 84.73% and 88.72%, respectively, as well as the best F1-score in exact match evaluation, reaching 77.99% among 13 teams. The F1-score of our model in partial match (81.67%) was slightly lower than the best F1-score (81.75%).",
"cite_spans": [],
"ref_spans": [
{
"start": 93,
"end": 100,
"text": "Table 5",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Official results",
"sec_num": "5.2"
},
{
"text": "The precision, recall, and F1-score results for all entities and Action on the test 2020 set in the exact match evaluation are presented in Table 6 . The best F1-score was achieved for pH. Size was the most difficult entity to detect. Figure 9 shows the normalized confusion matrix for the predictions (exact match) of the ensemble model on the test 2020 data. As we can see, more than 78% of Size predictions are mislabelled as Amount. This can be due to the small number of training instances of the Size entity. As we can see in the following examples, 50 mL can refer to both Size and Amount depending on the context. In the first example, 50 mL refers to Amount and in the second example, it refers to Size.",
"cite_spans": [],
"ref_spans": [
{
"start": 138,
"end": 145,
"text": "Table 6",
"ref_id": "TABREF8"
},
{
"start": 239,
"end": 247,
"text": "Figure 9",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Results of the ensemble model on test 2020 data",
"sec_num": "5.3"
},
{
"text": "Example 5.4.1 Add more NEB -no \u03b2-mercaptoethanol to final volume of 50 mL. About 17% of the Device predictions are mislabelled as Location, which can be due to inconsistencies in the annotation process; for example, magnetic rack is annotated as Device in a few protocols (protocol 0680, protocol 0683, protocol 0685), and as Location in others (protocol 32148, protocol 33630). Here are two examples of magnetic rack annotated as Location and Device, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.4"
},
{
"text": "Example 5.4.3 Place samples on magnetic rack, and incubate for 5 mins on the rack. Remove supernatant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.4"
},
{
"text": "Example 5.4.4 Place the tube on a magnetic rack.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.4"
},
{
"text": "Similarly, freezer is annotated interchangeably as Location and Device. Generic-Measure is mostly confused with the Concentration label (20.4%), and Method is mostly confused with Action. About 12% of Numerical predictions are labelled as Concentration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error analysis",
"sec_num": "5.4"
},
{
"text": "With almost no preprocessing, we have seen that current pretrained language models can be quite effective for NER tasks (Copara et al., 2020a,b). By analysing our voting strategy, we have also demonstrated the strengths as well as the weaknesses of such ensemble models. For instance, it appears that the more models we use, the higher and more stable the performance tends to be; however, it also appears that adding a new pretrained model brings more information than fine-tuning the same pretrained model again with a new random initialization of the fully connected weights. With this voting strategy, our submission achieved the best exact match overall F1-score of the competition. This clearly shows the power of such models. As almost no knowledge of wet laboratory protocols was required, we think that these models open opportunities for out-of-field researchers.",
"cite_spans": [
{
"start": 126,
"end": 150,
"text": "(Copara et al., 2020a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "In future work, it would be interesting to refine the selection of pretrained models and to explore bootstrapping instead of fine-tuning the same pretrained model multiple times. It would also be interesting to see whether some preprocessing tweaks could improve the detection performance for Speed, where our models were outperformed by the baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "http://noisy-text.github.io/2020/ wlp-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Method. Our models essentially focus on transformer-like (Vaswani et al., 2017) language models that we fine-tuned on the NER task by adding a fully connected layer on top of the token representations. 2 Protocol 621 (in the development set) is a duplicate of protocol 570 (in the train set), but their labels do not totally match.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/jeniyat/WNUT_2020_ NER/tree/master/code/baseline_CRF",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Publicly Available Clinical BERT Embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jindi",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Work- shop, pages 72-78.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "SciB-ERT: A Pretrained Language Model for Scientific Text",
"authors": [
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Arman",
"middle": [],
"last": "Cohan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3606--3611",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciB- ERT: A Pretrained Language Model for Scientific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3606-3611.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rencontre des\u00c9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R\u00c9CITAL, 22e\u00e9dition)",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Copara",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Knafou",
"suffix": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Claudia",
"middle": [],
"last": "Moro",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ruch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Teodoro",
"suffix": ""
}
],
"year": 2020,
"venue": "Actes de la 6e conf\u00e9rence conjointe Journ\u00e9es d'\u00c9tudes sur la Parole (JEP, 33e\u00e9dition)",
"volume": "",
"issue": "",
"pages": "36--48",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Copara, Julien Knafou, Nona Naderi, Claudia Moro, Patrick Ruch, and Douglas Teodoro. 2020a. Contextualized French language models for biomed- ical named entity recognition. In Actes de la 6e conf\u00e9rence conjointe Journ\u00e9es d'\u00c9tudes sur la Pa- role (JEP, 33e\u00e9dition), Traitement Automatique des Langues Naturelles (TALN, 27e\u00e9dition), Rencon- tre des\u00c9tudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (R\u00c9CITAL, 22e\u00e9dition). Atelier D\u00c9fi Fouille de Textes, pages 36-48, Nancy, France. ATALA et AFCP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Named entity recognition in chemical patents using ensemble of contextual language models",
"authors": [
{
"first": "Jenny",
"middle": [],
"last": "Copara",
"suffix": ""
},
{
"first": "Nona",
"middle": [],
"last": "Naderi",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Knafou",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Ruch",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Teodoro",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2007.12569"
]
},
"num": null,
"urls": [],
"raw_text": "Jenny Copara, Nona Naderi, Julien Knafou, Patrick Ruch, and Douglas Teodoro. 2020b. Named en- tity recognition in chemical patents using ensem- ble of contextual language models. arXiv preprint arXiv:2007.12569.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping",
"authors": [
{
"first": "Jesse",
"middle": [],
"last": "Dodge",
"suffix": ""
},
{
"first": "Gabriel",
"middle": [],
"last": "Ilharco",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "Farhadi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2002.06305"
]
},
"num": null,
"urls": [],
"raw_text": "Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, and Noah Smith. 2020. Fine-tuning pretrained language models: Weight initializations, data orders, and early stop- ping. arXiv preprint arXiv:2002.06305.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Jianfeng Gao, and Hoifung Poon. 2020. Domainspecific language model pretraining for biomedical natural language processing",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tinn",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Lucas",
"suffix": ""
},
{
"first": "Naoto",
"middle": [],
"last": "Usuyama",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain- specific language model pretraining for biomedical natural language processing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Don't stop pretraining: Adapt language models to domains and tasks",
"authors": [
{
"first": "Suchin",
"middle": [],
"last": "Gururangan",
"suffix": ""
},
{
"first": "Ana",
"middle": [],
"last": "Marasovi\u0107",
"suffix": ""
},
{
"first": "Swabha",
"middle": [],
"last": "Swayamdipta",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Iz",
"middle": [],
"last": "Beltagy",
"suffix": ""
},
{
"first": "Doug",
"middle": [],
"last": "Downey",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A."
],
"last": "Smith",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suchin Gururangan, Ana Marasovi\u0107, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Probing biomedical embeddings from language models",
"authors": [
{
"first": "Qiao",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Bhuwan",
"middle": [],
"last": "Dhingra",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Xinghua",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP",
"volume": "",
"issue": "",
"pages": "82--89",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qiao Jin, Bhuwan Dhingra, William Cohen, and Xinghua Lu. 2019. Probing biomedical embeddings from language models. In Proceedings of the 3rd Workshop on Evaluating Vector Space Representa- tions for NLP, pages 82-89.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P."
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "An annotated corpus for machine reading of instructions in wet lab protocols",
"authors": [
{
"first": "Chaitanya",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Raghu",
"middle": [],
"last": "Machiraju",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "97--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chaitanya Kulkarni, Wei Xu, Alan Ritter, and Raghu Machiraju. 2018. An annotated corpus for machine reading of instructions in wet lab protocols. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 2 (Short Papers), pages 97-106, New Orleans, Louisiana. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Neural architectures for named entity recognition",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Miguel",
"middle": [],
"last": "Ballesteros",
"suffix": ""
},
{
"first": "Sandeep",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Kazuya",
"middle": [],
"last": "Kawakami",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "260--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BioBERT: a pretrained biomedical language representation model for biomedical text mining",
"authors": [
{
"first": "Jinhyuk",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wonjin",
"middle": [],
"last": "Yoon",
"suffix": ""
},
{
"first": "Sungdong",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Donghyeon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Sunkyu",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Chan",
"middle": [],
"last": "Ho So",
"suffix": ""
},
{
"first": "Jaewoo",
"middle": [],
"last": "Kang",
"suffix": ""
}
],
"year": 2019,
"venue": "Bioinformatics",
"volume": "36",
"issue": "4",
"pages": "1234--1240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre- trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "WNUT-2020 Task 1 Overview: Extracting Entities and Relations from Wet Lab Protocols",
"authors": [
{
"first": "Jeniya",
"middle": [],
"last": "Tabassum",
"suffix": ""
},
{
"first": "Sydney",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeniya Tabassum, Sydney Lee, Wei Xu, and Alan Rit- ter. 2020. WNUT-2020 Task 1 Overview: Extract- ing Entities and Relations from Wet Lab Protocols. In Proceedings of EMNLP 2020 Workshop on Noisy User-generated Text (WNUT).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "XLNet: Generalized Autoregressive Pretraining for Language Understanding",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5753--5763",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in neural in- formation processing systems, pages 5753-5763.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "An example of the data.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Micro F 1 -score distribution by number of models used in an ensemble (sample size of 1000) on the test set.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Micro recall distribution by number of models used in an ensemble (sample size of 1000) on the test set.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Micro precision distribution by number of models used in an ensemble (sample size of 1000) on the test set. \uf8ee \uf8ef \uf8ef \uf8f0 BERT base 1 BERT large 1 \u2022\u2022\u2022 XLNet base 1 XLNet large 1 BERT base 2 BERT large 2 \u2022\u2022\u2022 XLNet base 2 XLNet large 2 base 9 BERT large 9 \u2022\u2022\u2022 XLNet base 9 XLNet large 9 BERT base 10 BERT large 10 \u2022\u2022\u2022 XLNet base 10 XLNet large 10 Matrix where each column represents a pretrained model and each row represents a fine-tuned model with a new random initialization of the fully connected layer.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Micro recall distribution using ensemble composed of either 1 to 9 different pretrained models (each time with 10 different fine-tuning) vs. 1 to 9 different fine-tuning using all the pretrained models.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Micro precision distribution using ensemble composed of either 1 to 9 different pretrained models (each time with 10 different fine-tuning) vs. 1 to 9 different fine-tuning using all the pretrained models.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Transfer the aqueous phase to a.new 50 mL Falcon tube.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF7": {
"text": "Normalized Confusion matrix for the ensemble model on the test 2020 data.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Number of protocols in WNUT-NER dataset.",
"content": "<table><tr><td>Entity</td><td>Train Count</td><td colspan=\"2\">Dev % Count</td><td colspan=\"2\">Test % Count</td><td colspan=\"3\">Test 2020 % Count</td><td>%</td></tr><tr><td>Action</td><td colspan=\"2\">12,355 25.91</td><td colspan=\"2\">4,011 25.49</td><td colspan=\"2\">4,138 25.32</td><td colspan=\"3\">5,346 23.04</td></tr><tr><td>Amount</td><td>3,432</td><td>7.20</td><td>1,090</td><td>6.93</td><td>1,190</td><td>7.28</td><td>1,223</td><td colspan=\"2\">5.27</td></tr><tr><td>Concentration</td><td>1,330</td><td>2.79</td><td>422</td><td>2.68</td><td>535</td><td>3.27</td><td>701</td><td colspan=\"2\">3.02</td></tr><tr><td>Device</td><td>1,752</td><td>3.67</td><td>616</td><td>3.92</td><td>468</td><td>2.86</td><td>888</td><td colspan=\"2\">3.83</td></tr><tr><td>Generic-Measure</td><td>484</td><td>1.02</td><td>132</td><td>0.84</td><td>143</td><td>0.87</td><td>173</td><td colspan=\"2\">0.75</td></tr><tr><td>Location</td><td>3,921</td><td>8.23</td><td>1,396</td><td>8.87</td><td>1,326</td><td>8.11</td><td>1,657</td><td colspan=\"2\">7.14</td></tr><tr><td>Measure-Type</td><td>857</td><td>1.79</td><td>324</td><td>2.06</td><td>272</td><td>1.66</td><td>720</td><td colspan=\"2\">3.10</td></tr><tr><td>Mention</td><td>257</td><td>0.54</td><td>83</td><td>0.53</td><td>56</td><td>0.34</td><td>142</td><td colspan=\"2\">0.61</td></tr><tr><td>Method</td><td>1,597</td><td>3.36</td><td>538</td><td>3.43</td><td>581</td><td>3.56</td><td>1,059</td><td colspan=\"2\">4.56</td></tr><tr><td>Modifier</td><td>4,588</td><td>9.62</td><td>1,547</td><td>9.83</td><td>1,601</td><td>9.79</td><td colspan=\"3\">3,416 14.72</td></tr><tr><td>Numerical</td><td>832</td><td>1.75</td><td>259</td><td>1.65</td><td>231</td><td>1.41</td><td>513</td><td colspan=\"2\">2.21</td></tr><tr><td>Reagent</td><td colspan=\"2\">11,121 23.33</td><td colspan=\"2\">3,594 22.93</td><td colspan=\"2\">3,995 24.44</td><td colspan=\"3\">5,012 
21.60</td></tr><tr><td>Seal</td><td>210</td><td>0.44</td><td>92</td><td>0.58</td><td>64</td><td>0.39</td><td>119</td><td colspan=\"2\">0.51</td></tr><tr><td>Size</td><td>262</td><td>0.55</td><td>123</td><td>0.78</td><td>113</td><td>0.69</td><td>232</td><td colspan=\"2\">1.00</td></tr><tr><td>Speed</td><td>626</td><td>1.31</td><td>239</td><td>1.52</td><td>167</td><td>1.02</td><td>238</td><td colspan=\"2\">1.03</td></tr><tr><td>Temperature</td><td>1,592</td><td>3.34</td><td>486</td><td>3.09</td><td>532</td><td>3.25</td><td>744</td><td colspan=\"2\">3.21</td></tr><tr><td>Time</td><td>2,396</td><td>5.02</td><td>745</td><td>4.74</td><td>870</td><td>5.32</td><td>951</td><td colspan=\"2\">4.10</td></tr><tr><td>pH</td><td>67</td><td>0.14</td><td>37</td><td>0.23</td><td>62</td><td>0.38</td><td>66</td><td colspan=\"2\">0.28</td></tr><tr><td>Total</td><td>47,679</td><td/><td>15,734</td><td/><td>16,344</td><td/><td>23,200</td><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF2": {
"text": "",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF4": {
"text": "F 1 -score by model on the test set. We reported averages across the 10 random seeds for all the pretrained model results, for those, subscripts represents the standard deviations. The improvements of the ensemble model over the best performing transformer model is shown in parentheses in the ensemble column.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF5": {
"text": "63.93 70.25 84.85 69.59 76.46 BIO-BIO 78.49 71.06 74.59 83.16 75.29 79.03 BiTeM 84.73 72.25 77.99 88.72 75.66 81.67 DSC-IITISM 64.20 57.07 60.42 68.52 60.90 64.49 Fancy Man 76.21 71.76 73.92 81.15 76.41 78.71 IBS 74.26 62.55 67.90 79.72 67.15 72.89",
"content": "<table><tr><td>Team Name</td><td>P</td><td>Exact Match R</td><td>F 1</td><td>P</td><td>Partial Match R</td><td>F 1</td></tr><tr><td colspan=\"7\">B-NLP 77.95 Kabir 78.79 72.20 75.35 83.73 76.73 80.08</td></tr><tr><td>KaushikAcharya</td><td colspan=\"6\">73.68 63.98 68.48 79.31 68.87 73.73</td></tr><tr><td>mahab</td><td colspan=\"6\">50.19 52.96 51.54 55.09 58.14 56.57</td></tr><tr><td>mgsohrab</td><td colspan=\"6\">83.69 70.62 76.60 87.95 74.22 80.50</td></tr><tr><td colspan=\"7\">PublishInCovid19 81.36 74.12 77.57 85.74 78.11 81.75</td></tr><tr><td>SudeshnaTCS</td><td colspan=\"6\">74.99 71.43 73.16 79.73 75.95 77.80</td></tr><tr><td>IITKGP</td><td colspan=\"6\">77.00 72.93 74.91 81.76 77.43 79.54</td></tr></table>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF6": {
"text": "Official results on Test 2020.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF8": {
"text": "The precision, recall, and F 1 -score of the ensemble model for all the entities and action on the test 2020.",
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}