{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:39:00.206527Z"
},
"title": "Differential Evaluation: a Qualitative Analysis of Natural Language Processing System Behavior Based Upon Data Resistance to Processing",
"authors": [
{
"first": "Lucie",
"middle": [],
"last": "Gianola",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Hicham",
"middle": [],
"last": "El Boukkouri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Grouin",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Paroubek",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LISN",
"location": {
"postCode": "91405",
"settlement": "Orsay",
"country": "France"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most of the time, when dealing with a particular Natural Language Processing task, systems are compared on the basis of global statistics such as recall, precision, F1-score, etc. While such scores provide a general idea of the behavior of these systems, they ignore a key piece of information that can be useful for assessing progress and discerning remaining challenges: the relative difficulty of test instances. To address this shortcoming, we introduce the notion of differential evaluation which effectively defines a pragmatic partition of instances into gradually more difficult bins by leveraging the predictions made by a set of systems. Comparing systems along these difficulty bins enables us to produce a finergrained analysis of their relative merits, which we illustrate on two use-cases: a comparison of systems participating in a multi-label text classification task (CLEF eHealth 2018 ICD-10 coding), and a comparison of neural models trained for biomedical entity detection (BioCreative V chemical-disease relations dataset).",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Most of the time, when dealing with a particular Natural Language Processing task, systems are compared on the basis of global statistics such as recall, precision, F1-score, etc. While such scores provide a general idea of the behavior of these systems, they ignore a key piece of information that can be useful for assessing progress and discerning remaining challenges: the relative difficulty of test instances. To address this shortcoming, we introduce the notion of differential evaluation which effectively defines a pragmatic partition of instances into gradually more difficult bins by leveraging the predictions made by a set of systems. Comparing systems along these difficulty bins enables us to produce a finergrained analysis of their relative merits, which we illustrate on two use-cases: a comparison of systems participating in a multi-label text classification task (CLEF eHealth 2018 ICD-10 coding), and a comparison of neural models trained for biomedical entity detection (BioCreative V chemical-disease relations dataset).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The analysis of NLP system results has mainly focused on evaluation scores meant to rank systems and feed leaderboards. In tasks such as information extraction, text classification, etc., evaluation generally relies on the comparison of a hypothesis (typically a system output) with a gold standard, generally produced through manual annotation. Since the MUC-6 conference (Grishman and Sundheim, 1996) , the metrics used were created for information retrieval (Cleverdon, 1960) : recall (true positive rate), precision (positive predictive value) and their harmonic (possibly weighted) mean, the F1score. Evaluation scripts are widely available nowadays, for instance those of the CoNLL shared tasks (Tjong Kim Sang and De Meulder, 2003) . These scripts rely on an annotation scheme based on the BIO prefix used to specify whether a token is at the beginning, inside or outside of an annotation span, making it a de facto standard for NER evaluation (Nadeau and Sekine, 2007) . Many other NLP tasks have developed or used their own metrics, such as accuracy for classification, BLEU (Papineni et al., 2002) for machine translation, ROUGE for machine translation and text summarization (Lin, 2004) , word error rate for automatic speech recognition, etc. While evaluation is the key step in shared tasks, developers also need to evaluate the performance of their systems for feature selection or architecture design choices, especially when several systems are combined (Jiang et al., 2016) .",
"cite_spans": [
{
"start": 373,
"end": 402,
"text": "(Grishman and Sundheim, 1996)",
"ref_id": "BIBREF7"
},
{
"start": 461,
"end": 478,
"text": "(Cleverdon, 1960)",
"ref_id": "BIBREF1"
},
{
"start": 701,
"end": 738,
"text": "(Tjong Kim Sang and De Meulder, 2003)",
"ref_id": "BIBREF17"
},
{
"start": 951,
"end": 976,
"text": "(Nadeau and Sekine, 2007)",
"ref_id": "BIBREF13"
},
{
"start": 1084,
"end": 1107,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF15"
},
{
"start": 1186,
"end": 1197,
"text": "(Lin, 2004)",
"ref_id": "BIBREF12"
},
{
"start": 1470,
"end": 1490,
"text": "(Jiang et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, scores only are insufficient to capture the behavior of systems and to provide a finergrained analysis of their pros and cons. Indeed, though widely used, scores are not free of imperfections, as demonstrated by Peyrard et al. (2021) who discuss the use of the average to aggregate evaluation scores. They show that very different system behaviors can yield similar scores when using the average and suggest an alternative aggregation mechanism. Some researchers also call for going beyond performance scores: Ethayarajh and Jurafsky (2020) suggest that performance-based evaluation (as promoted by leaderboards) overlooks aspects such as utility, prediction cost, and robustness of models. They recommend considering the point of view of the user of models rather than just performance scores to estimate their relevance.",
"cite_spans": [
{
"start": 221,
"end": 242,
"text": "Peyrard et al. (2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Trying to provide a finer understanding of the issues raised by the input text and of the limitations of the evaluated systems, we propose a new qualitative analysis method that takes into account the observed relative difficulty of predicting gold labels for each input. This difficulty is assessed pragmatically based upon the number of systems that predict a gold label (a true positive) for a given input. As a qualitative method, its aim is not to compute an evaluation measure nor to rank systems, but Figure 1 : Example input file for a set of six systems. 1 means the system yielded a true positive for the instance, and 0 means it did not (the instance was 'missed').",
"cite_spans": [],
"ref_spans": [
{
"start": 508,
"end": 516,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "instead to obtain an overview of how different systems achieve the task, and thus understand where their strengths and weaknesses are.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After explaining how the method works globally (Section 2), we illustrate it with data from two shared tasks from the biomedical domain, one for multi-label classification and another for named entity recognition (Section 3), then discuss a few points and directions for future investigation (Section 4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our qualitative analysis method, which we call differential evaluation 1 , globally considers the various sets of correct instances ('true positives', or 'gold instances') that were discovered by a set of systems. Since the aim of the method is not to produce a ranking, the considered systems can be different systems performing the same task, as in a shared task for example, or different versions of the same system also performing a given task, as in a development context. As input, the algorithm takes a matrix of instances and systems, as shown in Figure 1 . For each instance, it then computes how many systems discovered it correctly (i.e., in Figure 1 , '762_levodopa' has been discovered by 6 systems, '1034_cyp' has been discovered by 4 systems, etc.) This enables it to compute then how many instances have been detected by all systems, by all systems but one, by all systems but two, etc., and by no system at all. This yields a grouping of instances into bins depending on the number of systems that discovered them. There are as many bins as there are systems plus one for the set of instances that were discovered by none of the systems. Bin-1 is the set of instances detected by exactly one system, bin-2 the set of instances detected by exactly two systems, etc.; and bin-0, the set of instances that no system was able to detect (see Section 3 for illustrated examples). Figure 2 shows the composition of bin-5 in a case where six systems are compared, and displays the percentage coverage of the bin for each system. Figure 3 shows a schema of the global scenario of the method.",
"cite_spans": [],
"ref_spans": [
{
"start": 555,
"end": 563,
"text": "Figure 1",
"ref_id": null
},
{
"start": 653,
"end": 661,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1391,
"end": 1399,
"text": "Figure 2",
"ref_id": "FIGREF0"
},
{
"start": 1538,
"end": 1546,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Differential evaluation: highlighting the 'difficulty' of examples",
"sec_num": "2"
},
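The binning procedure described above can be sketched in a few lines of Python. This is our own minimal illustration, not the authors' code; the instance names follow Figure 1 and the 0/1 flags are hypothetical.

```python
from collections import defaultdict

# Toy true-positive matrix in the spirit of Figure 1 (hypothetical values):
# for each gold instance, a 0/1 flag per system (1 = true positive produced).
matrix = {
    "762_levodopa": {"A": 1, "B": 1, "C": 1, "D": 1, "E": 1, "F": 1},
    "1034_cyp":     {"A": 1, "B": 0, "C": 1, "D": 1, "E": 1, "F": 0},
    "88_aspirin":   {"A": 0, "B": 0, "C": 0, "D": 0, "E": 0, "F": 0},
}

def make_bins(matrix):
    """Group instances into bin-k, where k is the number of systems that
    discovered the instance; bin-0 holds instances missed by all systems."""
    bins = defaultdict(list)
    for instance, flags in matrix.items():
        bins[sum(flags.values())].append(instance)
    return dict(bins)

bins = make_bins(matrix)
```

With six systems this yields up to seven bins (bin-0 through bin-6); here '762_levodopa' lands in bin-6, '1034_cyp' in bin-4, and '88_aspirin' in bin-0.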
{
"text": "Instances in bin-N (where N is the number of considered systems), which holds the set of entities discovered by all systems, can be considered as the easiest to predict, while instances in bin-0, which holds the set of entities that no system was able to detect, can be seen as the most difficult. More- Table 1 and Figure 4 respectively. System contributions to a bin can have a null intersection: i.e. here, in bin-4, Systems B and C may be yielding TPs for totally different sets of instances. Bins 1, 2 and 3 omitted for conciseness.",
"cite_spans": [],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 316,
"end": 324,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Differential evaluation: highlighting the 'difficulty' of examples",
"sec_num": "2"
},
{
"text": "over, bin-1, which holds instances discovered by a single system, can be seen as the bin holding the singular contribution of each system. As such, bin-1 is particularly interesting when considering system combination architectures or ROVER-like performance measures (Fiscus, 1997) . Figure 4 presents one of the outputs of the method, a heatmap of percentages of system TPs relative to the total number of instances in each bin, in this case for the CLEF eHealth 2018 ICD-10 coding task for Italian (we analyse this example in detail in Section 3.1.1). The first column on the left is bin-0, holding only 0 values as we have said that bin-0 is the bin of instances missed by all systems (as shown by Table 1 , here 305 instances were missed by all systems). The second column from the left holds bin-1, and so on. Another output of the method is the table of absolute values corresponding to the percentages heatmap, such as Table 1 . It would then be interesting to investigate whether a pattern emerges concerning the linguistic nature of instances, which would help to chart the difficulty of the task, and complete the qualitative aspect of the analysis.",
"cite_spans": [
{
"start": 267,
"end": 281,
"text": "(Fiscus, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 284,
"end": 292,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 701,
"end": 708,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 926,
"end": 933,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Differential evaluation: highlighting the 'difficulty' of examples",
"sec_num": "2"
},
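The heatmap values of Figures 2 and 4 (per-bin coverage percentages per system) can be computed directly from the same true-positive matrix. The sketch below is our own, with made-up instance and system names; by construction every system covers 100% of bin-N and 0% of bin-0.

```python
from collections import defaultdict

# Toy true-positive matrix: two systems, four gold instances (hypothetical).
matrix = {
    "i1": {"A": 1, "B": 1},
    "i2": {"A": 0, "B": 1},
    "i3": {"A": 1, "B": 1},
    "i4": {"A": 0, "B": 0},
}

def coverage_by_bin(matrix):
    """Return {k: {system: percentage of bin-k instances found by that system}}."""
    bins = defaultdict(list)
    for flags in matrix.values():
        bins[sum(flags.values())].append(flags)
    systems = list(next(iter(matrix.values())))
    return {k: {s: 100.0 * sum(f[s] for f in rows) / len(rows) for s in systems}
            for k, rows in bins.items()}

cov = coverage_by_bin(matrix)
```

Rendering `cov` as a systems-by-bins grid, with bins ordered from bin-0 to bin-N, reproduces the heatmap layout used in the paper.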
{
"text": "In this section, we present insights that can be drawn from the use of differential evaluation on data related to two shared tasks addressing respectively multi-label text classification and named entity recognition, both in the biomedical domain. Note that our algorithm processes the systems in the order in which they are presented and that it is not intended to create a new ranking of the systems, but rather to provide more fine-grained information to analyze how a given system has performed or achieved its ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We show as an example the output obtained in the comparison of systems in a multi-label text classification task in Italian and Hungarian (N\u00e9v\u00e9ol et al., 2018) . In the gold standard, each input text is associated to one or more true labels, i.e., codes in the International Classification of Diseases (ICD-10). A true positive system prediction is an association between a given input text and one of the true labels for this text in the gold standard. In this dataset, Systems bin-0 bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin-11 Total the evaluation method therefore compares label attribution rather than entities.",
"cite_spans": [
{
"start": 138,
"end": 159,
"text": "(N\u00e9v\u00e9ol et al., 2018)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CLEF eHealth 2018 ICD-10 coding",
"sec_num": "3.1"
},
{
"text": "Eleven systems were examined for Italian, and 15,534 labels were to be discovered. Some of the teams that participated in the shared task submitted two runs for variants of their base system, hence names such as B1 and B2 when two systems are submitted by the same team in Figure 4 and other tables or figures. As shown in Table 1 , bin-0 holds 305 labels found by none of the systems. Bin-1 holds 185 labels found by exactly one system, among which System A discovered 21 labels, System B1 discovered 69 labels, and so on. Bin-11 holds 3,800 labels found by all eleven systems. Figure 4 and Table 1 show bin repartition with percentages and absolute values. In Table 1 , column \"Total TPs per system\" presents the total number of labels found per system, and row \"Total per bin\" contains the total number of labels to be found. We use color codes to highlight the best/worst system for each bin. Performances are pretty steady, with System B1 outperforming all the others in every bin. The worst results are shared by Systems C1 and C2, and System A that performs badly for bins-8 and 10, which are among the \"easiest\" bins. As seen in Table 1 , although System E2 scores the worst for bin-1 with only two labels discovered, it manages to keep up with the performances of the other systems in the other bins, and its global performance (12,884 total TPs discovered) is pretty average. On the other hand, Systems C1 and C2, which are the worst systems across all bins, are not so bad globally with 9,877 and 9,572 total TPs. In fact, System A achieves a very low performance on two of the \"easiest\" bins, and thus yields less than half of the total labels, despite a not so bad performance on bin-1. Figure 4 shows that systems can be divided into groups of better and worse performances (B1, B2, D1, D2, E1, E2 vs. A, C1, C2, F1, F2). We can also see that System B1 reaches a perfect score over all easier bins up to bin-8, which hints at its being robust on easy instances. 
Figure 5 and Table 2 show the proportion and number of detected labels per system within each bin for the Hungarian language 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 273,
"end": 281,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 323,
"end": 330,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 579,
"end": 587,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 592,
"end": 599,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 662,
"end": 669,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1137,
"end": 1144,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1700,
"end": 1708,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 1976,
"end": 1984,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 1989,
"end": 1996,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Italian",
"sec_num": "3.1.1"
},
{
"text": "As highlighted by colors in Table 2 , we can see that globally, Systems G1 and G2 perform the best, and Systems K1 and K2 perform the worst.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Hungarian",
"sec_num": "3.1.2"
},
{
"text": "Just above K1 and K2 in terms of Total TPs per system (Table 2) , System J is the worst at detecting labels from bin-8 (see also Figure 5 ), which can Systems bin-0 bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 Total Figure 6 : Proportion of labels discovered by exactly one system, per system for Hungarian. be considered \"easy\" labels, with a very low proportion of 24% when all other systems are above 90%. In contrast however, it detects the largest number of labels in bin-1 (see also Figure 6 ). This is the only case where System G1 is significantly outperformed. System J is therefore good at detecting some \"difficult\" labels. This is a strong indicator that this system is likely to use a method that is quite different from the other systems and might bring complementary expertise on some inputs, which deserves further investigation.",
"cite_spans": [],
"ref_spans": [
{
"start": 54,
"end": 63,
"text": "(Table 2)",
"ref_id": "TABREF3"
},
{
"start": 129,
"end": 137,
"text": "Figure 5",
"ref_id": "FIGREF3"
},
{
"start": 225,
"end": 233,
"text": "Figure 6",
"ref_id": null
},
{
"start": 498,
"end": 506,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hungarian",
"sec_num": "3.1.2"
},
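The bin-1 analysis above (each system's "singular contribution") can also be sketched from the true-positive matrix. This is our own illustration with hypothetical instance names, not the paper's code.

```python
# Toy true-positive matrix: which of three systems found each gold label
# (hypothetical values; system names echo Section 3.1.2).
matrix = {
    "doc1_code": {"G1": 1, "J": 0, "K1": 0},
    "doc2_code": {"G1": 0, "J": 1, "K1": 0},
    "doc3_code": {"G1": 0, "J": 1, "K1": 0},
    "doc4_code": {"G1": 1, "J": 1, "K1": 1},
}

def singular_contributions(matrix):
    """Map each system to the bin-1 instances it alone discovered."""
    out = {s: [] for s in next(iter(matrix.values()))}
    for inst, flags in matrix.items():
        if sum(flags.values()) == 1:
            winner = next(s for s, v in flags.items() if v == 1)
            out[winner].append(inst)
    return out

sc = singular_contributions(matrix)
```

A balanced spread of bin-1 instances across systems, as observed for Hungarian, means every system contributes some labels that no other system finds.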
{
"text": "Another perspective comes from looking at the overall performance for labels from bin-1, which, contrary to the example of Italian where most of bin-1 is yielded by four systems among eleven, is distributed in a more balanced way among systems. This means that labels from bin-1 are not yielded by one unique system that would be outperforming all the others, but that every system makes an important contribution to this bin ( Figure 6 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 428,
"end": 436,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hungarian",
"sec_num": "3.1.2"
},
{
"text": "The BioCreative V chemical-disease relation (CDR) task is originally a relation extraction task . Its data can also be used to train and evaluate entity-detection systems for chemical and disease entities, which is what we examine here. The dataset is made of 1,500 PubMed abstracts of scientific papers, divided equally into training, development and test. In the gold standard, each input token is associated to one true label and named entities are encoded according to the BIO (begin, inside, outside) scheme. In the present work we deal with tokens rather than entities, so that we can apply the presented method directly. We consider that 'O' labels are negatives and that all other labels are positives. A true positive system prediction is an association between an input token and a non-'O' label that is the gold-standard label for this token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BioCreative V CDR entities",
"sec_num": "3.2"
},
{
"text": "We are comparing entity detection systems that rely on word embeddings based upon Character-Bert (El Boukkouri et al., 2020) or fastText (Bojanowski et al., 2017), pre-trained on different corpora, either as-is or concatenated with knowledge embeddings learned using node2vec (Grover and Leskovec, 2016) on two biomedical vocabularies (the Medical Suject Headings (MeSH), and SNOMED CT). Moreover, we also consider a variant of CharacterBert where the node2vec embeddings are injected within the model architecture. The fastText embeddings are either randomly initialized, which we note \"fastTextRandom\"; pre-trained on a newswire corpus (Gigaword (Graff et al., 2007) ), which we note \"fastTextGigaword\"; or on medical corpora (PubMed Central 3 and MIMIC-III (Johnson et al., 2016)), which Model bin-0 bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin-11 bin-12 Tot. Table 4 : Absolute values for disease NER. Best performance in green, worst performance in red, orange when the random initialization is above one of the other initializations.",
"cite_spans": [
{
"start": 101,
"end": 124,
"text": "Boukkouri et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 648,
"end": 668,
"text": "(Graff et al., 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 883,
"end": 890,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "BioCreative V CDR entities",
"sec_num": "3.2"
},
{
"text": "we respectively note \"fastTextPubMed\" and \"fast-TextMimic\". The CharacterBert models are either pre-trained on general corpora (English Wikipedia and OpenWebText (Gokaslan and Cohen, 2019) ), which we note \"CharBertGen\"; or pre-trained on general corpora then re-trained on PubMed and MIMIC-III, which we note \"CharBertFromGen\". In all cases the suffix \"N2V\" refers to a concatenation with the node2vec knowledge representations, with the exception of \"Enh.CharBertFromGenN2V\" which refers to the variant of CharacterBERT where the node2vec vectors are injected directly within the architecture. This last model is pretrained on the general corpus then re-trained on PubMed and MIMIC-III in order to be compared with \"CharBertFromGen\". Tables 3 and 4 respectively show absolute values for chemical and disease entity recognition, and Figures 7 and 8 the corresponding bin percentages.",
"cite_spans": [
{
"start": 162,
"end": 188,
"text": "(Gokaslan and Cohen, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 736,
"end": 763,
"text": "Tables 3 and 4 respectively",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "BioCreative V CDR entities",
"sec_num": "3.2"
},
{
"text": "Overall, we can see that the contextual Charac-terBert embeddings perform better than the static fastText vectors in both chemical and disease recognition, with the worst performances for randomly initialized fastText embeddings. Moreover, we see that the CharacterBert models trained on medical data perform better than their general versions (Tables 3 and 4, Figures 7 and 8) , which confirms the interest of retraining the general models on indomain data.",
"cite_spans": [],
"ref_spans": [
{
"start": 361,
"end": 377,
"text": "Figures 7 and 8)",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Global performances and pairwise comparison of models",
"sec_num": "3.2.1"
},
{
"text": "Chemical CharacterBert seems to perform rather similarly regardless of the combination with node2vec embeddings. For fastText models, pairwise comparison in Table 5 shows that the introduction of knowledge embeddings (node2vec) improves recall. Comparison of bins further confirms this observation: we can see that the im-bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin- provement is made on \"easy\" entities (bins 8 through 11). However, for \"fastTextPubMed\" the effect of node2vec is not so clear or even harmful (bins 5 and 6). This phenomenon could be explained by the fact that both PubMed and the BioCreative CDR task are from the biomedical domain while MIMIC-III and Gigaword are from somewhat different domains (clinical and newswire domains respectively). In the case of fastTextPubMed, adding medical knowledge embeddings seems to degrade performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 157,
"end": 164,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Global performances and pairwise comparison of models",
"sec_num": "3.2.1"
},
{
"text": "Disease While node2vec has a strong positive effect on fastText models regardless of their source corpus, pairwise comparison of recall for disease NER in Table 6 shows that the addition of node2vec is detrimental to CharacterBert models. However, this analysis can be refined by comparing bin-wise performances: for CharacterBert models trained on medical data (top two lines), the versions that do not use node2vec embeddings are better on \"more difficult\" bins, while the enhanced version are actually better on \"easier\" bins.",
"cite_spans": [],
"ref_spans": [
{
"start": 155,
"end": 162,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Global performances and pairwise comparison of models",
"sec_num": "3.2.1"
},
{
"text": "Browsing through the bins can give an idea of the kinds of entities they hold. This can be done in different ways.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "Bin-0 exploration We inspect here the contents of bin-0 for both the chemical and disease recognition tasks, as this bin is supposed to hold false negatives that resist all systems, i.e. the most difficult entities. Bin-0 for both chemical and disease contains occurrences of abbreviations, which occur quite frequently within parentheses in the context of their full form: for example \"bs\" for \"bile salt\" and \"rd\" (sic) for \"lenalidomide and dexamethasone\" for chemical, \"mi\" for \"myocardial infarction\" and \"mb\" for \"microbleeds\" for disease. We also spot bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin-11 Recall EnhancedCharBertFromGenN2V 10 42 43 60 65 70 79 86 91 93 97 85,14 CharBertFromGen 28 53 53 68 69 73 78 83 88 93 96 86,11 CharBertGenN2V 9 17 38 51 61 72 80 85 90 97 82,87 CharBertGen 15 19 33 53 60 61 70 79 88 92 98 83,17 fastTextGigawordN2V 2 13 21 28 39 50 61 76 88 96 98 80,69 fastTextGigaword 3 4 10 12 28 32 40 43 63 74 expressions that should perhaps not be in the gold standard, such as \"abuse of cocaine and ethanol\" tagged as a disease, or typographic errors such as \"antithyroidmedications\". Both bins also hold an important number of single-character tokens such as punctuation marks and digits. For disease recognition, these include the determiner \"a\", which occurs most of the time as a part of a multi-word entity. A similar phenomenon occurs with other tokens such as \"of\". Occurrences of these words seem to be due to multiword entities referring to diseases and conditions such as \"enlargement of pulse pressure\", \"occlusion of renal vessels\", \"thrombosis of a normal renal artery\". It seems that multi-word entities account for a significant proportion of the generated errors, where systems only recover the first word of a multi-word entity. 
For example, chemical bin-0 holds all occurrences of \"channel\" and \"blockers\" from \"calcium channel blockers\", while occurrences of \"calcium\" in this context are always labelled correctly.",
"cite_spans": [],
"ref_spans": [
{
"start": 634,
"end": 1032,
"text": "EnhancedCharBertFromGenN2V 10 42 43 60 65 70 79 86 91 93 97 85,14 CharBertFromGen 28 53 53 68 69 73 78 83 88 93 96 86,11 CharBertGenN2V 9 17 38 51 61 72 80 85 90 97 82,87 CharBertGen 15 19 33 53 60 61 70 79 88 92 98 83,17 fastTextGigawordN2V 2 13 21 28 39 50 61 76 88 96 98 80,69 fastTextGigaword 3 4 10 12 28 32 40 43 63 74",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "However, a quick inspection of other bins reveals that those part-of-speech and morphological characteristics (punctuation, single-character entities and abbreviations) are not specific to bin-0. For instance, punctuation marks make for 14% of chemical bin-0 tokens, and for 9 to 28% of bins 1 to 11 (0.07% for bin-12). In the case of disease recognition, punctuation represents 8.8% of bin-0, while ranging from 1.7% to 5.1% of bins 1 to 12 (this difference in proportions between chemical and disease can be explained by the nature of the entities, chemical entities often involving dots or hyphens). Further exploration of the distribution of part-of-speech and morphological categories may lead to some understanding of the bins' contents.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "We also found two other phenomena both in chemical bin-0 and in disease bin-0: hapax legomena ('hapaxes') and ambiguous tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "Hapaxes are tokens that occur only once in the whole data. In bin-0 of the chemical NER task, examples include \"adrenergic\", \"colony\", \"steroidal\" or \"agents\". In disease bin-0, examples include \"bacillary\", \"audiogenic\", \"choreic\", \"teratogenic\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "Ambiguous tokens in bin-0 are due to their multiple or specific meanings in the corpus. This is the case for token \"chinese\" (note that the corpus is lower-cased), which occurs in \"chinese herbal slimming pill\", \"chinese herbal\", \"chinese herbs\", and is systematically missed in the chemical recognition tasks. We assume that this is probably because it is confused with \"chinese\" used as the nationality of patients. The same applies to hapax \"philadelphia\" from \"philadelphia chromosome\". These examples lead us to assume that specialized usage of \"common\" vocabulary terms in chemical or disease entities induces a difficulty for systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
{
"text": "Distribution across bins Finally, another way to perform bin inspection is to look at the distribution of mentions of a same word across bins. As an illustration, we use the distribution of \"calcium\" in chemical bins (Table 7) : one mention is in bin-1, no mention is in bin-2 and 3, one mention is in bin-4, etc. While most mentions of \"calcium\" are retrieved by all eleven systems (29 mentions precisely), a total of six of those mentions are individually discovered respectively by exactly one, four, five, nine, nine, and ten systems. This feedback is potentially very useful, since we can then rank every mention in ascending order of difficulty, and proceed to look for explanations for why those six mentions resist detection by a number of systems. bin-0 bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin-11 bin-12 calcium 0 1 0 0 1 1 0 0 0 2 1 0 29 Table 7 : Distribution of \"calcium\" occurrences through bins.",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 226,
"text": "(Table 7)",
"ref_id": null
},
{
"start": 873,
"end": 880,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bin inspection",
"sec_num": "3.2.2"
},
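{
"text": "The per-word bin distribution illustrated with \"calcium\" can be computed mechanically. A minimal sketch (our own naming and toy data, assuming each occurrence of the target word comes with the count of systems that retrieved it, in a hypothetical 12-system setting matching the number of bins):

```python
from collections import Counter

def word_bin_distribution(hits_per_occurrence, n_systems):
    """Histogram of a word's occurrences over difficulty bins:
    bin n counts the occurrences retrieved by exactly n systems."""
    counts = Counter(hits_per_occurrence)
    return [counts.get(n, 0) for n in range(n_systems + 1)]

# Toy data: one occurrence found by 1 system, one by 4, one by 5,
# two by 9, one by 10, and 29 by all 12 systems.
hits = [1, 4, 5, 9, 9, 10] + [12] * 29
print(word_bin_distribution(hits, 12))
# → [0, 1, 0, 0, 1, 1, 0, 0, 0, 2, 1, 0, 29]
```
",
"section": "Bin inspection",
"sec_num": "3.2.2"
},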
{
"text": "As we have seen, differential evaluation is a qualitative analysis method that allows for a more in-depth evaluation when comparing the behavior of several systems. Rather than relying only on classical global metrics, it provides insight into how the performance of each system is actually distributed over automatically determined subsets of examples, relative to the other systems, and how each system contributes in its own way. In the heatmaps we presented, the hardest elements to process appear in the first column and the easiest in the last; this sorting into columns lets us rapidly overview how systems perform on a given task. Based on our analysis of bin contents across several tasks and distinct domains, we observed that the first bin is generally composed of elements such as abbreviations and ambiguous words used in several contexts (some of these contexts are part of an annotation while others are not); moreover, these elements are often short (two or three characters long), which makes them difficult for statistical approaches to process. In the case of multi-label text classification for Hungarian (Section 3.1.2), differential analysis provided an insight that global scores would have overlooked.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "4"
},
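{
"text": "The bin construction and per-system counts underlying these heatmaps can be sketched as follows (a minimal illustration under our own naming, not the authors' released code; gold instances and predictions are represented as hashable identifiers, e.g. (document, start, end) tuples):

```python
def differential_matrix(gold, predictions):
    """Rows: systems; columns: bins 0..n_systems.
    Bin n holds the gold instances retrieved by exactly n systems;
    cell [i][n] counts system i's true positives inside bin n."""
    n_sys = len(predictions)
    # Assign each gold instance to its difficulty bin.
    bin_of = {g: sum(g in p for p in predictions) for g in gold}
    matrix = [[0] * (n_sys + 1) for _ in range(n_sys)]
    for g, b in bin_of.items():
        for i, preds in enumerate(predictions):
            if g in preds:
                matrix[i][b] += 1
    return matrix

# Toy example: three systems, three gold mentions.
gold = ["m1", "m2", "m3"]
preds = [{"m1", "m2"}, {"m1"}, {"m1", "m3"}]
print(differential_matrix(gold, preds))
# → [[0, 1, 0, 1], [0, 0, 0, 1], [0, 1, 0, 1]]
```

Dividing each cell by the size of its bin yields the relative (percentage) view shown in the heatmaps.",
"section": "Discussion and future work",
"sec_num": "4"
},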
{
"text": "Future directions include the following points. First, as seen in Section 3.2.2, in the case of named entity recognition, examples composed of several tokens are currently counted token by token rather than as whole entities; including this dimension will give further insight into the behavior of named-entity recognition models. A second direction is to extend the current approach, which focuses on recall (hence true positives versus false negatives), to take other basic evaluation variables into account, namely false positives and true negatives. A third useful direction would be to retrieve information on the context of occurrence of examples and their global features: sentence length, immediate context, average number of characters per token in each bin, etc. Finally, a fourth direction would be to automatically track the distribution of the mentions of the same word across bins, as we did manually for \"calcium\" in the second paragraph of Section 3.2.2. Combined with the contextual information above, this would allow us to understand precisely why one particular occurrence of a word is missed while the others are more easily spotted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion and future work",
"sec_num": "4"
},
{
"text": "https://github.com/PierreZweigenbaum/differentialevaluation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The data are not the same as those for Italian, hence the different total values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.ncbi.nlm.nih.gov/pmc/tools/openftlist/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work has been made possible by the resources of the MAPA project. The MAPA project is an INEA-funded Action for the European Commission under the Connecting Europe Facility (CEF) -Telecommunications Sector with Grant Agreement No INEA/CEF/ICT/A2019/1927065. https://mapaproject.eu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The ASLIB Cranfield research project on the comparative efficiency of indexing systems",
"authors": [
{
"first": "C",
"middle": [
"W"
],
"last": "Cleverdon",
"suffix": ""
}
],
"year": 1960,
"venue": "ASLIB Proceedings",
"volume": "12",
"issue": "",
"pages": "421--431",
"other_ids": {
"DOI": [
"10.1108/eb049778"
]
},
"num": null,
"urls": [],
"raw_text": "C.W. Cleverdon. 1960. The ASLIB Cranfield research project on the comparative efficiency of indexing systems. ASLIB Proceedings, 12:421-431. ISSN: 0001-253X.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representations from characters",
"authors": [
{
"first": "Hicham",
"middle": [],
"last": "El Boukkouri",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Ferret",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lavergne",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Noji",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6903--6915",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.609"
]
},
"num": null,
"urls": [],
"raw_text": "Hicham El Boukkouri, Olivier Ferret, Thomas Lavergne, Hiroshi Noji, Pierre Zweigenbaum, and Jun'ichi Tsujii. 2020. CharacterBERT: Reconciling ELMo and BERT for word-level open-vocabulary representations from characters. In Proceedings of the 28th International Conference on Computational Linguistics, pages 6903-6915, Barcelona, Spain (Online). International Committee on Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Utility is in the eye of the user: A critique of NLP leaderboards",
"authors": [
{
"first": "Kawin",
"middle": [],
"last": "Ethayarajh",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "4846--4853",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.393"
]
},
"num": null,
"urls": [],
"raw_text": "Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4846-4853, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A post-processing system to yield reduced word error rates: recognizer output voting error reduction (ROVER)",
"authors": [
{
"first": "Jonathan",
"middle": [
"G"
],
"last": "Fiscus",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "347--357",
"other_ids": {
"DOI": [
"10.1109/ASRU.1997.659110"
]
},
"num": null,
"urls": [],
"raw_text": "Jonathan G Fiscus. 1997. A post-processing system to yield reduced word error rates: recognizer output voting error reduction (ROVER). In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, pages 347-357, Santa Barbara, CA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Openwebtext corpus",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Gokaslan",
"suffix": ""
},
{
"first": "Vanya",
"middle": [],
"last": "Cohen",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aaron Gokaslan and Vanya Cohen. 2019. OpenWebText corpus. http://Skylion007.github.io/OpenWebTextCorpus.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "English Gigaword, LDC2007T07. Linguistic Data Consortium",
"authors": [
{
"first": "David",
"middle": [],
"last": "Graff",
"suffix": ""
},
{
"first": "Junbo",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Kazuaki",
"middle": [],
"last": "Maeda",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.35111/k4mz-9k30"
]
},
"num": null,
"urls": [],
"raw_text": "David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2007. English Gigaword, LDC2007T07. Linguistic Data Consortium, Philadelphia. Web Download.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Message Understanding Conference-6: A brief history",
"authors": [
{
"first": "Ralph",
"middle": [],
"last": "Grishman",
"suffix": ""
},
{
"first": "Beth",
"middle": [],
"last": "Sundheim",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the 16th Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "466--471",
"other_ids": {
"DOI": [
"10.3115/992628.992709"
]
},
"num": null,
"urls": [],
"raw_text": "Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference-6: A brief history. In Proceedings of the 16th Conference on Computational Linguistics -Volume 1, COLING '96, pages 466-471, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Node2vec: Scalable feature learning for networks",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Grover",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16",
"volume": "",
"issue": "",
"pages": "855--864",
"other_ids": {
"DOI": [
"10.1145/2939672.2939754"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16, pages 855-864, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "International Statistical Classification of Diseases and Related Health Problems. 10th Revision. Volume 2. Instruction manual. World Health Organization",
"authors": [],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "ICD-10. 2011. ICD-10. International Statistical Classification of Diseases and Related Health Problems. 10th Revision. Volume 2. Instruction manual. World Health Organization.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Evaluating and combining name entity recognition systems",
"authors": [
{
"first": "Ridong",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Rafael",
"middle": [
"E"
],
"last": "Banchs",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Sixth Named Entity Workshop",
"volume": "",
"issue": "",
"pages": "21--27",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2703"
]
},
"num": null,
"urls": [],
"raw_text": "Ridong Jiang, Rafael E. Banchs, and Haizhou Li. 2016. Evaluating and combining name entity recognition systems. In Proceedings of the Sixth Named Entity Workshop, pages 21-27, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MIMIC-III, a freely accessible critical care database",
"authors": [
{
"first": "Alistair",
"middle": [
"E",
"W"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tom",
"middle": [
"J"
],
"last": "Pollard",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Li-Wei",
"middle": [
"H"
],
"last": "Lehman",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Celi",
"suffix": ""
},
{
"first": "Roger",
"middle": [
"G"
],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "Sci Data",
"volume": "3",
"issue": "",
"pages": "160035",
"other_ids": {
"DOI": [
"10.1038/sdata.2016.35"
]
},
"num": null,
"urls": [],
"raw_text": "Alistair E W Johnson, Tom J Pollard, Lu Shen, Li-Wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Sci Data, 3:160035.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A survey of named entity recognition and classification",
"authors": [
{
"first": "David",
"middle": [],
"last": "Nadeau",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
}
],
"year": 2007,
"venue": "Lingvisticae Investigationes",
"volume": "30",
"issue": "",
"pages": "3--26",
"other_ids": {
"DOI": [
"10.1075/li.30.1.03nad"
]
},
"num": null,
"urls": [],
"raw_text": "David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30:3-26.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "CLEF eHealth 2018 multilingual information extraction task overview: ICD10 coding of death certificates in French, Hungarian and Italian",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
},
{
"first": "Aude",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Grippo",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Morgand",
"suffix": ""
},
{
"first": "Chiara",
"middle": [],
"last": "Orsi",
"suffix": ""
},
{
"first": "L\u00e1szl\u00f3",
"middle": [],
"last": "Pelik\u00e1n",
"suffix": ""
},
{
"first": "Lionel",
"middle": [],
"last": "Ramadier",
"suffix": ""
},
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Rey",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
}
],
"year": 2018,
"venue": "CLEF 2017 Evaluation Labs and Workshop: Online Working Notes",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie N\u00e9v\u00e9ol, Aude Robert, Francesco Grippo, Claire Morgand, Chiara Orsi, L\u00e1szl\u00f3 Pelik\u00e1n, Lionel Ramadier, Gr\u00e9goire Rey, and Pierre Zweigenbaum. 2018. CLEF eHealth 2018 multilingual information extraction task overview: ICD10 coding of death certificates in French, Hungarian and Italian. In CLEF 2017 Evaluation Labs and Workshop: Online Working Notes.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {
"DOI": [
"10.3115/1073083.1073135"
]
},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Better than average: Paired evaluation of NLP systems",
"authors": [
{
"first": "Maxime",
"middle": [],
"last": "Peyrard",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Eger",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "West",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "2301--2315",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.179"
]
},
"num": null,
"urls": [],
"raw_text": "Maxime Peyrard, Wei Zhao, Steffen Eger, and Robert West. 2021. Better than average: Paired evaluation of NLP systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2301-2315, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemicaldisease relation (CDR) task",
"authors": [
{
"first": "Chih-Hsuan",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Leaman",
"suffix": ""
},
{
"first": "Allan",
"middle": [
"Peter"
],
"last": "Davis",
"suffix": ""
},
{
"first": "Carolyn",
"middle": [
"J"
],
"last": "Mattingly",
"suffix": ""
},
{
"first": "Jiao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"C"
],
"last": "Wiegers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2016,
"venue": "Database J. Biol. Databases Curation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1093/database/baw032"
]
},
"num": null,
"urls": [],
"raw_text": "Chih-Hsuan Wei, Yifan Peng, Robert Leaman, Allan Peter Davis, Carolyn J. Mattingly, Jiao Li, Thomas C. Wiegers, and Zhiyong Lu. 2016. Assessing the state of the art in biomedical relation extraction: overview of the BioCreative V chemical-disease relation (CDR) task. Database J. Biol. Databases Curation, 2016.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Composition of bin-5 in the comparison of six systems. Each instance (row) is missed by exactly one system. Note that each system (column) may miss multiple instances in this bin.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Differential evaluation scenario. True positives (TPs) are displayed with absolute and relative values (percentage of the number of instances in the bin) in the output matrix, as in",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Percentage of labels (true positives) correctly found by each system in each bin for Italian in the CLEF eHealth 2018 ICD-10 coding task. Systems on x-axis and bins on y-axis.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Percentage of labels (true positives) correctly found by each system in each bin for Hungarian. Systems on x-axis and bins on y-axis.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Percentage of labels (true positives) correctly found by each system in each bin for chemical substances.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "Percentage of labels (true positives) correctly found by each system in each bin for diseases.",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF1": {
"text": "Number of labels (true positives) correctly found by each system in each bin for Italian: absolute values. Bin n contains the labels found by exactly n systems. Best performance in green, worst performance in red.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Number of labels (true positives) correctly found by each system in each bin for Hungarian: absolute values. Bin n contains the labels found by exactly n systems. In this analysis, the systems are ordered in decreasing order of F1-score, determined prior to the present analysis.",
"content": "<table><tr><td/><td>A2</td><td/><td/></tr><tr><td>B1</td><td>18%</td><td/><td>A1</td></tr><tr><td/><td>11%</td><td>17%</td><td/></tr><tr><td>B2</td><td>10%</td><td>3% 1%</td><td>E1 E2</td></tr><tr><td/><td>7%</td><td>22%</td><td/></tr><tr><td>C1</td><td>11%</td><td/><td/></tr><tr><td/><td/><td>D</td><td/></tr><tr><td/><td>C2</td><td/><td/></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Absolute values for chemical NER. Best performance in green, worst performance in red, orange when the random initialization is above one of the other initializations.",
"content": "<table><tr><td>Models</td><td colspan=\"12\">bin-0 bin-1 bin-2 bin-3 bin-4 bin-5 bin-6 bin-7 bin-8 bin-9 bin-10 bin-11 bin-12</td><td>Tot. TPs /system</td></tr><tr><td colspan=\"2\">Enh.CharBertFromGenN2V 0</td><td>16</td><td>70</td><td>74</td><td colspan=\"6\">124 115 159 181 256 296 389</td><td>800</td><td colspan=\"2\">3617 6097</td></tr><tr><td>CharBertFromGen</td><td>0</td><td>44</td><td>89</td><td>92</td><td colspan=\"6\">142 123 164 179 247 289 389</td><td>791</td><td colspan=\"2\">3617 6166</td></tr><tr><td>CharBertGenN2V</td><td>0</td><td>14</td><td>29</td><td>66</td><td colspan=\"6\">106 110 137 166 238 278 378</td><td>795</td><td colspan=\"2\">3617 5934</td></tr><tr><td>CharBertGen</td><td>0</td><td>24</td><td>32</td><td>57</td><td colspan=\"6\">110 107 137 162 234 287 387</td><td>802</td><td colspan=\"2\">3617 5956</td></tr><tr><td>fastTextGigawordN2V</td><td>0</td><td>3</td><td>22</td><td>36</td><td>59</td><td>70</td><td colspan=\"4\">112 141 224 288 403</td><td>803</td><td colspan=\"2\">3617 5778</td></tr><tr><td>fastTextGigaword</td><td>0</td><td>5</td><td>7</td><td>17</td><td>25</td><td>50</td><td>72</td><td>91</td><td colspan=\"2\">126 205 311</td><td>730</td><td colspan=\"2\">3617 5256</td></tr><tr><td>fastTextMimicN2V</td><td>0</td><td>6</td><td>12</td><td>25</td><td>39</td><td>54</td><td colspan=\"4\">103 144 207 257 359</td><td>791</td><td colspan=\"2\">3617 5614</td></tr><tr><td>fastTextMimic</td><td>0</td><td>13</td><td>12</td><td>29</td><td>33</td><td>51</td><td>85</td><td>94</td><td colspan=\"2\">145 200 325</td><td>746</td><td colspan=\"2\">3617 5350</td></tr><tr><td>fastTextPubMedN2V</td><td>0</td><td>6</td><td>15</td><td>32</td><td>64</td><td>65</td><td colspan=\"4\">141 162 236 292 408</td><td>814</td><td colspan=\"2\">3617 5852</td></tr><tr><td>fastTextPubMed</td><td>0</td><td>5</td><td>12</td><td>29</td><td>50</td><td>53</td><td colspan=\"4\">103 118 182 204 332</td><td>764</td><td colspan=\"2\">3617 5469</td></tr><tr><td>fastTextRandomN2V</td><td>0</td><td>10</td><td>27</td><td>41</td><td>52</td><td>52</td><td>85</td><td colspan=\"3\">112 177 223 314</td><td>717</td><td colspan=\"2\">3617 5427</td></tr><tr><td>fastTextRandom</td><td>0</td><td>10</td><td>9</td><td>24</td><td>28</td><td>40</td><td>58</td><td>60</td><td>96</td><td>124 195</td><td>489</td><td colspan=\"2\">3617 4750</td></tr><tr><td>Total TPs per bin</td><td colspan=\"10\">340 156 168 174 208 178 226 230 296 327 419</td><td>822</td><td colspan=\"2\">3617 7161</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "Pairwise comparison of systems with or without addition of Node2Vec embeddings for chemical NER (bin-0 and bin-12 are not considered). The best model for each bin is highlighted in green.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF9": {
"text": "Pairwise comparison of systems with or without addition of Node2Vec embeddings for disease NER (bin-0 and bin-12 are not considered). The best model for each bin is highlighted in green.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}