| { |
| "paper_id": "Q17-1033", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:11:57.017354Z" |
| }, |
| "title": "Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets", |
| "authors": [ |
| { |
| "first": "Rotem", |
| "middle": [], |
| "last": "Dror", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Gili", |
| "middle": [], |
| "last": "Baumer", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Marina", |
| "middle": [], |
| "last": "Bogomolov", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "With the ever growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure a consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction. 1", |
| "pdf_parse": { |
| "paper_id": "Q17-1033", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "With the ever growing amount of textual data from a large variety of languages, domains, and genres, it has become standard to evaluate NLP algorithms on multiple datasets in order to ensure a consistent performance across heterogeneous setups. However, such multiple comparisons pose significant challenges to traditional statistical analysis methods in NLP and can lead to erroneous conclusions. In this paper we propose a Replicability Analysis framework for a statistically sound analysis of multiple comparisons between algorithms for NLP tasks. We discuss the theoretical advantages of this framework over the current, statistically unjustified, practice in the NLP literature, and demonstrate its empirical value across four applications: multi-domain dependency parsing, multilingual POS tagging, cross-domain sentiment classification and word similarity prediction. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "The field of Natural Language Processing (NLP) is going through the data revolution. With the persistent increase of the heterogeneous web, for the first time in human history, written language from multiple languages, domains, and genres is now abundant. Naturally, the expectations from NLP algorithms also grow and evaluating a new algorithm on as many languages, domains, and genres as possible is becoming a de-facto standard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For example, the phrase structure parsers of Charniak (2000) and Collins (2003) were mostly evaluated on the Wall Street Journal Penn Treebank (Marcus et al., 1993) , consisting of written, edited English text of economic news. In contrast, modern dependency parsers are expected to excel on the 19 languages of the CoNLL 2006 CoNLL -2007 shared tasks on multilingual dependency parsing (Buchholz and Marsi, 2006; Nilsson et al., 2007) , and additional challenges, such as the shared task on parsing multiple English Web domains (Petrov and McDonald, 2012) , are continuously proposed.", |
| "cite_spans": [ |
| { |
| "start": 45, |
| "end": 60, |
| "text": "Charniak (2000)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 65, |
| "end": 79, |
| "text": "Collins (2003)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 143, |
| "end": 164, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 316, |
| "end": 326, |
| "text": "CoNLL 2006", |
| "ref_id": null |
| }, |
| { |
| "start": 327, |
| "end": 338, |
| "text": "CoNLL -2007", |
| "ref_id": null |
| }, |
| { |
| "start": 387, |
| "end": 413, |
| "text": "(Buchholz and Marsi, 2006;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 414, |
| "end": 435, |
| "text": "Nilsson et al., 2007)", |
| "ref_id": "BIBREF55" |
| }, |
| { |
| "start": 529, |
| "end": 556, |
| "text": "(Petrov and McDonald, 2012)", |
| "ref_id": "BIBREF60" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Despite the growing number of evaluation tasks, the analysis toolbox employed by NLP researchers has remained quite stable. Indeed, in most experimental NLP papers, several algorithms are compared on a number of datasets where the performance of each algorithm is reported together with per-dataset statistical significance figures. However, with the growing number of evaluation datasets, it becomes more challenging to draw comprehensive conclusions from such comparisons. This is because although the probability of drawing an erroneous conclusion from a single comparison is small, with multiple comparisons the probability of making one or more false claims may be very high.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The goal of this paper is to provide the NLP community with a statistical analysis framework, which we term Replicability Analysis, which will allow us to draw statistically sound conclusions in evaluation setups that involve multiple comparisons. The classical goal of replicability analysis is to examine the consistency of findings across studies in order to address the basic dogma of science, that a find-ing is more convincingly true if it is replicated in at least one more study (Heller et al., 2014; Patil et al., 2016) . We adapt this goal to NLP, where we wish to ascertain the superiority of one algorithm over another across multiple datasets, which may come from different languages, domains, and genres. Finding that one algorithm outperforms another across domains gives a sense of consistency to the results and positive evidence that the better performance is not specific to a selected setup. 2 In this work we address two questions: (1) Counting: For how many datasets does a given algorithm outperform another? and (2) Identification: What are these datasets?", |
| "cite_spans": [ |
| { |
| "start": 487, |
| "end": 508, |
| "text": "(Heller et al., 2014;", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 509, |
| "end": 528, |
| "text": "Patil et al., 2016)", |
| "ref_id": "BIBREF57" |
| }, |
| { |
| "start": 912, |
| "end": 913, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "When comparing two algorithms on multiple datasets, NLP papers often answer informally the questions we address in this work. In some cases this is done without any statistical analysis, by simply declaring better performance of a given algorithm for datasets where its performance measure is better than that of another algorithm, and counting these datasets. In other cases answers are based on the p-values from statistical tests performed for each dataset: declaring better performance for datasets with p-value below the significance level (e.g. 0.05) and counting these datasets. While it is clear that the first approach is not statistically valid, it seems that our community is not aware of the fact that the second approach, which may seem statistically sound, is not valid as well. This may lead to erroneous conclusions, which result in adopting new (and probably complicated) algorithms, while they are not better than previous (probably more simple) ones.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we demonstrate this problem and show that it becomes more severe as the number of evaluation sets grows, which seems to be the current trend in NLP. We adopt a known general statistical methodology for addressing the counting (question (1)) and identification (question (2)) problems, by choosing the tests and procedures which are valid for 2 \"Replicability\" is sometimes referred to as \"reproducibility\". In recent NLP work the term reproducibility was used when trying to get identical results on the same data (N\u00e9v\u00e9ol et al., 2016; Marrese-Taylor and Matsuo, 2017) . In this paper, we adopt the meaning of \"replicability\" and its distinction from \"reproducibility\" from Peng (2011) and Leek and Peng (2015) and refer to replicability analysis as the effort to show that a finding is consistent over different datasets from different domains or languages, and is not idiosyncratic to a specific scenario. situations encountered in NLP problems, and giving specific recommendations for such situations.", |
| "cite_spans": [ |
| { |
| "start": 528, |
| "end": 549, |
| "text": "(N\u00e9v\u00e9ol et al., 2016;", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 550, |
| "end": 582, |
| "text": "Marrese-Taylor and Matsuo, 2017)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 704, |
| "end": 724, |
| "text": "Leek and Peng (2015)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Particularly, we first demonstrate (Section 3) that the current prominent approach in the NLP literature, identifying the datasets for which the difference between the performance of the algorithms reaches a predefined significance level according to some statistical significance test, does not guarantee to bound the probability to make at least one erroneous claim. Hence this approach is error-prone when the number of participating datasets is large. We thus propose an alternative approach (Section 4). For question (1), we adopt the approach of Benjamini et al. (2009) to replicability analysis of multiple studies, based on the partial conjunction framework of Benjamini and Heller (2008) . This analysis comes with a guarantee that the probability of overestimating the true number of datasets with effect is upper bounded by a predefined constant. For question (2), we motivate a multiple testing procedure which guarantees that the probability of making at least one erroneous claim on the superiority of one algorithm over another is upper bounded by a predefined constant.", |
| "cite_spans": [ |
| { |
| "start": 552, |
| "end": 575, |
| "text": "Benjamini et al. (2009)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 683, |
| "end": 696, |
| "text": "Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In Sections 5 and 6 we demonstrate how to apply the proposed frameworks to two synthetic data toy examples and four NLP applications: multidomain dependency parsing, multilingual POS tagging, cross-domain sentiment classification, and word similarity prediction with word embedding models. Our results demonstrate that the current practice in NLP for addressing our questions is error-prone, and illustrate the differences between it and the proposed statistically sound approach.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We hope that this work will encourage our community to increase the number of standard evaluation setups per task when appropriate (e.g. including additional languages and domains), possibly paving the way to hundreds of comparisons per study. This is due to two main reasons. First, replicability analysis is a statistically sound framework that allows a researcher to safely draw valid conclusions with well defined statistical guarantees. Moreover, this framework provides a means of summarizing a large number of experiments with a handful of easily interpretable numbers (e.g., see Table 1 ). This allows researchers to report results over a large number of comparisons in a concise manner, delving into details of particular comparisons when necessary.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 587, |
| "end": 594, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our work recognizes the current trend in the NLP community where, for many tasks and applications, the number of evaluation datasets constantly increases. We believe this trend is inherent to language processing technology due to the multiplicity of languages and of linguistic genres and domains. In order to extend the reach of NLP algorithms, they have to be designed so that they can deal with many languages and with the various domains of each. Having a sound statistical framework that can deal with multiple comparisons is hence crucial for the field.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "This section is hence divided into two. We start by discussing representative examples for multiple comparisons in NLP, focusing on evaluations across multiple languages and multiple domains. We then discuss existing analysis frameworks for multiple comparisons, both in the NLP and in the machine learning literatures, pointing to the need for establishing new standards for our community.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Multiple Comparisons in NLP Multiple comparisons of algorithms over datasets from different languages, domains and genres have become a de-facto standard in many areas of NLP. Here we survey a number of representative examples. A full list of NLP tasks is beyond the scope of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A common multilingual example is, naturally, machine translation, where it is customary to compare algorithms across a large number of sourcetarget language pairs. This is done, for example, with the Europarl corpus consisting of 21 European languages (Koehn, 2005; Koehn and Schroeder, 2007) and with the datasets of the WMT workshop series with its multiple domains (e.g. news and biomedical in 2017), each consisting of several language pairs (7 and 14, respectively, in 2017).", |
| "cite_spans": [ |
| { |
| "start": 252, |
| "end": 265, |
| "text": "(Koehn, 2005;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 266, |
| "end": 292, |
| "text": "Koehn and Schroeder, 2007)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Multiple dataset comparisons are also abundant in domain adaptation work. Representative tasks include named entity recognition (Guo et al., 2009) , POS tagging (Daum\u00e9 III, 2007) , dependency parsing (Petrov and McDonald, 2012) , word sense disambiguation (Chan and Ng, 2007) and sentiment classification (Blitzer et al., 2006; Blitzer et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 146, |
| "text": "(Guo et al., 2009)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 161, |
| "end": 178, |
| "text": "(Daum\u00e9 III, 2007)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 200, |
| "end": 227, |
| "text": "(Petrov and McDonald, 2012)", |
| "ref_id": "BIBREF60" |
| }, |
| { |
| "start": 256, |
| "end": 275, |
| "text": "(Chan and Ng, 2007)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 305, |
| "end": 327, |
| "text": "(Blitzer et al., 2006;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 328, |
| "end": 349, |
| "text": "Blitzer et al., 2007)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "More recently, with the emergence of crowdsourcing that makes data collection cheap and fast (Snow et al., 2008) , an ever growing number of datasets is being created. This is particularly notice-able in lexical semantics tasks that have become central in NLP research due to the prominence of neural networks. For example, it is customary to compare word embedding models (Mikolov et al., 2013; Pennington et al., 2014; \u00d3 S\u00e9aghdha and Korhonen, 2014; Levy and Goldberg, 2014; Schwartz et al., 2015) on multiple datasets where word pairs are scored according to the degree to which different semantic relations, such as similarity and association, hold between the members of the pair (Finkelstein et al., 2001a; Bruni et al., 2014; Silberer and Lapata, 2014; Hill et al., 2015) . In some works (e.g., ) these embedding models are compared across a large number of simple tasks.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 112, |
| "text": "(Snow et al., 2008)", |
| "ref_id": "BIBREF68" |
| }, |
| { |
| "start": 373, |
| "end": 395, |
| "text": "(Mikolov et al., 2013;", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 396, |
| "end": 420, |
| "text": "Pennington et al., 2014;", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 421, |
| "end": 451, |
| "text": "\u00d3 S\u00e9aghdha and Korhonen, 2014;", |
| "ref_id": null |
| }, |
| { |
| "start": 452, |
| "end": 476, |
| "text": "Levy and Goldberg, 2014;", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 477, |
| "end": 499, |
| "text": "Schwartz et al., 2015)", |
| "ref_id": "BIBREF65" |
| }, |
| { |
| "start": 685, |
| "end": 712, |
| "text": "(Finkelstein et al., 2001a;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 713, |
| "end": 732, |
| "text": "Bruni et al., 2014;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 733, |
| "end": 759, |
| "text": "Silberer and Lapata, 2014;", |
| "ref_id": "BIBREF66" |
| }, |
| { |
| "start": 760, |
| "end": 778, |
| "text": "Hill et al., 2015)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As discussed in Section 1, the outcomes of such comparisons are often summarized in a table that presents numerical performance values, usually accompanied by statistical significance figures and sometimes also with cross-comparison statistics such as average performance figures. Here, we analyze the conclusions that can be drawn from this information and suggest that with the growing number of comparisons, a more intricate analysis is required.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Existing Analysis Frameworks Machine learning work on multiple dataset comparisons dates back to Dietterich (1998) who raised the question: \"given two learning algorithms and datasets from several domains, which algorithm will produce more accurate classifiers when trained on examples from new domains?\". The seminal work that proposed practical means for this problem is that of Dem\u0161ar (2006) . Given performance measures for two algorithms on multiple datasets, the authors test whether there is at least one dataset on which the difference between the algorithms is statistically significant. For this goal they propose methods such as a paired t-test, a nonparametric sign-rank test and a wins/losses/ties count, all computed across the results collected from all participating datasets. In contrast, our goal is to count and identify the datasets for which one algorithm significantly outperforms the other, which provides more intricate information, especially when the datasets come from different sources.", |
| "cite_spans": [ |
| { |
| "start": 381, |
| "end": 394, |
| "text": "Dem\u0161ar (2006)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In NLP, several studies addressed the problem of measuring the statistical significance of results on a single dataset (e.g., Berg-Kirkpatrick et al. (2012) ; S\u00f8gaard (2013) ; S\u00f8gaard et al. (2014) ). S\u00f8gaard (2013) is, to the best of our knowledge, the only work that addressed the statistical properties of evaluation with multiple datasets. For this aim he modified the statistical tests proposed in Dem\u0161ar (2006) to use a Gumbel distribution assumption on the test statistics, which he considered to suit NLP better than the original Gaussian assumption. However, while this procedure aims to estimate the effect size across datasets, it answers neither the counting nor the identification question of Section 1.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 156, |
| "text": "Berg-Kirkpatrick et al. (2012)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 159, |
| "end": 173, |
| "text": "S\u00f8gaard (2013)", |
| "ref_id": "BIBREF70" |
| }, |
| { |
| "start": 176, |
| "end": 197, |
| "text": "S\u00f8gaard et al. (2014)", |
| "ref_id": "BIBREF69" |
| }, |
| { |
| "start": 403, |
| "end": 416, |
| "text": "Dem\u0161ar (2006)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the next section we provide the preliminary knowledge from the field of statistics that forms the basis for the proposed framework and then proceed with its description.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We start by formulating a general hypothesis testing framework for a comparison between two algorithms. This is a common type of hypothesis testing framework applied in NLP, its detailed formulation will help us develop our ideas.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We wish to compare between two algorithms, A and B. Let X be a collection of datasets", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "X = {X 1 , X 2 , . . . , X N }, where for all i \u2208 {1, . . . , N }, X i = {x i,1 , . . . , x i,n i } .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Each dataset X i can be of a different language or a different domain. We denote by x i,k the granular unit on which results are being measured, that, in most NLP tasks, is a word or a sequence of words. The difference in performance between the two algorithms is measured using one or more of the evaluation measures in the set", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "M = {M 1 , . . . , M m }. 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Let us denote M j (ALG, X i ) as the value of the measure M j when algorithm ALG is applied on the dataset X i . Without loss of generality, we assume that higher values of the measure are better. We define the difference in performance between two algorithms, A and B, according to the measure M j on the dataset X i as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u03b4 j (X i ) = M j (A, X i ) \u2212 M j (B, X i ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "3 To keep the discussion concise, throughout this paper we assume that only one evaluation measure is used. Our framework can be easily extended to deal with multiple measures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Finally, using this notation we formulate the following statistical hypothesis testing problem:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "H 0i (j) :\u03b4 j (X i ) \u2264 0 H 1i (j) :\u03b4 j (X i ) > 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "( 1)The null hypothesis, stating that there is no difference between the performance of algorithm A and algorithm B, or that B performs better, is tested versus the alternative statement that A is superior. If the statistical test results in rejecting the null hypothesis, one concludes that A outperforms B in this setup. Otherwise, there is not enough evidence in the data to make this conclusion.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Rejection of the null hypothesis when it is true is termed type I error, and non-rejection of the null hypothesis when the alternative is true is termed type II error. The classical approach to hypothesis testing is to find a test that guarantees that the probability of making a type I error is upper bounded by a predefined constant \u03b1, the test significance level, while achieving as low probability of type II error as possible, a.k.a \"achieving as high power as possible\".", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We next turn to the case where the difference between two algorithms is tested across multiple datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hypothesis Testing", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Equation 1 defines a multiple hypothesis testing problem when considering the formulation for all N datasets. If N is large, testing each hypothesis separately at the nominal significance level may result in a high number of erroneously rejected null hypotheses. In our context, when the performance of algorithm A is compared to that of algorithm B across multiple datasets, and for each dataset algorithm A is declared as superior, based on a statistical test at the nominal significance level \u03b1, the expected number of erroneous claims may grow as N grows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multiplicity Problem", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "For example, if a single test is performed with a significance level of \u03b1 = 0.05, there is only a 5% chance of incorrectly rejecting the null hypothesis. On the other hand, for 100 tests where all null hypotheses are true, the expected number of incorrect rejections is 100 \u2022 0.05 = 5. Denoting the total number of type I errors as V , we can see below that if the test statistics are independent then the probability of making at least one incorrect rejection is 0.994:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multiplicity Problem", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "P(V > 0) = 1 \u2212 P(V = 0) = 1 \u2212 100 i=1 P(no type I error in i) =1 \u2212 (1 \u2212 0.05) 100 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multiplicity Problem", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "This demonstrates that the naive method of counting the datasets for which significance was reached at the nominal level is error-prone. Similar examples can be constructed for situations where some of the null hypotheses are false.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multiplicity Problem", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The multiple testing literature proposes various procedures for bounding the probability of making at least one type I error, as well as other, less restrictive error criteria (see a survey in Farcomeni (2007) ). In this paper, we address the questions of counting and identifying the datasets for which algorithm A outperforms B, with certain statistical guarantees regarding erroneous claims. While identifying the datasets gives more information when compared to just declaring their number, we consider these two questions separately. As our experiments show, according to the statistical analysis we propose the estimated number of datasets with effect (question 1) may be higher than the number of identified datasets (question 2). We next present the fundamentals of the partial conjunction framework which is at the heart of our proposed methods.", |
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 209, |
| "text": "Farcomeni (2007)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multiplicity Problem", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We start by reformulating the set of hypothesis testing problems of Equation 1 as a unified hypothesis testing problem. This problem aims to identify whether algorithm A is superior to B across all datasets. The notation for the null hypothesis in this problem is H N/N 0 since we test if N out of N alternative hypotheses are true:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "H N/N 0 : N i=1 H 0i is true vs. H N/N 1 : N i=1 H 1i is true.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Requiring the rejection of the disjunction of all null hypotheses is often too restrictive for it involves observing a significant effect on all datasets, i \u2208 {1, . . . , N }. Instead, one can require a rejection of the global null hypothesis stating that all individual null hypotheses are true, i.e., evidence that at least one alternative hypothesis is true. This hypothesis testing problem is formulated as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "H 1/N 0 : N i=1 H 0i is true vs. H 1/N 1 : N i=1 H 1i is true.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Obviously, rejecting the global null may not provide enough information: it only indicates that algorithm A outperforms B on at least one dataset. Hence, this claim does not give any evidence for the consistency of the results across multiple datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "A natural compromise between the above two formulations is to test the partial conjunction null, which states that the number of false null hypotheses is lower than u, where 1 \u2264 u \u2264 N is a pre-specified integer constant. The partial conjunction test contrasts this statement with the alternative statement that at least u out of the N null hypotheses are false.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Definition 1 (Benjamini and Heller (2008) ). Consider N \u2265 2 null hypotheses: H 01 , H 02 , . . . , H 0N , and let p 1 , . . . , p N be their associated p\u2212values. Let k be the true unknown number of false null hypotheses, then our question \"Are at least u out of N null hypotheses false?\" can be formulated as follows:", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 41, |
| "text": "(Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "H u/N 0 : k < u vs. H u/N 1 : k \u2265 u.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In our context, k is the number of datasets where algorithm A is truly better, and the partial conjunction test examines whether algorithm A outperforms algorithm B in at least u of N cases. Benjamini and Heller (2008) developed a general method for testing the above hypothesis for a given u. They also showed how to extend their method in order to answer our counting question. We next describe their framework and advocate a different, yet related method for dataset identification.", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 218, |
| "text": "Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Partial Conjunction Hypotheses", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Referred to as the cornerstone of science (Moonesinghe et al., 2007) , replicability analysis is of predominant importance in many scientific fields including psychology (Collaboration, 2012), genomics (Heller et al., 2014) , economics (Herndon et al., 2014) and medicine (Begley and Ellis, 2012) , among others. Findings are usually considered as replicated if they are obtained in two or more studies that differ from each other in some aspects (e.g. language, domain or genre in NLP).", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 68, |
| "text": "(Moonesinghe et al., 2007)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 202, |
| "end": 223, |
| "text": "(Heller et al., 2014)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 236, |
| "end": 258, |
| "text": "(Herndon et al., 2014)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 272, |
| "end": 296, |
| "text": "(Begley and Ellis, 2012)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The replicability analysis framework we employ (Benjamini and Heller, 2008; Benjamini et al., 2009) is based on partial conjunction testing. Particularly, these authors have shown that a lower bound on the number of false null hypotheses with a confidence level of 1 \u2212 \u03b1 can be obtained by finding the largest u for which we can reject the partial conjunction null hypothesis H", |
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 75, |
| "text": "(Benjamini and Heller, 2008;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 76, |
| "end": 99, |
| "text": "Benjamini et al., 2009)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "u/N 0 along with H 1/N 0 , . . . , H (u\u22121)/N 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "at a significance level \u03b1. Since rejecting H u/N 0 means that we see evidence in at least u out of N datasets, algorithm A is superior to B. This lower bound on k is taken as our answer to the Counting question of Section 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In line with the hypothesis testing framework of Section 3, the partial conjunction null, H", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "u/N 0 , is rejected at level \u03b1 if p u/N \u2264 \u03b1, where p u/N", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "is the partial conjunction p-value. Based on the known methods for testing the global null hypothesis (see, e.g., Loughin (2004) ), Benjamini and Heller (2008) proposed methods for combining the p\u2212values p 1 , . . . , p N of H 01 , H 02 , . . . , H 0N in order to obtain p u/N . Below, we describe two such methods and their properties.", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 128, |
| "text": "Loughin (2004)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 132, |
| "end": 159, |
| "text": "Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Replicability Analysis for NLP", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The methods we focus on were developed by Benjamini and Heller (2008) , and are based on Fisher's and Bonferroni's methods for testing the global null hypothesis. For brevity, we name them Bonferroni and Fisher. We choose them because they are valid in different setups that are frequently encountered in NLP (Section 6): Bonferroni for dependent datasets and both Fisher and Bonferroni for independent datasets. 4 Bonferroni's method does not make any assumptions about the dependencies between the participating datasets and it is hence applicable in NLP tasks, since in NLP it is most often hard to determine the type of dependence between the datasets. Fisher's method, while assuming independence across the participating datasets, is often more powerful than Bonferroni's method (see Loughin (2004) and Benjamini and Heller (2008) for other methods and a comparison between them). Our recommendation is hence to use the Bonferroni's method when the datasets are dependent and to use the more powerful Fisher's method when the datasets are independent.", |
| "cite_spans": [ |
| { |
| "start": 56, |
| "end": 69, |
| "text": "Heller (2008)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 413, |
| "end": 414, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 790, |
| "end": 804, |
| "text": "Loughin (2004)", |
| "ref_id": "BIBREF46" |
| }, |
| { |
| "start": 809, |
| "end": 836, |
| "text": "Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Let p (i) be the i-th smallest p\u2212value among p 1 , . . . , p N . The partial conjunction p\u2212values are:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "p u/N Bonf erroni = (N \u2212 u + 1)p (u) (2) p u/N F isher = P \u03c7 2 2(N \u2212u+1) \u2265 \u22122 N i=u ln p (i)", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "where \u03c7 2 2(N \u2212u+1) denotes a chi-squared random variable with 2(N \u2212 u + 1) degrees of freedom.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To understand the reasoning behind these methods, let us consider first the above p\u2212values for testing the global null, i.e., for the case of u = 1. Rejecting the global null hypothesis requires evidence that at least one null hypothesis is false. Intuitively, we would like to see one or more small p\u2212values.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Both of the methods above agree with this intuition. Bonferroni's method rejects the global null if p (1) \u2264 \u03b1/N , i.e. if the minimum p\u2212value is small enough, where the threshold guarantees that the significance level of the test is \u03b1 for any dependency among the p\u2212values p 1 , . . . , p N . Fisher's method rejects the global null for large values of", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u22122 N i=1 ln p (i) , or equivalently for small values of N i=1 p i .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "That is, while both these methods are intuitive, they are different. Fisher's method requires a small enough product of p\u2212values as evidence that at least one null hypothesis is false. Bonferroni's method, on the other hand, requires as evidence at least one small enough p\u2212value.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For the case u = N , i.e., when the alternative states that all null hypotheses are false, both methods require that the maximal p\u2212value is small enough for rejection of H N/N 0 . This is also intuitive because we expect that all the p\u2212values will be small when all the null hypotheses are false. For other cases, where 1 < u < N , the reasoning is more complicated and is beyond the scope of this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The partial conjunction test for a specific u answers the question \"Does algorithm A perform better than B on at least u datasets?\" The next step is the estimation of the number of datasets for which algorithm A performs better than B.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Partial Conjunction p\u2212value", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Recall that the number of datasets where algorithm A outperforms algorithm B (denoted with k in Definition 1) is the true number of false null hypotheses in our problem. Benjamini and Heller (2008) proposed to estimate k to be the largest u for which", |
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 197, |
| "text": "Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "H u/N 0 , along with H 1/N 0 , . . . , H (u\u22121)/N 0", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "is rejected. Specifically, the estimatork is defined as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "k = max{u : p u/N * \u2264 \u03b1},", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where p", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "u/N * = max{p (u\u22121)/N * , p u/N }, p 1/N = p 1/N *", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "and \u03b1 is the desired upper bound on the probability to overestimate the true k. It is guaranteed that P(k > k) \u2264 \u03b1 as long as the p\u2212value combination method used for constructing p u/N is valid for the given dependency across the test statistics. 5 Whenk is based on p", |
| "cite_spans": [ |
| { |
| "start": 247, |
| "end": 248, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "u/N Bonf erroni it is denoted wit\u0125 k Bonf erroni ; when it is based on p u/N F isher , it is de- noted withk F isher .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "A crucial practical consideration, when choosing betweenk Bonf erroni andk F isher , is the assumed dependency between the datasets. As discussed in Section 4.1, p u/N F isher is recommended when the participating datasets are assumed to be independent; when this assumption cannot be made, only p u/N Bonf erroni is appropriate. As thek estimators are based on the respective p u/N s, the same considerations hold when choosing between them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "With thek estimators, one can answer the counting question of Section 1, reporting that algorithm A is better than algorithm B in at leastk out of N datasets with a confidence level of 1 \u2212 \u03b1. Regarding the identification question, a natural approach would be to declare thek datasets with the smallest p\u2212values as those for which the effect holds. However, withk F isher this approach does not guarantee control over type I errors. In contrast, for k Bonf erroni , the above approach comes with such guarantees, as described in the next section.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Counting (Question 1)", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "As demonstrated in Section 3.2, identifying the datasets with p\u2212value below the nominal significance level and declaring them as those where algorithm A is better than B may lead to a very high number of erroneous claims. A variety of methods exist for addressing this problem. A classical and very simple method for addressing this problem is named the Bonferroni's procedure, which compensates for the increased probability of making at least one type I error by testing each individual hypothesis at a significance level of \u03b1 = \u03b1/N , where \u03b1 is the predefined bound on this probability and N is the number of hypotheses tested. 6 While Bonferroni's procedure is valid for any dependency among the p\u2212values, the probability of detecting a true effect using this procedure is often very low, because of its strict p\u2212value threshold.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Many other procedures controlling the above or other error criteria, and having less strict p\u2212value thresholds, have been proposed. Below we advocate one of these methods: the Holm procedure (Holm, 1979) . This is a simple p\u2212value based procedure that is concordant with the partial conjunction analysis when p u/N Bonf erroni is used in that analysis. Importantly for NLP applications, Holm controls the probability of making at least one type I error for any type of dependency between the participating datasets (see a demonstration in Section 6).", |
| "cite_spans": [ |
| { |
| "start": 191, |
| "end": 203, |
| "text": "(Holm, 1979)", |
| "ref_id": "BIBREF37" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Let \u03b1 be the desired upper bound on the probability that at least one false rejection occurs, let p (1) \u2264 p (2) \u2264 . . . \u2264 p (N ) be the ordered p\u2212values and let the associated hypotheses be H (1) . . . H (N ) . The Holm procedure for identifying the datasets with a significant effect is given below. list of null hypotheses; the corresponding datasets are those we return in response to the identification question of Section 1. Note that the Holm procedure rejects a subset of hypotheses with p-value below \u03b1. Each p-value is compared to a threshold which is smaller or equal to \u03b1 and depends on the number of evaluation datasets N. The dependence of the thresholds on N can be intuitively explained as follows: the probability of making one or more erroneous claims may increase with N, as demonstrated in Section 3.2. Therefore, in order to bound this probability by a pre-specified level \u03b1, the thresholds for p-values should depend on N.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "It can be shown that the Holm procedure at level \u03b1 always rejects thek Bonf erroni hypotheses with the smallest p\u2212values, wherek Bonf erroni is the lower bound for k with a confidence level of 1 \u2212 \u03b1. Therefore,k Bonf erroni corresponding to a confidence level of 1 \u2212 \u03b1 is always smaller or equal to the number of datasets for which the difference between the compared algorithms is significant at level \u03b1. This is not surprising in view of the fact that, without making any assumptions on the dependencies among the datasets,k Bonf erroni guarantees that the probability of making a too optimistic claim (k > k) is bounded by \u03b1, when simply counting the number of datasets with p-value below \u03b1, the probability of making a too optimistic claim may be close to 1, as demonstrated in Section 5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Framework Summary Following Section 4.2 we answer the counting question of Section 1 by reporting eitherk F isher (when all datasets can be assumed to be independent) ork Bonf erroni (when such an independence assumption cannot be made). Based on Section 4.3 we suggest to answer the identification question of Section 1 by reporting the rejection list returned by the Holm procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Our proposed framework is based on certain assumptions regarding the experiments conducted in NLP setups. The most prominent of these assumptions states that for dependent datasets the type of dependency cannot be determined. Indeed, to the best of our knowledge, the nature of the dependency between dependent test sets in NLP work has not been analyzed before. In Section 7 we revisit our assumptions and point to alternative methods for answering our questions. These methods may be ap- propriate under other assumptions that may become relevant in future.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We next demonstrate the value of the proposed replicability analysis through toy examples with synthetic data (Section 5) as well as analysis of state-of-the-art algorithms for four major NLP applications (Section 6). Our point of reference is the standard, yet statistically unjustified, counting method that sets its estimator,k count , to the number of datasets for which the difference between the compared algorithms is significant with p\u2212value \u2264 \u03b1 (i.e.k count = #{i : p i \u2264 \u03b1}). 7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset Identification (Question 2)", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For the examples of this section we synthesize p\u2212values to emulate a test with N = 100 hypotheses (domains), and set \u03b1 to 0.05. We start with a simulation of a scenario where algorithm A is equivalent to B for each domain, and the datasets representing these domains are independent. We sample the 100 p\u2212values from a standard uniform distribution, which is the p\u2212value distribution under the null hypothesis, repeating the simulation 1,000 times.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Toy Examples", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Since all the null hypotheses are true then k, the number of false null hypotheses, is 0. Figure 1 presents the histogram ofk values from all 1,000 iterations according tok Bonf erroni ,k F isher andk count .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 96, |
| "text": "Figure", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Toy Examples", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The figure clearly demonstrates thatk count provides an overestimation of k whilek Bonf erroni and k F isher do much better. Indeed, the histogram yields the following probability estimates:P (k count > k) = 0.963,P (k Bonf erroni > k) = 0.001 and P (k F isher > k) = 0.021 (only the latter two are lower than 0.05). This simulation strongly supports the theoretical results of Section 4.2.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Toy Examples", |
| "sec_num": "5" |
| }, |
| { |
| "text": "To consider a scenario where a dependency between the participating datasets does exist, we consider a second toy example. In this example we generate N = 100 p\u2212values corresponding to 34 independent normal test statistics, and two other groups of 33 positively correlated normal test statistics with \u03c1 = 0.2 and \u03c1 = 0.5, respectively. We again assume that all null hypotheses are true and thus all the p\u2212values are distributed uniformly, repeating the simulation 1,000 times. To generate positively dependent p\u2212values, we followed the process described in Section 6.1 of Benjamini et al. (2006) .", |
| "cite_spans": [ |
| { |
| "start": 572, |
| "end": 595, |
| "text": "Benjamini et al. (2006)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Toy Examples", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We estimate the probability thatk > k = 0 for the threek estimators based on the 1000 repetitions and get the values of:P (k count > k) = 0.943, P (k Bonf erroni > k) = 0.046 andP (k F isher > k) = 0.234. This simulation demonstrates the importance of using Bonferroni's method rather than Fisher's method when the datasets are dependent, even if some of the datasets are independent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Toy Examples", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this section we demonstrate the potential impact of replicability analysis on the way experimental results are analyzed in NLP setups. We explore four NLP applications: (a) two where the datasets are independent: multi-domain dependency parsing and multilingual POS tagging; and (b) two where dependency between the datasets does exist: cross-domain sentiment classification and word similarity prediction with word embedding models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NLP Applications", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Dependency Parsing We consider a multidomain setup, analyzing the results reported in Choi et al. (2015) . The authors compared ten state-of-the-art parsers from which we pick three: (a) Mate (Bohnet, 2010) 8 that performed best on the majority of datasets; (b) Redshift (Honnibal et al., 2013) 9 which demonstrated comparable, still somewhat lower, performance compared to Mate; and (c) SpaCy (Honnibal and Johnson, 2015) that was substantially outperformed by Mate.", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 104, |
| "text": "Choi et al. (2015)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 192, |
| "end": 206, |
| "text": "(Bohnet, 2010)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 271, |
| "end": 294, |
| "text": "(Honnibal et al., 2013)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 394, |
| "end": 422, |
| "text": "(Honnibal and Johnson, 2015)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "All parsers were trained and tested on the English portion of the OntoNotes 5 corpus (Weischedel et al., 2011; Pradhan et al., 2013) , a large multigenre corpus consisting of the following 7 genres: broadcasting conversations (BC), broadcasting news (BN), news magazine (MZ), newswire (NW), pivot text (PT), telephone conversations (TC) and web text (WB). Train and test set size (in sentences) range from 6672 to 34,492 and from 280 to 2327, respectively (see Table 1 of Choi et al. (2015)). We copy the test set UAS results of Choi et al. 2015and compute p\u2212values using the data downloaded from http://amandastent.com/dependable/.", |
| "cite_spans": [ |
| { |
| "start": 85, |
| "end": 110, |
| "text": "(Weischedel et al., 2011;", |
| "ref_id": "BIBREF72" |
| }, |
| { |
| "start": 111, |
| "end": 132, |
| "text": "Pradhan et al., 2013)", |
| "ref_id": "BIBREF62" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 461, |
| "end": 468, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We consider a multilingual setup, analyzing the results reported in (Pinter et al., 2017) . The authors compare their MIMICK model with the model of Ling et al. (2015) , denoted with CHAR\u2192TAG. Evaluation is performed on 23 of the 44 languages shared by the Polyglot word embedding dataset (Al-Rfou et al., 2013) and the universal dependencies (UD) dataset (De Marneffe et al., 2014) . Pinter et al. (2017) choose their languages so that they reflect a variety of typological, and particularly morphological, properties. The training/test split is the standard UD split. We copy the word level accuracy figures of Pinter et al. (2017) for the low resource training set setup, the focus setup of that paper. The authors kindly sent us their p-values. Sentiment Classification In this task, an algorithm is trained on reviews from one domain and should classify the sentiment of reviews from another domain to the positive and negative classes. For replicability analysis we explore the results of Ziser and Reichart (2017) for the cross-domain sentiment classification task of Blitzer et al. (2007) . The data in this task consists of Amazon product reviews from 4 domains: books (B), DVDs (D), electronic items (E), and kitchen appliances (K), for the total of 12 domain pairs, each domain having a 2000 review test set. 10 Ziser and Reichart (2017) compared the accuracy of their AE-SCL-SR model to MSDA (Chen et al., 2011) , a well known domain adaptation method, and kindly sent us the required p-values.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 89, |
| "text": "(Pinter et al., 2017)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 149, |
| "end": 167, |
| "text": "Ling et al. (2015)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 289, |
| "end": 311, |
| "text": "(Al-Rfou et al., 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 360, |
| "end": 382, |
| "text": "Marneffe et al., 2014)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 385, |
| "end": 405, |
| "text": "Pinter et al. (2017)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 613, |
| "end": 633, |
| "text": "Pinter et al. (2017)", |
| "ref_id": "BIBREF61" |
| }, |
| { |
| "start": 1075, |
| "end": 1096, |
| "text": "Blitzer et al. (2007)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 1320, |
| "end": 1322, |
| "text": "10", |
| "ref_id": null |
| }, |
| { |
| "start": 1404, |
| "end": 1423, |
| "text": "(Chen et al., 2011)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "POS Tagging", |
| "sec_num": null |
| }, |
| { |
| "text": "We compare two state-of-the-art word embedding collections: (a) word2vec CBOW (Mikolov et al., 2013) vectors, generated by the model titled the best \"predict\" model in ; 11 and (b) GloVe (Pennington et al., 2014) vectors generated by a model trained on a 42B token common web crawl. 12 We employed the demo of Faruqui and Dyer (2014) to perform a Spearman correlation evaluation of these vector collections on 12 English word pair datasets: WS-353 (Finkelstein et al., 2001b) , WS-353-SIM (Agirre et al., 2009) , WS-353-REL (Agirre et al., 2009) , MC-30 (Miller and Charles, 1991) , RG-65 (Rubenstein and Goodenough, 1965) , Rare-Word (Luong et al., 2013) , MEN (Bruni et al., 2012) , MTurk-287 (Radinsky et al., 2011) , MTurk-771 (Halawi et al., 2012) , YP-130 (Yang and Powers, ) , SimLex-999 (Hill et al., 2016) , and Verb-143 (Baker et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 100, |
| "text": "(Mikolov et al., 2013)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 187, |
| "end": 212, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF59" |
| }, |
| { |
| "start": 283, |
| "end": 285, |
| "text": "12", |
| "ref_id": null |
| }, |
| { |
| "start": 310, |
| "end": 333, |
| "text": "Faruqui and Dyer (2014)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 448, |
| "end": 475, |
| "text": "(Finkelstein et al., 2001b)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 489, |
| "end": 510, |
| "text": "(Agirre et al., 2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 524, |
| "end": 545, |
| "text": "(Agirre et al., 2009)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 554, |
| "end": 580, |
| "text": "(Miller and Charles, 1991)", |
| "ref_id": "BIBREF52" |
| }, |
| { |
| "start": 589, |
| "end": 622, |
| "text": "(Rubenstein and Goodenough, 1965)", |
| "ref_id": "BIBREF64" |
| }, |
| { |
| "start": 635, |
| "end": 655, |
| "text": "(Luong et al., 2013)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 662, |
| "end": 682, |
| "text": "(Bruni et al., 2012)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 695, |
| "end": 718, |
| "text": "(Radinsky et al., 2011)", |
| "ref_id": "BIBREF63" |
| }, |
| { |
| "start": 731, |
| "end": 752, |
| "text": "(Halawi et al., 2012)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 762, |
| "end": 781, |
| "text": "(Yang and Powers, )", |
| "ref_id": null |
| }, |
| { |
| "start": 795, |
| "end": 814, |
| "text": "(Hill et al., 2016)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 830, |
| "end": 850, |
| "text": "(Baker et al., 2014)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Similarity", |
| "sec_num": null |
| }, |
| { |
| "text": "We first calculate the p\u2212values for each task and dataset according to the principals of p\u2212values computation for NLP as discussed in Yeh (2000) , Berg-Kirkpatrick et al. (2012) and S\u00f8gaard et al. (2014) .", |
| "cite_spans": [ |
| { |
| "start": 134, |
| "end": 144, |
| "text": "Yeh (2000)", |
| "ref_id": "BIBREF75" |
| }, |
| { |
| "start": 147, |
| "end": 177, |
| "text": "Berg-Kirkpatrick et al. (2012)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 182, |
| "end": 203, |
| "text": "S\u00f8gaard et al. (2014)", |
| "ref_id": "BIBREF69" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For dependency parsing, we employ the aparametric paired bootstrap test (Efron and Tibshirani, 1994 ) that does not assume any distribution on the test statistics. We chose this test because the distribution of the values for the measures commonly applied in this task is unknown. We implemented the test as in (Berg-Kirkpatrick et al., 2012) with a bootstrap size of 500 and with 10 5 repetitions.", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 99, |
| "text": "(Efron and Tibshirani, 1994", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
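The paired bootstrap procedure described above can be sketched as follows. This is a minimal illustration with hypothetical score lists, not the paper's code or data; for simplicity the bootstrap size here equals the evaluation-set size (the paper uses 500), and the one-sided counting rule follows the Berg-Kirkpatrick et al. (2012) style of comparing each bootstrap difference against twice the observed difference:

```python
import random

def paired_bootstrap_pvalue(scores_a, scores_b, repetitions=10_000, seed=0):
    """One-sided paired bootstrap test: H0 is that system A is not better
    than system B. Each repetition redraws score pairs with replacement
    and counts samples whose mean difference exceeds twice the observed
    one (Berg-Kirkpatrick et al., 2012-style centering)."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = sum(a - b for a, b in zip(scores_a, scores_b)) / n
    exceed = 0
    for _ in range(repetitions):
        idx = [rng.randrange(n) for _ in range(n)]
        delta = sum(scores_a[i] - scores_b[i] for i in idx) / n
        if delta > 2 * observed:
            exceed += 1
    return exceed / repetitions
```

With a consistent gap between the two score lists the returned p\u2212value is near 0; with symmetric differences around zero it hovers near 0.5, as expected under the null.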
| { |
| "text": "For multilingual POS tagging, we employ the Wilcoxon signed-rank test (Wilcoxon, 1945) on the differences of the sentence level accuracy scores of the two compared models. This test is a nonparametric test for differences in measure, testing the null hypothesis that the difference has a symmetric distribution around zero. It is appropriate for tasks with paired continuous measures for each observation, which is the case when comparing sentence level accuracies.", |
| "cite_spans": [ |
| { |
| "start": 70, |
| "end": 86, |
| "text": "(Wilcoxon, 1945)", |
| "ref_id": "BIBREF73" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
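The Wilcoxon signed-rank test on paired sentence-level accuracies can be sketched in pure Python as below. This is an illustrative re-implementation (normal-approximation, two-sided variant), not the paper's code; in practice one would typically call `scipy.stats.wilcoxon`:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Wilcoxon signed-rank test on paired samples (two-sided, normal
    approximation). Zero differences are discarded; tied absolute
    differences receive average ranks."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1          # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p_value
```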
| { |
| "text": "11 http://clic.cimec.unitn.it/composes/ semantic-vectors.html. Parameters: 5-word context window, 10 negative samples, subsampling, 400 dimensions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "12 http://nlp.stanford.edu/projects/glove/. 300 dimensions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "For sentiment classification we employ the Mc-Nemar test for paired nominal data (McNemar, 1947) . This test is appropriate for binary classification tasks and since we compare the results of the algorithms when applied on the same datasets, we employ its paired version. Finally, for word similarity with its Spearman correlation evaluation, we choose the Steiger test (Steiger, 1980) for comparing elements in a correlation matrix.", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 96, |
| "text": "(McNemar, 1947)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 370, |
| "end": 385, |
| "text": "(Steiger, 1980)", |
| "ref_id": "BIBREF71" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
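The paired McNemar test operates only on the discordant pairs of a paired binary comparison. A minimal sketch of the exact (binomial) variant is below; the cell names `b` and `c` are illustrative, not the paper's notation:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test for paired binary outcomes. b = items only
    system A classified correctly, c = items only system B classified
    correctly; two-sided p-value from Binomial(b + c, 0.5) on the
    discordant pairs."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)   # double the smaller tail, capped at 1
```

A lopsided discordant split (e.g. 15 vs. 2) yields a small p\u2212value, while a balanced split yields a p\u2212value of 1.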
| { |
| "text": "We consider the case of \u03b1 = 0.05 for all four applications. For the dependent datasets experiments (sentiment classification and word similarity prediction) with their generally lower p\u2212values (see below), we also consider the case where \u03b1 = 0.01. Table 4 : Cross-domain sentiment classification accuracy for models taken from (Ziser and Reichart, 2017) . In an X \u2192 Y setup, X is the source domain and Y is the target domain. * and + indicate domains identified by the Holm procedure with \u03b1 = 0.05 and \u03b1 = 0.01, respectively.", |
| "cite_spans": [ |
| { |
| "start": 327, |
| "end": 353, |
| "text": "(Ziser and Reichart, 2017)", |
| "ref_id": "BIBREF76" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 248, |
| "end": 255, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Statistical Significance Tests", |
| "sec_num": "6.2" |
| }, |
| { |
| "text": "where in most domains the differences between the compared algorithms are smaller and the p\u2212values are higher (Mate vs. Redshift). Our multilingual POS tagging scenario (MIMICK vs. Char\u2192Tag) is more similar to scenario (b) in terms of the differences between the participating algorithms. Table 1 demonstrates thek estimators for the various tasks and scenarios. For dependency parsing, as expected, in scenario (a) where all the p\u2212values are small, all estimators, even the error-pronek count , provide the same information. In case (b) of dependency parsing, however,k F isher estimates the number of domains where Mate outperforms Redshift to be 5, whilek count estimates this number to be 2. This is a substantial difference given that the number of domains is 7. Thek Bonf erroni estimator, that is valid under arbitrary dependencies, is even more conservative thank count and its estimation is only 1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 289, |
| "end": 296, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Perhaps not surprisingly, the multilingual POS and the GLOVE model. * and + are as in Table 4 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 86, |
| "end": 93, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "tagging results are similar to case (b) of dependency parsing. Here, again,k count is too conservative, estimating the number of languages with effect to be 11 (out of 23) whilek F isher estimates this number to be 16 (an increase of 5/23 in the estimated number of languages with effect).k Bonf erroni is again more conservative, estimating the number of languages with effect to be only 6, which is not very surprising given that it does not exploit the independence between the datasets. These two examples of case (b) demonstrate that when the differences between the algorithms are quite small,k F isher may be more sensitive than the current practice in NLP for discovering the number of datasets with effect.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
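The partial-conjunction estimators compared above can be sketched as follows. This is a hypothetical implementation in the spirit of Benjamini and Heller (2008), with function names of our choosing: the Fisher version combines the n\u2212u+1 largest p\u2212values (valid for independent datasets), the Bonferroni version applies a bound valid under arbitrary dependence, and k\u0302 is the largest u for which the null "fewer than u datasets have an effect" is rejected:

```python
import math

def fisher_pc_pvalue(pvals, u):
    """Fisher partial-conjunction p-value for 'at least u of n datasets
    show an effect' (independent tests): combine the n-u+1 LARGEST
    p-values with Fisher's method."""
    largest = sorted(pvals)[u - 1:]          # p_(u), ..., p_(n)
    stat = -2 * sum(math.log(p) for p in largest)
    m = len(largest)                         # chi-square with 2m df
    # closed-form chi-square survival function for even degrees of freedom
    return math.exp(-stat / 2) * sum((stat / 2) ** i / math.factorial(i)
                                     for i in range(m))

def bonferroni_pc_pvalue(pvals, u):
    """Bonferroni partial-conjunction p-value, valid under arbitrary
    dependence between the datasets."""
    return min(1.0, (len(pvals) - u + 1) * sorted(pvals)[u - 1])

def k_hat(pvals, pc_pvalue, alpha=0.05):
    """Largest u whose partial-conjunction null is rejected at level
    alpha (the pc p-values are nondecreasing in u, so we can stop at
    the first failure)."""
    k = 0
    for u in range(1, len(pvals) + 1):
        if pc_pvalue(pvals, u) <= alpha:
            k = u
        else:
            break
    return k
```

On an illustrative p\u2212value list such as [0.001, 0.01, 0.02, 0.04, 0.07, 0.2, 0.5], the Fisher estimator returns 4 while the Bonferroni estimator returns only 1, mirroring the gap between the estimators reported in the text.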
| { |
| "text": "To complete the analysis, we would like to name the datasets with effect. As discussed in Section 4.2, while this can be straightforwardly done by naming the datasets with thek smallest p\u2212values, in general, this approach does not control the probability of identifying at least one dataset erroneously. We thus employ the Holm procedure for the identification task, noticing that the number of datasets it identifies should be equal to the value of thek Bonf erroni estimator (Section 4.3).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
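The Holm step-down procedure used for the identification task can be sketched as below (an illustrative implementation, ours, not the paper's code):

```python
def holm(pvals, alpha=0.05):
    """Holm step-down procedure: returns the indices of datasets whose
    null hypotheses are rejected with family-wise error rate <= alpha.
    The i-th smallest p-value is compared against alpha / (n - i + 1)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    rejected = []
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (n - rank):
            rejected.append(i)
        else:
            break                # step-down: stop at the first failure
    return rejected
```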
| { |
| "text": "Indeed, for dependency parsing in case (a), the Holm procedure identifies all seven domains as cases where Mate outperforms SpaCy, while in case (b) it identifies only the MZ domain as a case where Mate outperforms Redshift. For multilingual POS tagging the Holm procedure identifies Tamil, Hungarian, Basque, Indonesian, Chinese and Czech as languages where MIMICK outperforms Char\u2192Tag. This analysis demonstrates that when the performance gap between two algorithms becomes narrower, inquiring for more information (i.e. identifying the domains with effect rather than just estimating their number), may result in weaker results. 13 Dependent Datasets In cross-domain sentiment classification (Table 4 ) and word similarity prediction (Table 5) , the involved datasets manifest mutual dependence. Particularly, each sentiment setup shares its test dataset with 2 other setups, while in word similarity WS-353 is the union of WS-353-REL and WS-353-SIM. As discussed in Section 4, k Bonf erroni is the appropriate estimator of the number of cases one algorithm outperforms another.", |
| "cite_spans": [ |
| { |
| "start": 632, |
| "end": 634, |
| "text": "13", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 695, |
| "end": 703, |
| "text": "(Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 737, |
| "end": 746, |
| "text": "(Table 5)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The results in Table 1 manifest the phenomenon demonstrated by the second toy example in Section 5, which shows that when the datasets are dependent,k F isher as well as the error-pronek count may be too optimistic regarding the number of datasets with effect. This stands in contrast t\u00f4 k Bonf erroni which controls the probability to overestimate the number of such datasets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Indeed,k Bonf erroni is much more conservative, yielding values of 6 (\u03b1 = 0.05) and 2 (\u03b1 = 0.01) for sentiment, and of 6 (\u03b1 = 0.05) and 4 (\u03b1 = 0.01) for word similarity. The differences from the conclusions that might have been drawn byk count are again quite substantial. The difference between k Bonf erroni andk count in sentiment classification is 4, which accounts to 1/3 of the 12 test setups. Even for word similarity, the difference between the two methods, which account to 2 for both \u03b1 values, represents 1/6 of the 12 test setups. The domains identified by the Holm procedure are marked in the tables.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Results Overview Our goal in this section is to demonstrate that the approach of simply looking at the number of datasets for which the difference between the performance of the algorithms reaches a predefined significance level, gives different results from our suggested statistically sound analysis. This approach is denoted here withk count and shown to be statistically not valid in Sections 3.2 and 5. We observe that this happens especially in evaluation setups where the differences between the algorithms are small for most datasets. In some cases, when the datasets are independent, our analysis has the power to declare a larger number of datasets with effect than the number of individual significant test values (k count ). In other cases, when the datasets are interdependent,k count is much too optimistic.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Our proposed analysis changes the observations that might have been made based on the papers where the results analyzed here were originally reported. For example, for the Mate-Redshift comparison (independent evaluation sets), we show that there is evidence that the number of datasets with effect is much higher than one would assume based on counting the significant sets (5 vs. 2 out of 7 evaluation sets), giving a stronger claim regarding the superiority of Mate. In multilingual POS tagging (again, a setup of independent evaluation sets) our analysis shows evidence for 16 sets with effect compared to only 11 of the erroneous count method -a difference in 5 out of 23 evaluation sets (21.7%). Finally, in the cross-domain sentiment classification and the word similarity judgment tasks (dependent evaluation sets), the unjustified counting method may be too optimistic (e.g. 10 vs. 6 out of 12 evaluation sets, for \u03b1 = 0.05 in the sentiment task), in favor of the new algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "We proposed a statistically sound replicability analysis framework for cases where algorithms are compared across multiple datasets. Our main contributions are: (a) analyzing the limitations of the current practice in NLP work; and (b) proposing a framework that addresses both the estimation of the number of datasets with effect and their identification.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Future Directions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The framework we propose addresses two different situations encountered in NLP: independent and dependent datasets. For dependent datasets, we assumed that the type of dependency cannot be determined. One could use more powerful methods if certain assumptions on the dependency between the test statistics could be made. For example, one could use the partial conjunction p-value based on Simes test for the global null hypothesis (Simes, 1986) , which was proposed by Benjamini and Heller (2008) for the case where the test statistics satisfy certain positive dependency properties (see Theorem 1 in (Benjamini and Heller, 2008) ). Using this partial conjunction p-value rather than the one based on Bonferroni, one may obtain higher values ofk with the same statistical guarantee. Similarly, for the identification question, if certain positive dependency properties hold, Holm's procedure could be replaced by Hochberg's or Hommel's procedures (Hochberg, 1988; Hommel, 1988) which are more powerful.", |
| "cite_spans": [ |
| { |
| "start": 431, |
| "end": 444, |
| "text": "(Simes, 1986)", |
| "ref_id": "BIBREF67" |
| }, |
| { |
| "start": 469, |
| "end": 496, |
| "text": "Benjamini and Heller (2008)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 601, |
| "end": 629, |
| "text": "(Benjamini and Heller, 2008)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 947, |
| "end": 963, |
| "text": "(Hochberg, 1988;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 964, |
| "end": 977, |
| "text": "Hommel, 1988)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Future Directions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "An alternative, more powerful multiple testing procedure for identification of datasets with effect, is the method in Benjamini and Hochberg (1995) , that controls the false discovery rate (FDR), a less strict error criterion than the one considered here. This method is more appropriate in cases where one may tolerate some errors as long as the proportion of errors among all the claims made is small, as expected to happen when the number of datasets grows.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 147, |
| "text": "Benjamini and Hochberg (1995)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Future Directions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We note that the increase in the number of evaluation datasets may have positive and negative aspects. As noted in Section 2, we believe that multiple comparisons are integral to NLP research when aiming to develop algorithms that perform well across languages and domains. On the other hand, experimenting with multiple evaluation sets that reflect very similar linguistic phenomena may only complicate the comparison between alternative algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Future Directions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "In fact, our analysis is useful mostly where the datasets are heterogeneous, coming from different languages or domains. When they are just technically different but could potentially be just combined into a one big dataset, then we believe the question of Dem\u0161ar (2006) , whether at least one dataset shows evidence for effect, is more appropriate.", |
| "cite_spans": [ |
| { |
| "start": 257, |
| "end": 270, |
| "text": "Dem\u0161ar (2006)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Future Directions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Our code is at: https://github.com/rtmdrr/replicabilityanalysis-NLP .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, vol. 5, pp. 471-486, 2017. Action Editor: Brian Roark.Submission batch: 3/2017; Revision batch: 7/2017; Published 11/2017. c 2017 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For simplicity we refer to dependent/independent datasets as those for which the test statistics are dependent/independent. We assume the test statistics are independent if the corresponding datasets do not have mutual samples, and one dataset is not a transformation of the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "This result is a special case of Theorem 4 inBenjamini and Heller (2008).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We use \u03b1 in two different contexts: the significance level of an individual test and the bound on the probability to overestimate k. This is the standard notation in the statistical literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "code.google.com/p/mate-tools. 9 github.com/syllog1sm/Redshift.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cs.jhu.edu/\u02dcmdredze/ datasets/sentiment", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For completeness, we also performed the analysis for the independent dataset setups with \u03b1 = 0.01. The results are (kcount,k Bonf erroni ,k F isher ): Mate vs. SpaCy: (7,7,7); Mate vs. Redshift (1,0,2); MIMICK vs. Char\u2192Tag: (7,5,13). The patterns are very similar to those discussed in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research of M. Bogomolov was supported by the Israel Science Foundation grant No. 1112/14. We thank Yuval Pinter for his great help with the multilingual experiments and for his useful feedback. We also thank Ruth Heller, Marten van Schijndel, Oren Tsur, Or Zuk and the ie@technion NLP group members for their useful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A study on similarity and relatedness using distributional and WordNet-based approaches", |
| "authors": [ |
| { |
| "first": "Eneko", |
| "middle": [], |
| "last": "Agirre", |
| "suffix": "" |
| }, |
| { |
| "first": "Enrique", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "Keith", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "Jana", |
| "middle": [], |
| "last": "Kravalova", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pa\u015fca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Polyglot: Distributed word representations for multilingual NLP", |
| "authors": [ |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Perozzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Skiena", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2013. Polyglot: Distributed word representations for multi- lingual NLP. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "An unsupervised model for instance level subcategorization acquisition", |
| "authors": [ |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Baker", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simon Baker, Roi Reichart, and Anna Korhonen. 2014. An unsupervised model for instance level subcatego- rization acquisition. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Don't count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Georgiana", |
| "middle": [], |
| "last": "Dinu", |
| "suffix": "" |
| }, |
| { |
| "first": "Germ\u00e1n", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Baroni, Georgiana Dinu, and Germ\u00e1n Kruszewski. 2014. Don't count, predict! a systematic compari- son of context-counting vs. context-predicting seman- tic vectors. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Drug development: Raise standards for preclinical cancer research", |
| "authors": [ |
| { |
| "first": "Glenn", |
| "middle": [], |
| "last": "Begley", |
| "suffix": "" |
| }, |
| { |
| "first": "Lee", |
| "middle": [ |
| "M" |
| ], |
| "last": "Ellis", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Nature", |
| "volume": "483", |
| "issue": "7391", |
| "pages": "531--533", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Glenn Begley and Lee M. Ellis. 2012. Drug develop- ment: Raise standards for preclinical cancer research. Nature, 483(7391):531-533.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Screening for partial conjunction hypotheses", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Benjamini", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruth", |
| "middle": [], |
| "last": "Heller", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Biometrics", |
| "volume": "64", |
| "issue": "4", |
| "pages": "1215--1222", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Benjamini and Ruth Heller. 2008. Screen- ing for partial conjunction hypotheses. Biometrics, 64(4):1215-1222.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Controlling the false discovery rate: A practical and powerful approach to multiple testing", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Benjamini", |
| "suffix": "" |
| }, |
| { |
| "first": "Yosef", |
| "middle": [], |
| "last": "Hochberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Journal of the Royal Statistical Society. Series B (Methodological)", |
| "volume": "", |
| "issue": "", |
| "pages": "289--300", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Benjamini and Yosef Hochberg. 1995. Control- ling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Sta- tistical Society. Series B (Methodological), pages 289- 300.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Adaptive linear step-up procedures that control the false discovery rate", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Benjamini", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Abba", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Krieger", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Yekutieli", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Biometrika", |
| "volume": "", |
| "issue": "", |
| "pages": "491--507", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Benjamini, Abba M. Krieger, and Daniel Yekutieli. 2006. Adaptive linear step-up procedures that control the false discovery rate. Biometrika, pages 491-507.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Selective inference in complex research", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Benjamini", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruth", |
| "middle": [], |
| "last": "Heller", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Yekutieli", |
| "suffix": "" |
| } |
| ], |
| "year": 1906, |
| "venue": "Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences", |
| "volume": "367", |
| "issue": "", |
| "pages": "4255--4271", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Benjamini, Ruth Heller, and Daniel Yekutieli. 2009. Selective inference in complex research. Philo- sophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 367(1906):4255-4271.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "An empirical investigation of statistical significance in NLP", |
| "authors": [ |
| { |
| "first": "Taylor", |
| "middle": [], |
| "last": "Berg-Kirkpatrick", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Burkett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical signifi- cance in NLP. In Proceedings of EMNLP-CoNLL.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Domain adaptation with structural correspondence learning", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspon- dence learning. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Blitzer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Very high accuracy and fast dependency parsing is not a contradiction", |
| "authors": [ |
| { |
| "first": "Bernd", |
| "middle": [], |
| "last": "Bohnet", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bernd Bohnet. 2010. Very high accuracy and fast depen- dency parsing is not a contradiction. In Proceedings of COLING.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Distributional semantics in technicolor", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Gemma", |
| "middle": [], |
| "last": "Boleda", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Gemma Boleda, Marco Baroni, and Nam- Khanh Tran. 2012. Distributional semantics in tech- nicolor. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Multimodal distributional semantics", |
| "authors": [ |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Nam-Khanh", |
| "middle": [], |
| "last": "Tran", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Baroni", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Artificial Intelligence Research (JAIR)", |
| "volume": "49", |
| "issue": "", |
| "pages": "1--47", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Arti- ficial Intelligence Research (JAIR), 49:1-47.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "CoNLL-x shared task on multilingual dependency parsing", |
| "authors": [ |
| { |
| "first": "Sabine", |
| "middle": [], |
| "last": "Buchholz", |
| "suffix": "" |
| }, |
| { |
| "first": "Erwin", |
| "middle": [], |
| "last": "Marsi", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sabine Buchholz and Erwin Marsi. 2006. CoNLL-x shared task on multilingual dependency parsing. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Domain adaptation with active learning for word sense disambiguation", |
| "authors": [ |
| { |
| "first": "Yee", |
| "middle": [], |
| "last": "Seng Chan", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yee Seng Chan and Hwee Tou Ng. 2007. Domain adap- tation with active learning for word sense disambigua- tion. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A maximum-entropy-inspired parser", |
| "authors": [ |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Automatic feature decomposition for single view co-training", |
| "authors": [ |
| { |
| "first": "Minmin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yixin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Kilian", |
| "middle": [ |
| "Q" |
| ], |
| "last": "Weinberger", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minmin Chen, Yixin Chen, and Kilian Q. Weinberger. 2011. Automatic feature decomposition for single view co-training. In Proceedings of ICML.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "An open, largescale, collaborative effort to estimate the reproducibility of psychological science", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Open Science Collaboration", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL. Open Science Collaboration", |
| "volume": "7", |
| "issue": "", |
| "pages": "657--660", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jinho D. Choi, Joel Tetreault, and Amanda Stent. 2015. It depends: Dependency parser comparison using a web-based evaluation tool. In Proceedings of ACL. Open Science Collaboration. 2012. An open, large- scale, collaborative effort to estimate the reproducibil- ity of psychological science. Perspectives on Psycho- logical Science, 7(6):657-660.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Head-driven statistical models for natural language parsing", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "29", |
| "issue": "", |
| "pages": "589--637", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational linguis- tics, 29(4):589-637.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Frustratingly easy domain adaptation", |
| "authors": [ |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9 III", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hal Daum\u00e9 III. 2007. Frustratingly easy domain adapta- tion. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Stanford dependencies: A cross-linguistic typology", |
| "authors": [ |
| { |
| "first": "Marie-Catherine De", |
| "middle": [], |
| "last": "Marneffe", |
| "suffix": "" |
| }, |
| { |
| "first": "Timothy", |
| "middle": [], |
| "last": "Dozat", |
| "suffix": "" |
| }, |
| { |
| "first": "Natalia", |
| "middle": [], |
| "last": "Silveira", |
| "suffix": "" |
| }, |
| { |
| "first": "Katri", |
| "middle": [], |
| "last": "Haverinen", |
| "suffix": "" |
| }, |
| { |
| "first": "Filip", |
| "middle": [], |
| "last": "Ginter", |
| "suffix": "" |
| }, |
| { |
| "first": "Joakim", |
| "middle": [], |
| "last": "Nivre", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marie-Catherine De Marneffe, Timothy Dozat, Natalia Silveira, Katri Haverinen, Filip Ginter, Joakim Nivre, and Christopher D. Manning. 2014. Stanford depen- dencies: A cross-linguistic typology. In Proceedings of LREC.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Statistical comparisons of classifiers over multiple data sets", |
| "authors": [ |
| { |
| "first": "Janez", |
| "middle": [], |
| "last": "Dem\u0161ar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "1--30", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janez Dem\u0161ar. 2006. Statistical comparisons of clas- sifiers over multiple data sets. Journal of Machine Learning Research, 7:1-30.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Approximate statistical tests for comparing supervised classification learning algorithms", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "G" |
| ], |
| "last": "Dietterich", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Neural computation", |
| "volume": "10", |
| "issue": "7", |
| "pages": "1895--1923", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural computation, 10(7):1895-1923.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "An introduction to the bootstrap", |
| "authors": [ |
| { |
| "first": "Bradley", |
| "middle": [], |
| "last": "Efron", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "J" |
| ], |
| "last": "Tibshirani", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bradley Efron and Robert J. Tibshirani. 1994. An intro- duction to the bootstrap. CRC press.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion", |
| "authors": [ |
| { |
| "first": "Alessio", |
| "middle": [], |
| "last": "Farcomeni", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Statistical Methods in Medical Research", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alessio Farcomeni. 2007. A review of modern multiple hypothesis testing, with particular attention to the false discovery proportion. Statistical Methods in Medical Research.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Community evaluation and exchange of word vectors at wordvectors.org", |
| "authors": [ |
| { |
| "first": "Manaal", |
| "middle": [], |
| "last": "Faruqui", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the ACL: System Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Manaal Faruqui and Chris Dyer. 2014. Community evaluation and exchange of word vectors at wordvec- tors.org. In Proceedings of the ACL: System Demon- strations.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001a. Placing search in context: The con- cept revisited. In Proceedings of WWW.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Placing search in context: The concept revisited", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Finkelstein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yossi", |
| "middle": [], |
| "last": "Matias", |
| "suffix": "" |
| }, |
| { |
| "first": "Ehud", |
| "middle": [], |
| "last": "Rivlin", |
| "suffix": "" |
| }, |
| { |
| "first": "Zach", |
| "middle": [], |
| "last": "Solan", |
| "suffix": "" |
| }, |
| { |
| "first": "Gadi", |
| "middle": [], |
| "last": "Wolfman", |
| "suffix": "" |
| }, |
| { |
| "first": "Eytan", |
| "middle": [], |
| "last": "Ruppin", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001b. Placing search in context: The con- cept revisited. In Proceedings of WWW.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Domain adaptation with latent semantic association for named entity recognition", |
| "authors": [ |
| { |
| "first": "Honglei", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Huijia", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhili", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoxun", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xian", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhong", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Honglei Guo, Huijia Zhu, Zhili Guo, Xiaoxun Zhang, Xian Wu, and Zhong Su. 2009. Domain adapta- tion with latent semantic association for named entity recognition. In Proceedings of HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Large-scale learning of word relatedness with constraints", |
| "authors": [ |
| { |
| "first": "Guy", |
| "middle": [], |
| "last": "Halawi", |
| "suffix": "" |
| }, |
| { |
| "first": "Gideon", |
| "middle": [], |
| "last": "Dror", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Yehuda", |
| "middle": [], |
| "last": "Koren", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACM SIGKDD", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guy Halawi, Gideon Dror, Evgeniy Gabrilovich, and Yehuda Koren. 2012. Large-scale learning of word relatedness with constraints. In Proceedings of ACM SIGKDD.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Deciding whether follow-up studies have replicated findings in a preliminary large-scale omics study", |
| "authors": [ |
| { |
| "first": "Ruth", |
| "middle": [], |
| "last": "Heller", |
| "suffix": "" |
| }, |
| { |
| "first": "Marina", |
| "middle": [], |
| "last": "Bogomolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Benjamini", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "111", |
| "issue": "46", |
| "pages": "16262--16267", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ruth Heller, Marina Bogomolov, and Yoav Benjamini. 2014. Deciding whether follow-up studies have repli- cated findings in a preliminary large-scale omics study. Proceedings of the National Academy of Sciences, 111(46):16262-16267.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Does high public debt consistently stifle economic growth? a critique of Reinhart and Rogoff", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Herndon", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ash", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Pollin", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Cambridge Journal of Economics", |
| "volume": "38", |
| "issue": "2", |
| "pages": "257--279", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas Herndon, Michael Ash, and Robert Pollin. 2014. Does high public debt consistently stifle economic growth? a critique of Reinhart and Rogoff. Cambridge Journal of Economics, 38(2):257-279.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Computational Linguistics", |
| "volume": "41", |
| "issue": "4", |
| "pages": "665--695", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2015. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics, 41(4):665-695.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Simlex-999: Evaluating semantic models with (genuine) similarity estimation", |
| "authors": [ |
| { |
| "first": "Felix", |
| "middle": [], |
| "last": "Hill", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Felix Hill, Roi Reichart, and Anna Korhonen. 2016. Simlex-999: Evaluating semantic models with (gen- uine) similarity estimation. Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "A sharper Bonferroni procedure for multiple tests of significance", |
| "authors": [ |
| { |
| "first": "Yosef", |
| "middle": [], |
| "last": "Hochberg", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Biometrika", |
| "volume": "75", |
| "issue": "4", |
| "pages": "800--802", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yosef Hochberg. 1988. A sharper Bonferroni proce- dure for multiple tests of significance. Biometrika, 75(4):800-802.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A simple sequentially rejective multiple test procedure", |
| "authors": [ |
| { |
| "first": "Sture", |
| "middle": [], |
| "last": "Holm", |
| "suffix": "" |
| } |
| ], |
| "year": 1979, |
| "venue": "Scandinavian Journal of Statistics", |
| "volume": "6", |
| "issue": "2", |
| "pages": "65--70", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sture Holm. 1979. A simple sequentially rejective multi- ple test procedure. Scandinavian Journal of Statistics, 6(2):65-70.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "A stagewise rejective multiple test procedure based on a modified Bonferroni test", |
| "authors": [ |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Hommel", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Biometrika", |
| "volume": "75", |
| "issue": "2", |
| "pages": "383--386", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gerhard Hommel. 1988. A stagewise rejective multi- ple test procedure based on a modified Bonferroni test. Biometrika, 75(2):383-386.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "An improved non-monotonic transition system for dependency parsing", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Honnibal", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Honnibal and Mark Johnson. 2015. An im- proved non-monotonic transition system for depen- dency parsing. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "A non-monotonic arc-eager transition system for dependency parsing", |
| "authors": [ |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Honnibal", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matthew Honnibal, Yoav Goldberg, and Mark Johnson. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Experiments in domain adaptation for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Josh", |
| "middle": [], |
| "last": "Schroeder", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Europarl: A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the tenth Machine Translation Summit", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the tenth Machine Translation Summit.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Opinion: Reproducible research can still be wrong: Adopting a prevention approach", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [ |
| "T" |
| ], |
| "last": "Leek", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [ |
| "D" |
| ], |
| "last": "Peng", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the National Academy of Sciences", |
| "volume": "112", |
| "issue": "6", |
| "pages": "1645--1646", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey T. Leek and Roger D Peng. 2015. Opinion: Reproducible research can still be wrong: Adopting a prevention approach. Proceedings of the National Academy of Sciences, 112(6):1645-1646.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Dependencybased word embeddings", |
| "authors": [ |
| { |
| "first": "Omer", |
| "middle": [], |
| "last": "Levy", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Goldberg", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Omer Levy and Yoav Goldberg. 2014. Dependency- based word embeddings. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Finding function in form: Compositional character models for open vocabulary word representation", |
| "authors": [ |
| { |
| "first": "Wang", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Alan", |
| "middle": [ |
| "W" |
| ], |
| "last": "Black", |
| "suffix": "" |
| }, |
| { |
| "first": "Isabel", |
| "middle": [], |
| "last": "Trancoso", |
| "suffix": "" |
| }, |
| { |
| "first": "Ramon", |
| "middle": [], |
| "last": "Fermandez", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Amir", |
| "suffix": "" |
| }, |
| { |
| "first": "Luis", |
| "middle": [], |
| "last": "Marujo", |
| "suffix": "" |
| }, |
| { |
| "first": "Tiago", |
| "middle": [], |
| "last": "Luis", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Com- positional character models for open vocabulary word representation. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "A systematic comparison of methods for combining p-values from independent tests", |
| "authors": [ |
| { |
| "first": "Thomas", |
| "middle": [ |
| "M" |
| ], |
| "last": "Loughin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Statistics & Data Analysis", |
| "volume": "47", |
| "issue": "3", |
| "pages": "467--485", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas M. Loughin. 2004. A systematic compari- son of methods for combining p-values from indepen- dent tests. Computational Statistics & Data Analysis, 47(3):467-485.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Better word representations with recursive neural networks for morphology", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Richard Socher, and Christopher D. Manning. 2013. Better word representations with re- cursive neural networks for morphology. In Proceed- ings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [ |
| "P" |
| ], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Mary", |
| "middle": [ |
| "Ann" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "Beatrice", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beat- rice Santorini. 1993. Building a large annotated cor- pus of English: The Penn Treebank. Computational linguistics, 19(2):313-330.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Replication issues in syntax-based aspect extraction for opinion mining", |
| "authors": [ |
| { |
| "first": "Edison", |
| "middle": [], |
| "last": "Marrese-Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Matsuo", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the Student Research Workshop at EACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Edison Marrese-Taylor and Yutaka Matsuo. 2017. Repli- cation issues in syntax-based aspect extraction for opinion mining. In Proceedings of the Student Re- search Workshop at EACL.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Note on the sampling error of the difference between correlated proportions or percentages", |
| "authors": [ |
| { |
| "first": "Quinn", |
| "middle": [], |
| "last": "Mcnemar", |
| "suffix": "" |
| } |
| ], |
| "year": 1947, |
| "venue": "Psychometrika", |
| "volume": "12", |
| "issue": "2", |
| "pages": "153--157", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Quinn McNemar. 1947. Note on the sampling error of the difference between correlated proportions or per- centages. Psychometrika, 12(2):153-157.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Distributed representations of words and phrases and their compositionality", |
| "authors": [ |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Gregory", |
| "middle": [ |
| "S" |
| ], |
| "last": "Corrado", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Dean", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their composi- tionality. In Proceedings of NIPS.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Contextual correlates of semantic similarity", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "A" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Walter", |
| "middle": [ |
| "G" |
| ], |
| "last": "Charles", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "Language and cognitive processes", |
| "volume": "6", |
| "issue": "1", |
| "pages": "1--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller and Walter G. Charles. 1991. Contex- tual correlates of semantic similarity. Language and cognitive processes, 6(1):1-28.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "Most published research findings are false but a little replication goes a long way", |
| "authors": [ |
| { |
| "first": "Ramal", |
| "middle": [], |
| "last": "Moonesinghe", |
| "suffix": "" |
| }, |
| { |
| "first": "Muin", |
| "middle": [ |
| "J" |
| ], |
| "last": "Khoury", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Cecile", |
| "J", |
| "W" |
| ], |
| "last": "Janssens", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "PLoS Med", |
| "volume": "4", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramal Moonesinghe, Muin J. Khoury, and A. Cecile J. W. Janssens. 2007. Most published research find- ings are false but a little replication goes a long way. PLoS Med, 4(2):e28.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Replicability of research in biomedical natural language processing: a pilot evaluation for a coding task", |
| "authors": [ |
| { |
| "first": "Aur\u00e9lie", |
| "middle": [], |
| "last": "N\u00e9v\u00e9ol", |
| "suffix": "" |
| }, |
| { |
| "first": "Cyril", |
| "middle": [], |
| "last": "Grouin", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [ |
| "Bretonnel" |
| ], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "Aude", |
| "middle": [], |
| "last": "Robert", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aur\u00e9lie N\u00e9v\u00e9ol, Cyril Grouin, Kevin Bretonnel Cohen, and Aude Robert. 2016. Replicability of research in biomedical natural language processing: a pilot evalu- ation for a coding task. Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "The CoNLL 2007 shared task on dependency parsing", |
| "authors": [ |
| { |
| "first": "Jens", |
| "middle": [], |
| "last": "Nilsson", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Deniz", |
| "middle": [], |
| "last": "Yuret", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Probabilistic distributional semantics", |
| "authors": [ |
| { |
| "first": "Diarmuid\u00f3", |
| "middle": [], |
| "last": "S\u00e9aghdha", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Korhonen", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Computational Linguistics", |
| "volume": "40", |
| "issue": "3", |
| "pages": "587--631", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diarmuid \u00d3 S\u00e9aghdha and Anna Korhonen. 2014. Probabilistic distributional semantics. Computational Linguistics, 40(3):587-631.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "A statistical definition for reproducibility and replicability", |
| "authors": [ |
| { |
| "first": "Prasad", |
| "middle": [], |
| "last": "Patil", |
| "suffix": "" |
| }, |
| { |
| "first": "Roger", |
| "middle": [ |
| "D" |
| ], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Leek", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Prasad Patil, Roger D. Peng, and Jeffrey Leek. 2016. A statistical definition for reproducibility and replicability. bioRxiv.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "Reproducible research in computational science", |
| "authors": [ |
| { |
| "first": "Roger", |
| "middle": [ |
| "D." |
| ], |
| "last": "Peng", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Science", |
| "volume": "334", |
| "issue": "6060", |
| "pages": "1226--1227", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roger D. Peng. 2011. Reproducible research in computational science. Science, 334(6060):1226-1227.", |
| "links": null |
| }, |
| "BIBREF59": { |
| "ref_id": "b59", |
| "title": "GloVe: Global vectors for word representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF60": { |
| "ref_id": "b60", |
| "title": "Overview of the 2012 shared task on parsing the web", |
| "authors": [ |
| { |
| "first": "Slav", |
| "middle": [], |
| "last": "Petrov", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "McDonald", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL).", |
| "links": null |
| }, |
| "BIBREF61": { |
| "ref_id": "b61", |
| "title": "Mimicking word embeddings using subword RNNs", |
| "authors": [ |
| { |
| "first": "Yuval", |
| "middle": [], |
| "last": "Pinter", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Guthrie", |
| "suffix": "" |
| }, |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Eisenstein", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuval Pinter, Robert Guthrie, and Jacob Eisenstein. 2017. Mimicking word embeddings using subword RNNs. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF62": { |
| "ref_id": "b62", |
| "title": "Towards robust linguistic analysis using OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Hwee Tou", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Bj\u00f6rkelund", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhi", |
| "middle": [], |
| "last": "Zhong", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Hwee Tou Ng, Anders Bj\u00f6rkelund, Olga Uryupina, Yuchen Zhang, and Zhi Zhong. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF63": { |
| "ref_id": "b63", |
| "title": "A word at a time: Computing word relatedness using temporal semantic analysis", |
| "authors": [ |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Radinsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Agichtein", |
| "suffix": "" |
| }, |
| { |
| "first": "Evgeniy", |
| "middle": [], |
| "last": "Gabrilovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaul", |
| "middle": [], |
| "last": "Markovitch", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of WWW", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: Computing word relatedness using temporal semantic analysis. In Proceedings of WWW.", |
| "links": null |
| }, |
| "BIBREF64": { |
| "ref_id": "b64", |
| "title": "Contextual correlates of synonymy", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [], |
| "last": "Rubenstein", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "B" |
| ], |
| "last": "Goodenough", |
| "suffix": "" |
| } |
| ], |
| "year": 1965, |
| "venue": "Communications of the ACM", |
| "volume": "8", |
| "issue": "10", |
| "pages": "627--633", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627-633.", |
| "links": null |
| }, |
| "BIBREF65": { |
| "ref_id": "b65", |
| "title": "Symmetric pattern based word embeddings for improved word similarity prediction", |
| "authors": [ |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| }, |
| { |
| "first": "Ari", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF66": { |
| "ref_id": "b66", |
| "title": "Learning grounded meaning representations with autoencoders", |
| "authors": [ |
| { |
| "first": "Carina", |
| "middle": [], |
| "last": "Silberer", |
| "suffix": "" |
| }, |
| { |
| "first": "Mirella", |
| "middle": [], |
| "last": "Lapata", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL.", |
| "links": null |
| }, |
| "BIBREF67": { |
| "ref_id": "b67", |
| "title": "An improved Bonferroni procedure for multiple tests of significance", |
| "authors": [ |
| { |
| "first": "R.", |
| "middle": [ |
| "John" |
| ], |
| "last": "Simes", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Biometrika", |
| "volume": "", |
| "issue": "", |
| "pages": "751--754", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. John Simes. 1986. An improved Bonferroni procedure for multiple tests of significance. Biometrika, pages 751-754.", |
| "links": null |
| }, |
| "BIBREF68": { |
| "ref_id": "b68", |
| "title": "Cheap and fast-but is it good?: Evaluating non-expert annotations for natural language tasks", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [ |
| "Y." |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y. Ng. 2008. Cheap and fast-but is it good?: Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP.", |
| "links": null |
| }, |
| "BIBREF69": { |
| "ref_id": "b69", |
| "title": "What's in a p-value in NLP?", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| }, |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "Johannsen", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| }, |
| { |
| "first": "Dirk", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "H\u00e9ctor", |
| "middle": [], |
| "last": "Mart\u00ednez Alonso", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders S\u00f8gaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H\u00e9ctor Mart\u00ednez Alonso. 2014. What's in a p-value in NLP? In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF70": { |
| "ref_id": "b70", |
| "title": "Estimating effect size across datasets", |
| "authors": [ |
| { |
| "first": "Anders", |
| "middle": [], |
| "last": "S\u00f8gaard", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Anders S\u00f8gaard. 2013. Estimating effect size across datasets. In Proceedings of HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF71": { |
| "ref_id": "b71", |
| "title": "Tests for comparing elements of a correlation matrix", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [ |
| "H" |
| ], |
| "last": "Steiger", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Psychological Bulletin", |
| "volume": "87", |
| "issue": "2", |
| "pages": "245--251", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James H. Steiger. 1980. Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87(2):245-251.", |
| "links": null |
| }, |
| "BIBREF72": { |
| "ref_id": "b72", |
| "title": "OntoNotes: A large training corpus for enhanced processing. Handbook of Natural Language Processing and Machine Translation", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Belvin", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Weischedel, Eduard Hovy, Mitchell Marcus, Martha Palmer, Robert Belvin, Sameer Pradhan, Lance Ramshaw, and Nianwen Xue. 2011. OntoNotes: A large training corpus for enhanced processing. Handbook of Natural Language Processing and Machine Translation. Springer.", |
| "links": null |
| }, |
| "BIBREF73": { |
| "ref_id": "b73", |
| "title": "Individual comparisons by ranking methods", |
| "authors": [ |
| { |
| "first": "Frank", |
| "middle": [], |
| "last": "Wilcoxon", |
| "suffix": "" |
| } |
| ], |
| "year": 1945, |
| "venue": "Biometrics Bulletin", |
| "volume": "1", |
| "issue": "6", |
| "pages": "80--83", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83.", |
| "links": null |
| }, |
| "BIBREF74": { |
| "ref_id": "b74", |
| "title": "Verb similarity on the taxonomy of WordNet", |
| "authors": [ |
| { |
| "first": "Dongqiang", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [ |
| "M.", |
| "W." |
| ], |
| "last": "Powers", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "Proceedings of the 3rd International WordNet Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dongqiang Yang and David M.W. Powers. Verb similarity on the taxonomy of WordNet. In Proceedings of the 3rd International WordNet Conference.", |
| "links": null |
| }, |
| "BIBREF75": { |
| "ref_id": "b75", |
| "title": "More accurate tests for the statistical significance of result differences", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Yeh", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Yeh. 2000. More accurate tests for the statistical significance of result differences. In Proceedings of CoNLL.", |
| "links": null |
| }, |
| "BIBREF76": { |
| "ref_id": "b76", |
| "title": "Neural structural correspondence learning for domain adaptation", |
| "authors": [ |
| { |
| "first": "Yftah", |
| "middle": [], |
| "last": "Ziser", |
| "suffix": "" |
| }, |
| { |
| "first": "Roi", |
| "middle": [], |
| "last": "Reichart", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yftah Ziser and Roi Reichart. 2017. Neural structural correspondence learning for domain adaptation. In Proceedings of CoNLL.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "Let k be the minimal index such that p(k) > \u03b1/(N+1\u2212k). 2) Reject the null hypotheses H(1), . . . , H(k\u22121) and do not reject H(k), . . . , H(N). If no such k exists, then reject all null hypotheses. The output of the Holm procedure is a rejection [Footnote 6:] Bonferroni's correction is based on similar considerations as p^{u/N}_Bonferroni for u = 1 (Eq. 2). The partial conjunction framework (Sec. 4.1) extends this idea for other values of u." |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "num": null, |
| "text": "k histogram for the independent datasets simulation." |
| }, |
| "TABREF0": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td></td><td>k count</td><td>k Bonf.</td><td>k Fisher</td></tr><tr><td colspan=\"4\">Independent Datasets</td></tr><tr><td colspan=\"4\">Dependency Parsing (7 datasets)</td></tr><tr><td>Mate-SpaCy</td><td>7</td><td>7</td><td>7</td></tr><tr><td>Mate-Redshift</td><td>2</td><td>1</td><td>5</td></tr><tr><td colspan=\"4\">Multilingual POS Tagging (23 datasets)</td></tr><tr><td>MIMICK-Char\u2192Tag</td><td>11</td><td>6</td><td>16</td></tr><tr><td colspan=\"4\">Dependent Datasets</td></tr><tr><td colspan=\"4\">Sentiment Classification (12 setups)</td></tr><tr><td>AE-SCL-SR-MSDA (\u03b1 = 0.05)</td><td>10</td><td>6</td><td>10</td></tr><tr><td>AE-SCL-SR-MSDA (\u03b1 = 0.01)</td><td>6</td><td>2</td><td>8</td></tr><tr><td colspan=\"4\">Word Similarity (12 datasets)</td></tr><tr><td>W2V-GloVe (\u03b1 = 0.05)</td><td>8</td><td>6</td><td>7</td></tr><tr><td>W2V-GloVe (\u03b1 = 0.01)</td><td>6</td><td>4</td><td>6</td></tr></table>", |
| "text": "Summarizes the replicability analysis results, while Tables 2-5 present task specific performance measures and p\u2212values." |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Model | Data</td><td>BC</td><td>BN</td><td>MZ</td><td>NW</td><td>PT</td><td>TC</td><td>WB</td></tr><tr><td>Mate</td><td>90.73</td><td>90.82</td><td>91.92</td><td>91.68</td><td>96.64</td><td>89.87</td><td>89.89</td></tr><tr><td>SpaCy</td><td>89.05</td><td>89.31</td><td>89.29</td><td>89.52</td><td>95.27</td><td>87.65</td><td>87.40</td></tr><tr><td>p\u2212val (Mate,SpaCy)</td><td>(10^\u22124)</td><td>(10^\u22124)</td><td>(0.0)</td><td>(0.0)</td><td>(2 \u2022 10^\u22124)</td><td>(9 \u2022 10^\u22124)</td><td>(0.0)</td></tr><tr><td>Redshift</td><td>90.19</td><td>90.46</td><td>90.90</td><td>90.99</td><td>96.22</td><td>88.99</td><td>89.31</td></tr><tr><td>p\u2212val (Mate,Redshift)</td><td>(0.0979)</td><td>(0.1662)</td><td>(0.0046)</td><td>(0.0376)</td><td>(0.0969)</td><td>(0.0912)</td><td>(0.0823)</td></tr></table>", |
| "text": "Replicability analysis results. The appropriate estimator for each scenario is in bold. For independent datasets \u03b1 = 0.05. k count is based on the current practice in the NLP literature and does not have statistical guarantees regarding overestimation of the true k. Likewise, k Fisher does not provide statistical guarantees regarding the overestimation of the true k for dependent datasets." |
| }, |
| "TABREF2": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Language</td><td colspan=\"3\">MIMICK Char\u2192Tag p\u2212value</td></tr><tr><td>Kazakh</td><td>83.95</td><td>83.64</td><td>0.0944</td></tr><tr><td>Tamil *</td><td>81.55</td><td>84.97</td><td>0.0001</td></tr><tr><td>Latvian</td><td>84.32</td><td>84.49</td><td>0.0623</td></tr><tr><td>Vietnamese</td><td>84.22</td><td>84.85</td><td>0.0359</td></tr><tr><td>Hungarian *</td><td>88.93</td><td>85.83</td><td>1.12e-08</td></tr><tr><td>Turkish</td><td>85.60</td><td>84.23</td><td>0.1461</td></tr><tr><td>Greek</td><td>93.63</td><td>94.05</td><td>0.0104</td></tr><tr><td>Bulgarian</td><td>93.16</td><td>93.03</td><td>0.1957</td></tr><tr><td>Swedish</td><td>92.30</td><td>92.27</td><td>0.0939</td></tr><tr><td>Basque *</td><td>84.44</td><td>86.01</td><td>3.87e-10</td></tr><tr><td>Russian</td><td>89.72</td><td>88.65</td><td>0.0081</td></tr><tr><td>Danish</td><td>90.13</td><td>89.96</td><td>0.1016</td></tr><tr><td>Indonesian *</td><td>89.34</td><td>89.81</td><td>0.0008</td></tr><tr><td>Chinese *</td><td>85.69</td><td>81.84</td><td>0</td></tr><tr><td>Persian</td><td>93.58</td><td>93.53</td><td>0.4450</td></tr><tr><td>Hebrew</td><td>91.69</td><td>91.93</td><td>0.1025</td></tr><tr><td>Romanian</td><td>89.18</td><td>88.96</td><td>0.2198</td></tr><tr><td>English</td><td>88.45</td><td>88.89</td><td>0.0208</td></tr><tr><td>Arabic</td><td>90.58</td><td>90.49</td><td>0.0731</td></tr><tr><td>Hindi</td><td>87.77</td><td>87.92</td><td>0.0288</td></tr><tr><td>Italian</td><td>92.50</td><td>92.45</td><td>0.4812</td></tr><tr><td>Spanish</td><td>91.41</td><td>91.71</td><td>0.1176</td></tr><tr><td>Czech *</td><td>90.81</td><td>90.17</td><td>2.91e-05</td></tr></table>", |
| "text": "UAS results for multi-domain dependency parsing. p\u2212values are in parentheses." |
| }, |
| "TABREF3": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table><tr><td>Independent Datasets: Dependency parsing (Table 2) and multilingual POS tagging (Table 3) are our example tasks for this setup, where k Fisher is our recommended valid estimator for the number of cases where one algorithm outperforms another. For dependency parsing, we compare two scenarios: (a) where in most domains the differences between the compared algorithms are quite large and the p\u2212values are small (Mate vs. SpaCy); and (b)</td></tr></table>", |
| "text": "Multilingual POS tagging accuracy for the MIMICK and the Char\u2192Tag models. * indicates languages identified by the Holm procedure with \u03b1 = 0.05." |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "html": null, |
| "content": "<table/>", |
| "text": "Spearman's \u03c1 values for the best performing predict model (W2V-CBOW) of" |
| } |
| } |
| } |
| } |