{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:29:15.764079Z"
},
"title": "Toward More Effective Human Evaluation for Machine Translation",
"authors": [
{
"first": "Bel\u00e9n",
"middle": [],
"last": "Sald\u00edas",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Qijun",
"middle": [],
"last": "Tan",
"suffix": "",
"affiliation": {},
"email": "qijuntan@google.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Improvements in text generation technologies such as machine translation have necessitated more costly and time-consuming human evaluation procedures to ensure an accurate signal. We investigate a simple way to reduce cost by reducing the number of text segments that must be annotated in order to accurately predict a score for a complete test set. Using a sampling approach, we demonstrate that information from document membership and automatic metrics can help improve estimates compared to a pure random sampling baseline. We achieve gains of up to 20% in average absolute error by leveraging stratified sampling and control variates. Our techniques can improve estimates made from a fixed annotation budget, are easy to implement, and can be applied to any problem with structure similar to the one we study.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Improvements in text generation technologies such as machine translation have necessitated more costly and time-consuming human evaluation procedures to ensure an accurate signal. We investigate a simple way to reduce cost by reducing the number of text segments that must be annotated in order to accurately predict a score for a complete test set. Using a sampling approach, we demonstrate that information from document membership and automatic metrics can help improve estimates compared to a pure random sampling baseline. We achieve gains of up to 20% in average absolute error by leveraging stratified sampling and control variates. Our techniques can improve estimates made from a fixed annotation budget, are easy to implement, and can be applied to any problem with structure similar to the one we study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As automatic natural language generation systems improve, evaluating them is becoming more challenging for both human and automatic methods (\u00c7elikyilmaz et al., 2020; Gehrmann et al., 2022) . In machine translation, this has led to increased adoption of techniques such as MQM (Freitag et al., 2021a,b) , an elaborate error-based methodology for scoring output, typically carried out by skilled human annotators. While MQM is more accurate than traditional crowd-based Likert-type scoring, it can also be significantly slower and more expensive, creating a strong incentive to reduce annotation time and cost.",
"cite_spans": [
{
"start": 140,
"end": 166,
"text": "(\u00c7elikyilmaz et al., 2020;",
"ref_id": "BIBREF22"
},
{
"start": 167,
"end": 189,
"text": "Gehrmann et al., 2022)",
"ref_id": "BIBREF5"
},
{
"start": 277,
"end": 302,
"text": "(Freitag et al., 2021a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we investigate a simple solution to this problem, namely reducing the number of text segments that a human annotator must rate. We assume a basic scenario in which a single annotator is given a test set to rate, and the task is to predict the average MQM score they would assign to the whole set by having them rate only a selected subset. This is a natural and versatile way to deploy human annotation effort within a framework like MQM; it differs from the tasks considered by recent work with similar motivation, which focus on system ranking (Mendon\u00e7a et al., 2021; Thorleiksd\u00f3ttir et al., 2021) or combining human and metric scores without the express aim of predicting human performance (Hashimoto et al., 2019; Singla et al., 2021) . Although our experiments are carried out with MQM-based scores, our methodology is applicable to any setting in which numerical scores are assigned to items for later averaging.",
"cite_spans": [
{
"start": 560,
"end": 583,
"text": "(Mendon\u00e7a et al., 2021;",
"ref_id": "BIBREF10"
},
{
"start": 584,
"end": 613,
"text": "Thorleiksd\u00f3ttir et al., 2021)",
"ref_id": null
},
{
"start": 707,
"end": 731,
"text": "(Hashimoto et al., 2019;",
"ref_id": "BIBREF8"
},
{
"start": 732,
"end": 752,
"text": "Singla et al., 2021)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We approach the task of choosing segments as a sampling problem, and investigate classical methods for reducing sample variance and bounding estimation error. To improve accuracy, we employ two sources of supplementary information. First, in keeping with recent practice, we assume segments are grouped into documents. This lets us exploit the tendency of segments within a document to be relatively homogeneous. Second, we make use of modern automatic metrics such as COMET (Rei et al., 2020) and BLEURT (Sellam et al., 2020) which correlate better at the segment level with human judgments than traditional surface-based metrics like BLEU (Papineni et al., 2002) . These serve as a rough proxy for human scores.",
"cite_spans": [
{
"start": 475,
"end": 493,
"text": "(Rei et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 505,
"end": 526,
"text": "(Sellam et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 641,
"end": 664,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We show that document and metric information can be used to reduce average estimation error by up to 20% over a pure random sampling baseline. Due to high sample variance, it is difficult to reliably achieve a similar reduction in annotator effort for a given error tolerance. However, we suggest an alternative perspective in which our technique can be used to improve estimates made on the basis of a fixed rating budget. Although there is no guarantee of beating random sampling in any particular case, there is a high probability of improving on average. This improved estimator is easy to implement, and applicable to any human labeling task that produces numerical scores, and for which document membership and automatic metrics are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is most similar to that of Chaganty et al. (2018) , which we extend in several ways. We adopt their use of control variates, but consider multiple metrics rather than just one, including learned metric combinations; we also employ modern neural metrics rather than metrics based on surface information. We combine control variates with stratified sampling using either proportional or optimal allocation, and additionally evaluate an incremental scenario in which sampling adapts to observed ratings. Finally, we investigate two analytical methods for bounding the error in our estimate.",
"cite_spans": [
{
"start": 36,
"end": 58,
"text": "Chaganty et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We assume a fixed test set consisting of translated segment pairs, and a single human rater who assigns scores to segments. Each segment belongs to a document, and has an associated vector of scores from automatic metrics. Our goal is to select an informative subset of segments to be labeled by the rater, and use the subset to predict the average score that would have resulted if we had asked the rater to label the whole set. By exploiting document and metric information, we hope to reduce the number of segments that must be manually labeled.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Formally, let",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "x_1, \\ldots, x_N be the segment scores, \u00b5 = \\sum_{i=1}^{N} x_i / N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "be the test-set score to be predicted, and \\sigma^2 be the variance of the scores. The following side information is available for each segment i: an index d_i that indicates its membership in one of D documents, and a vector of automatic-metric scores y_i \u2208 R^M. Unlike the segment scores, which are only revealed if they are in the selected subset, the side information is always available for the whole test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "We approach this task as the problem of sampling n \u2264 N scores X_1, \\ldots, X_n without replacement from the test set and deriving an estimate \\hat{\\mu} for \u00b5 from the sample such that E(\\hat{\\mu}) = \u00b5 (that is, \\hat{\\mu} is unbiased) and Var(\\hat{\\mu}) is as small as possible. Low-variance estimators make it more likely that the estimation error |\u00b5 - \\hat{\\mu}| will be small. A baseline is to draw n segments at random and compute their mean. This gives an estimate that is unbiased, with variance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Var(\\hat{\\mu}) = \\frac{\\sigma^2}{n} \\cdot \\frac{N - n}{N - 1}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
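To make the baseline concrete, the random-sampling estimator and the variance above can be sketched in Python (an illustrative sketch with toy scores, not code from the paper):

```python
import random

def srs_estimate(scores, n, seed=0):
    """Mean of n scores sampled uniformly without replacement."""
    rng = random.Random(seed)
    sample = rng.sample(scores, n)
    return sum(sample) / n

def srs_variance(scores, n):
    """Var(mu_hat) = (sigma^2 / n) * (N - n) / (N - 1) for sampling
    without replacement from a finite population of size N."""
    N = len(scores)
    mu = sum(scores) / N
    sigma2 = sum((x - mu) ** 2 for x in scores) / N  # population variance
    return sigma2 / n * (N - n) / (N - 1)
```

Note the finite-population correction (N - n)/(N - 1): the variance vanishes as n approaches N, since sampling the whole test set recovers \u00b5 exactly.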
{
"text": "We investigated two classical unbiased strategies for reducing variance relative to this baseline: stratified sampling and control variates (Rice, 2007; Bratley et al., 2012) .",
"cite_spans": [
{
"start": 140,
"end": 152,
"text": "(Rice, 2007;",
"ref_id": "BIBREF15"
},
{
"start": 153,
"end": 174,
"text": "Bratley et al., 2012)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Methods",
"sec_num": "2"
},
{
"text": "Stratified sampling involves partitioning scores into bins that group similar items, then sampling some items from each bin. Intuitively, the idea is that if the variance within each bin is low, drawing too many samples from a particular bin is inefficient because it only serves to improve an already good estimate-therefore the sample should be spread evenly (in some sense) across bins. See Figure 1a for an illustration. As a side benefit, having human scores more evenly distributed across different types of segments is a useful characteristic if the labeled segments are to be the subject of further analysis.",
"cite_spans": [],
"ref_spans": [
{
"start": 394,
"end": 403,
"text": "Figure 1a",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "Formally, suppose the test set is divided into L bins, where bin l contains N l segments of which n l have been sampled, with sample mean\u03bc l . Then the stratified estimate is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\hat{\\mu} = \\sum_{l=1}^{L} \\hat{\\mu}_l N_l / N.",
"eq_num": "(1)"
}
],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "It is easy to verify that this is unbiased. Stratified sampling requires a method for partitioning the test set into bins and a way of allocating the n segments in the sample to individual bins. We investigated two methods for partitioning the test set: by documents and by metric-score similarity. The optimal (lowest variance) allocation assigns segments proportional to a bin's size and variance:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "n_l = n \\frac{\\sigma_l N_l}{\\sum_{l'=1}^{L} \\sigma_{l'} N_{l'}}.",
"eq_num": "(2)"
}
],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "Since the bin variances \\sigma_l are unknown, a conservative strategy is to assume they are all equal, resulting in pure proportional allocation: n_l = n N_l / N. A potential enhancement is to approximate optimal allocation using estimated variances \\hat{\\sigma}_l \u2248 \\sigma_l derived from the metric scores in each bin. Two technical issues arise in stratified sampling. First, the per-bin sizes specified by equation 2 are not necessarily whole numbers. This can be solved using a rounding scheme that minimizes \\sum_{l=1}^{L} |n_l - n'_l|, where the n'_l are whole numbers that sum to n. A second problem is that n_l can be greater than the number of available segments N_l when using optimal allocation in high-variance bins. When this occurs, we choose the bin for which n_l - N_l is largest, set n_l = N_l, then recursively reallocate the remaining bins. Note that both these strategies can result in bins for which n_l = 0 when n is small. [Figure 1a caption] Stratified sampling forces sampled segments (shown in red) to be evenly distributed across bins, resulting in better estimates when the score variance within bins is lower than the variance across bins.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stratified sampling",
"sec_num": "2.1"
},
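For illustration, proportional allocation with a largest-remainder rounding scheme (one simple way to satisfy the whole-number constraint discussed above) and the stratified estimator of equation (1) could look like this in Python; the toy bins and the rounding choice are ours, not the paper's:

```python
import random

def proportional_allocation(bin_sizes, n):
    """n_l = n * N_l / N, rounded to whole numbers that still sum to n
    (largest-remainder rounding)."""
    N = sum(bin_sizes)
    raw = [n * N_l / N for N_l in bin_sizes]
    alloc = [int(r) for r in raw]
    # hand out the remaining units to bins with the largest fractional parts
    leftover = n - sum(alloc)
    by_fraction = sorted(range(len(raw)), key=lambda l: raw[l] - alloc[l], reverse=True)
    for l in by_fraction[:leftover]:
        alloc[l] += 1
    return alloc

def stratified_estimate(bins, alloc, seed=0):
    """Equation (1): mu_hat = sum_l mu_hat_l * N_l / N."""
    rng = random.Random(seed)
    N = sum(len(b) for b in bins)
    est = 0.0
    for scores, n_l in zip(bins, alloc):
        if n_l == 0:  # possible when n is small, as noted in the text
            continue
        sample = rng.sample(scores, n_l)
        est += (sum(sample) / n_l) * len(scores) / N
    return est
```

With constant scores inside each bin, the stratified estimate is exact regardless of the draw, which is the intuition behind stratifying on homogeneous documents.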
{
"text": "[Figure 1b caption] Control variates allow for reversing the shift of the sample mean \\bar{X}_n depending on the strength of the correlation between X and Z. In this illustration, where X and Z are highly correlated (\u223c0.9), \\bar{Z}_n < 0 reflects the negative shift in \\bar{X}_n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Stratified sampling",
"sec_num": "2.1"
},
{
"text": "Hitherto we have assumed that sampling works by choosing a fixed batch of n segments, then sending them to a rater for scoring. It is also possible to consider an interactive scenario where the rater labels segments sequentially, and the sampling procedure is refined after each new rating is received. A convenient way to incorporate known ratings is to use them for improving the per-bin variance estimates \\hat{\\sigma}_l in optimal allocation. We tested two ways of accomplishing this: empirically estimate \\hat{\\sigma}_l from the known ratings in each bin; and learn a general mapping from metrics y to rating x over all known ratings, then use this mapping to estimate the unknown ratings in each bin, and derive \\hat{\\sigma}_l from those estimates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Incremental sampling",
"sec_num": null
},
{
"text": "The control-variates estimator makes use of an auxiliary random variable Z that is standardized (has zero mean and unit variance) on the test set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\hat{\\mu} = \\bar{X}_n - \\frac{Cov(X, Z)}{Var(Z)} \\bar{Z}_n = \\bar{X}_n - Cov(X, Z) \\bar{Z}_n",
"eq_num": "(3)"
}
],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "where \\bar{X}_n and \\bar{Z}_n are mean values over the sample, and the covariance is over the whole test set. This is the lowest-variance estimator that uses information from Z. It is unbiased because \\bar{X}_n is unbiased, Cov(X, Z) is independent of the current sample, and E(\\bar{Z}_n) = 0. The control-variates estimator can be thought of as using \\bar{Z}_n to infer the direction that \\bar{X}_n has been shifted away from \u00b5 and reversing this shift by an amount that depends on the degree of correlation between X and Z; see Figure 1b for an illustration. In general, Cov(X, Z) is unknown, but it can be estimated from the sample as follows: 1",
"cite_spans": [],
"ref_spans": [
{
"start": 489,
"end": 498,
"text": "Figure 1b",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "Cov(X, Z) \u2248 \\frac{1}{n} \\sum_{i=1}^{n} X_i Z_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "The control-variates estimator can be extended to handle multiple auxiliary variables by forming a linear combination (Glynn and Szechtman, 2002) :",
"cite_spans": [
{
"start": 118,
"end": 145,
"text": "(Glynn and Szechtman, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "\\hat{\\mu} = \\bar{X}_n - (E(ZZ^T)^{-1} E(XZ))^T \\bar{Z}_n \\quad (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "where Z is a vector of standardized variables, \\bar{Z}_n is its mean over the sample, and the expectations of the covariance matrix ZZ^T and weighted vector XZ are taken over the test set. The latter is unknown, but as in the scalar case it can be estimated from the sample:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "E(XZ) \u2248 \\frac{1}{n} \\sum_{i=1}^{n} X_i Z_i.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
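A minimal Python sketch of the scalar control-variates estimator (illustrative, with hypothetical toy values; the paper uses standardized MT-metric scores for Z): the auxiliary variable is standardized over the whole test set, and Cov(X, Z) is approximated from the sample as above:

```python
def standardize(values):
    """Shift and scale to zero mean and unit variance over the test set."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def cv_estimate(sample_x, sample_z):
    """Equation (3): mu_hat = X_bar - Cov(X, Z) * Z_bar, with Var(Z) = 1
    and Cov(X, Z) estimated as the sample mean of X_i * Z_i."""
    n = len(sample_x)
    x_bar = sum(sample_x) / n
    z_bar = sum(sample_z) / n
    cov = sum(x * z for x, z in zip(sample_x, sample_z)) / n
    return x_bar - cov * z_bar
```

When the sampled Z values happen to average to zero (no evidence of a shift), the estimator falls back to the plain sample mean.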
{
"text": "In our setting, control variates are easily derived by standardizing the metric scores y_i, which are available for all segments in the test set. The resulting estimator is convenient because it is applied after sampling is complete, making it independent of the sampling method, including whether the sample is drawn incrementally or in batch mode.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Control variates",
"sec_num": "2.2"
},
{
"text": "For practical applications it is desirable to upper-bound the error |\u00b5 - \\hat{\\mu}| in the estimated score with some degree of confidence. Given a confidence level \u03b3 (e.g., 0.95), we would like to find an error bound t such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(|\\mu - \\hat{\\mu}| \u2264 t) \u2265 \\gamma",
"eq_num": "(5)"
}
],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "A classical bound can be derived from Hoeffding's inequality, which states that equation 5 holds if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "t = R \\sqrt{\\frac{k_n \\log(2/\\delta)}{2n}},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "where R is the difference between the largest and smallest scores in the test set, \u03b4 = 1 \u2212 \u03b3, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "k_n = 1 - (n - 1)/N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "is an adjustment for sampling without replacement (Serfling, 1974) . A problem with Hoeffding's inequality is that it scales with the range of the scores and does not take variance into account, so its bound will be pessimistic if variance is small relative to the extremes. In such cases, the Bernstein bound (Mnih et al., 2008) will be tighter:",
"cite_spans": [
{
"start": 50,
"end": 66,
"text": "(Serfling, 1974)",
"ref_id": "BIBREF17"
},
{
"start": 310,
"end": 329,
"text": "(Mnih et al., 2008)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "t = \\hat{\\sigma} \\sqrt{\\frac{2 \\log(3/\\delta)}{n}} + \\frac{3R \\log(3/\\delta)}{n},",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
{
"text": "where \\hat{\\sigma} is a sample estimate of the standard deviation. Note that the contribution of R diminishes as 1/n in this formula, compared with 1/\\sqrt{n} in the Hoeffding bound. Both these bounds are general in the sense that they make no assumptions about the score distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Bounds",
"sec_num": "2.3"
},
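The two bounds above can be sketched as follows (Python, with \u03b4 = 1 - \u03b3; a hedged illustration using the formulas in this section, not the paper's code):

```python
import math

def hoeffding_bound(R, n, N, delta):
    """t = R * sqrt(k_n * log(2/delta) / (2n)),
    with k_n = 1 - (n - 1)/N, the without-replacement correction."""
    k_n = 1 - (n - 1) / N
    return R * math.sqrt(k_n * math.log(2 / delta) / (2 * n))

def bernstein_bound(sigma_hat, R, n, delta):
    """t = sigma_hat * sqrt(2 * log(3/delta) / n) + 3 * R * log(3/delta) / n.
    The range R only enters at rate 1/n, versus 1/sqrt(n) for Hoeffding."""
    log_term = math.log(3 / delta)
    return sigma_hat * math.sqrt(2 * log_term / n) + 3 * R * log_term / n
```

With a small sample standard deviation relative to R, the leading term of the Bernstein bound shrinks accordingly, which is exactly the regime the text describes.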
{
"text": "Our development data consists of MQM ratings made available by Freitag et al. (2021a) for 10 English-German and 10 Chinese-English \"systems\" (including human translations and MT) from the WMT 2020 news test sets (Barrault et al., 2020) . Each segment was annotated by three expert raters who assigned scores ranging from 0 (perfect) to 25 (worst). There were six annotators per language pair, each of whom rated all system outputs for a set of documents comprising approximately half the complete test set (about 710 segments / rater for German, and 1000 segments / rater for Chinese).",
"cite_spans": [
{
"start": 63,
"end": 85,
"text": "Freitag et al. (2021a)",
"ref_id": null
},
{
"start": 212,
"end": 235,
"text": "(Barrault et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We created simulations for each rater and system combination, excluding the Human-A \"system\", as it was the reference for the MT metrics we used as features. This resulted in 54 simulations for each language pair. For each simulation, the task is to predict the average score over the complete subset of segments annotated by a single rater for a single system. No knowledge of other segments, system outputs, or rater decisions is permitted to leak across simulations. As features, we used the 10 metrics submitted to the WMT 2020 metrics task (Mathur et al., 2020) that had highest average segment-level Pearson correlation with the MQM scores in our dev data. 2 These correlations are generally poor: from 0.279-0.410 for English-German, and 0.425-0.465 for Chinese-English. 3 To eliminate the effects of hyper-parameter tuning on the development data, we carried out additional evaluation on a test set consisting of newstest data from the WMT 2021 metrics shared task (Freitag et al., 2021b) for English-German (17 systems), Chinese-English (15 systems), and English-Russian (16 systems). This is similar to the dev data, except that only one MQM rating is available per segment. The number of rated segments was 527 for German and Russian, and 650 for Chinese. English-Russian ratings were annotated using a different MQM methodology (from Unbabel rather than Google), resulting in scores on a 0-100 scale, with 100 being best. As before, we created separate simulations for each system, omitting the human \"system\" used as a reference for the metrics. To avoid bias, rather than selecting metrics according to correlation, we chose the WMT 2021 primary submissions of two top-performing metrics from the dev data: BLEURT and COMET. 4 Appendix A contains further details about scores and rater assignments for the dev and test sets.",
"cite_spans": [
{
"start": 545,
"end": 566,
"text": "(Mathur et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 778,
"end": 779,
"text": "3",
"ref_id": null
},
{
"start": 973,
"end": 996,
"text": "(Freitag et al., 2021b)",
"ref_id": null
},
{
"start": 1739,
"end": 1740,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We tested the sampling and estimation strategies described in section 2 by comparing them to the baseline of simple random sampling with a mean estimator. For each simulation we considered sample sizes ranging from 5-50% of the available data, at 5% intervals. 5 For each sample size and technique for establishing \\hat{\\mu}, we drew 100 random samples, computed the average and standard deviation of the error |\u00b5 - \\hat{\\mu}| across the samples, then averaged the results across simulations to summarize performance at that sample size. We also measured the number of \"wins\": simulations in which a technique had a lower average error than the baseline. Finally, we aggregated these results across sample sizes to summarize performance in a single number. We begin by evaluating the stratified sampling methods described in section 2.1, comparing stratification over documents and over bins defined by metric scores. The latter were formed by scoring each segment with an average of the standardized metric scores assigned to it, then sorting and partitioning so each bin contained approximately 80 segments (8x larger than the average document). More elaborate clustering and metric-selection techniques did not improve over this method. Performance was also quite flat as a function of bin size, though it worsened as bin size approached average document size. We tested both stratification methods with proportional and optimal allocation, using averaged metric scores as proxies for human scores when estimating the variance in each bin. Figure 2 shows absolute error for these methods as a function of sample size, and Table 1 summarizes aggregate performance across sizes. The general pattern is similar for both language pairs: proportional allocation with documents (docs-prop) outperforms the random-sampling baseline; proportional allocation with metrics (metrics-prop) behaves similarly; and optimal allocation with document bins (docs-opt) underperforms, as does optimal allocation with metric bins (not shown, as it is much worse). Optimal allocation focuses sharply on bins with high estimated variance (which will be harmful if the estimates are wrong), so we experimented with various smoothing methods, but none improved over pure proportional allocation.",
"cite_spans": [],
"ref_spans": [
{
"start": 1518,
"end": 1526,
"text": "Figure 2",
"ref_id": "FIGREF2"
},
{
"start": 1600,
"end": 1607,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "Although stratification clearly reduces the error on average, the usefulness of this result is tempered by the large variances shown in Table 1. For any given random draw, these imply that the stratified estimate is only slightly more likely to be better than the baseline. Even when comparing errors averaged over 100 random draws per simulation, the stratified estimates are only better than the baseline for approximately 75% of simulations for English-German, and 90% for Chinese-English. Table 2 shows aggregate results for incremental stratified sampling using documents as bins, with two methods for estimating per-bin variances for optimal allocation. 6 The docs-incr-metrics method involves learning a k-nearest-neighbor (k=25) model with standardized metrics as features on all labeled segments, then using its predictions to estimate variances for the unlabeled segments in each bin. In docs-incr-human, the variance of the segments remaining in each bin is estimated from the segments that have already been scored. Both these methods underperform the baseline; in particular, the use of a learned mapping in docs-incr-metrics provides only modest gains over the raw averages in docs-opt. We now turn to experiments with the control-variates estimators described in section 2.2; Table 3 presents the results. We derived standardized scalar variates to plug into equation 3 from: a single high-performing metric (BLEURT-extended, cv-bleurt); the mean of all metrics (cv-mean); and predictions from a knn model learned from all metric values on the labeled segments (cv-knn). We also used all standardized metrics directly (cv-multi) as input to the vector in equation 4. 7 All tested variants give reasonable improvements over the baseline, with quite similar error rates, especially for English-German. For Chinese-English, combining all metrics with the knn model improves slightly over BLEURT-extended, reducing the absolute error by 5%. This may reflect somewhat higher metric correlations for this language pair.",
"cite_spans": [
{
"start": 661,
"end": 662,
"text": "6",
"ref_id": null
},
{
"start": 1679,
"end": 1680,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 136,
"end": 143,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 494,
"end": 501,
"text": "Table 2",
"ref_id": "TABREF3"
},
{
"start": 1293,
"end": 1300,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4"
},
{
"text": "As control-variates estimation is applied after sampling is complete, it is straightforward to combine it with stratification. Figure 4 and Table 4 show the results of combining proportional stratified sampling using documents with the best control-variates estimator (docs-prop+cv-knn), along with the component techniques for comparison. As one might hope, the techniques are complementary despite their similar individual performance. Interestingly, this is not the case when metric-based clusters are used for stratification instead of documents (metrics-prop+cv-knn, last line in Table 4), because the same information is used for both variance-reduction techniques. The docs-prop+cv-knn combination produces our best results, with error reductions of 14% and 23% over the baseline for English-German and Chinese-English, and better average performance in almost 90% and 100% of simulations, respectively. Unfortunately, however, the standard deviation of these estimates remains uncomfortably close to the size of the average absolute error. [Table 5 caption] Performance of error bounds for different sample sizes. Statistics are averaged over simulations: cal is the % of samples for which the true error was lower than the bound, slack is the difference between the bound and the error, and t is the bound. base is the baseline estimator, and best is docs-prop+cv-knn.",
"cite_spans": [],
"ref_spans": [
{
"start": 126,
"end": 134,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 139,
"end": 146,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 584,
"end": 591,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 1047,
"end": 1054,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Control variates and combined results",
"sec_num": "4.2"
},
{
"text": "Despite large variance across individual samples, sampling techniques can be useful in practice if it is possible to reliably bound the error in the estimate derived from a given sample. We computed the bounds from section 2.3 for different sample sizes with docs-prop+cv-knn, setting \u03b3 = 0.95. Both the Hoeffding and Bernstein bounds are very loose, overestimating the true error in 100% of samples, by margins that are about an order of magnitude greater than the average error in Figure 4. 8 We hypothesize that this is due to scores having a large range R, and being highly skewed, with \u00b5 \u226a R. To test this, we recomputed the Hoeffding bound with empirically-determined R values of 4 and 7 for English-German and Chinese-English. As shown in Table 5, this gives results which are well calibrated (cal > 95%) for docs-prop+cv-knn, with reasonable error bounds. Performance is somewhat worse for the baseline estimates, although the difference in error between the two techniques is negligible compared to the predicted bound. This oracle experiment suggests that it will be difficult to find non-oracle bounds that are substantially lower for docs-prop+cv-knn than for the baseline. [Table 6 fragment] EnRu: baseline 1.601 1.197; docs-prop+cv-knn 1.482 1.117 77.3. [Table 6 caption] Results on test data for baseline and best combined estimator aggregated over sample sizes from 5%-50%. Figure 5 and Table 6 show results comparing baseline random sampling with docs-prop+cv-knn on our evaluation set. Both the curves and the aggregate results display a similar pattern to the development results, with relatively large gains over the baseline for Chinese-English (21% relative error reduction, wins in 98% of simulations), and smaller ones for English-German and English-Russian 9 (reductions of 7% and win rates of about 77%). As before, standard deviations are very high.",
"cite_spans": [
{
"start": 494,
"end": 495,
"text": "8",
"ref_id": null
}
],
"ref_spans": [
{
"start": 483,
"end": 491,
"text": "Figure 4",
"ref_id": "FIGREF4"
},
{
"start": 745,
"end": 752,
"text": "Table 5",
"ref_id": null
},
{
"start": 1244,
"end": 1251,
"text": "Table 6",
"ref_id": null
},
{
"start": 1358,
"end": 1366,
"text": "Figure 5",
"ref_id": "FIGREF6"
},
{
"start": 1371,
"end": 1378,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error estimation",
"sec_num": "4.3"
},
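The error-bound computation above can be illustrated with a minimal sketch (not the paper's code) of the standard two-sided Hoeffding bound for the mean of i.i.d. scores bounded in an interval of width R; the function name and default are assumptions, but the formula is the textbook one, and plugging in the empirically-determined R values of 4 and 7 corresponds to the English-German and Chinese-English settings discussed above:

```python
import math

def hoeffding_bound(n, R, gamma=0.95):
    # Half-width eps such that |sample mean - true mean| <= eps holds
    # with probability at least gamma, for n i.i.d. samples of a score
    # bounded in an interval of width R.
    return R * math.sqrt(math.log(2.0 / (1.0 - gamma)) / (2.0 * n))
```

Because the bound scales linearly with R, a large nominal score range makes it very loose, and it shrinks only as 1/\u221an, so quadrupling the sample size merely halves the guaranteed error.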
{
"text": "How should we interpret these results? If we had a more reliable way of binning segments with similar human ratings, or metrics that correlated better at the segment level, it would be possible to reduce variance to levels that would permit realistic error bounds. That would enable a scenario in which we could determine the number of segments n that need to be rated in order to estimate the complete test-set score to within a given tolerance. As it is, however, our error bounds are very large, and we do not manage to reduce them significantly with improved sampling and estimation methods. This is unlikely to change soon for complex annotation tasks like MT because humans are noisy raters; as shown in Table 12 , they are difficult to predict even when using other humans as oracles.",
"cite_spans": [],
"ref_spans": [
{
"start": 717,
"end": 725,
"text": "Table 12",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "In the absence of more reliable signals for reducing variance, a way to make practical use of the techniques we study is to flip the scenario around and aim to improve the quality of an estimate made from a fixed budget of n human ratings. It is common practice to obtain human annotations for only a portion of a larger test set due to time or cost constraints (Barrault et al., 2020; Freitag et al., 2021a) . In this setting, our techniques can lead to improved estimates compared to just taking the mean of randomly-selected segments (although there is no guarantee that they will do so for any given sample).",
"cite_spans": [
{
"start": 362,
"end": 385,
"text": "(Barrault et al., 2020;",
"ref_id": null
},
{
"start": 386,
"end": 408,
"text": "Freitag et al., 2021a)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "The risks in applying this strategy are low. Stratified sampling with proportional allocation provides an unbiased estimate of the test-set mean, with variance that is \u2264 that of random sampling (Rice, 2007) , with equality only in the case that the bins have identical statistics. The situation is trickier for control variates. In theory, the control-variate estimator is also unbiased, with lower variance than the sample mean, but this assumes that the test-set covariance Cov(X, Z) between scores X and the auxiliary variable Z is known. Since we only know the scores in the sample, we must rely on an estimate for Cov(X, Z), creating the possibility of errors if this estimate is significantly larger than the true covariance. However, as Chaganty et al. (2018) point out, the error in the sample estimate for Cov(X, Z) diminishes as 1/n, much faster than the 1/\u221an rate for the error |\u00b5 \u2212 \u03bc\u0302| in the estimated score. In our data, we found no appreciable degradation of performance on small samples, even ones containing as few as 30 items.",
"cite_spans": [
{
"start": 186,
"end": 198,
"text": "(Rice, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 727,
"end": 749,
"text": "Chaganty et al. (2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "Based on these observations, we can make the following recommendations for improving the estimated mean score of a test set containing N items given a fixed number n < N of items to be manually annotated:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "1. Use prior information such as document membership to partition items into bins, then choose items using stratified sampling as described in equation 1, with proportional allocation. Beware of rounding errors when only a few samples are taken from each bin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
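Recommendation 1 can be sketched as follows; this is a hedged illustration rather than the paper's implementation, and the item/document representations (`doc_of`, integer segment ids) are hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(items, doc_of, n, seed=0):
    # Stratified sampling with proportional allocation: each document
    # ("bin") contributes a number of items proportional to its size.
    rng = random.Random(seed)
    bins = defaultdict(list)
    for item in items:
        bins[doc_of(item)].append(item)
    total = len(items)
    sample = []
    for members in bins.values():
        # Rounding here is the source of the allocation errors warned
        # about above when only a few samples come from each bin.
        k = round(n * len(members) / total)
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# 100 segments in 10 equally sized documents; annotate 20 of them.
picked = stratified_sample(list(range(100)), doc_of=lambda i: i // 10, n=20)
```

The mean score over `picked` is an unbiased estimate of the test-set mean, with variance no larger than that of simple random sampling.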
{
"text": "2. Use an automatic metric or other feature that correlates with human scores as a control variate in equation (3). This step is carried out after sampling is complete, and is independent of the sampling method used. If multiple metrics are available, combine them into a single variate by averaging or applying a smooth regressor learned on the sample (knn with k=25 worked well for us). Be alert to the possibility of errors in the covariance estimate when n is small (\u2264 30).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
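The control-variate correction of recommendation 2 can be sketched as below. This is a minimal illustration of the idea behind equation (3), not the paper's code; it assumes the metric Z has a known mean over the full test set (e.g. zero after centering), and estimates Cov(X, Z) and Var(Z) purely from the sample:

```python
def cv_estimate(x_sample, z_sample, z_testset_mean):
    # Adjust the sample mean of human scores X using a correlated
    # automatic metric Z whose mean over the full test set is known.
    n = len(x_sample)
    x_bar = sum(x_sample) / n
    z_bar = sum(z_sample) / n
    # Sample-based estimates of Cov(X, Z) and Var(Z); the error in these
    # diminishes as 1/n, faster than the 1/sqrt(n) error of the score
    # estimate itself, so small samples are usually tolerable.
    cov = sum((x - x_bar) * (z - z_bar)
              for x, z in zip(x_sample, z_sample)) / n
    var_z = sum((z - z_bar) ** 2 for z in z_sample) / n
    c = cov / var_z if var_z > 0 else 0.0
    # Shift the sample mean by how far the metric's sample mean
    # deviates from its known test-set mean.
    return x_bar - c * (z_bar - z_testset_mean)
```

If X and Z were perfectly correlated, this estimator would recover the true mean exactly; in practice the variance reduction scales with the squared correlation between scores and metric, which is why a smooth regressor over multiple metrics can serve as a better single variate.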
{
"text": "Chaganty et al. (2018) pioneered control variates for NLP evaluation, using them to improve estimates for summarization and question answering. Despite some technical differences-they measure variance ratios rather than absolute error, simulate human variance by sampling from a collection of raters, and use bootstrapped confidence intervals-their findings are roughly in line with ours. We extend their work by showing that gains from stratified sampling are complementary to those from control variates, and explore a broader range of scenarios, including using multiple variates and incremental sampling. Recent work has investigated incremental labeling tasks and/or combining human scores with automatic metrics. Mendon\u00e7a et al. (2021) apply online learning algorithms to an MT system-ranking task in which different segments are selected for human evaluation on each iteration, using COMET to fill in missing human scores in WMT 2019 data. Their algorithm converges to correct results after several hundred iterations, but this condition is not detected automatically. Thorleiksd\u00f3ttir et al. (2021) use Hoeffding's inequality to measure confidence in pairwise ranking decisions of varying difficulties for controlled text generation output; they consider human scores only. Singla et al. (2021) sample foreign-language test responses for human grading, with the aim of improving over purely automatic scoring; the reverse of our problem. Hashimoto et al. (2019) propose a synergistic combination of human and automatic scoring for evaluating text generation.",
"cite_spans": [
{
"start": 718,
"end": 740,
"text": "Mendon\u00e7a et al. (2021)",
"ref_id": "BIBREF10"
},
{
"start": 1075,
"end": 1104,
"text": "Thorleiksd\u00f3ttir et al. (2021)",
"ref_id": null
},
{
"start": 1443,
"end": 1466,
"text": "Hashimoto et al. (2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Finally, there has been considerable work on measuring and rectifying inaccuracies in human annotation (Sun et al., 2020; Wei and Jia, 2021; Gladkoff et al., 2021; Paun et al., 2018) . We sidestep this issue by aiming to predict the performance of a single human rater, assuming that if this can be done accurately, conflicts among raters can be resolved in a post-processing step.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Sun et al., 2020;",
"ref_id": "BIBREF19"
},
{
"start": 122,
"end": 140,
"text": "Wei and Jia, 2021;",
"ref_id": "BIBREF21"
},
{
"start": 141,
"end": 163,
"text": "Gladkoff et al., 2021;",
"ref_id": "BIBREF6"
},
{
"start": 164,
"end": 182,
"text": "Paun et al., 2018)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We investigate two classical variance-reduction techniques for improving the accuracy of sampled human ratings of MT output, measured against the mean of all ratings for a given test set. We find that stratified sampling and control variates are complementary, contributing about equally to gains of up to 20% in average absolute error reduction compared to random sampling. Exploiting this result to dynamically reduce annotator effort given a target error tolerance is not feasible due to the high variance in our data, but we propose that our techniques could instead be used to improve estimates made from a fixed annotation budget. Concrete recommendations for this scenario are provided in section 5. Our method is easy to implement, and can be applied to any setting involving averaged numerical item-wise scores where document (or other prior grouping) and automatic metric side information is available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In future work we look forward to delving into questions raised by our results: why doesn't optimal allocation work better, particularly in the incremental setting; is there a better way to estimate variance from metrics; why aren't metric combinations more helpful; and can error bounds be improved, perhaps with bootstrapping methods? EnDe ZhEn rater segs docs segs docs rater1 713 64 993 76 rater2 683 66 992 76 rater3 705 66 1012 78 rater4 709 65 996 79 rater5 722 64 1021 77 rater6 722 65 986 79 corpus 1418 130 2000 155 Table 8 : MQM scores for WMT 2020 outputs from (Freitag et al., 2021a) . Scores range from 0 (perfect) to 25 (worst). The reference used for metrics is shown in bold.",
"cite_spans": [
{
"start": 606,
"end": 629,
"text": "(Freitag et al., 2021a)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 337,
"end": 533,
"text": "EnDe ZhEn rater segs docs segs docs rater1 713 64 993 76 rater2 683 66 992 76 rater3 705 66 1012 78 rater4 709 65 996 79 rater5 722 64 1021 77 rater6 722 65 986 79",
"ref_id": "TABREF1"
},
{
"start": 559,
"end": 566,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "This section gives details of the development and test data used in our experiments. Table 7 shows the numbers of segments and documents assigned to each rater in our development data. Table 8 contains the scores assigned to all ten evaluated systems; each score is an average of three rater scores per segment, averaged over all segments in the test set. Table 9 lists the selected metrics used for the development-set experiments, along with the segment-level Pearson correlation for each metric. Tables 10 and 11 contain rater assignments and system scores for the three language pairs used in the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 85,
"end": 92,
"text": "Table 7",
"ref_id": "TABREF9"
},
{
"start": 185,
"end": 192,
"text": "Table 8",
"ref_id": null
},
{
"start": 357,
"end": 364,
"text": "Table 9",
"ref_id": null
},
{
"start": 500,
"end": 516,
"text": "Tables 10 and 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "A Data",
"sec_num": null
},
{
"text": "A difficulty in predicting human ratings is that humans are noisy annotators (Wei and Jia, 2021) . To quantify the noise in our data, we computed the error when predicting each rater's average score over their assigned segments using the average of the other two raters who also rated those segments. Table 12 shows that this varies substantially across raters and languages, with the hardest-to-predict rater's error being over 3x that of the easiest-to-predict rater in both languages, and Chinese-English errors being higher than English-German ones. (Variance across raters may be due in part to differences in their assigned subsets of segments, as some segments are harder to rate than others. Variance across languages is likely due to Chinese-English system scores being higher (worse) than English-German scores.) Comparing the average errors of 0.3 and 0.8 for English-German and Chinese-English to Figure 4 , we observe that only a small number of samples (less than 10%) of a particular annotator's own ratings is sufficient to predict their test-set score with greater precision than knowing the average of other raters' scores over the whole test set (a rough proxy for the \"true\" test-set score).",
"cite_spans": [
{
"start": 77,
"end": 96,
"text": "(Wei and Jia, 2021)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 301,
"end": 309,
"text": "Table 12",
"ref_id": "TABREF1"
},
{
"start": 903,
"end": 911,
"text": "Figure 4",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "B Variability in human scores",
"sec_num": null
},
{
"text": "A key element of our technique is using automatic MT metrics to predict human scores at the segment level. Figure 6 shows scatter plots for a single high-performing metric (COMET) that illustrate the challenges with this: the relation with MQM scores is noisy and non-linear, and there are extreme outliers due to segments that were assigned the worst possible MQM score. Furthermore, as indicated by the slope of the regression lines, the relation can vary substantially across different settings, even for different systems scored by a single rater, or for the same system scored by different raters. This implies that a strategy of pre-calibrating a particular metric on data that is independent of the current rater and system is likely to be ineffective for our problem. Table 12 : Absolute errors when predicting each rater's score from the average of other raters' scores. Numbers shown are averages over all systems and all segments annotated by the given rater. Figure 6 : Example WMT20 EnDe human MQM versus COMET scores for the same rater but different MT systems (top panels), and different raters but the same MT system (bottom panels). Each point represents a single segment, and the lines show the best linear fit. Errors are average absolute segment-level differences between the line and the points.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 6",
"ref_id": null
},
{
"start": 776,
"end": 784,
"text": "Table 12",
"ref_id": "TABREF1"
},
{
"start": 971,
"end": 979,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "B Variability in human scores",
"sec_num": null
},
{
"text": "This equation follows from expanding Cov(X, Z) over the complete test set, dropping all terms that contain the true mean of Z (0 by construction), and estimating the one term that remains from the sample. Alternatively, one can choose to estimate Cov(X, Z) purely from the sample as \u2211_{i=1}^{n} (X_i \u2212 X\u0304)(Z_i \u2212 Z\u0304)/n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried using all submitted metrics, with slightly worse results.3 For comparison, target sequence length correlations are 0.223 and 0.439 respectively (better than the three lowest-ranked metrics for Chinese).4 The primary submissions were BLEURT-20 and COMET-MQM_2021.5 Beyond 50%, the variance of the baseline estimator becomes very low and there is limited opportunity for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We omit the corresponding curves for space reasons.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the latter combines scores linearly, in contrast to the knn model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Surprisingly, the Bernstein bound is somewhat worse, likely due to our small sample sizes in conjunction with the large multiplier on R in the Bernstein formula.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the absolute errors are higher for English-Russian due to the 4x scale for ratings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " (Freitag et al., 2021b) . Scores range from 0 (perfect) to 25 (worst), except for English-Russian, where they range from 0 (worst) to 100 (perfect). The reference used for metrics is shown in bold.",
"cite_spans": [
{
"start": 1,
"end": 24,
"text": "(Freitag et al., 2021b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Santanu Pal, Matt Post, and Marcos Zampieri. 2020. Findings of the 2020 Conference on Machine Translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Magdalena",
"middle": [],
"last": "Biesialska",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Joanis",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Nikola",
"middle": [],
"last": "Ljube\u0161i\u0107",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
},
{
"first": "Toshiaki",
"middle": [],
"last": "Nakazawa",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Magdalena Biesialska, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, Matthias Huck, Eric Joanis, Tom Kocmi, Philipp Koehn, Chi-kiu Lo, Nikola Ljube\u0161i\u0107, Christof Monz, Makoto Morishita, Masaaki Nagata, Toshi- aki Nakazawa, Santanu Pal, Matt Post, and Mar- cos Zampieri. 2020. Findings of the 2020 Confer- ence on Machine Translation (WMT20). In Pro- ceedings of the Fifth Conference on Machine Trans- lation, pages 1-55, Online. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "A Guide to Simulation",
"authors": [
{
"first": "P",
"middle": [],
"last": "Bratley",
"suffix": ""
},
{
"first": "B",
"middle": [
"L"
],
"last": "Fox",
"suffix": ""
},
{
"first": "L",
"middle": [
"E"
],
"last": "Schrage",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Bratley, B.L. Fox, and L.E. Schrage. 2012. A Guide to Simulation. Springer New York.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "The price of debiasing automatic metrics in natural language evalaution",
"authors": [
{
"first": "Arun",
"middle": [],
"last": "Chaganty",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Mussmann",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "643--653",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1060"
]
},
"num": null,
"urls": [],
"raw_text": "Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 643-653, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": null,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "9",
"issue": "",
"pages": "1460--1474",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00437"
]
},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, and Wolfgang Macherey. 2021a. Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Transla- tion. Transactions of the Association for Computa- tional Linguistics, 9:1460-1474.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Alon Lavie, and Ond\u0159ej Bojar. 2021b. Results of the WMT21 metrics shared task: Evaluating metrics with expertbased human evaluations on TED and news domain",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the Sixth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "733--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, Ricardo Rei, Nitika Mathur, Chi-kiu Lo, Craig Stewart, George Foster, Alon Lavie, and Ond\u0159ej Bojar. 2021b. Results of the WMT21 met- rics shared task: Evaluating metrics with expert- based human evaluations on TED and news domain. In Proceedings of the Sixth Conference on Machine Translation, pages 733-774, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Repairing the cracked foundation: A survey of obstacles in evaluation practices for generated text",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Gehrmann",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2202.06935"
]
},
"num": null,
"urls": [],
"raw_text": "Sebastian Gehrmann, Elizabeth Clark, and Thibault Sellam. 2022. Repairing the cracked foundation: A survey of obstacles in evaluation practices for gener- ated text. arXiv preprint arXiv:2202.06935.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Measuring uncertainty in translation quality evaluation (tqe)",
"authors": [
{
"first": "Serge",
"middle": [],
"last": "Gladkoff",
"suffix": ""
},
{
"first": "Irina",
"middle": [],
"last": "Sorokina",
"suffix": ""
},
{
"first": "Lifeng",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Alekseeva",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2111.07699"
]
},
"num": null,
"urls": [],
"raw_text": "Serge Gladkoff, Irina Sorokina, Lifeng Han, and Alexandra Alekseeva. 2021. Measuring uncer- tainty in translation quality evaluation (tqe). arXiv preprint arXiv:2111.07699.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Some new perspectives on the method of control variates",
"authors": [
{
"first": "W",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Glynn",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Szechtman",
"suffix": ""
}
],
"year": 2002,
"venue": "Monte Carlo and Quasi-Monte Carlo Methods",
"volume": "",
"issue": "",
"pages": "27--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W Glynn and Roberto Szechtman. 2002. Some new perspectives on the method of control variates. In Monte Carlo and Quasi-Monte Carlo Methods 2000, pages 27-49. Springer.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unifying human and statistical evaluation for natural language generation",
"authors": [
{
"first": "Tatsu",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Hugh",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2019,
"venue": "North American Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tatsu Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natu- ral language generation. In North American Associ- ation for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Results of the WMT20 metrics shared task",
"authors": [
{
"first": "Nitika",
"middle": [],
"last": "Mathur",
"suffix": ""
},
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the Fifth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "688--725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitika Mathur, Johnny Wei, Markus Freitag, Qingsong Ma, and Ond\u0159ej Bojar. 2020. Results of the WMT20 metrics shared task. In Proceedings of the Fifth Conference on Machine Translation, pages 688-725, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Online learning meets machine translation evaluation: Finding the best systems with the least human effort",
"authors": [
{
"first": "V\u00e2nia",
"middle": [],
"last": "Mendon\u00e7a",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Lu\u00edsa",
"middle": [],
"last": "Coheur",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Sardinha",
"suffix": ""
},
{
"first": "Ana L\u00facia",
"middle": [],
"last": "Santos",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "3105--3117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V\u00e2nia Mendon\u00e7a, Ricardo Rei, Lu\u00edsa Coheur, Alberto Sardinha, and Ana L\u00facia Santos. 2021. Online learn- ing meets machine translation evaluation: Finding the best systems with the least human effort. In Pro- ceedings of the 59th Annual Meeting of the Associa- tion for Computational Linguistics and the 11th In- ternational Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3105- 3117.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Empirical bernstein stopping",
"authors": [
{
"first": "Volodymyr",
"middle": [],
"last": "Mnih",
"suffix": ""
},
{
"first": "Csaba",
"middle": [],
"last": "Szepesv\u00e1ri",
"suffix": ""
},
{
"first": "Jean-Yves",
"middle": [],
"last": "Audibert",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 25th international conference on Machine learning",
"volume": "",
"issue": "",
"pages": "672--679",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Volodymyr Mnih, Csaba Szepesv\u00e1ri, and Jean-Yves Audibert. 2008. Empirical bernstein stopping. In Proceedings of the 25th international conference on Machine learning, pages 672-679.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting of the Association for Compu- tational Linguistics, pages 311-318.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Comparing bayesian models of annotation",
"authors": [
{
"first": "Bob",
"middle": [],
"last": "Silviu Paun",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Carpenter",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Chamberlain",
"suffix": ""
},
{
"first": "Udo",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Massimo",
"middle": [],
"last": "Kruschwitz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Poesio",
"suffix": ""
}
],
"year": 2018,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "6",
"issue": "",
"pages": "571--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing bayesian models of annotation. Transac- tions of the Association for Computational Linguis- tics, 6:571-585.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "COMET: A neural framework for MT evaluation",
"authors": [
{
"first": "Ricardo",
"middle": [],
"last": "Rei",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Stewart",
"suffix": ""
},
{
"first": "Ana",
"middle": [
"C"
],
"last": "Farinha",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "2685--2702",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.213"
]
},
"num": null,
"urls": [],
"raw_text": "Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 2685-2702, Online. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Mathematical Statistics and Data Analysis. Advanced series",
"authors": [
{
"first": "J",
"middle": [
"A"
],
"last": "Rice",
"suffix": ""
}
],
"year": 2007,
"venue": "Cengage Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.A. Rice. 2007. Mathematical Statistics and Data Analysis. Advanced series. Cengage Learning.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BLEURT: Learning robust metrics for text generation",
"authors": [
{
"first": "Thibault",
"middle": [],
"last": "Sellam",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7881--7892",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.704"
]
},
"num": null,
"urls": [],
"raw_text": "Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Probability inequalities for the sum in sampling without replacement",
"authors": [
{
"first": "R",
"middle": [
"J"
],
"last": "Serfling",
"suffix": ""
}
],
"year": 1974,
"venue": "The Annals of Statistics",
"volume": "2",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. J. Serfling. 1974. Probability inequalities for the sum in sampling without replacement. The Annals of Statistics, 2(1).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Using sampling to estimate and improve performance of automated scoring systems with guarantees",
"authors": [
{
"first": "Yaman",
"middle": [],
"last": "Kumar Singla",
"suffix": ""
},
{
"first": "Sriram",
"middle": [],
"last": "Krishna",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [
"Ratn"
],
"last": "Shah",
"suffix": ""
},
{
"first": "Changyou",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaman Kumar Singla, Sriram Krishna, Rajiv Ratn Shah, and Changyou Chen. 2021. Using sampling to estimate and improve performance of automated scoring systems with guarantees.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Improving human-labeled data through dynamic automatic conflict resolution",
"authors": [
{
"first": "David",
"middle": [
"Q"
],
"last": "Sun",
"suffix": ""
},
{
"first": "Hadas",
"middle": [],
"last": "Kotek",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Mayank",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"D"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "3547--3557",
"other_ids": {
"DOI": [
"10.18653/v1/2020.coling-main.316"
]
},
"num": null,
"urls": [],
"raw_text": "David Q. Sun, Hadas Kotek, Christopher Klein, Mayank Gupta, William Li, and Jason D. Williams. 2020. Improving human-labeled data through dy- namic automatic conflict resolution. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 3547-3557, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Nora Hollenstein, and Ce Zhang. 2021. Dynamic human evaluation for relative model comparisons",
"authors": [
{
"first": "Th\u00f3rhildur",
"middle": [],
"last": "Thorleiksd\u00f3ttir",
"suffix": ""
},
{
"first": "Cedric",
"middle": [],
"last": "Renggli",
"suffix": ""
},
{
"first": "Nora",
"middle": [],
"last": "Hollenstein",
"suffix": ""
},
{
"first": "Ce",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Th\u00f3rhildur Thorleiksd\u00f3ttir, Cedric Renggli, Nora Hol- lenstein, and Ce Zhang. 2021. Dynamic human eval- uation for relative model comparisons.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "The statistical advantage of automatic NLG metrics at the system level",
"authors": [
{
"first": "Johnny",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "6840--6854",
"other_ids": {
"DOI": [
"10.18653/v1/2021.acl-long.533"
]
},
"num": null,
"urls": [],
"raw_text": "Johnny Wei and Robin Jia. 2021. The statistical advan- tage of automatic NLG metrics at the system level. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 6840-6854, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Evaluation of text generation: A survey. ArXiv, abs",
"authors": [
{
"first": "Asli",
"middle": [],
"last": "\u00c7elikyilmaz",
"suffix": ""
},
{
"first": "Elizabeth",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asli \u00c7elikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. ArXiv, abs/2006.14799.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Complementary strategies for reducing the variance of the estimated average score.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Figure 3",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Absolute error and standard deviation for stratified sampling methods.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "Absolute error and std deviation for different control-variate estimators with random sampling.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF4": {
"text": "Absolute error and std deviation for control-variate estimators and stratified sampling.",
"num": null,
"uris": null,
"type_str": "figure"
},
"FIGREF6": {
"text": "Absolute error for control-variate estimators and stratified sampling on eval data.",
"num": null,
"uris": null,
"type_str": "figure"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td>: Stratified sampling results aggregated over sample sizes from 5%-50%. Segment allocation is re-ferred to as 'prop' for proportional-and as 'opt' for optimal-allocation with either document-based (docs) or metric-based (metrics) bin membership.</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF3": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ""
},
"TABREF5": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Control variates results aggregated over sample sizes from 5%-50%."
},
"TABREF7": {
"num": null,
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Combined stratified sampling and control variates aggregated over sample sizes from 5%-50%."
},
"TABREF9": {
"num": null,
"content": "<table><tr><td colspan=\"3\">: Numbers of segments and documents anno-tated by each rater for each system in WMT 2020 new-stest.</td></tr><tr><td>EnDe</td><td>ZhEn</td><td/></tr><tr><td>system</td><td>MQM system</td><td>MQM</td></tr><tr><td>Human-B Human-A Human-P</td><td>0.75 Human-A 0.91 Human-B 1.41 VolcTrans</td><td>3.43 3.62 5.03</td></tr><tr><td>Tohoku</td><td>2.02 WeChat</td><td>5.13</td></tr><tr><td>OPPO</td><td>2.25 Tencent</td><td>5.19</td></tr><tr><td>eTranslation</td><td>2.33 OPPO</td><td>5.20</td></tr><tr><td>Tencent</td><td>2.35 THUNLP</td><td>5.34</td></tr><tr><td>VolcTrans</td><td>2.45 DeepMind</td><td>5.41</td></tr><tr><td>Online-B</td><td>2.48 DiDi_NLP</td><td>5.48</td></tr><tr><td>Online-A</td><td>2.99 Online-B</td><td>5.85</td></tr></table>",
"html": null,
"type_str": "table",
"text": ""
}
}
}
}