{
"paper_id": "K16-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:11:39.712751Z"
},
"title": "Exploring Prediction Uncertainty in Machine Translation Quality Estimation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": "debeck1@sheffield.ac.uk"
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sheffield",
"location": {
"country": "United Kingdom"
}
},
"email": "l.specia@sheffield.ac.uk"
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Melbourne",
"location": {
"country": "Australia"
}
},
"email": "t.cohn@unimelb.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows.",
"pdf_parse": {
"paper_id": "K16-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "Machine Translation Quality Estimation is a notoriously difficult task, which lessens its usefulness in real-world translation environments. Such scenarios can be improved if quality predictions are accompanied by a measure of uncertainty. However, models in this task are traditionally evaluated only in terms of point estimate metrics, which do not take prediction uncertainty into account. We investigate probabilistic methods for Quality Estimation that can provide well-calibrated uncertainty estimates and evaluate them in terms of their full posterior predictive distributions. We also show how this posterior information can be useful in an asymmetric risk scenario, which aims to capture typical situations in translation workflows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Quality Estimation (QE) (Blatz et al., 2004; Specia et al., 2009) models aim at predicting the quality of automatically translated text segments. Traditionally, these models provide point estimates and are evaluated using metrics like Mean Absolute Error (MAE), Root-Mean-Square Error (RMSE) and Pearson's r correlation coefficient. However, in practice QE models are built for use in decision making in large workflows involving Machine Translation (MT). In these settings, relying on point estimates would mean that only very accurate prediction models can be useful in practice.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Blatz et al., 2004;",
"ref_id": "BIBREF5"
},
{
"start": 45,
"end": 65,
"text": "Specia et al., 2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A way to improve decision making based on quality predictions is to explore uncertainty estimates. Consider for example a post-editing scenario where professional translators use MT in an effort to speed up the translation process. A QE model can be used to determine if an MT segment is good enough for post-editing or should be discarded and translated from scratch. But since QE models are not perfect they can end up allowing bad MT segments to go through for post-editing because of a prediction error. In such a scenario, having an uncertainty estimate for the prediction can provide additional information for the filtering decision. For instance, in order to ensure a good user experience for the human translator and maximise translation productivity, an MT segment could be forwarded for post-editing only if a QE model assigns a high quality score with low uncertainty (high confidence). Such a decision process is not possible with point estimates only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Good uncertainty estimates can be acquired from well-calibrated probability distributions over the quality predictions. In QE, arguably the most successful probabilistic models are Gaussian Processes (GPs), since they are considered the state-of-the-art for regression (Hensman et al., 2013), especially in the low-data regimes typical for this task. We focus our analysis in this paper on GPs since other common models used in QE can only provide point estimates as predictions. Another reason to focus on probabilistic models is that they let us employ the ideas proposed by Qui\u00f1onero-Candela et al. (2006), who defined new evaluation metrics that take into account probability distributions over predictions.",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "Hensman et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 579,
"end": 610,
"text": "Qui\u00f1onero-Candela et al. (2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of this paper is organised as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 In Section 2 we further motivate the use of GPs for uncertainty modelling in QE and revisit their underlying theory. We also propose the use of some model extensions previously developed in the GP literature and argue that they are more appropriate for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We intrinsically evaluate our proposed models in terms of their posterior distributions on training and test data in Section 3. Specifically, we show that differences in uncertainty modelling are not captured by the usual point estimate metrics commonly used for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 As an example of an application for predictive distributions, in Section 4 we show how they can be useful in scenarios with asymmetric risk and how the proposed models can provide better performance in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We discuss related work in Section 5 and give conclusions and avenues for future work in Section 6. While we focus on QE as an application, the methods we explore in this paper can be applied to any text regression task where modelling predictive uncertainty is useful, either in human decision making or by propagating this information for further computational processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Traditionally, QE is treated as a regression task with hand-crafted features. Kernel methods are arguably the state-of-the-art in QE since they can easily model non-linearities in the data. Furthermore, the scalability issues that arise in kernel methods do not tend to affect QE in practice since the datasets are usually small, in the order of thousands of instances.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for QE",
"sec_num": "2"
},
{
"text": "The most popular method for QE is Support Vector Regression (SVR), as shown in the multiple instances of the WMT QE shared tasks (Callisonburch et al., 2012; Bojar et al., 2013; Bojar et al., 2014; Bojar et al., 2015). While SVR models can generate competitive predictions for this task, they lack a probabilistic interpretation, which makes it hard to extract uncertainty estimates from them. Bootstrapping approaches like bagging (Abe and Mamitsuka, 1998) can be applied, but this requires setting and optimising hyperparameters like bag size and number of bootstraps. There is also no guarantee that these estimates come from a well-calibrated probability distribution.",
"cite_spans": [
{
"start": 129,
"end": 157,
"text": "(Callisonburch et al., 2012;",
"ref_id": null
},
{
"start": 158,
"end": 177,
"text": "Bojar et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 178,
"end": 197,
"text": "Bojar et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 198,
"end": 217,
"text": "Bojar et al., 2015)",
"ref_id": "BIBREF8"
},
{
"start": 434,
"end": 459,
"text": "(Abe and Mamitsuka, 1998)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for QE",
"sec_num": "2"
},
{
"text": "Gaussian Processes (GPs) (Rasmussen and Williams, 2006) are an alternative kernel-based framework that gives competitive results for point estimates (Shah et al., 2013; Beck et al., 2014b). Unlike SVR, they explicitly model uncertainty in the data and in the predictions. This makes GPs very applicable when well-calibrated uncertainty estimates are required. Furthermore, they are very flexible in terms of modelling decisions, allowing the use of a variety of kernels and likelihoods while providing efficient ways of doing model selection. Therefore, in this work we focus on GPs for probabilistic modelling of QE. In what follows we briefly describe the GP framework for regression.",
"cite_spans": [
{
"start": 25,
"end": 55,
"text": "(Rasmussen and Williams, 2006)",
"ref_id": "BIBREF25"
},
{
"start": 148,
"end": 166,
"text": "Shah et al., 2013;",
"ref_id": "BIBREF26"
},
{
"start": 167,
"end": 186,
"text": "Beck et al., 2014b)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Probabilistic Models for QE",
"sec_num": "2"
},
{
"text": "Here we follow closely the definition of GPs given by Rasmussen and Williams (2006). Let X = {(x_1, y_1), (x_2, y_2), . . . , (x_n, y_n)} be our data, where each x \u2208 R^D is a D-dimensional input and y is its corresponding response variable. A GP is defined as a stochastic model over the latent function f that generates the data X:",
"cite_spans": [
{
"start": 54,
"end": 83,
"text": "Rasmussen and Williams (2006)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "f(x) \u223c GP(m(x), k(x, x')),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "where m(x) is the mean function, which is usually the 0 constant, and k(x, x') is the kernel or covariance function, which describes the covariance between values of f at the different locations x and x'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "The prior is combined with a likelihood via Bayes' rule to obtain a posterior over the latent function:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "p(f|X) = p(y|X, f) p(f) / p(y|X),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "where X and y are the training inputs and response variables, respectively. For regression, we assume that each",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "y_i = f(x_i) + \u03b7, where \u03b7 \u223c N(0, \u03c3\u00b2_n)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "is added white noise. Having a Gaussian likelihood results in a closed form solution for the posterior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "Training a GP involves the optimisation of model hyperparameters, which is done by maximising the marginal likelihood p(y|X) via gradient ascent. Predictive posteriors for unseen x * are obtained by integrating over the latent function evaluations at x * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "GPs can be extended in many different ways, for instance by applying different kernels and likelihoods or by modifying the posterior. In the next sections, we explain in detail some sensible modelling choices when applying GPs to QE.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Gaussian Process Regression",
"sec_num": "2.1"
},
{
"text": "Choosing an appropriate kernel is a crucial step in defining a GP model (and any other kernel method). A common choice is to employ the exponentiated quadratic (EQ) kernel, also known as the Radial Basis Function (RBF) kernel:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "k_EQ(x, x') = \u03c3_v exp(\u2212r\u00b2/2),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "r\u00b2 = \u2211_{i=1}^{D} (x_i \u2212 x'_i)\u00b2 / l\u00b2_i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "is the scaled distance between the two inputs, \u03c3_v is a scale hyperparameter and l is a vector of lengthscales. Most kernel methods tie all lengthscales to a single value, resulting in an isotropic kernel. However, since in GPs hyperparameter optimisation can be done efficiently, it is common to employ one lengthscale per feature, a method called Automatic Relevance Determination (ARD). The EQ kernel allows the modelling of non-linearities between the inputs and the response variables but it makes a strong assumption: it generates smooth, infinitely differentiable functions. This assumption can be too strong for noisy data. An alternative is the Mat\u00e8rn class of kernels, which relax the smoothness assumption by modelling functions which are \u03bd-times differentiable only. Common values for \u03bd are the half-integers 3/2 and 5/2, resulting in the following Mat\u00e8rn kernels:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "k_M32 = \u03c3_v (1 + \u221a(3r\u00b2)) exp(\u2212\u221a(3r\u00b2)) k_M52 = \u03c3_v (1 + \u221a(5r\u00b2) + 5r\u00b2/3) exp(\u2212\u221a(5r\u00b2)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "where we have omitted the dependence of k_M32 and k_M52 on the inputs (x, x') for brevity. Higher values for \u03bd are usually not very useful, since the resulting behaviour is hard to distinguish from the limit case \u03bd \u2192 \u221e, which recovers the EQ kernel (Rasmussen and Williams, 2006, Sec. 4.2). The relaxed smoothness assumptions of the Mat\u00e8rn kernels make them promising candidates for QE datasets, which tend to be very noisy. We expect that employing them will result in better models for this application.",
"cite_spans": [
{
"start": 245,
"end": 285,
"text": "(Rasmussen and Williams, 2006, Sec. 4.2)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mat\u00e8rn Kernels",
"sec_num": "2.2"
},
{
"text": "The Gaussian likelihood of standard GPs has support over the entire real number line. However, common quality scores are strictly positive values, which means that the Gaussian assumption",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "is not ideal. A usual way to deal with this problem is to model the logarithm of the response variables, since this transformation maps strictly positive values to the real line. However, there is no reason to believe this is the best possible mapping: a better idea would be to learn it from the data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "Warped GPs (Snelson et al., 2004) are an extension of GPs that allow the learning of arbitrary mappings. They do this by placing a monotonic warping function over the observations and modelling the warped values inside a standard GP. The posterior distribution is obtained by applying a change of variables:",
"cite_spans": [
{
"start": 11,
"end": 33,
"text": "(Snelson et al., 2004)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "p(y_*|x_*) = f'(y_*) / \u221a(2\u03c0\u03c3\u00b2_*) exp(\u2212(f(y_*) \u2212 \u00b5_*)\u00b2 / (2\u03c3\u00b2_*)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "where \u00b5 * and \u03c3 * are the mean and standard deviation of the latent (warped) response variable and f and f are the warping function and its derivative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "Point predictions from this model depend on the loss function to be minimised. For absolute error, the median is the optimal value while for squared error it is the mean of the posterior. In standard GPs, since the posterior is Gaussian the median and mean coincide but this in general is not the case for a Warped GP posterior. The median can be easily obtained by applying the inverse warping function to the latent median:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "y^med_* = f^\u22121(\u00b5_*).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "While the inverse of the warping function is usually not available in closed form, we can use its gradient to obtain a numerical estimate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "The mean is obtained by integrating y * over the latent density:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "E[y_*] = \u222b f^\u22121(z) N_z(\u00b5_*, \u03c3\u00b2_*) dz,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "where z is the latent variable. This can be easily approximated using Gauss-Hermite quadrature since it is a one-dimensional integral over a Gaussian density. The warping function should be flexible enough to allow the learning of complex mappings, but it needs to be monotonic. Snelson et al. (2004) propose a parametric form composed of a sum of tanh functions, similar to a neural network layer:",
"cite_spans": [
{
"start": 279,
"end": 300,
"text": "Snelson et al. (2004)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "f(y) = y + \u2211_{i=1}^{I} a_i tanh(b_i (y + c_i)),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "where I is the number of tanh terms and a, b and c are treated as model hyperparameters and optimised jointly with the kernel and likelihood hyperparameters. Large values for I allow more complex mappings to be learned but raise the risk of overfitting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "Warped GPs provide an easy and elegant way to model response variables with non-Gaussian behaviour within the GP framework. In our experiments we explore models employing warping functions with up to 3 terms, which is the value recommended by Snelson et al. (2004). We also report results using the f(y) = log(y) warping function.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "Snelson et al. (2004)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Warped Gaussian Processes",
"sec_num": "2.3"
},
{
"text": "Given a set of different probabilistic QE models, we are interested in evaluating the performance of these models while also taking their uncertainty into account, particularly to distinguish among models with seemingly identical or similar performance. A straightforward way to measure the performance of a probabilistic model is to inspect its negative (log) marginal likelihood. This measure, however, does not capture whether a model overfits the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "We can have a better generalisation measure by calculating the likelihood on test data instead. This was proposed in previous work and is called Negative Log Predictive Density (NLPD) (Qui\u00f1onero-Candela et al., 2006):",
"cite_spans": [
{
"start": 187,
"end": 219,
"text": "(Qui\u00f1onero-Candela et al., 2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "NLPD(\u0177, y) = \u2212(1/n) \u2211_{i=1}^{n} log p(\u0177_i = y_i | x_i),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "where \u0177 is a set of test predictions, y is the set of true labels and n is the test set size. This metric has since been largely adopted by the ML community when evaluating GPs and other probabilistic models for regression (see Section 5 for some examples).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "As with other error metrics, lower values are better. Intuitively, if two models produce equally incorrect predictions but they have different uncertainty estimates, NLPD will penalise the overconfident model more than the underconfident one. On the other hand, if predictions are close to the true value then NLPD will penalise the underconfident model instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "In our first set of experiments we evaluate the models proposed in Section 2 according to their negative log likelihood (NLL) and the NLPD on test data. We also report two point estimate metrics on test data: Mean Absolute Error (MAE), the most commonly used evaluation metric in QE, and Pearson's r, which was recently proposed by Graham (2015) as a more robust alternative.",
"cite_spans": [
{
"start": 328,
"end": 341,
"text": "Graham (2015)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Intrinsic Uncertainty Evaluation",
"sec_num": "3"
},
{
"text": "Our experiments comprise datasets containing three different language pairs, where the label to predict is post-editing time:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "English-Spanish (en-es) This dataset was used in the WMT14 QE shared task (Bojar et al., 2014) . It contains 858 sentences translated by one MT system and post-edited by a professional translator.",
"cite_spans": [
{
"start": 74,
"end": 94,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "French-English (fr-en) Described in (Specia, 2011), this dataset contains 2,525 sentences translated by one MT system and post-edited by a professional translator.",
"cite_spans": [
{
"start": 36,
"end": 50,
"text": "(Specia, 2011)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "English-German (en-de) This dataset is part of the WMT16 QE shared task 2 . It was translated by one MT system; for consistency, we use a subset of 2,828 instances post-edited by a single professional translator.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "As part of the process of creating these datasets, post-editing time was logged on a sentence basis for all datasets. Following common practice, we normalise the post-editing time by the length of the machine-translated sentence to obtain post-editing rates and use these as our response variables.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "Technically our approach could be used with any other numeric quality labels from the literature, including the commonly used Human Translation Error Rate (HTER) (Snover et al., 2006) . Our decision to focus on post-editing time was based on the fact that time is a more complete measure of post-editing effort, capturing not only technical effort like HTER, but also cognitive effort (Koponen et al., 2012) . Additionally, time is more directly applicable in real translation environments -where uncertainty estimates could be useful, as it relates directly to productivity measures.",
"cite_spans": [
{
"start": 162,
"end": 183,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF28"
},
{
"start": 385,
"end": 407,
"text": "(Koponen et al., 2012)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "For model building, we use a standard set of 17 features from the QuEst framework. These features are used in the strong baseline models provided by the WMT QE shared tasks. While the best performing systems in the shared tasks use larger feature sets, these are mostly resource-intensive and language-dependent, and therefore not equally applicable to all our language pairs. Moreover, our goal is to compare probabilistic QE models through the predictive uncertainty perspective, rather than to improve the state-of-the-art in terms of point predictions. We perform 10-fold cross validation instead of using a single train/test split and report averaged metric scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "The model hyperparameters were optimised by maximising the likelihood on the training data. We perform a two-pass procedure similar to that in previous work: first we employ an isotropic kernel and optimise all hyperparameters using 10 random restarts; then we move to an ARD equivalent kernel and perform a final optimisation step to fine-tune feature lengthscales. Point predictions were fixed as the median of the distribution. Table 1 shows the results obtained for all datasets. The first two columns show an interesting finding in terms of model learning: using a warping function drastically decreases both NLL and NLPD. The main reason behind this is that standard GPs distribute probability mass over negative values, while the warped models do not. For the fr-en and en-de datasets, NLL and NLPD follow similar trends. This means that we can trust NLL as a measure of uncertainty for these datasets. However, this is not observed in the en-es dataset. Since this dataset is considerably smaller than the others, we believe this is evidence of overfitting, suggesting that NLL is not a reliable metric for small datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 419,
"end": 426,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "3.1"
},
{
"text": "In terms of different warping functions, using the parametric tanh function with 3 terms performs better than the log for the fr-en and en-de datasets. This is not the case for the en-es dataset, where the log function tends to perform better. We believe that this is again due to the smaller dataset size. The gains from using a Mat\u00e8rn kernel over EQ are less conclusive. While they tend to perform better for fr-en, there does not seem to be any difference in the other datasets. Different kernels can be more appropriate depending on the language pair, but more experiments are needed to verify this, which we leave for future work. The differences in uncertainty modelling are by and large not captured by the point estimate metrics. While MAE does show gains from standard to Warped GPs, it does not reflect the difference found between warping functions for fr-en. Pearson's r is also quite inconclusive in this sense, except for some observed gains for en-es. This shows that NLPD should indeed be preferred as an evaluation metric when proper prediction uncertainty estimates are required from a QE model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "3.2"
},
{
"text": "To obtain more insights about the performance in uncertainty modelling we inspected the predictive distributions for two sentence pairs in the fr-en dataset. We show the distributions for a standard GP and a Warped GP with a tanh3 function in Figure 1. In the first case, where both models give accurate predictions, we see that the Warped GP distribution is peaked around the predicted value, as it should be. It also gives more probability mass to positive values, showing that the model is able to learn that the label is non-negative. In the second case we analyse the distributions when both models make inaccurate predictions. We can see that the Warped GP is able to give a broader distribution in this case, while still keeping most of the mass outside the negative range.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 249,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.3"
},
{
"text": "We also report above each plot in Figure 1 the NLPD for each prediction. Comparing only the Warped GP predictions, we can see that their values reflect the fact that we prefer sharp distributions when predictions are accurate and broader ones when predictions are not accurate. However, it is interesting to see that the metric also penalises predictions when their distributions are too broad, as is the case with the standard GPs, since they cannot discriminate between positive and negative values as well as the Warped GPs.",
"cite_spans": [],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.3"
},
{
"text": "Inspecting the resulting warping functions can bring additional modelling insights. In Figure 2 we show instances of tanh3 warping functions learned from the three datasets and compare them with the log warping function. We can see that the parametric tanh3 model is able to learn nontrivial mappings. For instance, in the en-es case the learned function is roughly logarithmic in the low scales but it switches to a linear mapping after y = 4. Notice also the difference in the scales, which means that the optimal model uses a latent Gaussian with a larger variance. The top two plots correspond to a prediction with low absolute error, while the bottom two plots show the behaviour when the absolute error is high.",
"cite_spans": [],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "3.3"
},
{
"text": "Evaluation metrics for QE, including those used in the WMT QE shared tasks, are assumed to be symmetric, i.e., they penalise over- and underestimates equally. This assumption is, however, too simplistic for many possible applications of QE. For example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "\u2022 In a post-editing scenario, a project manager may have translators with limited expertise in post-editing. In this case, automatic translations should not be provided to the translator unless they are highly likely to have very good quality. This can be enforced this by increasing the penalisation weight for underestimates. We call this the pessimistic scenario.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "\u2022 In a gisting scenario, a company wants to automatically translate their product reviews so that they can be published in a foreign language without human intervention. The company would prefer to publish only the reviews translated well enough, but having more reviews published will increase the chances of selling products. In this case, having better recall is more important and thus only reviews with very poor translation quality should be discarded. We can accomplish this by heavier penalisation on overestimates, a scenario we call optimistic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "In this Section we show how these scenarios can be addressed by well-calibrated predictive distributions and by employing asymmetric loss functions. An example of such a function is the asymmetric linear (henceforth, AL) loss, which is a generalisation of the absolute error:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "L(\u0177, y) = w(\u0177 \u2212 y) if\u0177 > y y \u2212\u0177 if\u0177 \u2264 y,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "where w > 0 is the weight given to overestimates. If w > 1 we have the pessimistic scenario, and the optimistic one can be obtained using 0 < w < 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "For w = 1 we retrieve the original absolute error loss. Another asymmetric loss is the linear exponential or linex loss (Zellner, 1986) :",
"cite_spans": [
{
"start": 120,
"end": 135,
"text": "(Zellner, 1986)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "L(\u0177, y) = exp[w(\u0177 \u2212 y)] \u2212 (\u0177 \u2212 y) \u2212 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
{
"text": "where w \u2208 R is the weight. This loss attempts to keep a linear penalty in lesser risk regions, while imposing an exponential penalty in the higher risk ones. Negative values for w will result in a pessimistic setting, while positive values will result in the optimistic one. For w = 0, the loss approximates a squared error loss. Usual values for w tend to be close to 1 or \u22121 since for higher weights the loss can quickly reach very large scores. Both losses are shown on Figure 3 . Figure 3 : Asymmetric losses. These curves correspond to the pessimistic scenario since they impose larger penalties when the prediction is lower than the true label. In the optimistic scenario the curves would be reflected with respect to the vertical axis.",
"cite_spans": [],
"ref_spans": [
{
"start": 473,
"end": 481,
"text": "Figure 3",
"ref_id": null
},
{
"start": 484,
"end": 492,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Asymmetric Risk Scenarios",
"sec_num": "4"
},
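Both losses are simple to compute. The sketch below is a minimal illustration, not the paper's implementation; the AL branch weighting shown is the one consistent with the w/(w+1) quantile estimator of Section 4.1:

```python
import math

def al_loss(y_hat, y, w):
    """Asymmetric linear loss: underestimates are weighted by w.
    w > 1 gives the pessimistic scenario, 0 < w < 1 the optimistic one;
    w = 1 recovers the absolute error."""
    return (y_hat - y) if y_hat > y else w * (y - y_hat)

def linex_loss(y_hat, y, w):
    """Linear-exponential (linex) loss: approximately linear on one side
    of the true label and exponential on the other; w < 0 is pessimistic."""
    d = y_hat - y
    return math.exp(w * d) - w * d - 1

# Pessimistic settings: underestimating the true label costs more.
assert al_loss(2.0, 5.0, w=3) > al_loss(8.0, 5.0, w=3)
assert linex_loss(2.0, 5.0, w=-0.75) > linex_loss(8.0, 5.0, w=-0.75)
```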
{
"text": "The losses introduced above can be incorporated directly into learning algorithms to obtain models for a given scenario. In the context of the AL loss this is called quantile regression (Koenker, 2005) , since optimal estimators for this loss are posterior quantiles. However, in a production environment the loss can change over time. For instance, in the gisting scenario discussed above the parameter w could be changed based on feedback from indicators of sales revenue or user experience. If the loss is attached to the underlying learning algorithms, a change in w would require full model retraining, which can be costly.",
"cite_spans": [
{
"start": 186,
"end": 201,
"text": "(Koenker, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Risk for Asymmetric Losses",
"sec_num": "4.1"
},
{
"text": "Instead of retraining the model every time there is a different loss, we can train a single probabilistic model and derive Bayes risk estimators for the loss we are interested in. This allows estimates to be obtained without having to retrain models when the loss changes. Additionally, this allows different losses/scenarios to be employed at the same time using the same model. Minimum Bayes risk estimators for asymmetric losses were proposed by Christoffersen and Diebold (1997) and we follow their derivations in our experiments. The best estimator for the AL loss is equivalent to the w w+1 quantile of the predictive distribution. Note that we retrieve the median when w = 1, as expected. The best estimator for the linex loss can be easily derived and results in:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Risk for Asymmetric Losses",
"sec_num": "4.1"
},
{
"text": "\u0177 = \u00b5 y \u2212 w\u03c3 2 y 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Risk for Asymmetric Losses",
"sec_num": "4.1"
},
{
"text": "where \u00b5 y and \u03c3 2 y are the mean and the variance of the predictive posterior.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bayes Risk for Asymmetric Losses",
"sec_num": "4.1"
},
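Under a Gaussian predictive distribution, both estimators have closed forms. The following is a minimal sketch using only the standard library; for the warped models the AL quantile would additionally be mapped through the inverse warping function, since their output-space posterior is not Gaussian:

```python
from statistics import NormalDist

def al_estimator(mu, sigma, w):
    """Minimum Bayes risk prediction under the AL loss: the w/(w+1)
    quantile of the (here assumed Gaussian) predictive distribution.
    w = 1 recovers the median."""
    return NormalDist(mu, sigma).inv_cdf(w / (w + 1))

def linex_estimator(mu, sigma, w):
    """Minimum Bayes risk prediction under the linex loss."""
    return mu - w * sigma ** 2 / 2

mu, sigma = 10.0, 2.0                      # hypothetical predictive posterior
assert abs(al_estimator(mu, sigma, 1) - mu) < 1e-9   # median of a Gaussian
assert al_estimator(mu, sigma, 3) > mu     # pessimistic: predict higher
assert linex_estimator(mu, sigma, -0.75) > mu  # pessimistic: predict higher
```

Because the estimators read only the posterior mean, variance, and quantiles, a change of w requires no retraining, which is exactly the practical advantage argued above.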
{
"text": "Here we assess the models and datasets used in Section 3.1 in terms of their performance in the asymmetric setting. Following the explanation in the previous Section, we do not perform any retraining: we collect the predictions obtained using the 10-fold cross-validation protocol and apply different Bayes estimators corresponding to the asymmetric losses. Evaluation is performed using the same loss employed in the estimator (for instance, when using the linex estimator with w = 0.75 we report the results using the linex loss with same w) and averaged over the 10 folds.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "To simulate both pessimistic and optimistic scenarios, we use w \u2208 {3, 1/3} for the AL loss and w \u2208 {\u22120.75, 0.75} for the linex loss. The only exception is the en-de dataset, where we report results for w \u2208 \u22120.25, 0.75 for linex 3 . We also report results only for models using the Mat\u00e8rn52 kernel. While we did experiment with different kernels and weighting schemes 4 our findings showed similar trends so we omit them for the sake of clarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "Results are shown on Table 2. In the optimistic scenario the tanh-based warped GP models give consistently better results than standard GPs. The log-based models also gives good results for AL but for linex the results are mixed except for en-es. This is probably again related to the larger sizes of the fr-en and en-de datasets, which allows the tanh-based models to learn richer representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "nalised by the loss. This might be a case where a standard GP is preferred but can also indicate that this loss is biased towards models with high variance, even if it does that by assigning probability mass to nonsensical values (like negative time). We leave further investigation of this phenomenon for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.3"
},
{
"text": "Quality Estimation is generally framed as text regression task, similarly to many other applications such as movie revenue forecasting based on reviews (Joshi et al., 2010; Bitvai and Cohn, 2015) and detection of emotion strength in news headlines (Strapparava and Mihalcea, 2008; Beck et al., 2014a) and song lyrics (Mihalcea and Strapparava, 2012) . In general, these applications are evaluated in terms of their point estimate predictions, arguably because not all of them employ probabilistic models.",
"cite_spans": [
{
"start": 152,
"end": 172,
"text": "(Joshi et al., 2010;",
"ref_id": "BIBREF17"
},
{
"start": 173,
"end": 195,
"text": "Bitvai and Cohn, 2015)",
"ref_id": "BIBREF4"
},
{
"start": 248,
"end": 280,
"text": "(Strapparava and Mihalcea, 2008;",
"ref_id": "BIBREF32"
},
{
"start": 281,
"end": 300,
"text": "Beck et al., 2014a)",
"ref_id": "BIBREF2"
},
{
"start": 317,
"end": 349,
"text": "(Mihalcea and Strapparava, 2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The NLPD is common and established metric used in the GP literature to evaluate new approaches. Examples include the original work on Warped GPs (Snelson et al., 2004) , but also others like L\u00e1zaro-Gredilla (2012) and Chalupka et al. (2013) . It has also been used to evaluate recent work on uncertainty propagation methods for neural networks (Hern\u00e1ndez-Lobato and Adams, 2015).",
"cite_spans": [
{
"start": 145,
"end": 167,
"text": "(Snelson et al., 2004)",
"ref_id": "BIBREF27"
},
{
"start": 218,
"end": 240,
"text": "Chalupka et al. (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Asymmetric loss functions are common in the econometrics literature and were studied by Zellner (1986) and Koenker (2005) , among others. Besides the AL and the linex, another well studied loss is the asymmetric quadratic, which in turn relates to the concept of expectiles (Newey and Powell, 1987) . This loss generalises the commonly used squared error loss. In terms of applications, Cain and Janssen (1995) gives an example in real estate assessment, where the consequences of under-and over-assessment are usually different depending on the specific scenario. An engineering example is given by Zellner (1986) in the context of dam construction, where an underestimate of peak water level is much more serious than an overestimate. Such real-world applications guided many developments in this field: we believe that translation and other language processing scenarios which rely on NLP technologies can heavily benefit from these advancements.",
"cite_spans": [
{
"start": 88,
"end": 102,
"text": "Zellner (1986)",
"ref_id": "BIBREF33"
},
{
"start": 107,
"end": 121,
"text": "Koenker (2005)",
"ref_id": "BIBREF18"
},
{
"start": 274,
"end": 298,
"text": "(Newey and Powell, 1987)",
"ref_id": "BIBREF22"
},
{
"start": 387,
"end": 410,
"text": "Cain and Janssen (1995)",
"ref_id": "BIBREF9"
},
{
"start": 600,
"end": 614,
"text": "Zellner (1986)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "This work explored new probabilistic models for machine translation QE that allow better uncertainty estimates. We proposed the use of NLPD, which can capture information on the whole predictive distribution, unlike usual point estimatebased metrics. By assessing models using NLPD we can make better informed decisions about which model to employ for different settings. Furthermore, we showed how information in the predictive distribution can be used in asymmetric loss scenarios and how the proposed models can be beneficial in these settings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Uncertainty estimates can be useful in many other settings beyond the ones explored in this work. Active Learning can benefit from variance information in their query methods and it has shown to be useful for QE (Beck et al., 2013) . Exploratory analysis is another avenue for future work, where error bars can provide further insights about the task, as shown in recent work (Nguyen and O'Connor, 2015) . This kind of analysis can be useful for tracking post-editor behaviour and assessing cost estimates for translation projects, for instance.",
"cite_spans": [
{
"start": 212,
"end": 231,
"text": "(Beck et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 376,
"end": 403,
"text": "(Nguyen and O'Connor, 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our main goal in this paper was to raise awareness about how different modelling aspects should be taken into account when building QE models. Decision making can be risky using simple point estimates and we believe that uncertainty information can be beneficial in such scenarios by providing more informed solutions. These ideas are not restricted to QE and we hope to see similar studies in other natural language applications in the future.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "www.statmt.org/wmt16",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Using w = \u22120.75 in this case resulted in loss values on the order of 10 7 . In fact, as it will be discussed in the next Section, the results for the linex loss in the pessimistic scenario were inconclusive. However, we report results using a higher w in this case for completeness and to clarify the inconclusive trends we found.4 We also tried w \u2208 {1/9, 1/7, 1/5, 5, 7, 9} for the AL loss and w \u2208 {\u22120.5, \u22120.25, 0.25, 0.5} for the linex loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Daniel Beck was supported by funding from CNPq/Brazil (No. 237999/2012-9). Lucia Specia was supported by the QT21 project (H2020 No. 645452). Trevor Cohn is the recipient of an Australian Research Council Future Fellowship (project number FT130101105). The authors would like to thank James Hensman for his advice on Warped GPs and the three anonymous reviewers for their comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "The pessimistic scenario shows interesting trends. While the results for AL follow a similar pattern when compared to the optimistic setting, the results for linex are consistently worse than the standard GP baseline. A key difference between AL and linex is that the latter depends on the variance of the predictive distribution. Since the warped models tend to have less variance, we believe the estimator is not being \"pushed\" towards the positive tails as much as in the standard GPs. This turns the resulting predictions not conservative enough (i.e. the post-editing time predictions are lower) and this is heavily (exponentially) pe-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Query learning strategies using boosting and bagging",
"authors": [
{
"first": "Naoki",
"middle": [],
"last": "Abe",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Mamitsuka",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Fifteenth International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoki Abe and Hiroshi Mamitsuka. 1998. Query learning strategies using boosting and bagging. In Proceedings of the Fifteenth International Confer- ence on Machine Learning, pages 1-9.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Reducing Annotation Effort for Quality Estimation via Active Learning",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Beck, Lucia Specia, and Trevor Cohn. 2013. Reducing Annotation Effort for Quality Estimation via Active Learning. In Proceedings of ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Joint Emotion Analysis via Multi-task Gaussian Processes",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "1798--1803",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Beck, Trevor Cohn, and Lucia Specia. 2014a. Joint Emotion Analysis via Multi-task Gaussian Pro- cesses. In Proceedings of EMNLP, pages 1798- 1803.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "SHEF-Lite 2.0 : Sparse Multi-task Gaussian Processes for Translation Quality Estimation",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of WMT14",
"volume": "",
"issue": "",
"pages": "307--312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Beck, Kashif Shah, and Lucia Specia. 2014b. SHEF-Lite 2.0 : Sparse Multi-task Gaussian Pro- cesses for Translation Quality Estimation. In Pro- ceedings of WMT14, pages 307-312.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Non-Linear Text Regression with a Deep Convolutional Neural Network",
"authors": [
{
"first": "Zsolt",
"middle": [],
"last": "Bitvai",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zsolt Bitvai and Trevor Cohn. 2015. Non-Linear Text Regression with a Deep Convolutional Neural Net- work. In Proceedings of ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Confidence estimation for machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blatz",
"suffix": ""
},
{
"first": "Erin",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "315--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blatz, Erin Fitzgerald, and George Foster. 2004. Confidence estimation for machine translation. In Proceedings of the 20th Conference on Computa- tional Linguistics, pages 315-321.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of WMT13",
"authors": [
{
"first": "Ondej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of WMT13, pages 1-44.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of WMT14",
"authors": [
{
"first": "Ondej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Ale\u0161",
"middle": [],
"last": "Tamchyna",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "12--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-amand, Radu Soricut, Lucia Specia, and Ale\u0161 Tamchyna. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of WMT14, pages 12-58.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of WMT15",
"authors": [
{
"first": "Ondej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Hokamp",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "22--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of WMT15, pages 22-64.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Real Estate Price Prediction under Asymmetric Loss",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Cain",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Janssen",
"suffix": ""
}
],
"year": 1995,
"venue": "Annals of the Institute of Statististical Mathematics",
"volume": "47",
"issue": "3",
"pages": "401--414",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Cain and Christian Janssen. 1995. Real Es- tate Price Prediction under Asymmetric Loss. An- nals of the Institute of Statististical Mathematics, 47(3):401-414.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Findings of the 2012 Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WMT12",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chris Callison-burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 Workshop on Statistical Ma- chine Translation. In Proceedings of WMT12.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Framework for Evaluating Approximation Methods for Gaussian Process Regression",
"authors": [
{
"first": "Krzysztof",
"middle": [],
"last": "Chalupka",
"suffix": ""
},
{
"first": "K",
"middle": [
"I"
],
"last": "Christopher",
"suffix": ""
},
{
"first": "Iain",
"middle": [],
"last": "Williams",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Murray",
"suffix": ""
}
],
"year": 2013,
"venue": "Journal of Machine Learning Research",
"volume": "14",
"issue": "",
"pages": "333--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krzysztof Chalupka, Christopher K. I. Williams, and Iain Murray. 2013. A Framework for Evaluating Approximation Methods for Gaussian Process Re- gression. Journal of Machine Learning Research, 14:333-350.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Optimal Prediction Under Asymmetric Loss",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Francis",
"middle": [
"X"
],
"last": "Christoffersen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Diebold",
"suffix": ""
}
],
"year": 1997,
"venue": "Econometric Theory",
"volume": "13",
"issue": "06",
"pages": "808--817",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Christoffersen and Francis X. Diebold. 1997. Optimal Prediction Under Asymmetric Loss. Econometric Theory, 13(06):808-817.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Estimation",
"authors": [
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "32--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Trevor Cohn and Lucia Specia. 2013. Modelling Annotator Bias with Multi-task Gaussian Processes: An Application to Machine Translation Quality Es- timation. In Proceedings of ACL, pages 32-42.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving Evaluation of Machine Translation Quality Estimation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham. 2015. Improving Evaluation of Ma- chine Translation Quality Estimation. In Proceed- ings of ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gaussian Processes for Big Data",
"authors": [
{
"first": "James",
"middle": [],
"last": "Hensman",
"suffix": ""
},
{
"first": "Nicol\u00f2",
"middle": [],
"last": "Fusi",
"suffix": ""
},
{
"first": "Neil",
"middle": [
"D"
],
"last": "Lawrence",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of UAI",
"volume": "",
"issue": "",
"pages": "282--290",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Hensman, Nicol\u00f2 Fusi, and Neil D. Lawrence. 2013. Gaussian Processes for Big Data. In Pro- ceedings of UAI, pages 282-290.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks",
"authors": [
{
"first": "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Ryan",
"middle": [
"P"
],
"last": "Adams",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jos\u00e9 Miguel Hern\u00e1ndez-Lobato and Ryan P. Adams. 2015. Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks. In Proceed- ings of ICML.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Movie Reviews and Revenues: An Experiment in Text Regression",
"authors": [
{
"first": "Mahesh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahesh Joshi, Dipanjan Das, Kevin Gimpel, and Noah A. Smith. 2010. Movie Reviews and Rev- enues: An Experiment in Text Regression. In Pro- ceedings of NAACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Quantile Regression",
"authors": [
{
"first": "Roger",
"middle": [],
"last": "Koenker",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roger Koenker. 2005. Quantile Regression. Cam- bridge University Press.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Post-editing time as a measure of cognitive effort",
"authors": [
{
"first": "Maarit",
"middle": [],
"last": "Koponen",
"suffix": ""
},
{
"first": "Wilker",
"middle": [],
"last": "Aziz",
"suffix": ""
},
{
"first": "Luciana",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of WPTP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maarit Koponen, Wilker Aziz, Luciana Ramos, and Lucia Specia. 2012. Post-editing time as a measure of cognitive effort. In Proceedings of WPTP.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Bayesian Warped Gaussian Processes",
"authors": [
{
"first": "Miguel",
"middle": [],
"last": "L\u00e1zaro-Gredilla",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miguel L\u00e1zaro-Gredilla. 2012. Bayesian Warped Gaussian Processes. In Proceedings of NIPS, pages 1-9.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lyrics, Music, and Emotions",
"authors": [
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "590--599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rada Mihalcea and Carlo Strapparava. 2012. Lyrics, Music, and Emotions. In Proceedings of the Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, pages 590-599.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Asymmetric Least Squares Estimation and Testing",
"authors": [
{
"first": "Whitney",
"middle": [
"K"
],
"last": "Newey",
"suffix": ""
},
{
"first": "James",
"middle": [
"L"
],
"last": "Powell",
"suffix": ""
}
],
"year": 1987,
"venue": "Econometrica",
"volume": "55",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Whitney K. Newey and James L. Powell. 1987. Asymmetric Least Squares Estimation and Testing. Econometrica, 55(4).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Posterior Calibration and Exploratory Analysis for Natural Language Processing Models",
"authors": [
{
"first": "Khanh",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Khanh Nguyen and Brendan O'Connor. 2015. Poste- rior Calibration and Exploratory Analysis for Natu- ral Language Processing Models. In Proceedings of EMNLP, number September, page 15.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Evaluating Predictive Uncertainty Challenge",
"authors": [
{
"first": "Joaquin",
"middle": [],
"last": "Qui\u00f1onero-Candela",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "Fabian",
"middle": [],
"last": "Sinz",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Bousquet",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2006,
"venue": "Lecture Notes in Computer Science",
"volume": "3944",
"issue": "",
"pages": "1--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joaquin Qui\u00f1onero-Candela, Carl Edward Rasmussen, Fabian Sinz, Olivier Bousquet, and Bernhard Sch\u00f6lkopf. 2006. Evaluating Predictive Uncertainty Challenge. MLCW 2005, Lecture Notes in Com- puter Science, 3944:1-27.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Gaussian processes for machine learning",
"authors": [
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"K I"
],
"last": "Williams",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian processes for machine learning, volume 1. MIT Press Cambridge.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "An Investigation on the Effectiveness of Features for Translation Quality Estimation",
"authors": [
{
"first": "Kashif",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of MT Summit XIV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kashif Shah, Trevor Cohn, and Lucia Specia. 2013. An Investigation on the Effectiveness of Features for Translation Quality Estimation. In Proceedings of MT Summit XIV.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Warped Gaussian Processes",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Snelson",
"suffix": ""
},
{
"first": "Carl",
"middle": [
"Edward"
],
"last": "Rasmussen",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. 2004. Warped Gaussian Processes. In Proceedings of NIPS.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of AMTA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Lin- nea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of AMTA.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Estimating the sentence-level quality of machine translation systems",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Cancedda",
"suffix": ""
},
{
"first": "Marc",
"middle": [],
"last": "Dymetman",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EAMT",
"volume": "",
"issue": "",
"pages": "28--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimat- ing the sentence-level quality of machine translation systems. In Proceedings of EAMT, pages 28-35.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Multi-level Translation Quality Prediction with QUEST++",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Gustavo",
"middle": [
"Henrique"
],
"last": "Paetzold",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ACL Demo Session",
"volume": "",
"issue": "",
"pages": "850--850",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia, Gustavo Henrique Paetzold, and Car- olina Scarton. 2015. Multi-level Translation Qual- ity Prediction with QUEST++. In Proceedings of ACL Demo Session, pages 850-850.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Exploiting Objective Annotations for Measuring Translation Post-editing Effort",
"authors": [
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EAMT",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lucia Specia. 2011. Exploiting Objective Annotations for Measuring Translation Post-editing Effort. In Proceedings of EAMT, pages 73-80.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Learning to identify emotions in text",
"authors": [
{
"first": "Carlo",
"middle": [],
"last": "Strapparava",
"suffix": ""
},
{
"first": "Rada",
"middle": [],
"last": "Mihalcea",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 ACM Symposium on Applied Computing",
"volume": "",
"issue": "",
"pages": "1556--1560",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlo Strapparava and Rada Mihalcea. 2008. Learn- ing to identify emotions in text. In Proceedings of the 2008 ACM Symposium on Applied Computing, pages 1556-1560.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Bayesian Estimation and Prediction Using Asymmetric Loss Functions",
"authors": [
{
"first": "Arnold",
"middle": [],
"last": "Zellner",
"suffix": ""
}
],
"year": 1986,
"venue": "Journal of the American Statistical Association",
"volume": "81",
"issue": "394",
"pages": "446--451",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arnold Zellner. 1986. Bayesian Estimation and Prediction Using Asymmetric Loss Functions. Journal of the American Statistical Association, 81(394):446-451.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Predictive distributions for two fr-en instances under a Standard GP and a Warped GP.",
"num": null
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"text": "Warping function instances from the three datasets. The vertical axis corresponds to the latent warped values. The horizontal axis shows the observed response variables, which are always positive in our case since they are post-editing times.",
"num": null
},
"TABREF1": {
"content": "<table/>",
"html": null,
"text": "Intrinsic evaluation results. The first three rows in each table correspond to standard GP models, while the remaining rows are Warped GP models with different warping functions. The number after the tanh models shows the number of terms in the warping function (see Equation 2.3). All r scores have p < 0.05.",
"type_str": "table",
"num": null
}
}
}
}