{
"paper_id": "H05-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:34:18.069010Z"
},
"title": "Minimum Sample Risk Methods for Language Modeling 1",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research Asia",
"location": {}
},
"email": "jfgao@microsoft.com"
},
{
"first": "Hao",
"middle": [],
"last": "Yu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiaotong Univ",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiaotong Univ",
"location": {
"country": "China"
}
},
"email": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "John Hopkins Univ",
"location": {
"country": "U.S.A"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The work was done while the second, third and fourth authors were visiting Microsoft Research Asia. Thanks to Hisami Suzuki for her valuable comments.",
"pdf_parse": {
"paper_id": "H05-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "The work was done while the second, third and fourth authors were visiting Microsoft Research Asia. Thanks to Hisami Suzuki for her valuable comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Language modeling (LM) is fundamental to a wide range of applications, such as speech recognition and Asian language text input (Jelinek 1997; Gao et al. 2002) . The traditional approach uses a parametric model with maximum likelihood estimation (MLE), usually with smoothing methods to deal with data sparseness problems. This approach is optimal under the assumption that the true distribution of data on which the parametric model is based is known. Unfortunately, such an assumption rarely holds in realistic applications.",
"cite_spans": [
{
"start": 128,
"end": 142,
"text": "(Jelinek 1997;",
"ref_id": "BIBREF7"
},
{
"start": 143,
"end": 159,
"text": "Gao et al. 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "An alternative approach to LM is based on the framework of discriminative training, which uses a much weaker assumption that training and test data are generated from the same distribution but the form of the distribution is unknown. Unlike the traditional approach that maximizes the function (i.e. likelihood of training data) that is loosely as-sociated with error rate, discriminative training methods aim to directly minimize the error rate on training data even if they reduce the likelihood. So, they potentially lead to better solutions. However, the error rate of a finite set of training samples is usually a step function of model parameters, and cannot be easily minimized. To address this problem, previous research has concentrated on the development of a loss function that approximates the exact error rate and can be easily optimized. Though these methods (e.g. the boosting method) have theoretically appealing properties, such as convergence and bounded generalization error, we argue that the approximated loss function may prevent them from attaining the original objective of minimizing the error rate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we present a new estimation procedure for LM, called minimum sample risk (MSR) . It differs from most existing discriminative training methods in that instead of searching on an approximated loss function, MSR employs a simple heuristic training algorithm that minimizes the error rate on training samples directly. MSR operates like a multidimensional function optimization algorithm: first, it selects a subset of features that are the most effective among all candidate features. The parameters of the model are then optimized iteratively: in each iteration, only the parameter of one feature is adjusted. Both feature selection and parameter optimization are based on the criterion of minimizing the error on training samples. Our evaluation on the task of Japanese text input shows that MSR achieves more than 20% error rate reduction over MLE on two newswire data sets, and it also outperforms the other two widely applied discriminative methods, the boosting method and the perceptron algorithm, by a small but statistically significant margin.",
"cite_spans": [
{
"start": 87,
"end": 92,
"text": "(MSR)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although it has not been proved in theory that MSR is always robust, our experiments of crossdomain LM adaptation show that it is. MSR can effectively adapt a model trained on one domain to different domains. It outperforms the traditional LM adaptation method significantly, and achieves at least comparable or slightly better results to the boosting method and the perceptron algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper studies LM on the task of Asian language (e.g. Chinese or Japanese) text input. This is the standard method of inputting Chinese or Japanese text by converting the input phonetic symbols into the appropriate word string. In this paper we call the task IME, which stands for input method editor, based on the name of the commonly used Windows-based application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "Performance on IME is measured in terms of the character error rate (CER), which is the number of characters wrongly converted from the phonetic string divided by the number of characters in the correct transcript. Current IME systems make about 5-15% CER in conversion of real data in a wide variety of domains (e.g. Gao et al. 2002) .",
"cite_spans": [
{
"start": 318,
"end": 334,
"text": "Gao et al. 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "Similar to speech recognition, IME is viewed as a Bayes decision problem. Let A be the input phonetic string. An IME system's task is to choose the most likely word string W * among those candidates that could be converted from A:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": ") | ( ) ( max arg ) | ( max arg (A) ) ( * W A P W P A W P W W A W GEN GEN \u2208 \u2208 = = (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "where GEN(A) denotes the candidate set given A. Unlike speech recognition, however, there is no acoustic ambiguity since the phonetic string is inputted by users. Moreover, if we do not take into account typing errors, it is reasonable to assume a unique mapping from W and A in IME, i.e. P(A|W) = 1. So the decision of Equation (1) depends solely upon P(W), making IME a more direct evaluation test bed for LM than speech recognition. Another advantage is that it is easy to convert W to A (for Chinese and Japanese), which enables us to obtain a large number of training data for discriminative learning, as described later.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "The values of P(W) in Equation (1) are traditionally calculated by MLE: the optimal model parameters \u03bb * are chosen in such a way that P(W|\u03bb * ) is maximized on training data. The arguments in favor of MLE are based on the assumption that the form of the underlying distributions is known, and that only the values of the parameters characterizing those distributions are unknown. In using MLE for LM, one always assumes a multinomial distribution of language. For example, a trigram model makes the assumption that the next word is predicted depending only on two preceding words. However, there are many cases in natural language where words over an arbitrary distance can be related. MLE is therefore not optimal because the assumed model form is incorrect.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "What are the best estimators when the model is known to be false then? In IME, we can tackle this question empirically. Best IME systems achieve the least CER. Therefore, the best estimators are those which minimize the expected error rate on unseen test data. Since the distribution of test data is unknown, we can approximately minimize the error rate on some given training data (Vapnik 1999) . Toward this end, we have developed a very simple heuristic training procedure called minimum sample risk, as presented in the next section.",
"cite_spans": [
{
"start": 382,
"end": 395,
"text": "(Vapnik 1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "IME Task and LM",
"sec_num": "2"
},
{
"text": "We follow the general framework of linear discriminant models described in (Duda et al. 2001 ). In the rest of the paper we use the following notation, adapted from Collins (2002) .",
"cite_spans": [
{
"start": 75,
"end": 92,
"text": "(Duda et al. 2001",
"ref_id": "BIBREF2"
},
{
"start": 165,
"end": 179,
"text": "Collins (2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 = = D d d d W f \u03bb 0 ) ( .",
"eq_num": "(2)"
}
],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "The decision rule of Equation (1) is rewritten as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": ") , ( max arg ) , ( (A) * \u03bb \u03bb GEN W Score A W W\u2208 = .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "Equation 3views IME as a ranking problem, where the model gives the ranking score, not probabilities. We therefore do not evaluate the model via perplexity. Now, assume that we can measure the number of conversion errors in W by comparing it with a reference transcript W R using an error function Er(W R ,W) (i.e. the string edit distance function in our case). We call the sum of error counts over the training samples sample risk. Our goal is to minimize the sample risk while searching for the parameters as defined in Equation 4, hence the name minimum sample risk (MSR). Wi * in Equation 4is determined by Equation 3,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "\u2211 = = M i i i R i def MSR A W W ... 1 * )) , ( , Er( min arg \u03bb \u03bb \u03bb . (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
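The linear model of Equations (2)-(4) can be sketched directly; a minimal illustration under assumed encodings (sparse feature dicts for candidates, token lists for strings; the function names are hypothetical, not the authors' implementation), with Er(.) realized as the standard string edit distance named in the text:

```python
def edit_distance(a, b):
    # Er(W_R, W): Levenshtein distance between two token sequences.
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (a[i - 1] != b[j - 1]))
    return dp[n]

def score(feats, lam):
    # Equation (2): Score(W, lambda) = sum over d of lambda_d * f_d(W),
    # with W represented by a sparse dict {feature id: count}.
    return sum(lam.get(d, 0.0) * v for d, v in feats.items())

def sample_risk(samples, lam):
    # Equation (4): total conversion errors when each A_i is decoded with
    # the rule of Equation (3), W* = argmax_{W in GEN(A)} Score(W, lambda).
    total = 0
    for ref, gen in samples:  # gen: list of (candidate tokens, feature dict)
        best_tokens, _ = max(gen, key=lambda c: score(c[1], lam))
        total += edit_distance(ref, best_tokens)
    return total
```

Because the argmax makes sample_risk a step function of lam, gradient methods do not apply, which motivates the grid line search of Section 3.3.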
{
"text": "We first present the basic MSR training algorithm, and then the two improvements we made.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Definition",
"sec_num": "3.1"
},
{
"text": "The MSR training algorithm is cast as a multidimensional function optimization approach (Press et al. 1992) : taking the feature vector as a set of directions; the first direction (i.e. feature) is selected and the objective function (i.e. sample risk) is minimized along that direction using a line search; then from there along the second direction to its minimum, and so on, cycling through the whole set of directions as many times as necessary, until the objective function stops decreasing.",
"cite_spans": [
{
"start": 88,
"end": 107,
"text": "(Press et al. 1992)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "3.2"
},
{
"text": "This simple method can work properly under two assumptions. First, there exists an implementation of line search that optimizes the function along one direction efficiently. Second, the number of candidate features is not too large, and these features are not highly correlated. However, neither of the assumptions holds in our case. First of all, Er(.) in Equation (4) is a step function of \u03bb, thus cannot be optimized directly by regular gradientbased procedures -a grid search has to be used instead. However, there are problems with simple grid search: using a large grid could miss the optimal solution whereas using a fine-grained grid would lead to a very slow algorithm. Secondly, in the case of LM, there are millions of candidate features, some of which are highly correlated. We address these issues respectively in the next two subsections.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Algorithm",
"sec_num": "3.2"
},
{
"text": "Our implementation of a grid search is a modified version of that proposed in (Och 2003) . The modifications are made to deal with the efficiency issue due to the fact that there is a very large number of features and training samples in our task, compared to only 8 features used in (Och 2003) . Unlike a simple grid search where the intervals between any two adjacent grids are equal and fixed, we determine for each feature a sequence of grids with differently sized intervals, each corresponding to a different value of sample risk.",
"cite_spans": [
{
"start": 78,
"end": 88,
"text": "(Och 2003)",
"ref_id": "BIBREF9"
},
{
"start": 284,
"end": 294,
"text": "(Och 2003)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "As shown in Equation 4, the loss function (i.e. sample risk) over all training samples is the sum of the loss function (i.e. Er(.)) of each training sample. Therefore, in what follows, we begin with a discussion on minimizing Er(.) of a training sample using the line search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "Let \u03bb be the current model parameter vector, and fd be the selected feature. The line search aims to find the optimal parameter \u03bbd * so as to minimize Er(.). For a training sample (A, W R ), the score of each candidate word string W\u2208GEN(A), as in Equation (2), can be decomposed into two terms:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": ") ( ) ( ) ( ) , ( ' 0 ' ' ' W f W f W W Score d d D d d d d d \u03bb \u03bb + = = \u2211 \u2260 \u2228 = \u03bbf \u03bb ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "where the first term on the right hand side does not change with \u03bbd. Note that if several candidate word strings have the same feature value fd(W), their relative rank will remain the same for any \u03bbd. Since fd(W) takes integer values in our case (fd(W) is the count of a particular n-gram in W), we can group the candidates using fd(W) so that candidates in each group have the same value of fd(W). In each group, we define the candidate with the highest value of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "\u2211 \u2260 \u2228 = D d d d d d W f ' 0 ' ' '",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": ") ( \u03bb as the active candidate of the group because no matter what value \u03bbd takes, only this candidate could be selected according to Equation (3). Now, we reduce GEN(A) to a much smaller list of active candidates. We can find a set of intervals for \u03bbd, within each of which a particular active candidate will be selected as W * . We can compute the Er(.) value of that candidate as the Er(.) value for the corresponding interval. As a result, for each training sample, we obtain a sequence of intervals and their corresponding Er(.) values. The optimal value \u03bbd * can then be found by traversing the sequence and taking the midpoint of the interval with the lowest Er(.) value. This process can be extended to the whole training set as follows. By merging the sequence of intervals of each training sample in the training set, we obtain a global sequence of intervals as well as their corresponding sample risk. We can then find the optimal value \u03bbd * as well as the minimal sample risk by traversing the global interval sequence. An example is shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 1054,
"end": 1062,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
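The interval construction above can be sketched as follows, under simplifying assumptions: each candidate is a hypothetical (g, f_d, Er) triple, where g is the score excluding the \u03bb_d term, and interval boundaries are taken as the pairwise crossover points of the active candidates' score lines g + \u03bb_d\u00b7f_d. The paper's per-sample interval merging is more refined; this only illustrates the idea.

```python
def active_candidates(cands):
    # cands: list of (g, f_d, err). Within each group sharing the same f_d
    # value, only the candidate with the highest g can ever win, so we keep
    # one "active candidate" per distinct f_d value.
    best = {}
    for g, fd, err in cands:
        if fd not in best or g > best[fd][0]:
            best[fd] = (g, fd, err)
    return list(best.values())

def line_search(samples, lo=-10.0, hi=10.0):
    # Grid line search along one feature: candidate boundaries are the
    # crossover points of the active candidates' score lines; between two
    # adjacent boundaries the argmax, hence the sample risk, is constant.
    actives = [active_candidates(c) for c in samples]
    points = {lo, hi}
    for act in actives:
        for i in range(len(act)):
            for j in range(i + 1, len(act)):
                g1, f1, _ = act[i]
                g2, f2, _ = act[j]
                if f1 != f2:
                    x = (g2 - g1) / (f1 - f2)
                    if lo < x < hi:
                        points.add(x)
    bounds = sorted(points)

    def risk(lam):
        return sum(max(a, key=lambda c: c[0] + lam * c[1])[2] for a in actives)

    # evaluate the midpoint of every interval and return the best lambda_d
    mids = [(bounds[i] + bounds[i + 1]) / 2 for i in range(len(bounds) - 1)]
    return min(mids, key=risk)
```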
{
"text": "The line search can be unstable, however. In some cases when some of the intervals are very narrow (e.g. the interval A in Figure 1 ), moving the optimal value \u03bbd * slightly can lead to much larger sample risk. Intuitively, we prefer a stable solution which is also known as a robust solution (with even slightly higher sample risk, e.g. the interval B in Figure 1 ). Following Quirk et al. (2004) , we evaluate each interval in the sequence by its corresponding smoothed sample risk. Let \u03bb be the midpoint of an interval and SR(\u03bb) be the corresponding sample risk of the interval. The smoothed sample risk of the interval is defined as",
"cite_spans": [
{
"start": 378,
"end": 397,
"text": "Quirk et al. (2004)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 123,
"end": 131,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 356,
"end": 364,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "\u03bb \u03bb \u03bb \u03bb d b b ) SR( \u222b + \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
{
"text": "where b is a smoothing factor whose value is determined empirically (0.06 in our experiments). As shown in Figure 1 , a more stable interval B is selected according to the smoothed sample risk.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
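Since the sample risk is piecewise constant over the merged interval sequence, the smoothing integral can be computed exactly. A small sketch, where the (left, right, risk) interval list is an assumed encoding of that sequence and b defaults to the paper's empirical 0.06:

```python
def smoothed_risk(intervals, lam, b=0.06):
    # intervals: list of (left, right, risk) pieces covering the line.
    # Smoothed sample risk: the integral of SR over [lam - b, lam + b]
    # (up to the constant factor 1/2b, which does not change the argmin).
    lo, hi = lam - b, lam + b
    total = 0.0
    for left, right, r in intervals:
        overlap = min(right, hi) - max(left, lo)
        if overlap > 0:
            total += r * overlap
    return total
```

As in Figure 1, a midpoint inside a narrow zero-risk interval can receive a worse smoothed score than a point in a wider, slightly riskier interval, so the stabler interval wins.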
{
"text": "In addition to reducing GEN(A) to an active candidate list described above, the efficiency of the line search can be further improved. We find that the line search only needs to traverse a small subset of training samples because the distribution of features among training samples are very sparse. Therefore, we built an inverted index that lists for each feature all training samples that contain it. As will be shown in Section 4.2, the line search is very efficient even for a large training set with millions of candidate features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Grid Line Search",
"sec_num": "3.3"
},
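The inverted index mentioned above is a simple feature-to-samples map; a sketch under the same assumed sparse-dict candidate encoding (the function name is illustrative):

```python
from collections import defaultdict

def build_inverted_index(samples):
    # Maps each feature id to the set of training-sample indices whose
    # candidate lists contain it, so a line search over one feature only
    # traverses the samples it can actually affect.
    index = defaultdict(set)
    for m, gen in enumerate(samples):  # gen: list of sparse feature dicts
        for feats in gen:
            for d in feats:
                index[d].add(m)
    return index
```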
{
"text": "This section describes our method of selecting among millions of features a small subset of highly effective features for MSR learning. Reducing the number of features is essential for two reasons: to reduce computational complexity and to ensure the generalization property of the linear model. A large number of features lead to a large number of parameters of the resulting linear model, as described in Section 3.1. For a limited number of training samples, keeping the number of features sufficiently small should lead to a simpler model that is less likely to overfit to the training data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "The first step of our feature selection algorithm treats the features independently. The effectiveness of a feature is measured in terms of the reduction of the sample risk on top of the base feature f0. Formally, let SR(f0) be the sample risk of using the base feature only, and SR(f0 + \u03bbdfd) be the sample risk of using both f0 and fd and the parameter \u03bbd that has been optimized using the line search. Then the effectiveness of fd, denoted by E(fd), is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ")) SR( ) (SR( max ) SR( ) SR( ) ( 0 0 ... 1 , 0 0 i i D i f d d d f f f f f f f E i \u03bb \u03bb + \u2212 + \u2212 = = ,",
"eq_num": "(5)"
}
],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "where the denominator is a normalization term to ensure that E(f) \u2208 [0, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
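Given the base sample risk and the optimized risk for each candidate feature, Equation (5) is a normalized reduction; a minimal sketch (function name and list encoding are assumptions):

```python
def effectiveness(sr_base, sr_with):
    # Equation (5): E(f_d) = (SR(f_0) - SR(f_0 + lambda_d f_d)), normalized
    # by the largest such reduction so that every score falls in [0, 1].
    reductions = [sr_base - s for s in sr_with]
    best = max(reductions)
    if best <= 0:
        return [0.0] * len(sr_with)  # no feature reduces the risk
    return [r / best for r in reductions]
```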
{
"text": "The feature selection procedure can be stated as follows: The value of E(.) is computed according to Equation (5) for each of the candidate features. Features are then ranked in the order of descending values of E(.). The top l features are selected to form the feature vector in the linear model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Treating features independently has the advantage of computational simplicity, but may not be effective for features with high correlation. For instance, although two features may carry rich discriminative information when treated separately, there may be very little gain if they are combined in a feature vector, because of the high correlation between them. Therefore, in what follows, we describe a technique of incorporating correlation information in the feature selection criterion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Let xmd, m = 1\u2026M and d = 1\u2026D, be a Boolean value: xmd = 1 if the sample risk reduction of using the d-th feature on the m-th training sample, com-B A puted by Equation (5), is larger than zero, and 0 otherwise. The cross correlation coefficient between two features fi and fj is estimated as \u2211 \u2211",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2211 = = = = M m mj M m mi M m mj mi x x x x j i C 1 2 1 2 1 ) , ( .",
"eq_num": "(6)"
}
],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
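Equation (6) is straightforward to compute from the Boolean matrix x; this sketch assumes x is given as a list of M rows of D 0/1 entries (the function name is illustrative):

```python
import math

def cross_correlation(x, i, j):
    # Equation (6): C(i, j) = sum_m x_mi * x_mj /
    #               sqrt(sum_m x_mi^2 * sum_m x_mj^2), which lies in [0, 1]
    # for 0/1 entries by the Cauchy-Schwarz inequality.
    num = sum(row[i] * row[j] for row in x)
    den = math.sqrt(sum(row[i] ** 2 for row in x) *
                    sum(row[j] ** 2 for row in x))
    return num / den if den else 0.0
```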
{
"text": "It can be shown that C(i, j) \u2208 [0, 1]. Now, similar to (Theodoridis and Koutroumbas 2003) , the feature selection procedure consists of the following steps, where fi denotes any selected feature and fj denotes any candidate feature to be selected.",
"cite_spans": [
{
"start": 55,
"end": 89,
"text": "(Theodoridis and Koutroumbas 2003)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Step 1. For each of the candidate features (fd, for d = 1\u2026D), compute the value of E(f) according to Equation (5). Rank them in a descending order and choose the one with the highest E(.) value. Let us denote this feature as f1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Step 2. To select the second feature, compute the cross correlation coefficient between the selected feature f1 and each of the remaining M-1 features, according to Equation (6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Step 3. Select the second feature f according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "{ } ) , 1 ( ) 1 ( ) ( max arg * ... 2 j C f E j j D j \u03b1 \u03b1 \u2212 \u2212 = =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "where \u03b1 is the weight that determines the relative importance we give to the two terms. The value of \u03b1 is optimized on held-out data (0.8 in our experiments). This means that for the selection of the second feature, we take into account not only its impact of reducing the sample risk but also the correlation with the previously selected feature. It is expected that choosing features with less correlation gives better sample risk minimization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Step 4. Select k-th features, k = 3\u2026K, according to",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u23ad \u23ac \u23ab \u23a9 \u23a8 \u23a7 \u2212 \u2212 \u2212 = \u2211 \u2212 = 1 1 ) , ( 1 1 ) ( max arg * k i j j j i C k f E j \u03b1 \u03b1",
"eq_num": "(7)"
}
],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "That is, we select the next feature by taking into account its average correlation with all previously selected features. The optimal number of features, l, is determined on held-out data. Similarly to the case of line search, we need to deal with the efficiency issue in the feature selection method. As shown in Equation 7, the estimates of E(.) and C(.) need to be computed. Let D and K (K << D) be the number of all candidate features and the number of features in the resulting model, respectively. According to the feature selection method described above, we need to estimate E(.) for each of the D candidate features only once in Step 1. This is not very costly due to the efficiency of our line search algorithm. Unlike the case of E(.), O(K\u00d7D) estimates of C(.) are required in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Step 4. This is computationally expensive even for a medium-sized K. Therefore, every time a new feature is selected (in Step 4), we only estimate the value of C(.) between each of the selected features and each of the top N remaining features with the highest value of E(.). This reduces the number of estimates of C(.) to O(K\u00d7N). In our experiments we set N = 1000, much smaller than D. This reduces the computational cost significantly without producing any noticeable quality loss in the resulting model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
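Steps 1-4 amount to a greedy loop over Equation (7). A sketch with illustrative stand-ins: E is the precomputed effectiveness list from Step 1, C is any callable returning the correlation between two features, and the top-N restriction on C(.) estimates is omitted for brevity.

```python
def select_features(E, C, K, alpha=0.8):
    # Greedy selection: first pick argmax E(f); then repeatedly pick the
    # candidate maximizing
    #   alpha * E(f_j) - (1 - alpha) * mean_i C(i, j)     (Equation (7))
    # where i ranges over the features already selected.
    selected = [max(range(len(E)), key=lambda j: E[j])]
    while len(selected) < K:
        rest = [j for j in range(len(E)) if j not in selected]

        def crit(j):
            avg_c = sum(C(i, j) for i in selected) / len(selected)
            return alpha * E[j] - (1 - alpha) * avg_c

        selected.append(max(rest, key=crit))
    return selected
```

With alpha = 1 the correlation term vanishes and selection reduces to ranking by E(.) alone, which is the independent-feature baseline the text compares against.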
{
"text": "The MSR algorithm used in our experiments is summarized in Figure 2 . It consists of feature selection (line 2) and optimization (lines 3 -5) steps.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 67,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "1 Set \u03bb0 = 1 and \u03bbd = 0 for d=1\u2026D 2 Rank all features and select the top K features, using the feature selection method described in Section 3.4 3 For t = 1\u2026T (T= total number of iterations) 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "For each k = 1\u2026K 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
{
"text": "Update the parameter of fk using line search. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Subset Selection",
"sec_num": "3.4"
},
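The loop in Figure 2 is coordinate descent over the selected features. In this sketch the interval-based line search is replaced by a crude minimization over a fixed grid of values, so only the control structure mirrors the figure; risk stands for the sample-risk function of Equation (4).

```python
def msr_train(features, risk, grid, T=5):
    # Figure 2 sketch: lambda_0 is fixed at 1 for the base feature f_0
    # (assumed absent from `features`); then T passes of coordinate-wise
    # minimization, trying each grid value for one parameter at a time.
    lam = {0: 1.0, **{k: 0.0 for k in features}}
    for _ in range(T):
        for k in features:
            lam[k] = min(grid, key=lambda v: risk({**lam, k: v}))
    return lam
```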
{
"text": "We evaluated MSR on the task of Japanese IME. Two newspaper corpora are used as training and test data: Nikkei and Yomiuri Newspapers. Both corpora have been pre-word-segmented using a lexicon containing 167,107 entries. A 5,000-sentence subset of the Yomiuri Newspaper corpus was used as held-out data (e.g. to determine learning rate, number of iterations and features etc.). We tested our models on another 5,000-sentence subset of the Yomiuri Newspaper corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "We used an 80,000-sentence subset of the Nikkei Newspaper corpus as the training set. For each A, we produced a word lattice using the baseline system described in (Gao et al. 2002) , which uses a word trigram model trained via MLE on anther 400,000-sentence subset of the Nikkei Newspaper corpus. The two subsets do not overlap so as to simulate the case where unseen phonetic symbol strings are converted by the baseline system. For efficiency, we kept for each training sample the best 20 hypotheses in its candidate conversion set GEN(A) for discriminative training. The oracle best hypothesis, which gives the minimum number of errors, was used as the reference transcript of A.",
"cite_spans": [
{
"start": 164,
"end": 181,
"text": "(Gao et al. 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Settings",
"sec_num": "4.1"
},
{
"text": "We used unigrams and bigrams that occurred more than once in the training set as features. We did not use trigram features because they did not result in a significant improvement in our pilot study. The total number of candidate features we used was around 860,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Our main experimental results are shown in Table 1 . Row 1 is our baseline result using the word trigram model. Notice that the result is much better than the state-of-the-art performance currently available in the marketplace (e.g. Gao et al. 2002) , presumably due to the large amount of training data we used, and to the similarity between the training and the test data. Row 2 is the result of the model trained using the MSR algorithm described in Section 3. We also compared the MSR algorithm to two of the state-of-the-art discriminative training methods: Boosting in Row 3 is an implementation of the improved algorithm for the boosting loss function proposed in (Collins 2000) , and Perceptron in Row 4 is an implementation of the averaged perceptron algorithm described in (Collins 2002) .",
"cite_spans": [
{
"start": 233,
"end": 249,
"text": "Gao et al. 2002)",
"ref_id": "BIBREF5"
},
{
"start": 671,
"end": 685,
"text": "(Collins 2000)",
"ref_id": "BIBREF1"
},
{
"start": 783,
"end": 797,
"text": "(Collins 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 43,
"end": 50,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "We see that all discriminative training methods outperform MLE significantly (p-value < 0.01). In particular, MSR outperforms MLE by more than 20% CER reduction. Notice that we used only unigram and bigram features that have been included in the baseline trigram model, so the improvement is solely attributed to the high performance of MSR. We also find that MSR outperforms the perceptron and boosting methods by a small but statistically significant margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "The MSR algorithm is also very efficient: using a subset of 20,000 features, it takes less than 20 minutes to converge on an XEON(TM) MP 1.90GHz machine. It is as efficient as the perceptron algorithm and slightly faster than the boosting method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.2"
},
{
"text": "Most theorems that justify the robustness of discriminative training algorithms concern two questions. First, is there a guarantee that a given algorithm converges even if the training samples are not linearly separable? This is called the convergence problem. Second, how well is the training error reduction preserved when the algorithm is applied to unseen test samples? This is called the generalization problem. Though we currently cannot give a theoretical justification, we present empirical evidence here for the robustness of the MSR approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness Issues",
"sec_num": "4.3"
},
{
"text": "As Vapnik (1999) pointed out, the most robust linear models are the ones that achieve the fewest training errors with the fewest features. Therefore, the robustness of the MSR algorithm is mainly determined by the feature selection method. To verify this, we created four different subsets of features using different settings of the feature selection method described in Section 3.4. We selected different numbers of features (i.e. 500 and 2000), with and without taking into account the correlation between features (i.e. \u03b1 in Equation 7 set to 0.8 and 1, respectively). For each of the four feature subsets, we used the MSR algorithm to generate a set of models. The CER curves of these models on the training and test data sets are shown in Figures 3 and 4. The results reveal several facts. First, the convergence properties of MSR are shown in Figure 3, where in all cases training errors drop consistently with more iterations. Second, as expected, using more features leads to overfitting. For example, MSR(\u03b1 =1)-2000 makes fewer errors than MSR(\u03b1 =1)-500 on training data but more errors on test data. Finally, taking into account the correlation between features (e.g. \u03b1 = 0.8 in Equation 7) results in a better subset of features that leads not only to fewer training errors, as shown in Figure 3, but also to better generalization (fewer test errors), as shown in Figure 4.",
"cite_spans": [
{
"start": 3,
"end": 16,
"text": "Vapnik (1999)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 749,
"end": 764,
"text": "Figures 3 and 4",
"ref_id": "FIGREF2"
},
{
"start": 853,
"end": 861,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1300,
"end": 1308,
"text": "Figure 3",
"ref_id": "FIGREF2"
},
{
"start": 1386,
"end": 1394,
"text": "Figure 4",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Robustness Issues",
"sec_num": "4.3"
},
{
"text": "Though MSR achieves impressive performance in CER reduction over the comparison methods, as described in Section 4.2, the experiments are all performed using newspaper text for both training and testing, which is not a realistic scenario if we are to deploy the model in an application. This section reports the results of additional experiments in which we adapt a model trained on one domain to a different domain, i.e., in a so-called cross-domain LM adaptation paradigm. See (Suzuki and Gao 2005) for a detailed report.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
{
"text": "The data sets we used stem from five distinct sources of text. The Nikkei newspaper corpus described in Section 4.1 was used as the background domain, on which the word trigram model was trained. We used four adaptation domains: Yomiuri (newspaper corpus), TuneUp (balanced corpus containing newspapers and other sources of text), Encarta (encyclopedia) and Shincho (collection of novels). For each of the four domains, we used a 72,000-sentence subset as adaptation training data, a 5,000-sentence subset as held-out data and another 5,000-sentence subset as test data. As before, all corpora were word-segmented, and for each training sample in the four adaptation domains we kept the best 20 hypotheses in its candidate conversion set for discriminative training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
{
"text": "We compared MSR with three other LM adaptation methods:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
{
"text": "Baseline is the background word trigram model, as described in Section 4.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
{
"text": "MAP (maximum a posteriori) is a traditional LM adaptation method in which the parameters of the background model are adjusted so as to maximize the likelihood of the adaptation data. Our implementation takes the form of linear interpolation, P(wi|h) = \u03bbPb(wi|h) + (1-\u03bb)Pa(wi|h), where Pb is the probability of the background model, Pa is the probability trained on adaptation data using MLE, and the history h corresponds to the two preceding words (i.e. Pb and Pa are trigram probabilities). \u03bb is the interpolation weight optimized on held-out data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
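This interpolation scheme can be sketched in a few lines. The grid search below is our own illustration; the paper states only that the weight is optimized on held-out data, without specifying the search procedure, and all names and toy probabilities here are ours.

```python
import math

def interpolate(p_background, p_adapt, lam):
    """Linearly interpolate two conditional probabilities:
    P(w|h) = lam * Pb(w|h) + (1 - lam) * Pa(w|h)."""
    return lam * p_background + (1.0 - lam) * p_adapt

def pick_lambda(heldout, p_b, p_a, grid=None):
    """Choose the interpolation weight maximizing held-out log-likelihood.

    heldout: list of (history, word) pairs
    p_b, p_a: dicts mapping (history, word) -> probability
    grid: candidate weights to try (a simple sketch of 'optimized
    on held-out data')
    """
    grid = grid or [i / 10.0 for i in range(1, 10)]
    best_lam, best_ll = None, float("-inf")
    for lam in grid:
        ll = sum(math.log(interpolate(p_b[(h, w)], p_a[(h, w)], lam))
                 for h, w in heldout)
        if ll > best_ll:
            best_lam, best_ll = lam, ll
    return best_lam
```

If the adaptation model assigns much higher probability to the held-out events than the background model, the search naturally drifts toward a small background weight, which matches the intuition behind MAP adaptation.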
{
"text": "Perceptron, Boosting and MSR are the three discriminative methods described in the previous sections. For each of them, the base feature was derived from the word trigram model trained on the background data, and the other n-gram features (i.e. fd, d = 1\u2026D in Equation (2)) were trained on adaptation data. That is, the parameters of the background model are adjusted so as to minimize the errors that the background model makes on adaptation data. Results are summarized in Table 2. First of all, in all four adaptation domains, discriminative methods outperform MAP significantly. Secondly, the improvement margins of discriminative methods over MAP correspond to the similarity between the background domain and the adaptation domain. When the adaptation domain is very similar to the background domain (as with Yomiuri), discriminative methods outperform MAP by a large margin; the margin is smaller when the two domains are substantially different (as with Encarta and Shincho). This phenomenon is attributable to the underlying difference between the two adaptation methods: MAP aims to improve the likelihood of a distribution, so if the adaptation domain is very similar to the background domain, the difference between the two underlying distributions is so small that MAP cannot adjust the model effectively. Discriminative methods do not have this limitation because they aim to reduce errors directly. Finally, we find that on most adaptation test sets, MSR achieves slightly better CER results than the two competing discriminative methods. Specifically, the improvements of MSR are statistically significant over the boosting method in three out of four domains, and over the perceptron algorithm in the Yomiuri domain. These results demonstrate again that MSR is robust.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Domain Adaptation Results",
"sec_num": "4.4"
},
{
"text": "Discriminative models have recently been shown to be more effective than generative models in several NLP tasks, e.g., parsing (Collins 2000), POS tagging (Collins 2002) and LM for speech recognition (Roark et al. 2004). In particular, linear models, though simple and non-probabilistic in nature, are often preferred to their probabilistic counterparts such as logistic regression. One of the reasons, as pointed out by Ng and Jordan (2002), is that the parameters of a discriminative model can be fit either to maximize the conditional likelihood on training data, or to minimize the training errors. Since the latter optimizes the objective function that the system is graded on, it is viewed as being more truly in the spirit of discriminative learning.",
"cite_spans": [
{
"start": 125,
"end": 139,
"text": "(Collins 2000)",
"ref_id": "BIBREF1"
},
{
"start": 154,
"end": 168,
"text": "(Collins 2002)",
"ref_id": "BIBREF0"
},
{
"start": 199,
"end": 218,
"text": "(Roark et al. 2004)",
"ref_id": "BIBREF12"
},
{
"start": 419,
"end": 439,
"text": "Ng and Jordan (2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "The MSR method shares the same motivation: to minimize the errors directly as much as possible. Because the error function on a finite data set is a step function and cannot be optimized easily, previous research approximates the error function with loss functions that are amenable to optimization (e.g. Collins 2000; Freund et al. 1998; Juang et al. 1997; Duda et al. 2001). MSR takes an alternative approach: it is a simple heuristic training procedure that minimizes training errors directly, without resorting to an approximate loss function.",
"cite_spans": [
{
"start": 305,
"end": 318,
"text": "Collins 2000;",
"ref_id": "BIBREF1"
},
{
"start": 319,
"end": 338,
"text": "Freund et al. 1998;",
"ref_id": "BIBREF4"
},
{
"start": 339,
"end": 357,
"text": "Juang et al. 1997;",
"ref_id": null
},
{
"start": 358,
"end": 375,
"text": "Duda et al. 2001)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "MSR shares many similarities with previous methods. The basic training algorithm described in Section 3.2 follows the general framework of multidimensional optimization (e.g., Press et al. 1992). The line search is an extension of those described in (Och 2003; Quirk et al. 2005). The extension lies in the way large numbers of features and training samples are handled: previous algorithms were used to optimize linear models with fewer than 10 features. The feature selection method described in Section 3.4 is a particular implementation of the feature selection methods described in the literature (e.g., Theodoridis and Koutroumbas 2003). The major difference is that MSR estimates the effectiveness of each feature in terms of its expected training error reduction, whereas previous methods used metrics that are only loosely coupled with reducing training errors. The way of dealing with feature correlations in feature selection, in Equation 7, was suggested by Finette et al. (1983).",
"cite_spans": [
{
"start": 176,
"end": 194,
"text": "Press et al. 1992)",
"ref_id": "BIBREF10"
},
{
"start": 250,
"end": 260,
"text": "(Och 2003;",
"ref_id": "BIBREF9"
},
{
"start": 261,
"end": 278,
"text": "Quirk et al. 2005",
"ref_id": "BIBREF11"
},
{
"start": 592,
"end": 625,
"text": "Theodoridis and Koutroumbas 2003)",
"ref_id": "BIBREF14"
},
{
"start": 978,
"end": 999,
"text": "Finette et al. (1983)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
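The coordinate-wise line search that MSR builds on (in the spirit of Och 2003) can be sketched as follows. This is a simplified illustration only: we scan a fixed grid of step sizes, whereas the actual algorithms compute the exact piecewise-constant error surface along the search direction; all function names and data structures here are our own construction.

```python
def errors(weights, samples):
    """Count samples whose top-scoring candidate is not the reference.

    samples: list of (candidates, gold_index) pairs, where candidates
    is a list of feature vectors, one per hypothesis.
    """
    n_err = 0
    for candidates, gold in samples:
        scores = [sum(w * f for w, f in zip(weights, feats))
                  for feats in candidates]
        if scores.index(max(scores)) != gold:
            n_err += 1
    return n_err

def line_search(weights, d, samples, grid):
    """Search along feature direction d for the step size that yields
    the fewest training errors; return the best weights and error count."""
    best_w, best_e = weights, errors(weights, samples)
    for step in grid:
        w = [wi + (step if i == d else 0.0)
             for i, wi in enumerate(weights)]
        e = errors(w, samples)
        if e < best_e:
            best_w, best_e = w, e
    return best_w, best_e
```

Because the objective here is a raw error count, this procedure needs no gradient, which is why this family of methods can optimize non-differentiable metrics directly.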
{
"text": "We have shown that MSR is a highly effective discriminative training algorithm for LM. Our experiments suggest that it leads to significantly better conversion performance on the IME task than either the MLE method or two widely applied discriminative methods, boosting and the perceptron. However, due to the lack of theoretical underpinnings, we are unable to prove that MSR will always succeed. This forms one area of our future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "One of the most interesting properties of MSR is that it can optimize any objective function, whether or not its gradient is computable, such as error rate in IME or speech recognition, BLEU score in MT, or precision and recall in IR (Gao et al. 2005). In particular, MSR can be performed on large-scale training sets with millions of candidate features. Thus, another area of our future work is to test MSR on a wider variety of NLP tasks, such as parsing and tagging.",
"cite_spans": [
{
"start": 220,
"end": 237,
"text": "(Gao et al. 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Discriminative training methods for Hidden Markov Models: theory and experiments with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2002. Discriminative training methods for Hidden Markov Models: theory and experiments with the perceptron algorithm. In EMNLP 2002.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Discriminative reranking for natural language parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2000,
"venue": "ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2000. Discriminative reranking for natural language parsing. In ICML 2000.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Pattern classification",
"authors": [
{
"first": "Richard",
"middle": [
"O"
],
"last": "Duda",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"E"
],
"last": "Hart",
"suffix": ""
},
{
"first": "David",
"middle": [
"G"
],
"last": "Stork",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Duda, Richard O, Hart, Peter E. and Stork, David G. 2001. Pattern classification. John Wiley & Sons, Inc.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Breast tissue classification using diagnostic ultrasound and pattern recognition techniques: I. Methods of pattern recognition",
"authors": [
{
"first": "S",
"middle": [],
"last": "Finette",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Blerer",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Swindel",
"suffix": ""
}
],
"year": 1983,
"venue": "Ultrasonic Imaging",
"volume": "5",
"issue": "",
"pages": "55--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finette S., Blerer A., Swindel W. 1983. Breast tissue clas- sification using diagnostic ultrasound and pattern rec- ognition techniques: I. Methods of pattern recognition. Ultrasonic Imaging, Vol. 5, pp. 55-70.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An efficient boosting algorithm for combining preferences",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Freund",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Iyer",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1998,
"venue": "ICML'98",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freund, Y, R. Iyer, R. E. Schapire, and Y. Singer. 1998. An efficient boosting algorithm for combining preferences. In ICML'98.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Exploiting headword dependency and predictive clustering for language modeling",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, Jianfeng, Hisami Suzuki and Yang Wen. 2002. Exploiting headword dependency and predictive clus- tering for language modeling. In EMNLP 2002.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Linear discriminative model for information retrieval",
"authors": [
{
"first": "J",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "X",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "J.-Y",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2005,
"venue": "SIGIR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gao, J, H. Qin, X. Xiao and J.-Y. Nie. 2005. Linear dis- criminative model for information retrieval. In SIGIR.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Statistical methods for speech recognition",
"authors": [
{
"first": "Fred",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek, Fred. 1997. Statistical methods for speech recognition. MIT Press, Cambridge, Mass. Juang, B.-H., W.Chou and C.-H. Lee. 1997. Minimum classification error rate methods for speech recognition. IEEE Tran. Speech and Audio Processing 5-3: 257-265.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "On discriminative vs. generative classifiers: a comparison of logistic regression and na\u00efve Bayes",
"authors": [
{
"first": "A",
"middle": [
"N"
],
"last": "Ng",
"suffix": ""
},
{
"first": "M",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "841--848",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ng, A. N. and M. I. Jordan. 2002. On discriminative vs. generative classifiers: a comparison of logistic regres- sion and na\u00efve Bayes. In NIPS 2002: 841-848.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Josef",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, Franz Josef. 2003. Minimum error rate training in statistical machine translation. In ACL 2003",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Numerical Recipes In C: The Art of Scientific Computing",
"authors": [
{
"first": "W",
"middle": [
"H"
],
"last": "Press",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Teukolsky",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Vetterling",
"suffix": ""
},
{
"first": "B",
"middle": [
"P"
],
"last": "Flannery",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Press, W. H., S. A. Teukolsky, W. T. Vetterling and B. P. Flannery. 1992. Numerical Recipes In C: The Art of Scien- tific Computing. New York: Cambridge Univ. Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dependency treelet translation: syntactically informed phrasal SMT",
"authors": [
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Arul",
"middle": [],
"last": "Menezes",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL 2005",
"volume": "",
"issue": "",
"pages": "271--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quirk, Chris, Arul Menezes, and Colin Cherry. 2005. Dependency treelet translation: syntactically informed phrasal SMT. In ACL 2005: 271-279.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Corrective language modeling for large vocabulary ASR with the perceptron algorithm",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Murat",
"middle": [],
"last": "Saraclar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roark, Brian, Murat Saraclar and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. In ICASSP 2004.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "A comparative study on language model adaptation using new evaluation metrics",
"authors": [
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2005,
"venue": "HLT/EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suzuki, Hisami and Jianfeng Gao. 2005. A comparative study on language model adaptation using new evaluation metrics. In HLT/EMNLP 2005.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pattern Recognition",
"authors": [
{
"first": "Sergios",
"middle": [],
"last": "Theodoridis",
"suffix": ""
},
{
"first": "Konstantinos",
"middle": [],
"last": "Koutroumbas",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theodoridis, Sergios and Konstantinos Koutroumbas. 2003. Pattern Recognition. Elsevier.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The nature of statistical learning theory",
"authors": [
{
"first": "V",
"middle": [
"N"
],
"last": "Vapnik",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vapnik, V. N. 1999. The nature of statistical learning theory. Springer-Verlag, New York.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Examples of line search.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "The MSR algorithm 4 Evaluation",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "Training error curves of the MSR algorithm",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "Test error curves of the MSR algorithm",
"type_str": "figure",
"num": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Comparison of CER results.",
"content": "<table/>",
"html": null
},
"TABREF3": {
"type_str": "table",
"num": null,
"text": "CER(%) results on four adaptation test sets .",
"content": "<table><tr><td>Model</td><td>Yomiuri</td><td>TuneUp</td><td>Encarta</td><td>Shincho</td></tr><tr><td>Baseline</td><td>3.70</td><td>5.81</td><td>10.24</td><td>12.18</td></tr><tr><td>MAP</td><td>3.69</td><td>5.47</td><td>7.98</td><td>10.76</td></tr><tr><td>MSR</td><td>2.73</td><td>5.15</td><td>7.40</td><td>10.16</td></tr><tr><td>Boosting</td><td>2.78</td><td>5.33</td><td>7.53</td><td>10.25</td></tr><tr><td>Perceptron</td><td>2.78</td><td>5.20</td><td>7.44</td><td>10.18</td></tr></table>",
"html": null
}
}
}
}