{ "paper_id": "P07-1010", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:49:50.272082Z" }, "title": "A Discriminative Language Model with Pseudo-Negative Samples", "authors": [ { "first": "Daisuke", "middle": [], "last": "Okanohara\u00fd", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Jun", "middle": [ "'" ], "last": "Ichi Tsujii\u00fd\u00fe\u00fc", "suffix": "", "affiliation": {}, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we propose a novel discriminative language model, which can be applied quite generally. Compared to the well known N-gram language models, discriminative language models can achieve more accurate discrimination because they can employ overlapping features and nonlocal information. However, discriminative language models have been used only for re-ranking in specific applications because negative examples are not available. We propose sampling pseudo-negative examples taken from probabilistic language models. However, this approach requires prohibitive computational cost if we are dealing with quite a few features and training samples. We tackle the problem by estimating the latent information in sentences using a semi-Markov class model, and then extracting features from them. We also use an online margin-based algorithm with efficient kernel computation. Experimental results show that pseudo-negative examples can be treated as real negative examples and our model can classify these sentences correctly.", "pdf_parse": { "paper_id": "P07-1010", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we propose a novel discriminative language model, which can be applied quite generally. Compared to the well known N-gram language models, discriminative language models can achieve more accurate discrimination because they can employ overlapping features and nonlocal information. 
However, discriminative language models have been used only for re-ranking in specific applications because negative examples are not available. We propose sampling pseudo-negative examples taken from probabilistic language models. However, this approach requires prohibitive computational cost if we are dealing with quite a few features and training samples. We tackle the problem by estimating the latent information in sentences using a semi-Markov class model, and then extracting features from them. We also use an online margin-based algorithm with efficient kernel computation. Experimental results show that pseudo-negative examples can be treated as real negative examples and our model can classify these sentences correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Language models (LMs) are fundamental tools for many applications, such as speech recognition, machine translation and spelling correction. The goal of LMs is to determine whether a sentence is correct or incorrect in terms of grammar and pragmatics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The most widely used LM is a probabilistic language model (PLM), which assigns a probability to a sentence or a word sequence. In particular, N-grams with maximum likelihood estimation (NLMs) are often used. Although NLMs are simple, they are effective for many applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "However, NLMs cannot determine the correctness of a sentence independently, because the probability depends on the length of the sentence and the global frequencies of each word in it. 
For example, P(S_1) < P(S_2), where P(S) is the probability of a sentence S given by an NLM, does not always mean that S_2 is more correct; this could instead occur when S_2 is shorter than S_1, or when S_2 has more common words than S_1. Another problem is that NLMs cannot handle overlapping information or non-local information easily, which is important for more accurate sentence classification. For example, an NLM could assign a high probability to a sentence even if it does not have a verb.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Discriminative language models (DLMs) have been proposed to classify sentences directly as correct or incorrect (Gao et al., 2005; Roark et al., 2007) , and these models can handle both non-local and overlapping information. However, DLMs in previous studies have been restricted to specific applications, and therefore the models cannot be used for other applications. If we had negative examples available, the models could be trained directly by discriminating between correct and incorrect sentences.", "cite_spans": [ { "start": 112, "end": 130, "text": "(Gao et al., 2005;", "ref_id": "BIBREF7" }, { "start": 131, "end": 150, "text": "Roark et al., 2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we propose a generic DLM, which can be used not only for specific applications, but also more generally, similar to PLMs. To achieve this goal, we need to solve two problems. The first is that since we cannot obtain negative examples (incorrect sentences), we need to generate them. The second is the prohibitive computational cost, because the number of features and examples is very large. 
In previous studies this problem did not arise because the amount of training data was limited and they did not use combinations of features; thus the computational cost was negligible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To solve the first problem, we propose sampling incorrect sentences from a PLM and then training a model to discriminate between correct and incorrect sentences. We call these examples Pseudo-Negative because they are not actually negative sentences. We call this method DLM-PN (DLM with Pseudo-Negative samples).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To deal with the second problem, we employ an online margin-based learning algorithm with fast kernel computation. This enables us to employ combinations of features, which are important for discrimination between correct and incorrect sentences. We also estimate the latent information in sentences by using a semi-Markov class model to extract features. Although there are substantially fewer latent features than explicit features such as words or phrases, latent features contain essential information for sentence classification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Experimental results show that these pseudo-negative samples can be treated as incorrect examples, and that DLM-PN can learn to discriminate between correct and incorrect sentences and can therefore classify these sentences correctly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Probabilistic language models (PLMs) estimate the probability of word strings or sentences. Among these models, N-gram language models (NLMs) are widely used. NLMs approximate the probability by conditioning only on the preceding N-1 words. 
For example, let S denote a sentence of T words, S = w_1 w_2 ... w_T . Then, by the chain rule of probability and the approximation, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "P(S) = P(w_1 w_2 ... w_T) = Π_{t=1..T} P(w_t | w_{t-N+1} ... w_{t-1}) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "The parameters can be estimated using the maximum likelihood method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Since the number of parameters in NLM is still large, several smoothing methods are used (Chen and Goodman, 1998) to produce more accurate probabilities, and to assign nonzero probabilities to any word string.", "cite_spans": [ { "start": 89, "end": 113, "text": "(Chen and Goodman, 1998)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "However, since the probabilities in NLMs depend on the length of the sentence, two sentences of different length cannot be compared directly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Recently, Whole Sentence Maximum Entropy Models (Rosenfeld et al., 2001 ) (WSMEs) have been introduced. They assign a probability to each sentence using a maximum entropy model. 
Although WSMEs can encode all the features of a sentence, including non-local ones, they are only slightly superior to NLMs; they have the disadvantage of being computationally expensive, and not all relevant features can be included.", "cite_spans": [ { "start": 48, "end": 71, "text": "(Rosenfeld et al., 2001", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "A discriminative language model (DLM) assigns a score f(S) to a sentence S, measuring the correctness of the sentence in terms of grammar and pragmatics, so that f(S) > 0 implies that S is correct and f(S) < 0 implies that S is incorrect. A PLM can be considered as a special case of a DLM by defining f using P(S). For example, we can take f(S) = log P(S)/|S| - α, where α is some threshold, and |S| is the length of S.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Given a sentence S, we extract a feature vector φ(S) from it using a pre-defined set of feature functions {φ_i}, i = 1 ... m. The form of the function we use is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "f(S) = w · φ(S) (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "where w is a feature weighting vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Since there is no restriction in designing φ(S), DLMs can make use of both overlapping and non-local information in S. 
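As a rough sketch (not the authors' implementation; the unigram-plus-bigram feature map and all weights are illustrative assumptions), the linear scoring function f(S) = w · φ(S) with overlapping features can be written as:

```python
# Minimal sketch of a linear discriminative scorer f(S) = w . phi(S).
# The unigram + overlapping-bigram feature map and the weights below are
# toy assumptions, not the paper's actual feature set.
from collections import Counter

def phi(sentence):
    """Map a sentence to overlapping unigram and bigram count features."""
    words = sentence.split()
    feats = Counter(words)               # unigram counts
    feats.update(zip(words, words[1:]))  # overlapping bigram counts
    return feats

def score(w, sentence):
    """f(S) = w . phi(S); f(S) > 0 means 'correct', f(S) < 0 'incorrect'."""
    return sum(w.get(f, 0.0) * v for f, v in phi(sentence).items())

w = {"the": 0.5, ("the", "cat"): 1.0}
print(score(w, "the cat saw the dog"))  # 2.0: two "the" + one ("the","cat")
```

Because φ(S) is unrestricted, any overlapping or non-local property of S (e.g. "contains no verb") could be added as another keyed feature without changing the scorer.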
We estimate w using training", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "samples (S_i, y_i) for i = 1 ... t, where y_i = 1 if S_i is correct and y_i =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "-1 if S_i is incorrect. However, it is hard to obtain incorrect sentences because only correct sentences are available from the corpus. This problem was not an issue for previous studies because they were concerned with specific applications and therefore were able to obtain real negative examples easily. For example, Roark (2007) proposed a discriminative language model in which a model is trained so that a correct sentence should have a higher score than others. The difference between their approach and ours is that we do not assume just one application. Moreover, they had", "cite_spans": [ { "start": 317, "end": 329, "text": "Roark (2007)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "For i=1,2,...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "Choose a word w_i at random according to the distribution P(w_i | w_{i-N+1} ... w_{i-1}). If w_i = \"end of a sentence\" Break End End Figure 1 : Sample procedure for pseudo-negative examples taken from N-gram language models.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 121, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "training sets consisting of one correct sentence and many incorrect sentences, which were very similar because they were generated by the same input. 
Our framework does not assume any such training sets, and we treat correct and incorrect examples independently in training.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Previous work", "sec_num": "2" }, { "text": "We propose a novel discriminative language model: a Discriminative Language Model with Pseudo-Negative samples (DLM-PN). In this model, pseudo-negative examples, which are all assumed to be incorrect, are sampled from PLMs. First a PLM is built using training data, and then examples, which are almost all negative, are sampled independently from it. DLMs are trained using correct sentences from a corpus and negative examples from a Pseudo-Negative generator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Language Model with Pseudo-Negative samples", "sec_num": "3" }, { "text": "An advantage of sampling is that as many negative examples can be collected as correct ones, and a distinction can be clearly made between truly correct sentences and incorrect sentences, even though the latter might be correct in a local sense.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Language Model with Pseudo-Negative samples", "sec_num": "3" }, { "text": "For sampling, any PLM can be used as long as the model supports a sentence sampling procedure. In this research we used NLMs with interpolated smoothing because such models support efficient sentence sampling. Figure 1 describes the sampling procedure and Figure 2 shows an example of a pseudo-negative sentence.", "cite_spans": [], "ref_spans": [ { "start": 211, "end": 219, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Discriminative Language Model with Pseudo-Negative samples", "sec_num": "3" }, { "text": "Since the focus is on discriminating between correct sentences from a corpus and incorrect sentences sampled from the NLM, DLM-PN may not be able to classify incorrect sentences that are not generated from the NLM. 
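The sampling procedure of Figure 1 can be sketched as follows; the bigram table, the `<s>`/`</s>` sentence markers, and all probabilities are toy assumptions for illustration, not a model trained on a real corpus:

```python
# Sketch of Figure 1: sample a pseudo-negative sentence word-by-word from an
# N-gram model (here a bigram model). The distributions are toy values.
import random

EOS = "</s>"   # assumed end-of-sentence marker
bigram = {     # P(next | prev); illustrative numbers only
    "<s>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "dog": 0.3, EOS: 0.2},
    "a":   {"cat": 0.4, "dog": 0.4, EOS: 0.2},
    "cat": {"the": 0.3, EOS: 0.7},
    "dog": {"a": 0.2, EOS: 0.8},
}

def sample_sentence(rng, max_len=20):
    words, prev = [], "<s>"
    for _ in range(max_len):
        cands, probs = zip(*bigram[prev].items())
        w = rng.choices(cands, weights=probs)[0]
        if w == EOS:           # "If w_i = end of a sentence: Break"
            break
        words.append(w)
        prev = w
    return " ".join(words)

print(sample_sentence(random.Random(0)))
```

Each sampled sentence is locally fluent under the N-gram model but globally incoherent with high probability, which is exactly the property the pseudo-negative set exploits.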
However, this does not result in a serious problem, because these sentences, if they exist, can be filtered out by NLMs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Language Model with Pseudo-Negative samples", "sec_num": "3" }, { "text": "Figure 2 (an example of a pseudo-negative sentence): We know of no program, and animated discussions about prospects for trade barriers or regulations on the rules of the game as a whole, and elements of decoration of this peanut-shaped to priorities tasks across both target countries", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Language Model with Pseudo-Negative samples", "sec_num": "3" }, { "text": "The DLM-PN can be trained by using any binary classification learning method. However, since the number of training examples is very large, batch training suffers from prohibitively large computational cost in terms of time and memory. Therefore we make use of an online learning algorithm proposed by (Crammer et al., 2006) , which has a much smaller computational cost. We follow the definition in (Crammer et al., 2006) .", "cite_spans": [ { "start": 308, "end": 330, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF3" }, { "start": 406, "end": 428, "text": "(Crammer et al., 2006)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "The weight vector w_1 is initialized to 0, and for each round t the algorithm observes a training example x_t = φ(S_t) and predicts its label ŷ_t to be either", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "+1 or -1. 
After the prediction is made, the true label y_t is revealed and the algorithm suffers an instantaneous hinge-loss l(w_t; (x_t, y_t)) = 1 - y_t (w_t · x_t), which reflects the degree to which its prediction was wrong. If the prediction was wrong, the parameter", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "w is updated as w_{t+1} = argmin_w (1/2) ||w - w_t||^2 + C ξ (3) subject to l(w; (x_t, y_t)) ≤ ξ and ξ ≥ 0 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "where ξ is a slack term and C is a positive parameter which controls the influence of the slack term on the objective function. A large value of C will result in a more aggressive update step. This has a closed form solution as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w_{t+1} = w_t + τ_t y_t x_t", "eq_num": "(5)" } ], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "τ_t = min{ C, l_t / ||x_t||^2 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "As in SVMs, a final weight vector can be represented as a kernel-dependent combination of the stored training examples.", "cite_spans": [], "ref_spans": 
[], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u00db \u00a1 \u00dc \u00dd \u00dc \u00a1 \u00dc", "eq_num": "(6)" } ], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "Using this formulation the inner product can be replaced with a general Mercer kernel \u00c3\u00b4\u00dc \u00dc\u00b5 such as a polynomial kernel or a Gaussian kernel. The combination of features, which can capture correlation information, is important in DLMs. If the kernel-trick (Taylor and Cristianini, 2004) is applied to online margin-based learning, a subset of the observed examples, called the active set, needs to be stored. However in contrast to the support set in SVMs, an example is added to the active set every time the online algorithm makes a prediction mistake or when its confidence in a prediction is inadequately low. Therefore the active set can increase in size significantly and thus the total computational cost becomes proportional to the square of the number of training examples. Since the number of training examples is very large, the computational cost is prohibitive even if we apply the kernel trick.", "cite_spans": [ { "start": 257, "end": 287, "text": "(Taylor and Cristianini, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "The calculation of the inner product between two examples can be done by intersection of the activated features in each example. This is similar to a merge sort and can be executed in \u00c7\u00b4\u00c5 \u00b5 time where \u00c5 is the average number of activated features in an example. 
When the number of examples in the active set is |A|, the total computational cost is O(M · |A|). For fast kernel computation, the Polynomial Kernel Inverted method (PKI) has been proposed (Kudo and Matsumoto, 2003) , which is an extension of the Inverted Index in Information Retrieval. This algorithm uses a table h(f) for each feature item f, which stores the examples in which the feature fires. Let B be the average size of h(f) over all feature items. Then the kernel computation can be performed in O(M · B) time, which is much less than the normal kernel computation time when B ≪ |A|. We can easily extend this algorithm to the online setting by updating h(f) when an observed example is added to the active set.", "cite_spans": [ { "start": 440, "end": 466, "text": "(Kudo and Matsumoto, 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Online margin-based learning with fast kernel computation", "sec_num": "4" }, { "text": "Another problem for DLMs is that the number of features becomes very large, because all possible N-grams are used as features. In particular, the memory requirement becomes a serious problem because quite a few active sets with many features have to be stored, not only at training time, but also at classification time. One way to deal with this is to filter out low-confidence features, but it is difficult to decide which features are important in online learning. For this reason we cluster similar N-grams using a semi-Markov class model. The class model was originally proposed by (Martin et al., 1998) . In the class model, deterministic word-to-class mappings are estimated, keeping the number of classes much smaller than the number of distinct words. A semi-Markov class model (SMCM) is an extended version of the class model, a part of which was proposed by (Deligne and BIM-BOT, 1995) . 
In SMCM, a word sequence is partitioned into a variable-length sequence of chunks, and the chunks are then clustered into classes (Figure 4) . How a chunk is clustered depends on which chunks are adjacent to it.", "cite_spans": [ { "start": 586, "end": 607, "text": "(Martin et al., 1998)", "ref_id": "BIBREF9" }, { "start": 868, "end": 895, "text": "(Deligne and BIM-BOT, 1995)", "ref_id": null } ], "ref_spans": [ { "start": 1023, "end": 1033, "text": "(Figure 4)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "The probability of a sentence, P(w_1 ... w_T), in a bi-gram class model is calculated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "P(w_1 ... w_T) = Π_i P(w_i | c_i) P(c_i | c_{i-1})", "eq_num": "(7)" } ], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "On the other hand, the probabilities in a bi-gram semi-Markov class model are calculated by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "Σ_s Π_i P(c_i | c_{i-1}) · P(w_{t(i)} ... w_{u(i)} | c_i) (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "where s varies over all possible partitions of S, t(i) and u(i) denote the start and end positions respectively of the i-th chunk in partition s, and t(i+1) = u(i) + 1 for all i. 
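The sum over partitions in (8) can be computed left-to-right with a small dynamic program. In the sketch below, the chunk inventory, class assignments, and all probability tables are toy assumptions (and emissions are keyed by chunk rather than estimated per class), purely to show the recursion:

```python
# Sketch: sum over all partitions of a sentence into known chunks, scoring
# each chunk sequence with class-transition and chunk-emission probabilities,
# as in a bigram semi-Markov class model. All tables are toy values.
chunk_class = {("the", "cat"): 0, ("the",): 1, ("cat",): 2}  # chunk -> class
emit = {("the", "cat"): 0.2, ("the",): 0.5, ("cat",): 0.4}   # P(chunk | class)
trans = {(None, 0): 0.3, (None, 1): 0.5, (1, 2): 0.6}        # P(c_i | c_{i-1})

def sentence_prob(words):
    # alpha[(pos, cls)] = prob. of covering words[:pos], last chunk in cls
    alpha = {(0, None): 1.0}
    for pos in range(len(words)):
        for end in range(pos + 1, len(words) + 1):
            chunk = tuple(words[pos:end])
            if chunk not in chunk_class:
                continue
            c = chunk_class[chunk]
            for (p, prev), pr in list(alpha.items()):
                if p != pos:
                    continue
                t = trans.get((prev, c), 0.0)
                key = (end, c)
                alpha[key] = alpha.get(key, 0.0) + pr * t * emit[chunk]
    return sum(pr for (p, c), pr in alpha.items()
               if p == len(words) and c is not None)

# Two partitions: ["the cat"] and ["the"]["cat"]
print(sentence_prob(["the", "cat"]))  # 0.12 = 0.3*0.2 + 0.5*0.5*0.6*0.4
```

The Viterbi decoding used later for choosing a single best partition is the same recursion with max in place of the sum.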
Note that each word or variable-length chunk belongs to only one class, in contrast to a hidden Markov model, where each word can belong to several classes. Using a training corpus, the mapping is estimated by maximum likelihood estimation. The log likelihood of the training corpus (w_1 ... w_n) in a bigram class model can be calculated as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "L = Σ_i log P(w_{i+1} | w_i) (9) = Σ_i log P(w_{i+1} | c_{i+1}) P(c_{i+1} | c_i) (10) = Σ_{c_1, c_2} N(c_1, c_2) log ( N(c_1, c_2) / ( N(c_1) N(c_2) ) ) (11) + Σ_w N(w) log N(w)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "where N(w), N(c) and N(c_1, c_2) are the frequencies of a word w, a class c and a class bi-gram c_1 c_2 in the training corpus. In (11) only the first term is used, since the second term does not depend on the class allocation. The class allocation problem is solved by an exchange algorithm as follows. First, all words are assigned to a randomly determined class. Next, for each word w, we move it to the class for which the log-likelihood is maximized. This procedure is continued until the log-likelihood converges to a local maximum. A naive implementation of the clustering algorithm scales quadratically with the number of classes, since each time a word is moved between classes, all class bi-gram counts are potentially affected. However, by considering only those counts that actually change, the algorithm can be made to scale somewhere between linearly and quadratically with the number of classes (Martin et al., 1998) . 
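A minimal, word-level version of this exchange step (a toy corpus, naive full recomputation of the objective rather than incremental count updates, boundary effects ignored, and no semi-Markov extension) might look like:

```python
# Toy exchange-clustering step for a class bigram model: move each word to
# the class maximizing sum_{c1,c2} N(c1,c2) log( N(c1,c2) / (N(c1) N(c2)) ),
# the first term of eq. (11). Real implementations update counts in place.
import math
from collections import Counter

def class_ll(tokens, cls):
    """Class-dependent part of the corpus log likelihood, eq. (11)."""
    bi = Counter((cls[a], cls[b]) for a, b in zip(tokens, tokens[1:]))
    uni = Counter(cls[t] for t in tokens)
    return (sum(n * math.log(n) for n in bi.values())
            - 2 * sum(n * math.log(n) for n in uni.values()))

def exchange_pass(tokens, cls, k):
    """One pass: move each word to its best of k classes."""
    for w in sorted(set(tokens)):
        best = max(range(k), key=lambda c: class_ll(tokens, {**cls, w: c}))
        cls = {**cls, w: best}
    return cls

tokens = "a b a b a b c d c d".split()
cls = exchange_pass(tokens, {w: 0 for w in set(tokens)}, 2)
print(sorted(cls.items()))
```

Because staying in the current class is always among the candidates, each move can only keep or increase the objective, which is why the procedure converges to a local maximum.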
In SMCM, partitions of each sentence are also determined. We used Viterbi decoding (Deligne and BIMBOT, 1995) for the partition. We applied the exchange algorithm and Viterbi decoding alternately until the log-likelihood converged to a local maximum.", "cite_spans": [ { "start": 890, "end": 911, "text": "(Martin et al., 1998)", "ref_id": "BIBREF9" }, { "start": 999, "end": 1025, "text": "(Deligne and BIMBOT, 1995)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "Since the number of chunks is very large (in our experiments we used about 3 million chunks), the computational cost is still large. We therefore employed the following two techniques: the first was to approximate the computation in the exchange algorithm; the second was to make use of bottom-up clustering to strengthen the convergence. In each step in the exchange algorithm, the approximate value of the change in the log-likelihood was examined, and the exchange was applied only if the approximate value was larger than a predefined threshold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "The second technique was to reduce memory requirements. Since the matrices used in the exchange algorithm could become very large, we clustered chunks into 2 classes and then again clustered each of these two into 2, thus obtaining 4 classes. This procedure was applied recursively until the number of classes reached a pre-defined number.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent features by semi-Markov class model", "sec_num": "5" }, { "text": "We partitioned a BNC-corpus into model-train, DLM-train-positive, and DLM-test-positive sets. 
The numbers of sentences in model-train, DLM-train-positive and DLM-test-positive were \u00bc\u00bck, \u00be \u00bck, and \u00bd\u00bck respectively. An NLM was built using model-train, and Pseudo-Negative examples (\u00be \u00bck sentences) were sampled from it. We mixed sentences from DLM-train-positive and the Pseudo-Negative examples and then shuffled the order of these sentences to make DLM-train. We also constructed DLM-test by mixing DLM-test-positive and \u00bd\u00bck new (not already used) sentences from the Pseudo-Negative examples. We call the sentences from DLM-train-positive \"positive\" examples and the sentences from the Pseudo-Negative examples \"negative\" examples in the following. From these sentences, the ones with less than words were excluded beforehand, because it was difficult to decide whether such sentences were correct or not (e.g. compound words). Let c be the number of classes in SMCMs. Two SMCMs, one with c = 100 and the other with c = \u00bc \u00bc , were constructed from model-train. Each SMCM contained 2 million extracted chunks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "6.1" }, { "text": "We examined the properties of Pseudo-Negative sentences, in order to justify our framework. A native English speaker and two non-native English speakers were asked to assign correct/incorrect labels to 100 sentences in DLM-train 1 . The result for the native English speaker was that all positive sentences were labeled as correct and all negative sentences except for one were labeled as incorrect. On the other hand, the results for the non-native English speakers were 67% and 70%. From this result, we can say that the sampling method was able to generate incorrect sentences, and that if a classifier can discriminate them, it can also discriminate between correct and incorrect sentences. 
Note that it took an average of 25 seconds for the native English speaker to assign a label, which suggests that it is difficult even for a human to determine the correctness of a sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on Pseudo-Examples", "sec_num": "6.2" }, { "text": "We then examined whether it was possible to discriminate between correct and incorrect sentences using parsing methods, since if so, we could have used parsing as a classification tool. We examined 100 sentences using a phrase structure parser (Charniak and Johnson, 2005) and an HPSG parser (Miyao and Tsujii, 2005) . (Footnote 1: Since the PLM also made use of the BNC-corpus for positive examples, we were not able to classify sentences based on word occurrences.) All sentences were parsed correctly except for one positive example. This result indicates that correct sentences and pseudo-negative examples cannot be differentiated syntactically.", "cite_spans": [ { "start": 244, "end": 272, "text": "(Charniak and Johnson, 2005)", "ref_id": "BIBREF0" }, { "start": 292, "end": 293, "text": "1", "ref_id": null }, { "start": 428, "end": 452, "text": "(Miyao and Tsujii, 2005)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments on Pseudo-Examples", "sec_num": "6.2" }, { "text": "We investigated the performance of classifiers and the effect of different sets of features.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments on DLM-PN", "sec_num": "6.3" }, { "text": "For N-grams and Part of Speech (POS), we used tri-gram features. For SMCM, we used bi-gram features. We used DLM-train as the training set. In all experiments, we set C = \u00bc \u00bc , where C is the parameter in the classification (Section 4). In all kernel experiments, a 3rd-order polynomial kernel was used and values were computed using PKI (the inverted indexing method). 
Table 1 shows the accuracy results with different feature sets, or, in the case of the SMCMs, with different numbers of classes. These results show that the kernel method is important for achieving high performance. Note that the classifier with SMCM features performs as well as the one with word features. Table 2 shows the number of features in each method. Note that a new feature is added only when the classifier needs to update its parameters, so these numbers are smaller than the total number of candidate features. Together with the previous result, this indicates that SMCM achieves high performance with very few features.
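The inverted-index speed-up (PKI) for kernel evaluation can be sketched as follows. This is a simplified kernel perceptron with a 3rd order polynomial kernel over binary features; the class name is hypothetical and the mistake-driven update is a stand-in for the paper's online margin-based (passive-aggressive style) update.

```python
from collections import defaultdict

class KernelPerceptronPKI:
    """Kernel perceptron with K(x, z) = (x.z + 1)^3 over binary features.
    An inverted index (feature -> support vectors containing it) means
    scoring only visits support vectors sharing at least one active
    feature with the input; non-overlapping vectors contribute
    K = (0 + 1)^3 = 1 each, which is folded into a running `total`."""

    def __init__(self, degree=3):
        self.degree = degree
        self.sv = []                    # (label, weight) per support vector
        self.index = defaultdict(list)  # feature -> ids of SVs containing it
        self.total = 0.0                # sum of weight * label over all SVs

    def score(self, feats):
        dots = defaultdict(int)
        for f in set(feats):
            for i in self.index[f]:
                dots[i] += 1            # binary features: dot = overlap size
        s = self.total                  # every SV contributes w * y * 1
        for i, d in dots.items():
            y, w = self.sv[i]
            s += w * y * ((d + 1) ** self.degree - 1)
        return s

    def update(self, feats, y):
        if y * self.score(feats) <= 0:  # on a mistake, add a new SV
            i = len(self.sv)
            self.sv.append((y, 1.0))
            self.total += y
            for f in set(feats):
                self.index[f].append(i)
```

With sparse sentence features, most support vectors share no feature with a given input, so the index skips them entirely; this is the effect measured in the timing comparison.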
Accuracy is the percentage of sentences in the evaluation set that are classified correctly.
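This measure, together with the precision/recall trade-off obtained by moving the margin threshold away from 0, can be computed as follows (small illustrative helpers, not from the paper; a threshold of 0 recovers plain classification).

```python
def accuracy(margins, labels):
    """Fraction of sentences whose margin sign matches the gold label."""
    return sum((m > 0) == (y > 0) for m, y in zip(margins, labels)) / len(labels)

def precision_recall(margins, labels, threshold=0.0):
    """Precision/recall for the positive class when a sentence is accepted
    only if its margin exceeds `threshold`."""
    tp = sum(m > threshold and y > 0 for m, y in zip(margins, labels))
    predicted = sum(m > threshold for m in margins)
    actual = sum(y > 0 for y in labels)
    return tp / max(predicted, 1), tp / max(actual, 1)
```

Raising the threshold trades recall for precision, as the margin distribution in Figure 5 suggests.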
Although the motivations of these two studies are different, the methods could be combined to achieve finer-grained sentence discrimination.
We would like to explore more refined online learning methods with kernels (Cheng et al., 2006; Dekel et al., 2005) that could be applied in these areas.
Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard Computer Science Group.
Jordan", "middle": [], "last": "Michael", "suffix": "" }, { "first": "M", "middle": [], "last": "David", "suffix": "" }, { "first": "Andrew", "middle": [ "Y" ], "last": "Blei", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2003, "venue": "Journal of Machine Learning Research", "volume": "3", "issue": "", "pages": "993--1022", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael I. Jordan David M. Blei, Andrew Y. Ng. 2003. Latent dirichlet allocation. Journal of Machine Learn- ing Research., 3:993-1022.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "The forgetron: A kernel-based perceptron on a fixed budget", "authors": [ { "first": "Ofer", "middle": [], "last": "Dekel", "suffix": "" }, { "first": "Shai", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2005, "venue": "Proc. of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ofer Dekel, Shai Shalev-Shwartz, and Yoram Singer. 2005. The forgetron: A kernel-based perceptron on a fixed budget. In Proc. of NIPS.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Language modeling by variable length sequences: Theoretical formulation and evaluation of multigrams", "authors": [ { "first": "Sabine", "middle": [], "last": "Deligne", "suffix": "" }, { "first": "Bimbot", "middle": [], "last": "Fr\u00e9d\u00e9ric", "suffix": "" } ], "year": 1995, "venue": "Proc. ICASSP '95", "volume": "", "issue": "", "pages": "169--172", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sabine Deligne and Fr\u00e9d\u00e9ric BIMBOT. 1995. Language modeling by variable length sequences: Theoretical formulation and evaluation of multigrams. In Proc. 
ICASSP '95, pages 169-172.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Minimum sample risk methods for language modeling", "authors": [ { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2005, "venue": "Proc. of HLT/EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jianfeng Gao, Hao Yu, Wei Yuan, and Peng Xu. 2005. Minimum sample risk methods for language modeling. In Proc. of HLT/EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Fast methods for kernel-based text analysis", "authors": [ { "first": "Taku", "middle": [], "last": "Kudo", "suffix": "" }, { "first": "Yuji", "middle": [], "last": "Matsumoto", "suffix": "" } ], "year": 2003, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taku Kudo and Yuji Matsumoto. 2003. Fast methods for kernel-based text analysis. In ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Algorithms for bigram and trigram word clustering", "authors": [ { "first": "Sven", "middle": [], "last": "Martin", "suffix": "" }, { "first": "J\u00f6rg", "middle": [], "last": "Liermann", "suffix": "" }, { "first": "Hermann", "middle": [], "last": "Ney", "suffix": "" } ], "year": 1998, "venue": "Speech Communicatoin", "volume": "24", "issue": "1", "pages": "19--37", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Martin, J\u00f6rg Liermann, and Hermann Ney. 1998. Algorithms for bigram and trigram word clustering. 
Speech Communication, 24(1):19-37.
Whole-sentence exponential language models: a vehicle for linguistic-statistical integration. Computer Speech and Language, 15(1).
Exploiting syntactic, semantic and lexical regularities in language modeling via directed Markov random fields. In Proc. of ICML.
# of distinct features
word tri-gram: 15,773,230
POS tri-gram: 35,376
SMCM (\u00bd\u00bc\u00bc): 9,335
SMCM (\u00bc \u00bc): 199,745
", "text": "shows the results of the classifier with \u00bfrd order polynomial kernel both with and without PKI. In this experiment, only \u00be\u00bc\u00bc\u00c3 sentences in DLM-train" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "content": "
training time (s) prediction time (ms)
Baseline 37665.5 370.6
+ Index 4664.9 47.8
", "text": "The number of features." }, "TABREF3": { "type_str": "table", "num": null, "html": null, "content": "
: Comparison between classification performance with/without index
[figure residue: margin histogram; x-axis: Margin (-3 to 3); y-axis: Number of sentences (0 to 200); series: positive, negative]
", "text": "" } } } }