Datasets:
doc_idx int64 1 1.04k | paper_id large_stringlengths 8 8 | doc_title large_stringlengths 0 313 | doc_abs large_stringlengths 0 38.6k | doc_text large_stringlengths 12 148k |
|---|---|---|---|---|
1 | D10-1083 | Simple Type-Level Unsupervised POS Tagging | Part-of-speech (POS) tag distributions are known to exhibit sparsity – a word is likely to take a single predominant tag in a corpus.
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
However, in existing systems, this improvement comes with a steep increase in model complexity.
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training.
Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.
On several languages, we report performance exceeding that of more complex state-of-the-art systems.1 | Title: Simple Type-Level Unsupervised POS Tagging
ABSTRACT
Part-of-speech (POS) tag distributions are known to exhibit sparsity – a word is likely to take a single predominant tag in a corpus.
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
However, in existing systems, this improvement comes with a steep increase in model complexity.
This paper proposes a simple and effective tagging method that directly models tag sparsity and other distributional properties of valid POS tag assignments.
In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training.
Our experiments consistently demonstrate that this model architecture yields substantial performance gains over more complex tagging counterparts.
On several languages, we report performance exceeding that of more complex state-of-the-art systems.1
SECTION 1: Introduction
Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits "one tag per discourse" sparsity – words are likely to select a single predominant tag in a corpus, even when several tags are possible.
Simply assigning to each word its most frequent associated tag in a corpus achieves 94.6% accuracy on the WSJ portion of the Penn Treebank.
This distributional sparsity of syntactic tags is not unique to English – similar results have been observed across multiple languages.1
1 The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
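The most-frequent-tag baseline mentioned above is straightforward to compute. The sketch below illustrates the idea on an invented toy corpus; the function name is ours, and a real evaluation would of course use a gold-tagged treebank.

```python
from collections import Counter, defaultdict

def majority_tag_baseline(tagged_corpus):
    """Accuracy of assigning every word its most frequent tag in the corpus.

    tagged_corpus: list of (word, gold_tag) token pairs.
    """
    tag_counts = defaultdict(Counter)
    for word, tag in tagged_corpus:
        tag_counts[word][tag] += 1
    majority = {w: c.most_common(1)[0][0] for w, c in tag_counts.items()}
    correct = sum(1 for w, t in tagged_corpus if majority[w] == t)
    return correct / len(tagged_corpus)

corpus = [("the", "DT"), ("dog", "NN"), ("runs", "VBZ"),
          ("the", "DT"), ("run", "NN"), ("run", "NN"), ("run", "VB")]
print(majority_tag_baseline(corpus))
```

Here "run" appears under two tags, so one of its three tokens is necessarily mistagged; on real corpora this one-tag-per-word ceiling nevertheless sits above 94% (Table 1).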
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Mérialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Graça et al., 2009; Berg-Kirkpatrick et al., 2010).
These sequence-model-based approaches commonly treat token-level tag assignment as the primary latent variable.
By design, they readily capture regularities at the token-level.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
Previous work has attempted to incorporate such constraints into token-level models via heavy-handed modifications to the inference procedure and objective function (e.g., posterior regularization and ILP decoding) (Graça et al., 2009; Ravi and Knight, 2009).
In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.
In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
The model starts by generating a tag assignment for each word type in a vocabulary, assuming one tag per word.
Then, token- level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag.
In this way we restrict the parameterization of the token-level HMM to reflect lexicon sparsity.

| Language | English | Danish | Dutch | German | Spanish | Swedish | Portuguese |
|---|---|---|---|---|---|---|---|
| Original case | 94.6 | 96.3 | 96.6 | 95.5 | 95.4 | 93.3 | 95.6 |

Table 1: Upper bound on tagging accuracy assuming each word type is assigned its majority POS tag. Across all languages, high performance can be attained by selecting a single tag per word type.
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
There are two key benefits of this model architecture.
First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type- level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns).
Second, the reduced number of hidden variables and parameters dramatically speeds up learning and inference.
We evaluate our model on seven languages exhibiting substantial syntactic variation.
On several languages, we report performance exceeding that of state-of-the art systems.
Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward naïve Bayes approach to incorporating features.
The observed performance gains, coupled with the simplicity of the model's implementation, make it a compelling alternative to existing, more complex counterparts.
SECTION 2: Related Work.
Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009).
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.
The extent to which this constraint is enforced varies greatly across existing methods.
On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schütze, 1995; Lamar et al., 2010).
These clusters are computed using an SVD variant without relying on transitional structure.
While our method also enforces a single-tag-per-word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context.
Other approaches encode sparsity as a soft constraint.
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
This design does not guarantee "structural zeros," but biases towards sparsity.
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009).
This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is computationally prohibitive.
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.
In contrast to these approaches, our method directly incorporates these constraints into the structure of the model.
This design leads to a significant reduction in the computational complexity of training and inference.
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training.
In our work, we demonstrate that a simple naïve Bayes approach also yields substantial performance gains, without the associated training complexity.
SECTION 3: Generative Story.
We consider the unsupervised POS induction problem without the use of a tagging dictionary.
A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1.
As is standard, we use a fixed constant K for the number of tagging states.
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
Conditioned on T , features of word types W are drawn.
We refer to (T, W) as the lexicon of a language and ψ as the parameters for their generation; ψ depends on a single hyperparameter β.
Once the lexicon has been drawn, the model proceeds similarly to the standard token-level HMM: emission parameters θ are generated conditioned on the tag assignments T, and we also draw transition parameters φ.
Both parameters depend on a single hyperparameter α.
Once the HMM parameters (θ, φ) are drawn, a token-level tag and word sequence (t, w) is generated in the standard HMM fashion: a tag sequence t is generated from φ, and the corresponding token words w are drawn conditioned on t and θ.2 Our full generative model is given by:

P(T, W, ψ, θ, φ, t, w | α, β) = P(T, W, ψ | β) · P(φ, θ | T, α, β) · P(w, t | φ, θ)
                                  [Lexicon]          [Parameter]         [Token]

We refer to the components on the right-hand side as the lexicon, parameter, and token components respectively. Since the parameter and token components remain fixed throughout the experiments, we briefly describe each here.

Parameter Component As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions:

P(φ, θ | T, α, β) = ∏_{t=1}^{K} P(φ_t | α) P(θ_t | T, α)

The transition distribution φ_t for each tag t is drawn according to DIRICHLET(α, K), where α is the shared transition and emission distribution hyperparameter. In total there are O(K^2) transition parameters. In contrast to the Bayesian HMM, θ_t is not drawn from a distribution which has support for each of the n word types. Instead, we condition on the type-level tag assignments T. Specifically, let S_t = {i | T_i = t} denote the indices of the word types which have been assigned tag t according to T. Then θ_t is drawn from DIRICHLET(α, S_t), a symmetric Dirichlet which only places mass on the word types indicated by S_t. This ensures that each word will only be assigned a single tag at inference time (see Section 4). Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters.3

Token Component Once the HMM parameters (φ, θ) have been drawn, the HMM generates a token-level corpus w in the standard way:

P(w, t | φ, θ) = ∏_j P(t_j | φ_{t_{j-1}}) P(w_j | t_j, θ_{t_j})

Note that in our model, conditioned on T, there is precisely one tag sequence t which has nonzero probability for the token component, since for each word exactly one θ_t has support.

2 Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.
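The generative story in this section can be sketched end to end. The following is a minimal illustration, not the paper's implementation: it uses a uniform 1TW lexicon, samples the restricted Dirichlet parameters via normalized Gamma draws, and checks that token-level tags always agree with the type-level assignment. The toy vocabulary and all function names are invented.

```python
import random
random.seed(0)

def dirichlet(alphas):
    """Sample from a Dirichlet distribution via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

K = 3
vocab = ["the", "dog", "cat", "runs", "sleeps", "a"]
n = len(vocab)
alpha = 0.1

# 1) Type level: one tag per word type (uniform 1TW lexicon here).
T = [random.randrange(K) for _ in range(n)]

# 2) Parameters: phi_t ~ Dir(alpha, K); theta_t ~ Dir(alpha, S_t), supported
#    only on the word types assigned tag t.
phi = [dirichlet([alpha] * K) for _ in range(K)]
theta = []
for t in range(K):
    S_t = [i for i in range(n) if T[i] == t]
    row = [0.0] * n
    if S_t:
        for i, p in zip(S_t, dirichlet([alpha] * len(S_t))):
            row[i] = p
    theta.append(row)

# 3) Token level: generate a tag/word sequence from the restricted HMM.
def generate(m):
    tags, words = [], []
    t = random.randrange(K)
    for _ in range(m):
        if sum(theta[t]) == 0:          # tag with no word types assigned
            t = random.randrange(K)
            continue
        w = random.choices(range(n), weights=theta[t])[0]
        tags.append(t)
        words.append(vocab[w])
        t = random.choices(range(K), weights=phi[t])[0]
    return tags, words

tags, words = generate(10)
# Each emitted word's token-level tag equals its type-level assignment.
assert all(T[vocab.index(w)] == t for t, w in zip(tags, words))
```

The final assertion is the point of the sketch: because each θ_t places mass only on the word types in S_t, the one-tag-per-word property holds by construction.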
3.1 Lexicon Component.
We present several variations of the lexicon component P(T, W | ψ), each adding a more complex parameterization.
Uniform Tag Prior (1TW) Our initial lexicon component is uniform over possible tag assignments as well as word types. Its only purpose is to explore how well we can induce POS tags using only the one-tag-per-word constraint. Specifically, the lexicon is generated as:

P(T, W | ψ) = P(T) P(W | T) = ∏_{i=1}^{n} P(T_i) P(W_i | T_i)

with each factor uniform. This model is equivalent to the standard HMM except that it enforces the one-tag-per-word constraint.

Learned Tag Prior (PRIOR) We next assume there exists a single prior distribution ψ over tag assignments, drawn from DIRICHLET(β, K). This alters the generation of T as follows:

P(T | ψ) = ∏_{i=1}^{n} P(T_i | ψ)

Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. The P(T | ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners make up a very small portion of the vocabulary. In contrast, proper nouns (NNP) form a large portion of the vocabulary. Note that these observations are not modeled by the standard HMM, which instead models token-level frequency.

Word Type Features (FEATS) Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010). Past work, however, has typically associated these features with token occurrences in an HMM. In our model, we associate these features with word types in the lexicon. Here, we consider suffix, capitalization, punctuation, and digit features. While it is possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al. (2010), we adopt a simpler naïve Bayes strategy, where all features are emitted independently. Specifically, we assume each word type W consists of feature-value pairs (f, v). For each feature type f and tag t, a multinomial ψ_tf is drawn from a symmetric Dirichlet distribution with concentration parameter β. The P(W | T, ψ) term in the lexicon component now decomposes as:

P(W | T, ψ) = ∏_{i=1}^{n} P(W_i | T_i, ψ) = ∏_{i=1}^{n} ∏_{(f,v)∈W_i} P(v | ψ_{T_i f})

[Figure 1: Graphical depiction of our model and summary of latent variables and parameters. Variables: W: word types (W_1, ..., W_n) (observed); T: tag assignments (T_1, ..., T_n); w: token word sequences (observed); t: token tag assignments (determined by T). Parameters: ψ: lexicon parameters; θ: token word emission parameters; φ: token tag transition parameters.]

3 This follows since each θ_t has |S_t| - 1 parameters and Σ_t |S_t| = n.
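The naïve Bayes feature emission can be illustrated with a small sketch. The feature inventory below (suffixes up to length 3, capitalization, digit, punctuation) is a plausible guess at the FEATS features rather than the paper's exact set, and the class and function names are invented; add-β smoothing stands in for the collapsed symmetric Dirichlet.

```python
import math
from collections import defaultdict

def feats(word):
    """Feature-value pairs for a word type: suffix, capitalization,
    digit, and punctuation features (illustrative inventory)."""
    return [("suffix1", word[-1:]), ("suffix2", word[-2:]),
            ("suffix3", word[-3:]),
            ("cap", word[:1].isupper()),
            ("digit", any(c.isdigit() for c in word)),
            ("punct", any(not c.isalnum() for c in word))]

class NaiveBayesLexicon:
    """P(W | T) with independently emitted features and symmetric
    Dirichlet (add-beta) smoothing."""
    def __init__(self, beta=0.1):
        self.beta = beta
        self.fv_counts = defaultdict(int)   # (tag, feature, value) counts
        self.f_counts = defaultdict(int)    # (tag, feature) totals
        self.values = defaultdict(set)      # feature -> observed values

    def observe(self, word, tag):
        for f, v in feats(word):
            self.fv_counts[(tag, f, v)] += 1
            self.f_counts[(tag, f)] += 1
            self.values[f].add(v)

    def log_prob(self, word, tag):
        lp = 0.0
        for f, v in feats(word):
            num = self.fv_counts[(tag, f, v)] + self.beta
            den = self.f_counts[(tag, f)] + self.beta * max(len(self.values[f]), 1)
            lp += math.log(num / den)
        return lp

lex = NaiveBayesLexicon()
for w in ["running", "walking", "jumping"]:
    lex.observe(w, "VBG")
for w in ["dog", "cat"]:
    lex.observe(w, "NN")
# The shared -ing suffix pulls an unseen word type towards VBG.
assert lex.log_prob("singing", "VBG") > lex.log_prob("singing", "NN")
```

Because every feature is emitted independently given the tag, adding a feature costs only one more multinomial per tag, which is what keeps this component simple relative to a log-linear parameterization.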
SECTION 4: Learning and Inference.
For inference, we are interested in the posterior probability over the latent variables in our model.
During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P (T ,t|W , w, α, β) â P (T , t, W , w|α, β) 0.7 0.6 0.5 0.4 0.3 English Danish Dutch Germany Portuguese Spanish Swedish = P (T , t, W , w, Ï, θ, Ï, w|α, β)dÏdθdÏ Note that given tag assignments T , there is only one setting of token-level tags t which has mass in the above posterior.
Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i).
The equation for sampling a single type-level assignment Ti is given by, 0.2 0 5 10 15 20 25 30 Iteration Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting by iteration (see Section 5).
Performance typically stabilizes across languages after only a few number of iterations.
to represent the ith word type emitted by the HMM: P (t(i)|Ti, t(âi), w, α) â n P (w|Ti, t(âi), w(âi), α) (tb ,ta ) P (Ti, t(i)|T , W , t(âi), w, α, β) = P (T |tb, t(âi), α)P (ta|T , t(âi), α) âi (i) i i (âi) P (Ti|W , T âi, β)P (t |Ti, t , w, α) All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(âi)where T âi denotes all type-level tag assignment ex cept Ti and t(âi) denotes all token-level tags except and w (âi) (Johnson, 2007).
t(i).
The terms on the right-hand-side denote the type-level and token-level probability terms respectively.
The type-level posterior term can be computed according to, P (Ti|W , T âi, β) â Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
In contrast to results reported in Johnson (2007), we found that the per P (Ti|T âi, β) n (f,v)âWi P (v|Ti, f, W âi, T âi, β) formance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full it All of the probabilities on the right-hand-side are Dirichlet, distributions which can be computed analytically given counts.
The token-level term is similar to the standard HMM sampling equations found in Johnson (2007).
The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(tb, ta)} and they are contained in t(âi).
We use w erations of sampling (see Figure 2 for a depiction).
SECTION 5: Experiments.
We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
On each language we investigate the contribution of each component of our model.
For all languages we do not make use of a tagging dictionary.
| Model | Setting | English (1-1 / m-1) | Danish | Dutch | German | Portuguese | Spanish | Swedish |
|---|---|---|---|---|---|---|---|---|
| 1TW | best | 45.2 / 62.6 | 37.2 / 56.2 | 47.4 / 53.7 | 44.2 / 62.2 | 49.0 / 68.4 | 34.3 / 54.4 | 36.0 / 55.3 |
| 1TW | median | 45.1 / 61.7 | 32.1 / 53.8 | 43.9 / 61.0 | 39.3 / 68.4 | 48.5 / 68.1 | 33. / 54.3 | 34.9 / 50.2 |
| +PRIOR | best | 47.9 / 65.5 | 42.3 / 58.3 | 51.4 / 65.9 | 50. / 62.2 | 56.2 / 70.7 | 42. / 54.8 | 38. / 58.0 |
| +PRIOR | median | 46.5 / 64.7 | 40.0 / 57.3 | 48.3 / 60.7 | 41.7 / 68.3 | 52.0 / 70.9 | 37.1 / 55.8 | 36.8 / 57.3 |
| +FEATS | best | 50.9 / 66.4 | 52.1 / 61.2 | 56.4 / 69.0 | 55.4 / 70.4 | 64.1 / 74.5 | 58.3 / 68.9 | 43.3 / 61.7 |
| +FEATS | median | 47.8 / 66.4 | 43.2 / 60.7 | 51.5 / 67.3 | 46.2 / 61.7 | 56.5 / 70.1 | 50.0 / 57.2 | 38.5 / 60.6 |
5 60.6 Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5).
For each language and setting, we report one-to-one (11) and many- to-one (m-1) accuracies.
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 11 metric.
The second row represents the performance of the median hyperparameter setting.
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
| Language | # Tokens | # Word Types | # Tags |
|---|---|---|---|
| English | 1,173,766 | 49,206 | 45 |
| Danish | 94,386 | 18,356 | 25 |
| Dutch | 203,568 | 28,393 | 12 |
| German | 699,605 | 72,325 | 54 |
| Portuguese | 206,678 | 28,931 | 22 |
| Spanish | 89,334 | 16,458 | 47 |
| Swedish | 191,467 | 20,057 | 41 |

Table 2: Statistics for the corpora utilized in the experiments.
See Section 5.
The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task.
5.1 Data Sets.
Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English.
For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation).
We train and test on the CoNLL-X training set.
Statistics for all data sets are shown in Table 2.
5.2 Setup.
Models To assess the marginal utility of each component of the model (see Section 3), we incrementally increase its sophistication. Specifically, we evaluate three variants: The first model (1TW) only encodes the one-tag-per-word constraint and is uniform over type-level tag assignments. The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P(T | ψ). The final model (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W | T, ψ) component.

Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions, and β is the shared hyperparameter for the tag assignment prior and word feature multinomials. We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations:

α: 0.001, 0.01, 0.1, 1.0
β: 0.01, 0.1, 1.0, 10

Iterations In each run, we performed 30 iterations of Gibbs sampling for the type-level assignment variables.4 We use the final sample for evaluation.

Evaluation Metrics We report three metrics to evaluate tagging performance. As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags. We also report word-type-level accuracy, the fraction of word types assigned their majority tag (where the mapping between model state and tag is determined by the greedy one-to-one mapping discussed above).5

For each language, we aggregate results in the following way: First, for each hyperparameter setting, we perform five runs with different random initializations of the sampling state. Hyperparameter settings are sorted according to the median one-to-one metric over runs. We report results for the best and median hyperparameter settings obtained in this way. Specifically, for both settings we report results on the median run for each setting.

4 Typically, the performance stabilizes after only 10 iterations.
5 We choose these two metrics over the Variation of Information measure due to the deficiencies discussed in Gao and Johnson (2008).
Tag set As is standard, for all experiments, we set the number of latent model tag states to the size of the annotated tag set.
The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags.
We tokenize MWUs and their POS tags; this reduces the tag set size to 12.
See Table 2 for the tag set size of other languages.
With the exception of the Dutch data set, no other processing is performed on the annotated tags.
SECTION 6: Results and Analysis.
We report token- and type-level accuracy in Tables 3 and 6 for all languages and system settings.
Our analysis and comparison focus primarily on one-to-one accuracy, since it is a stricter metric than many-to-one accuracy; we also report many-to-one for completeness.
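The two token-level metrics can be sketched as follows. The mapping logic follows the standard greedy constructions (each model state mapped to its most overlapping gold tag, with the one-to-one variant using each state and tag at most once); the toy prediction and gold sequences are invented.

```python
from collections import Counter

def many_to_one(pred, gold):
    """Map each model state to its most frequent gold tag, then score."""
    best = {}
    for s in set(pred):
        best[s] = Counter(g for p, g in zip(pred, gold)
                          if p == s).most_common(1)[0][0]
    return sum(best[p] == g for p, g in zip(pred, gold)) / len(gold)

def one_to_one(pred, gold):
    """Greedy one-to-one mapping: repeatedly pick the state/tag pair with
    the highest overlap, using each state and each tag at most once."""
    overlap = Counter(zip(pred, gold))
    used_s, used_g, mapping = set(), set(), {}
    for (s, g), _ in overlap.most_common():
        if s not in used_s and g not in used_g:
            mapping[s] = g
            used_s.add(s)
            used_g.add(g)
    return sum(mapping.get(p) == g for p, g in zip(pred, gold)) / len(gold)

pred = [0, 0, 1, 1, 1, 2, 2]
gold = ["DT", "DT", "NN", "NN", "VB", "NN", "NN"]
print(many_to_one(pred, gold), one_to_one(pred, gold))
```

On this toy example the many-to-one score exceeds the one-to-one score, because states 1 and 2 both want to map to NN but one-to-one lets only one of them have it; this is why one-to-one is the stricter metric.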
Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al. (2010) and the posterior regularization HMM of Graça et al. (2009).
The system of Berg-Kirkpatrick et al. (2010) reports the best unsupervised results for English.
We consider two variants of Berg-Kirkpatrick et al. (2010)'s richest model, optimized via either EM or LBFGS, as their relative performance depends on the language.
Our model outperforms theirs on four out of five languages on the best hyperparameter setting and three out of five on the median setting, yielding an average absolute difference across languages of 12.9% and 3.9% for best and median settings respectively compared to their best EM or LBFGS performance.
While Berg-Kirkpatrick et al. (2010) consistently outperform ours on English, we obtain substantial gains across the other languages.
For instance, on Spanish, the absolute gap on median performance is 10%.
| Model | Top 5 | Bottom 5 |
|---|---|---|
| Gold | NNP NN JJ CD NNS | RBS PDT # '' , |
| 1TW | CD WRB NNS VBN NN | PRP$ WDT : MD . |
| +PRIOR | CD JJ NNS WP$ NN | -RRB- , $ '' . |
| +FEATS | JJ NNS CD NNP UH | , PRP$ # . '' |

Table 5: Type-level English POS tag ranking: we list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting.
Our second point of comparison is with Graça et al. (2009), who also incorporate a sparsity constraint, but do so by altering the model objective using posterior regularization.
We can only compare with Graça et al. (2009) on Portuguese (they also report results on English, but on the reduced 17-tag set, which is not comparable to ours).
Their best model yields 44.5% one-to-one accuracy, compared to our best median result of 56.5%.
However, our full model takes advantage of word features not present in Graça et al. (2009).
Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al. (2009).
Ablation Analysis We evaluate the impact of incorporating various linguistic features into our model in Table 3.
A novel element of our model is the ability to capture type-level tag frequencies.
For this experiment, we compare our model with the uniform tag assignment prior (1TW) with the learned prior (+PRIOR).
Across all languages, +PRIOR consistently outperforms 1TW, reducing error on average by 9.1% and 5.9% on best and median settings respectively.
Similar behavior is observed when adding features.
The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively.
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
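The error reductions quoted here follow the usual relative formula, (old error - new error) / old error, computed on one-to-one accuracy. As a check, plugging in the Spanish best-setting numbers from Table 3 (34.3 for 1TW and 58.3 for +FEATS) recovers the 36.5% figure cited below; the function name is ours.

```python
def error_reduction(acc_old, acc_new):
    """Relative error reduction between two accuracies, in percent."""
    e_old, e_new = 100 - acc_old, 100 - acc_new
    return 100 * (e_old - e_new) / e_old

# Spanish, best setting: 1TW = 34.3, +FEATS = 58.3 one-to-one accuracy.
print(round(error_reduction(34.3, 58.3), 1))  # 36.5
```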
One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively.
We observe similar trends when using another measure â type-level accuracy (defined as the fraction of words correctly assigned their majority tag), according to which La ng ua ge M etr ic B K 10 E M B K 10 L B F G S G 10 F EA T S B es t F EA T S M ed ia n E ng lis h 1 1 m 1 4 8 . 3 6 8 . 1 5 6 . 0 7 5 . 5 â â 5 0 . 9 6 6 . 4 4 7 . 8 6 6 . 4 D an is h 1 1 m 1 4 2 . 3 6 6 . 7 4 2 . 6 5 8 . 0 â â 5 2 . 1 6 1 . 2 4 3 . 2 6 0 . 7 D ut ch 1 1 m 1 5 3 . 7 6 7 . 0 5 5 . 1 6 4 . 7 â â 5 6 . 4 6 9 . 0 5 1 . 5 6 7 . 3 Po rtu gu es e 1 1 m 1 5 0 . 8 7 5 . 3 4 3 . 2 7 4 . 8 44 .5 69 .2 6 4 . 1 7 4 . 5 5 6 . 5 7 0 . 1 S pa ni sh 1 1 m 1 â â 4 0 . 6 7 3 . 2 â â 5 8 . 3 6 8 . 9 5 0 . 0 5 7 . 2 Table 4: Comparison of our method (FEATS) to state-of-the-art methods.
Feature-based HMM Model (Berg- Kirkpatrick et al., 2010): The KM model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm; Posterior regulariation model (Grac¸a et al., 2009): The G10 model uses the posterior regular- ization approach to ensure tag sparsity constraint.
| Language | 1TW | +PRIOR | +FEATS |
|---|---|---|---|
| English | 21.1 | 28.8 | 42.8 |
| Danish | 10.1 | 20.7 | 45.9 |
| Dutch | 23.8 | 32.3 | 44.3 |
| German | 12.8 | 35.2 | 60.6 |
| Portuguese | 18.4 | 29.6 | 61.5 |
| Spanish | 7.3 | 27.6 | 49.9 |
| Swedish | 8.9 | 14.2 | 33.9 |

Table 6: Type-level results: each cell reports the type-level accuracy computed against the most frequent tag of each word type. The state-to-tag mapping is obtained from the best hyperparameter setting for the 1-1 mapping shown in Table 3.
According to this metric, our full model yields a 39.3% average error reduction across languages compared to the basic configuration (1TW).
Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.
The table shows that the lexicon tag frequencies predicted by our full model are the closest to the gold standard.
SECTION 7: Conclusion and Future Work.
We have presented a method for unsupervised part-of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model.
This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
The resulting model is compact, efficiently learnable and linguistically expressive.
Our empirical results demonstrate that the type-based tagger rivals state-of-the-art token-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
In this paper, we make a simplifying assumption of one-tag-per-word.
This assumption, however, is not inherent to type-based tagging models.
A promising direction for future work is to explicitly model a distribution over tags for each word type.
We hypothesize that modeling morphological information will greatly constrain the set of possible tags, thereby further refining the representation of the tag lexicon.
SECTION: Acknowledgments
The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684).
We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.
We thank members of the MIT NLP group for their suggestions and comments.
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
|
2 | W09-0621 | For developing a data-driven text rewriting algorithm for paraphrasing, it is essential to have a monolingual corpus of aligned paraphrased sentences.
News article headlines are a rich source of paraphrases; they tend to describe the same event in various different ways, and can easily be obtained from the web.
We compare two methods of aligning headlines to construct such an aligned corpus of paraphrases, one based on clustering, and the other on pairwise similarity-based matching.
We show that the latter performs best on the task of aligning paraphrastic headlines. |
ABSTRACT
For developing a data-driven text rewriting algorithm for paraphrasing, it is essential to have a monolingual corpus of aligned paraphrased sentences.
News article headlines are a rich source of paraphrases; they tend to describe the same event in various different ways, and can easily be obtained from the web.
We compare two methods of aligning headlines to construct such an aligned corpus of paraphrases, one based on clustering, and the other on pairwise similarity-based matching.
We show that the latter performs best on the task of aligning paraphrastic headlines.
SECTION 1: Introduction
In recent years, text-to-text generation has received increasing attention in the field of Natural Language Generation (NLG).
In contrast to traditional concept-to-text systems, text-to-text generation systems convert source text to target text, where typically the source and target text share the same meaning to some extent.
Applications of text-to-text generation include sum- marization (Knight and Marcu, 2002), question- answering (Lin and Pantel, 2001), and machine translation.
For text-to-text generation it is important to know which words and phrases are semantically close or exchangable in which contexts.
While there are various resources available that capture such knowledge at the word level (e.g., synset knowledge in WordNet), this kind of information is much harder to get by at the phrase level.
Therefore, paraphrase acquisition can be considered an important technology for producing resources for text-to-text generation.
Paraphrase generation has already proven to be valuable for Question Answering (Lin and Pantel, 2001; Riezler et al., 2007), Machine Translation (Callison-Burch et al., 2006) and the evaluation thereof (Russo-Lassner et al., 2006; Kauchak and Barzilay, 2006; Zhou et al., 2006), but also for text simplification and explanation.
In the study described in this paper, we make an effort to collect Dutch paraphrases from news article headlines in an unsupervised way to be used in future paraphrase generation.
News article headlines are abundant on the web, and are already grouped by news aggregators such as Google News.
These services collect multiple articles covering the same event.
Crawling such news aggregators is an effective way of collecting related articles which can straightforwardly be used for the acquisition of paraphrases (Dolan et al., 2004; Nelken and Shieber, 2006).
We use this method to collect a large amount of aligned paraphrases in an automatic fashion.
SECTION 2: Method.
We aim to build a high-quality paraphrase corpus.
Considering the fact that this corpus will be the basic resource of a paraphrase generation system, we need it to be as free of errors as possible, because errors will propagate throughout the system.
This implies that we focus on obtaining a high precision in the paraphrase collection process.
Where previous work has focused on aligning news-items at the paragraph and sentence level (Barzilay and Elhadad, 2003), we choose to focus on aligning the headlines of news articles.
We think this approach will enable us to harvest reliable training material for paraphrase generation quickly and efficiently, without having to worry too much about the problems that arise when trying to align complete news articles.
For the development of our system we use data which was obtained in the DAESO-project.
This project is an ongoing effort to build a Parallel Monolingual Treebank for Dutch (Marsi and Krahmer, 2007) and will be made available through the Dutch HLT Agency.
Proceedings of the 12th European Workshop on Natural Language Generation, pages 122–125, Athens, Greece, 30–31 March 2009. © 2009 Association for Computational Linguistics.
Part of the data in the DAESO-corpus consists of headline clusters crawled from Google News Netherlands in the period April–August 2006.
For each news article, the headline and the first 150 characters of the article were stored.
Roughly 13,000 clusters were retrieved.
Table 1: Part of a sample headline cluster, with sub-clusters
Table 1 shows part of a (translated) cluster.
It is clear that although clusters deal roughly with one subject, the headlines can represent quite a different perspective on the content of the article.
To obtain only paraphrase pairs, the clusters need to be more coherent.
To that end, 865 clusters were manually subdivided into sub-clusters of headlines that show clear semantic overlap.
Sub-clustering is no trivial task, however.
Some sentences are very clearly paraphrases, but consider for instance the last two sentences in the example.
They do paraphrase each other to some extent, but their relation can only be understood properly with world knowledge.
Also, there are numerous headlines that cannot be sub-clustered, such as the first three headlines shown in the example.
We use these annotated clusters as development and test data in developing a method to automatically obtain paraphrase pairs from headline clusters.
We divide the annotated headline clusters into a development set of 40 clusters, while the remainder is used as test data.
The headlines are stemmed using the Porter stemmer for Dutch (Kraaij and Pohlmann, 1994).
Instead of a word overlap measure as used by Barzilay and Elhadad (2003), we use a modified TF·IDF word score as was suggested by Nelken and Shieber (2006).
Each sentence is viewed as a document, and each original cluster as a collection of documents.
For each stemmed word i in sentence j, TF_i,j is a binary variable indicating whether the word occurs in the sentence or not.
The TF·IDF score is then: TF.IDF_i = TF_i,j · log(|D| / |{d_j : t_i ∈ d_j}|), where |D| is the total number of sentences in the cluster and |{d_j : t_i ∈ d_j}| is the number of sentences that contain the term t_i.
These scores are used in a vector space representation.
The similarity between headlines can be calculated by using a similarity function on the headline vectors, such as cosine similarity.
2.1 Clustering.
Our first approach is to use a clustering algorithm to cluster similar headlines.
The original Google News headline clusters are reclustered into finer grained sub-clusters.
We use the k-means implementation in the CLUTO software package (http://glaros.dtc.umn.edu/gkhome/views/cluto/).
The k-means algorithm assigns k centers to represent the clustering of n points (k < n) in a vector space.
The total intra-cluster variance is minimized by the function V = Σ_{i=1..k} Σ_{x_j ∈ S_i} (x_j − μ_i)², where μ_i is the centroid of all the points x_j ∈ S_i.
The PK1 cluster-stopping algorithm as proposed by Pedersen and Kulkarni (2006) is used to find the optimal k for each sub-cluster: PK1(k) = (Cr(k) − mean(Cr[1...∆K])) / std(Cr[1...∆K]).
Here, Cr is a criterion function, which measures the ratio of within-cluster similarity to between-cluster similarity.
As soon as PK1(k) exceeds a threshold, k − 1 is selected as the optimum number of clusters.
To find the optimal threshold value for cluster-stopping, optimization is performed on the development data.
Our optimization function is an F-score: F_β = ((1 + β²) · precision · recall) / (β² · precision + recall).
We evaluate the number of alignments between possible paraphrases.
For instance, in a cluster of four sentences, C(4,2) = 6 alignments can be made.
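As a rough illustration of this weighting, the binary TF·IDF vectors and cosine comparison can be sketched in Python. This is a minimal sketch with invented headlines; stemming is omitted and the helper names are ours, not the DAESO implementation:

```python
import math
from collections import defaultdict

def tfidf_vectors(cluster):
    """Binary TF times IDF over a cluster, treating each headline as a document."""
    docs = [set(h.lower().split()) for h in cluster]  # stemming omitted for brevity
    n = len(docs)
    df = defaultdict(int)
    for d in docs:
        for t in d:
            df[t] += 1
    # TF_ij is 1 if term i occurs in headline j, so the weight reduces to the IDF
    return [{t: math.log(n / df[t]) for t in d} for d in docs]

def cosine(v1, v2):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

cluster = ["sony postpones blu-ray movies",
           "sony postpones coming of blu-ray dvds",
           "playstation 3 more expensive than competitor"]
vecs = tfidf_vectors(cluster)
print(cosine(vecs[0], vecs[1]))  # headlines sharing rarer terms score higher
print(cosine(vecs[0], vecs[2]))
```

Note that terms occurring in every headline of the cluster receive IDF 0 and so contribute nothing, which is the point of the modified score.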
In our case, precision is the number of alignments retrieved from the clusters which are relevant, divided by the total number of retrieved alignments.
Recall is the number of relevant retrieved alignments divided by the total number of relevant alignments.
We use an Fβ -score with a β of 0.25 as we favour precision over recall.
We do not want to optimize on precision alone, because we still want to retrieve a fair amount of paraphrases and not only the ones that are very similar.
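The precision-favouring effect of β = 0.25 is easy to see numerically; a minimal sketch (function name ours):

```python
def f_beta(precision, recall, beta=0.25):
    """F-score; beta < 1 weights precision more heavily than recall."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta = 0.25, a high-precision system beats a high-recall one
# even when their precision/recall values are mirror images.
print(f_beta(0.9, 0.4))  # high precision, low recall
print(f_beta(0.4, 0.9))  # low precision, high recall
```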
Through optimization on our development set, we find an optimal threshold for the PK1 algorithm thpk1 = 1.
For each original cluster, k-means clustering is then performed using the k found by the cluster stopping function.
In each newly obtained cluster all headlines can be aligned to each other.
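The PK1 selection step can be sketched as follows. This is a schematic re-implementation under our own naming, with invented criterion values; in the real pipeline the Cr scores come from the clustering package:

```python
import statistics

def pk1_select(criterion, threshold=1.0):
    """Pick k via PK1: standardise Cr(k) over the candidate ks and stop at
    the first k whose PK1 score exceeds the threshold, selecting k - 1.
    `criterion` maps k -> Cr(k), the within/between cluster similarity ratio."""
    ks = sorted(criterion)
    scores = [criterion[k] for k in ks]
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    if std == 0:
        return ks[-1]  # flat criterion: no evidence for stopping early
    for k in ks:
        if (criterion[k] - mean) / std > threshold:
            return k - 1
    return ks[-1]  # no k exceeded the threshold

# Hypothetical criterion values: Cr jumps sharply at k = 4,
# so PK1 stops there and selects k = 3.
cr = {1: 0.10, 2: 0.12, 3: 0.14, 4: 0.95, 5: 0.96}
print(pk1_select(cr, threshold=1.0))
```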
2.2 Pairwise similarity.
Our second approach is to calculate the similarity between pairs of headlines directly.
If the similarity exceeds a certain threshold, the pair is accepted as a paraphrase pair.
If it is below the threshold, it is rejected.
However, as Barzilay and Elhadad (2003) have pointed out, sentence mapping in this way is only effective to a certain extent.
Beyond that point, context is needed.
With this in mind, we adopt two thresholds and the cosine similarity function to calculate the similarity between two sentences: cos(θ) = (V1 · V2) / (|V1| |V2|), where V1 and V2 are the vectors of the two sentences being compared.
If the similarity is higher than the upper threshold, it is accepted.
If it is lower than the lower threshold, it is rejected.
In the remaining case of a similarity between the two thresholds, similarity is calculated over the contexts of the two headlines, namely the text snippet that was retrieved with the headline.
If this similarity exceeds the upper threshold, it is accepted.
Threshold values as found by optimizing on the development data, using again an F0.25-score, are Th_lower = 0.2 and Th_upper = 0.5.
An optional final step is to add alignments that are implied by previous alignments.
For instance, if headline A is paired with headline B, and headline B is aligned to headline C , headline A can be aligned to C as Ty pe Precision Recallk m ea ns cl us ter in g 0.91 0.43 clu ste rs on lyk m ea ns cl us ter in g 0.66 0.44 all he ad lin es pa irw ise si mi lar ity 0.93 0.39 clu ste rs on ly pa irw ise si mi lar ity 0.76 0.41 all he ad lin es Table 2: Precision and Recall for both methods Pl ay st ati on 3 m or e ex pe nsi ve th an co m pe tit or P l a y s t a t i o n 3 w i l l b e c o m e m o r e e x p e n s i v e t h a n X b o x 3 6 0 So ny po stp on es Blu Ra y m ov ie s So ny po stp on es co mi ng of blu ra y dv ds Pri ce s Pl ay st ati on 3 kn ow n: fro m 49 9 eu ro s E3 20 06 : Pl ay st ati on 3 fro m 49 9 eu ro s So ny PS 3 wi th Blu R ay for sal e fro m No ve m be r 11 th PS 3 av ail abl e in Eu ro pe fro m No ve m be r 17 th Table 3: Examples of correct (above) and incorrect (below) alignments well.
We do not add these alignments, because in particular in large clusters when one wrong alignment is made, this process chains together a large amount of incorrect alignments.
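The two-threshold decision procedure with context back-off can be sketched as follows. This is a schematic outline with our own helper names; the word-overlap `similarity` merely stands in for the cosine over TF·IDF vectors described earlier:

```python
TH_LOWER, TH_UPPER = 0.2, 0.5  # the optimised values reported in the text

def similarity(a, b):
    """Stand-in for cosine over TF.IDF vectors: plain word overlap."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def is_paraphrase(head1, head2, snippet1, snippet2):
    """Accept or reject a headline pair, backing off to the retrieved
    article snippets when headline similarity falls between the thresholds."""
    sim = similarity(head1, head2)
    if sim >= TH_UPPER:
        return True
    if sim < TH_LOWER:
        return False
    # Grey zone: compare the contexts (text snippets) instead.
    return similarity(snippet1, snippet2) >= TH_UPPER

h1 = "sony postpones blu-ray movies"
h2 = "sony postpones coming of blu-ray dvds"
# Headline similarity lands between the thresholds, so the snippets decide.
print(is_paraphrase(h1, h2, "sony delays blu-ray films", "sony delays blu-ray films"))
```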
SECTION 3: Results.
The 825 clusters in the test set contain 1,751 sub- clusters in total.
In these sub-clusters, there are 6,685 clustered headlines.
Another 3,123 headlines remain unclustered.
Table 2 displays the paraphrase detection precision and recall of our two approaches.
It is clear that k-means clustering performs well when all unclustered headlines are artificially ignored.
In the more realistic case when there are also items that cannot be clustered, the pairwise calculation of similarity with a back off strategy of using context performs better when we aim for higher precision.
Some examples of correct and incorrect alignments are given in Table 3.
SECTION 4: Discussion.
Using headlines of news articles clustered by Google News, and finding good paraphrases within these clusters is an effective route for obtaining pairs of paraphrased sentences with reasonable precision.
We have shown that a cosine similarity function comparing headlines and using a back off strategy to compare context can be used to extract paraphrase pairs at a precision of 0.76.
Although we could aim for a higher precision by assigning higher values to the thresholds, we still want some recall and variation in our paraphrases.
Of course the coverage of our method is still somewhat limited: only paraphrases that have some words in common will be extracted.
This is not a bad thing: we are particularly interested in extracting paraphrase patterns at the constituent level.
These alignments can be made with existing alignment tools such as the GIZA++ toolkit.
We measure the performance of our approaches by comparing to human annotation of sub- clusterings.
The human task in itself is hard.
For instance, if we look at the incorrect examples in Table 3, the difficulty of distinguishing between paraphrases and non-paraphrases is apparent.
In future research we would like to investigate the task of judging paraphrases.
The next step we would like to take towards automatic paraphrase generation, is to identify the differences between paraphrases at the constituent level.
This task has in fact been performed by human annotators in the DAESO-project.
A logical next step would be to learn to align the different constituents on our extracted paraphrases in an unsupervised way.
SECTION: Acknowledgements
Thanks are due to the Netherlands Organization for Scientific Research (NWO) and to the Dutch HLT Stevin programme.
Thanks also to Wauter Bosma for originally mining the headlines from Google News.
For more information on DAESO, please visit daeso.uvt.nl.
| |
3 | P05-1004 | Supersense Tagging of Unknown Nouns using Semantic Similarity | The limited coverage of lexical-semantic resources is a significant problem for NLP systems which can be alleviated by automatically classifying the unknown words.
Supersense tagging assigns unknown nouns one of 26 broad semantic categories used by lexicographers to organise their manual insertion into WORDNET.
Ciaramita and Johnson (2003) present a tagger which uses synonym set glosses as annotated training examples.
We describe an unsupervised approach, based on vector-space similarity, which does not require annotated examples but significantly outperforms their tagger.
We also demonstrate the use of an extremely large shallow-parsed corpus for calculating vector-space semantic similarity. | Title: Supersense Tagging of Unknown Nouns using Semantic Similarity
ABSTRACT
The limited coverage of lexical-semantic resources is a significant problem for NLP systems which can be alleviated by automatically classifying the unknown words.
Supersense tagging assigns unknown nouns one of 26 broad semantic categories used by lexicographers to organise their manual insertion into WORDNET.
Ciaramita and Johnson (2003) present a tagger which uses synonym set glosses as annotated training examples.
We describe an unsupervised approach, based on vector-space similarity, which does not require annotated examples but significantly outperforms their tagger.
We also demonstrate the use of an extremely large shallow-parsed corpus for calculating vector-space semantic similarity.
SECTION 1: Introduction
Lexical-semantic resources have been applied successfully to a wide range of Natural Language Processing (NLP) problems ranging from collocation extraction (Pearce, 2001) and class-based smoothing (Clark and Weir, 2002), to text classification (Baker and McCallum, 1998) and question answering (Pasca and Harabagiu, 2001).
In particular, WORDNET (Fellbaum, 1998) has significantly influenced research in NLP.
Unfortunately, these resources are extremely time-consuming and labour-intensive to manually develop and maintain, requiring considerable linguistic and domain expertise.
Lexicographers cannot possibly keep pace with language evolution: sense distinctions are continually made and merged, words are coined or become obsolete, and technical terms migrate into the vernacular.
Technical domains, such as medicine, require separate treatment since common words often take on special meanings, and a significant proportion of their vocabulary does not overlap with everyday vocabulary.
Burgun and Bodenreider (2001) compared an alignment of WORDNET with the UMLS medical resource and found only a very small degree of overlap.
Also, lexical-semantic resources suffer from: bias towards concepts and senses from particular topics.
Some specialist topics are better covered in WORDNET than others, e.g. dog has finer-grained distinctions than cat and worm although this does not reflect finer distinctions in reality; limited coverage of infrequent words and senses.
Ciaramita and Johnson (2003) found that common nouns missing from WORDNET 1.6 occurred every 8 sentences in the BLLIP corpus.
By WORDNET 2.0, coverage has improved but the problem of keeping up with language evolution remains difficult.
consistency when classifying similar words into categories.
For instance, the WORDNET lexicographer file for ionosphere (location) is different to exosphere and stratosphere (object), two other layers of the earth's atmosphere.
These problems demonstrate the need for automatic or semiautomatic methods for the creation and maintenance of lexical-semantic resources.
Broad semantic classification is currently used by lexicographers to organise the manual insertion of words into WORDNET, and is an experimental precursor to automatically inserting words directly into the WORDNET hierarchy.
Ciaramita and Johnson (2003) call this supersense tagging and describe a multi-class perceptron tagger, which uses WORDNET's hierarchical structure to create many annotated training instances from the synset glosses.
This paper describes an unsupervised approach to supersense tagging that does not require annotated sentences.
Instead, we use vector-space similarity to retrieve a number of synonyms for each unknown common noun.
The supersenses of these synonyms are then combined to determine the supersense.
This approach significantly outperforms the multi-class perceptron on the same dataset based on WORDNET 1.6 and 1.7.1.
Proceedings of the 43rd Annual Meeting of the ACL, pages 26–33, Ann Arbor, June 2005. © 2005 Association for Computational Linguistics.
LEX-FILE | DESCRIPTION
act | acts or actions
animal | animals
artifact | man-made objects
attribute | attributes of people and objects
body | body parts
cognition | cognitive processes and contents
communication | communicative processes and contents
event | natural events
feeling | feelings and emotions
food | foods and drinks
group | groupings of people or objects
location | spatial position
motive | goals
object | natural objects (not man-made)
person | people
phenomenon | natural phenomena
plant | plants
possession | possession and transfer of possession
process | natural processes
quantity | quantities and units of measure
relation | relations between people/things/ideas
shape | two and three dimensional shapes
state | stable states of affairs
substance | substances
time | time and temporal relations
Table 1: 25 noun lexicographer files in WORDNET
SECTION 2: Supersenses.
There are 26 broad semantic classes employed by lexicographers in the initial phase of inserting words into the WORDNET hierarchy, called lexicographer files (lex-files).
For the noun hierarchy, there are 25 lex-files and a file containing the top level nodes in the hierarchy called Tops.
Other syntactic classes are also organised using lex-files: 15 for verbs, 3 for adjectives and 1 for adverbs.
Lex-files form a set of coarse-grained sense distinctions within WORDNET.
For example, company appears in the following lex-files in WORDNET 2.0: group, which covers company in the social, commercial and troupe fine-grained senses; and state, which covers companionship.
The names and descriptions of the noun lex-files are shown in Table 1.
Some lex-files map directly to the top level nodes in the hierarchy, called unique beginners, while others are grouped together as hyponyms of a unique beginner (Fellbaum, 1998, page 30).
For example, abstraction subsumes the lex-files attribute, quantity, relation, communication and time.
Ciaramita and Johnson (2003) call the noun lex-file classes supersenses.
There are 11 unique beginners in the WORDNET noun hierarchy which could also be used as supersenses.
Ciaramita (2002) has produced a mini-WORDNET by manually reducing the WORDNET hierarchy to 106 broad categories.
Ciaramita et al. (2003) describe how the lex-files can be used as root nodes in a two level hierarchy with the WORDNET synsets appearing directly underneath.
Other alternative sets of supersenses can be created by an arbitrary cut through the WORDNET hierarchy near the top, or by using topics from a thesaurus such as Roget's (Yarowsky, 1992).
These topic distinctions are coarser-grained than WORDNET senses, which have been criticised for being too difficult to distinguish even for experts.
Ciaramita and Johnson (2003) believe that the key sense distinctions are still maintained by supersenses.
They suggest that supersense tagging is similar to named entity recognition, which also has a very small set of categories with similar granularity (e.g. location and person) for labelling predominantly unseen terms.
Supersense tagging can provide automated or semi-automated assistance to lexicographers adding words to the WORDNET hierarchy.
Once this task is solved successfully, it may be possible to insert words directly into the fine-grained distinctions of the hierarchy itself.
Clearly, this is the ultimate goal, to be able to insert new terms into lexical resources, extending the structure where necessary.
Supersense tagging is also interesting for many applications that use shallow semantics, e.g. information extraction and question answering.
SECTION 3: Previous Work.
A considerable amount of research addresses structurally and statistically manipulating the hierarchy of WORDNET and the construction of new wordnets using the concept structure from English.
For lexical FreeNet, Beeferman (1998) adds over 350,000 collocation pairs (trigger pairs) extracted from a 160 million word corpus of broadcast news using mutual information.
The co-occurrence window was 500 words which was designed to approximate average document length.
Caraballo and Charniak (1999) have explored determining noun specificity from raw text.
They find that simple frequency counts are the most effective way of determining the parent-child ordering, achieving 83% accuracy over types of vehicle, food and occupation.
The other measure they found to be successful was the entropy of the conditional distribution of surrounding words given the noun.
Specificity ordering is a necessary step for building a noun hierarchy.
However, this approach clearly cannot build a hierarchy alone.
For instance, entity is less frequent than many concepts it subsumes.
This suggests it will only be possible to add words to an existing abstract structure rather than create categories right up to the unique beginners.
Hearst and Schütze (1993) flatten WORDNET into 726 categories using an algorithm which attempts to minimise the variance in category size.
These categories are used to label paragraphs with topics, effectively repeating Yarowsky's (1992) experiments using their categories rather than Roget's thesaurus.
Schütze's (1992) WordSpace system was used to add topical links, such as between ball, racquet and game (the tennis problem).
Further, they also use the same vector-space techniques to label previously unseen words using the most common class assigned to the top 20 synonyms for that word.
Widdows (2003) uses a similar technique to insert words into the WORDNET hierarchy.
He first extracts synonyms for the unknown word using vector-space similarity measures based on Latent Semantic Analysis and then searches for a location in the hierarchy nearest to these synonyms.
The same technique is used in our approach to supersense tagging.
Ciaramita and Johnson (2003) implement a supersense tagger based on the multi-class perceptron classifier (Crammer and Singer, 2001), which uses the standard collocation, spelling and syntactic features common in WSD and named entity recognition systems.
Their insight was to use the WORDNET glosses as annotated training data and massively increase the number of training instances using the noun hierarchy.
They developed an efficient algorithm for estimating the model over hierarchical training data.
SECTION 4: Evaluation.
Ciaramita and Johnson (2003) propose a very natural evaluation for supersense tagging: inserting the extra common nouns that have been added to a new version of WORDNET.
They use the common nouns that have been added to WORDNET 1.7.1 since WORDNET 1.6 and compare this evaluation with a standard cross-validation approach that uses a small percentage of the words from their WORDNET 1.6 training set for evaluation.
Their results suggest that the WORDNET 1.7.1 test set is significantly harder because of the large number of abstract category nouns, e.g. communication and cognition, that appear in the 1.7.1 data, which are difficult to classify.
Our evaluation will use exactly the same test sets as Ciaramita and Johnson (2003).
The WORDNET 1.7.1 test set consists of 744 previously unseen nouns, the majority of which (over 90%) have only one sense.
The WORDNET 1.6 test set consists of several cross-validation sets of 755 nouns randomly selected from the BLLIP training set used by Ciaramita and Johnson (2003).
They have kindly supplied us with the WORDNET 1.7.1 test set and one cross-validation run of the WORDNET 1.6 test set.
Our development experiments are performed on the WORDNET 1.6 test set with one final run on the WORDNET 1.7.1 test set.
Some examples from the test sets are given in Table 2 with their supersenses.
SECTION 5: Corpus.
We have developed a 2 billion word corpus, shallow-parsed with a statistical NLP pipeline, which is by far the largest NLP-processed corpus described in published research.
Table 2: Example nouns and their supersenses
The corpus consists of the British National Corpus (BNC), the Reuters Corpus Volume 1 (RCV1), and most of the Linguistic Data Consortium's news text collected since 1987: Continuous Speech Recognition III (CSR-III); North American News Text Corpus (NANTC); the NANTC Supplement (NANTS); and the ACQUAINT Corpus.
The components and their sizes including punctuation are given in Table 3.
The LDC has recently released the English Gigaword corpus which includes most of the corpora listed above.
CORPUS | DOCS. | SENTS. | WORDS
BNC | 4,124 | 6.2M | 114M
RCV1 | 806,791 | 8.1M | 207M
CSR-III | 491,349 | 9.3M | 226M
NANTC | 930,367 | 23.2M | 559M
NANTS | 942,167 | 25.2M | 507M
ACQUAINT | 1,033,461 | 21.3M | 491M
Table 3: 2 billion word corpus statistics
We have tokenized the text using the Grok OpenNLP tokenizer (Morton, 2002) and split the sentences using MXTerminator (Reynar and Ratnaparkhi, 1997).
Any sentences less than 3 words or more than 100 words long were rejected, along with sentences containing more than 5 numbers or more than 4 brackets, to reduce noise.
The rest of the pipeline is described in the next section.
SECTION 6: Semantic Similarity
Vector-space models of similarity are based on the distributional hypothesis that similar words appear in similar contexts.
This hypothesis suggests that semantic similarity can be measured by comparing the contexts each word appears in.
In vector-space models each headword is represented by a vector of frequency counts recording the contexts that it appears in.
The key parameters are the context extraction method and the similarity measure used to compare context vectors.
Our approach to vector-space similarity is based on the SEXTANT system described in Grefenstette (1994).
Curran and Moens (2002b) compared several context extraction methods and found that the shallow pipeline and grammatical relation extraction used in SEXTANT was both extremely fast and produced high-quality results.
SEXTANT extracts relation tuples (w, r, w') for each noun, where w is the headword, r is the relation type and w' is the other word.
The efficiency of the SEXTANT approach makes the extraction of contextual information from over 2 billion words of raw text feasible.
We describe the shallow pipeline in detail below.
Curran and Moens (2002a) compared several different similarity measures and found that Grefenstette's weighted JACCARD measure performed the best:
JACCARD(w1, w2) = Σ min(wgt(w1, *r, *w'), wgt(w2, *r, *w')) / Σ max(wgt(w1, *r, *w'), wgt(w2, *r, *w'))   (1)
where wgt(w, r, w') is the weight function for relation (w, r, w').
Curran and Moens (2002a) introduced the TTEST weight function, which is used in collocation extraction.
Here, the t-test compares the joint and product probability distributions of the headword and context:
TTEST(w, r, w') = (p(w, r, w') − p(*, r, w') p(w, *, *)) / sqrt(p(*, r, w') p(w, *, *))   (2)
where * indicates a global sum over that element of the relation tuple.
JACCARD and TTEST produced better quality synonyms than existing measures in the literature, so we use Curran and Moens' configuration for our supersense tagging experiments.
RELATION | DESCRIPTION
adj | noun-adjectival modifier relation
dobj | verb-direct object relation
iobj | verb-indirect object relation
nn | noun-noun modifier relation
nnprep | noun-prepositional head relation
subj | verb-subject relation
Table 4: Grammatical relations from SEXTANT
6.1 Part of Speech Tagging and Chunking.
Our implementation of SEXTANT uses a maximum entropy POS tagger designed to be very efficient, tagging at around 100,000 words per second (Curran and Clark, 2003), trained on the entire Penn Treebank (Marcus et al., 1994).
The only similar performing tool is the Trigrams 'n' Tags tagger (Brants, 2000) which uses a much simpler statistical model.
Our implementation uses a maximum entropy chunker which has similar feature types to Koeling (2000) and is also trained on chunks extracted from the entire Penn Treebank using the CoNLL 2000 script.
Since the Penn Treebank separates PPs and conjunctions from NPs, they are concatenated to match Grefenstette's table-based results, i.e. SEXTANT always prefers noun attachment.
6.2 Morphological Analysis.
Our implementation uses morpha, the Sussex morphological analyser (Minnen et al., 2001), which is implemented using lex grammars for both affix splitting and generation.
morpha has wide coverage, nearly 100% against the CELEX lexical database (Minnen et al., 2001), and is very efficient, analysing over 80,000 words per second.
morpha often maintains sense distinctions between singular and plural nouns; for instance, spectacles is not reduced to spectacle, but it fails to do so in other cases: glasses is converted to glass.
This inconsistency is problematic when using morphological analysis to smooth vector-space models.
However, morphological smoothing still produces better results in practice.
6.3 Grammatical Relation Extraction.
After the raw text has been POS tagged and chunked, the grammatical relation extraction algorithm is run over the chunks.
This consists of five passes over each sentence that first identify noun and verb phrase heads and then collect grammatical relations between each common noun and its modifiers and verbs.
A global list of grammatical relations generated by each pass is maintained across the passes.
The global list is used to determine if a word is already attached.
Once all five passes have been completed this association list contains all of the noun-modifier/verb pairs which have been extracted from the sentence.
The types of grammatical relation extracted by SEXTANT are shown in Table 4.
For relations between nouns (nn and nnprep), we also create inverse relations (w', r', w) representing the fact that w' can modify w.
The five passes are described below.
Pass 1: Noun Pre-modifiers This pass scans NPs, left to right, creating adjectival (adj) and nominal (nn) pre-modifier grammatical relations (GRs) with every noun to the pre-modifier's right, up to a preposition or the phrase end.
This corresponds to assuming right-branching noun compounds.
Within each NP only the NP and PP heads remain unattached.
Pass 2: Noun Post-modifiers This pass scans NPs, right to left, creating post-modifier GRs between the unattached heads of NPs and PPs.
If a preposition is encountered between the noun heads, a prepositional noun (nnprep) GR is created, otherwise an appositional noun (nn) GR is created.
This corresponds to assuming right-branching PP attachment.
After this phrase only the NP head remains unattached.
Tense Determination The rightmost verb in each VP is considered the head.
A VP is initially categorised as active.
If the head verb is a form of be then the VP becomes attributive.
Otherwise, the algorithm scans the VP from right to left: if an auxiliary verb form of be is encountered the VP becomes passive; if a progressive verb (except being) is encountered the VP becomes active.
Only the noun heads on either side of VPs remain unattached.
The remaining three passes attach these to the verb heads as either subjects or objects depending on the voice of the VP.
Pass 3: Verb Pre-Attachment This pass scans sentences, right to left, associating the first NP head to the left of the VP with its head.
If the VP is active, a subject (subj) relation is created; otherwise, a direct object (dobj) relation is created.
For example, antigen is the subject of represent.
Pass 4: Verb Post-Attachment This pass scans sentences, left to right, associating the first NP or PP head to the right of the VP with its head.
If the VP was classed as active and the phrase is an NP then a direct object (dobj) relation is created.
If the VP was classed as passive and the phrase is an NP then a subject (subj) relation is created.
If the following phrase is a PP then an indirect object (iobj) relation is created.
The interaction between the head verb and the preposition determine whether the noun is an indirect object of a ditransitive verb or alternatively the head of a PP that is modifying the verb.
However, SEXTANT always attaches the PP to the previous phrase.
Pass 5: Verb Progressive Participles The final step of the process is to attach progressive verbs to subjects and objects (without concern for whether they are already attached).
Progressive verbs can function as nouns, verbs and adjectives and once again a naïve approximation to the correct attachment is made.
Any progressive verb which appears after a determiner or quantifier is considered a noun.
Otherwise, it is a verb and passes 3 and 4 are repeated to attach subjects and objects.
Finally, SEXTANT collapses the nn, nnprep and adj relations together into a single broad noun-modifier grammatical relation.
Grefenstette (1994) claims this extractor has a grammatical relation accuracy of 75% after manually checking 60 sentences.
SECTION 7: Approach.
Our approach uses voting across the known supersenses of automatically extracted synonyms to select a supersense for the unknown nouns.
This technique is similar to Hearst and Schütze (1993) and Widdows (2003).
However, sometimes the unknown noun does not appear in our 2 billion word corpus, or at least does not appear frequently enough to provide sufficient contextual information to extract reliable synonyms.
In these cases, our fall-back method is a simple hand-coded classifier which examines the unknown noun and makes a guess based on simple morphological analysis of the suffix.

SUFFIX                    EXAMPLE      SUPERSENSE
-ness                     remoteness   attribute
-tion, -ment              annulment    act
-ist, -man                statesman    person
-ing, -ion                bowling      act
-ity                      viscosity    attribute
-ics, -ism                electronics  cognition
-ene, -ane, -ine          arsine       substance
-er, -or, -ic, -ee, -an   mariner      person
-gy                       entomology   cognition

Table 5: Hand-coded rules for supersense guessing
These rules were created by inspecting the suffixes of rare nouns in WORDNET 1.6.
The supersense guessing rules are given in Table 5.
If none of the rules match, then the default supersense artifact is assigned.
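The guessing rules in Table 5 amount to a longest-suffix-first lookup with a default; a minimal sketch (the rule encoding is ours):

```python
# Hand-coded suffix fallback from Table 5 as a minimal sketch.
# Longer suffixes are tried first so that '-tion' wins over '-ion'.

SUFFIX_RULES = [
    ("-ness", "attribute"), ("-tion", "act"), ("-ment", "act"),
    ("-ist", "person"), ("-man", "person"), ("-ing", "act"),
    ("-ion", "act"), ("-ity", "attribute"), ("-ics", "cognition"),
    ("-ism", "cognition"), ("-ene", "substance"), ("-ane", "substance"),
    ("-ine", "substance"), ("-er", "person"), ("-or", "person"),
    ("-ic", "person"), ("-ee", "person"), ("-an", "person"),
    ("-gy", "cognition"),
]

def guess_supersense(noun):
    """Guess a supersense from the noun's suffix; default is artifact."""
    for suffix, supersense in sorted(SUFFIX_RULES, key=lambda r: -len(r[0])):
        if noun.endswith(suffix[1:]):      # drop the leading '-'
            return supersense
    return "artifact"
```

For example, "annulment" matches "-ment" and is guessed as act, while an unmatched noun such as "sofa" falls through to the default artifact.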
The problem now becomes how to convert the ranked list of extracted synonyms for each unknown noun into a single supersense selection.
Each extracted synonym votes for the one or more supersenses it has in WORDNET 1.6.
There are many parameters to consider:
• how many extracted synonyms to use;
• how to weight each synonym's vote;
• whether unreliable synonyms should be filtered out;
• how to deal with polysemous synonyms.
The experiments described below consider a range of options for these parameters.
In fact, these experiments are so quick to run we have been able to exhaustively test many combinations of these parameters.
We have experimented with up to 200 voting extracted synonyms.
There are several ways to weight each synonym's contribution.
The simplest approach would be to give each synonym the same weight.
Another approach is to use the scores returned by the similarity system.
Alternatively, the weights can use the ranking of the extracted synonyms.
Again these options have been considered below.
A related question is whether to use all of the extracted synonyms, or perhaps filter out synonyms for which a small amount of contextual information has been extracted, and so might be unreliable.
The final issue is how to deal with polysemy.
Does every supersense of each extracted synonym get the whole weight of that synonym, or is the weight distributed evenly between the supersenses, as in Resnik (1995)?
Another alternative is to only consider unambiguous synonyms with a single supersense in WORDNET.
A disadvantage of this similarity approach is that it requires full synonym extraction, which compares the unknown word against a large number of words when, in fact, we want to calculate the similarity to a small number of supersenses.

SYSTEM                             WN 1.6   WN 1.7.1
Ciaramita and Johnson baseline       21%      28%
Ciaramita and Johnson perceptron     53%      53%
Similarity based results             68%      63%

Table 6: Summary of supersense tagging accuracies
This inefficiency could be reduced significantly if we consider only very high frequency words, but even this is still expensive.
SECTION 8: Results.
We have used the WORDNET 1.6 test set to experiment with different parameter settings and have kept the WORDNET 1.7.1 test set as a final comparison of best results with Ciaramita and Johnson (2003).
The experiments were performed by considering all possible configurations of the parameters described above.
The following voting options were considered for each supersense of each extracted synonym: the initial voting weight for a supersense could either be a constant (IDENTITY) or the similarity score (SCORE) of the synonym.
The initial weight could then be divided by the number of supersenses to share out the weight (SHARED).
The weight could also be divided by the rank (RANK) to penalise supersenses further down the list.
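These weighting options can be sketched as a small voting routine; the data structures and names below are ours, not the paper's implementation:

```python
# Sketch of the voting options described above (IDENTITY/SCORE weighting,
# SHARED and RANK penalties); data structures and names are ours.
from collections import defaultdict

def vote(synonyms, supersenses_of, mode="SCORE", shared=False, ranked=False):
    """synonyms: list of (word, similarity) pairs, best first.
    supersenses_of(word) -> set of the word's WordNet supersenses."""
    totals = defaultdict(float)
    for rank, (word, sim) in enumerate(synonyms, start=1):
        senses = supersenses_of(word)
        if not senses:
            continue                       # synonym unknown to WordNet
        weight = sim if mode == "SCORE" else 1.0
        if shared:
            weight /= len(senses)          # share the vote between senses
        if ranked:
            weight /= rank                 # penalise lower-ranked synonyms
        for sense in senses:
            totals[sense] += weight
    return max(totals, key=totals.get) if totals else None
```

A `None` result (no synonym found in WordNet) would be the point at which a system falls back to the guessing rules.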
The best performance on the 1.6 test set was achieved with the SCORE voting, without sharing or ranking penalties.
The extracted synonyms are filtered before contributing to the vote with their supersense(s).
This filtering involves checking that the synonym's frequency and number of contexts are large enough to ensure it is reliable.
We have experimented with a wide range of cutoffs and the best performance on the 1.6 test set was achieved using a minimum cutoff of 5 for the synonym's frequency and the number of contexts it appears in.
The next question is how many synonyms are considered.
We considered using just the nearest unambiguous synonym, and the top 5, 10, 20, 50, 100 and 200 synonyms.
All of the top performing configurations used 50 synonyms.
We have also experimented with filtering out highly polysemous nouns by eliminating words with two, three or more synonyms.
However, such a filter turned out to make little difference.
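Because the voting is cheap, the exhaustive sweep described above can be a plain Cartesian product over the parameter grid; a sketch, with an `evaluate` callback and grid encoding that are ours (the cutoff values shown are illustrative):

```python
# Exhaustive sweep of the voting parameters: the voting is cheap enough
# that every combination can be evaluated on the held-out test set.
from itertools import product

GRID = {
    "n_synonyms": [1, 5, 10, 20, 50, 100, 200],
    "mode": ["IDENTITY", "SCORE"],
    "shared": [False, True],
    "ranked": [False, True],
    "min_freq": [0, 5],          # reliability cutoff (illustrative values)
}

def best_config(evaluate):
    """evaluate(config) -> accuracy on the development test set."""
    configs = [dict(zip(GRID, values)) for values in product(*GRID.values())]
    return max(configs, key=evaluate)
```

With the grid above this is only 112 configurations, which is why an exhaustive search is feasible.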
Finally, we need to decide when to use the similarity measure and when to fall-back to the guessing rules.
This is determined by looking at the frequency and number of attributes for the unknown word.
Not surprisingly, the similarity system works better than the guessing rules if it has any information at all.
The results are summarised in Table 6.
Table 7: Breakdown of results by supersense

The accuracy of the best-performing configurations was 68% on the WORDNET 1.6 test set, with several other parameter combinations described above performing nearly as well.
On the previously unused WORDNET 1.7.1 test set, our accuracy is 63% using the best system on the WORDNET 1.6 test set.
By optimising the parameters on the 1.7.1 test set we can increase that to 64%, indicating that we have not excessively over-tuned on the 1.6 test set.
Our results significantly outperform Ciaramita and Johnson (2003) on both test sets even though our system is unsupervised.
The large difference between our 1.6 and 1.7.1 test set accuracy demonstrates that the 1.7.1 set is much harder.
Table 7 shows the breakdown in performance for each supersense.
The columns show the number of instances of each supersense with the precision, recall and f-score measures as percentages.
The most frequent supersenses in both test sets were person, attribute and act.
Of the frequent categories, person is the easiest supersense to get correct in both the 1.6 and 1.7.1 test sets, followed by food, artifact and substance.
This is not surprising since these concrete words tend to have few other senses, well-constrained contexts and a relatively high frequency.
These factors are conducive for extracting reliable synonyms.
These results also support Ciaramita and Johnson's view that abstract concepts like communication, cognition and state are much harder.
We would expect the location supersense to perform well since it is quite concrete, but unfortunately our synonym extraction system does not incorporate proper nouns, so many of these words were classified using the hand-built classifier.
Also, in the data from Ciaramita and Johnson all of the words are in lower case, so no sensible guessing rules could help.
SECTION 9: Other Alternatives and Future Work.
An alternative approach worth exploring is to create context vectors for the supersense categories themselves and compare these against the words.
This has the advantage of producing a much smaller number of vectors to compare against.
In the current system, we must compare a word against the entire vocabulary (over 500 000 headwords), which is much less efficient than a comparison against only 26 supersense context vectors.
The question now becomes how to construct vectors of supersenses.
The most obvious solution is to sum the context vectors across the words which have each supersense.
However, our early experiments suggest that this produces extremely large vectors which do not match well against the much smaller vectors of each unseen word.
Also, the same questions arise in the construction of these vectors.
How are words with multiple supersenses handled?
Our preliminary experiments suggest that only combining the vectors for unambiguous words produces the best results.
One solution would be to take the intersection between vectors across words for each supersense (i.e. to find the common contexts that these words appear in).
However, given the sparseness of the data this may not leave very large context vectors.
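The two constructions discussed, summing versus intersecting context vectors, can be sketched with counters standing in for context vectors (a simplification of the real feature representation):

```python
# Sketch of the two supersense-vector constructions discussed above:
# summing the context vectors of (unambiguous) words with a supersense,
# or intersecting their contexts. Counters stand in for context vectors.
from collections import Counter

def sum_vector(word_vectors):
    """Sum context vectors across the words with a given supersense."""
    total = Counter()
    for v in word_vectors:
        total.update(v)
    return total

def intersect_vector(word_vectors):
    """Keep only contexts shared by every word; may be very sparse."""
    common = set(word_vectors[0])
    for v in word_vectors[1:]:
        common &= set(v)
    return Counter({c: min(v[c] for v in word_vectors) for c in common})
```

The intersection variant makes the data-sparseness concern concrete: any context missing from a single word's vector is dropped from the supersense vector entirely.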
A final solution would be to consider a large set of the canonical attributes (Curran and Moens, 2002a) to represent each supersense.
Canonical attributes summarise the key contexts for each headword and are used to improve the efficiency of the similarity comparisons.
There are a number of problems our system does not currently handle.
Firstly, we do not include proper names in our similarity system which means that location entities can be very difficult to identify correctly (as the results demonstrate).
Further, our similarity system does not currently incorporate multi-word terms.
We overcome this by using the synonyms of the last word in the multi-word term.
However, there are 174 multi-word terms (23%) in the WORDNET 1.7.1 test set which we could probably tag more accurately with synonyms for the whole multi-word term.
Finally, we plan to implement a supervised machine learner to replace the fall-back method, which currently has an accuracy of 37% on the WORDNET 1.7.1 test set.
We intend to extend our experiments beyond the Ciaramita and Johnson (2003) set to include previous and more recent versions of WORDNET to compare their difficulty, and also perform experiments over a range of corpus sizes to determine the impact of corpus size on the quality of results.
We would like to move onto the more difficult task of insertion into the hierarchy itself and compare against the initial work by Widdows (2003) using latent semantic analysis.
Here the issue of how to combine vectors is even more interesting since there is the additional structure of the WORDNET inheritance hierarchy and the small synonym sets that can be used for more fine-grained combination of vectors.
SECTION 10: Conclusion.
Our application of semantic similarity to supersense tagging follows earlier work by Hearst and Schütze (1993) and Widdows (2003).
To classify a previously unseen common noun our approach extracts synonyms which vote using their supersenses in WORDNET 1.6.
We have experimented with several parameters finding that the best configuration uses 50 extracted synonyms, filtered by frequency and number of contexts to increase their reliability.
Each synonym votes for each of its supersenses from WORDNET 1.6 using the similarity score from our synonym extractor.
Using this approach we have significantly outperformed the supervised multi-class perceptron Ciaramita and Johnson (2003).
This paper also demonstrates the use of a very efficient shallow NLP pipeline to process a massive corpus.
Such a corpus is needed to acquire reliable contextual information for the often very rare nouns we are attempting to supersense tag.
This application of semantic similarity demonstrates that unsupervised methods can outperform supervised methods for some NLP tasks if enough data is available.
SECTION: Acknowledgements
We would like to thank Massi Ciaramita for supplying his original data for these experiments and answering our queries, and to Stephen Clark and the anonymous reviewers for their helpful feedback and corrections.
This work has been supported by a Commonwealth scholarship, Sydney University Travelling Scholarship and Australian Research Council Discovery Project DP0453131.
Title: Improved Word-Level System Combination for Machine Translation
ABSTRACT
Recently, confusion network decoding has been applied in machine translation system combination.
Due to errors in the hypothesis alignment, decoding may result in ungrammatical combination outputs.
This paper describes an improved confusion network based method to combine outputs from multiple MT systems.
In this approach, arbitrary features may be added log-linearly into the objective function, thus allowing language model expansion and re-scoring.
Also, a novel method to automatically select the hypothesis which other hypotheses are aligned against is proposed.
A generic weight tuning algorithm may be used to optimize various automatic evaluation metrics including TER, BLEU and METEOR.
The experiments using the 2005 Arabic to English and Chinese to English NIST MT evaluation tasks show significant improvements in BLEU scores compared to earlier confusion network decoding based methods.
SECTION 1: Introduction
System combination has been shown to improve classification performance in various tasks.
There are several approaches for combining classifiers.
In ensemble learning, a collection of simple classifiers is used to yield better performance than any single classifier; for example boosting (Schapire, 1990).
Another approach is to combine outputs from a few highly specialized classifiers.
The classifiers may 312 be based on the same basic modeling techniques but differ by, for example, alternative feature representations.
Combination of speech recognition outputs is an example of this approach (Fiscus, 1997).
In speech recognition, confusion network decoding (Mangu et al., 2000) has become widely used in system combination.
Unlike speech recognition, current statistical machine translation (MT) systems are based on various different paradigms; for example phrasal, hierarchical and syntax-based systems.
The idea of combining outputs from different MT systems to produce consensus translations in the hope of generating better translations has been around for a while (Frederking and Nirenburg, 1994).
Recently, confusion network decoding for MT system combination has been proposed (Bangalore et al., 2001).
To generate confusion networks, hypotheses have to be aligned against each other.
In (Bangalore et al., 2001), Levenshtein alignment was used to generate the network.
As opposed to speech recognition, the word order between two correct MT outputs may be different and the Levenshtein alignment may not be able to align shifted words in the hypotheses.
In (Matusov et al., 2006), different word orderings are taken into account by training alignment models by considering all hypothesis pairs as a parallel corpus using GIZA++ (Och and Ney, 2003).
The size of the test set may influence the quality of these alignments.
Thus, system outputs from development sets may have to be added to improve the GIZA++ alignments.
A modified Levenshtein alignment allowing shifts as in the computation of the translation edit rate (TER) (Snover et al., 2006) was used to align hypotheses in (Sim et al., 2007).
Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 312-319, Prague, Czech Republic, June 2007. © 2007 Association for Computational Linguistics
The alignments from TER are consistent as they do not depend on the test set size.
Also, a more heuristic alignment method has been proposed in a different system combination approach (Jayaraman and Lavie, 2005).
A full comparison of different alignment methods would be difficult as many approaches require a significant amount of engineering.
Confusion networks are generated by choosing one hypothesis as the "skeleton", and other hypotheses are aligned against it.
The skeleton defines the word order of the combination output.
Minimum Bayes risk (MBR) was used to choose the skeleton in (Sim et al., 2007).
The average TER score was computed between each system's 1-best hypothesis and all other hypotheses.
The MBR hypothesis is the one with the minimum average TER and thus may be viewed as the closest to all other hypotheses in terms of TER.
This work was extended in (Rosti et al., 2007) by introducing system weights for word confidences.
However, the system weights did not influence the skeleton selection, so a hypothesis from a system with zero weight might have been chosen as the skeleton.
In this work, confusion networks are generated by using the 1-best output from each system as the skeleton, and prior probabilities for each network are estimated from the average TER scores between the skeleton and other hypotheses.
All resulting confusion networks are connected in parallel into a joint lattice where the prior probabilities are also multiplied by the system weights.
The combination outputs from confusion network decoding may be ungrammatical due to alignment errors.
Also the word-level decoding may break coherent phrases produced by the individual systems.
In this work, log-posterior probabilities are estimated for each confusion network arc instead of using votes or simple word confidences.
This allows a log-linear addition of arbitrary features such as language model (LM) scores.
The LM scores should increase the total log-posterior of more grammatical hypotheses.
Powell's method (Brent, 1973) is used to tune the system and feature weights simultaneously so as to optimize various automatic evaluation metrics on a development set.
Tuning is fully automatic, as opposed to (Matusov et al., 2006) where global system weights were set manually.
This paper is organized as follows.
Three evalu ation metrics used in weights tuning and reporting the test set results are reviewed in Section 2.
Section 3 describes confusion network decoding for MT system combination.
The extensions to add features log-linearly and improve the skeleton selection are presented in Sections 4 and 5, respectively.
Section 6 details the weights optimization algorithm and the experimental results are reported in Section 7.
Conclusions and future work are discussed in Section 8.
SECTION 2: Evaluation Metrics.
Currently, the most widely used automatic MT evaluation metric is the NIST BLEU4 (Papineni et al., 2002).
It is computed as the geometric mean of n-gram precisions up to 4-grams between the hypothesis and reference as follows

BLEU = BP \cdot \exp\left( \frac{1}{N} \sum_{n=1}^{N} \log p_n \right)    (1)

where BP is the brevity penalty and p_n are the n-gram precisions.
When multiple references are provided, the n-gram counts against all references are accumulated to compute the precisions.
Similarly, full test set scores are obtained by accumulating counts over all hypothesis and reference pairs.
The BLEU scores are between 0 and 1, higher being better.
Often BLEU scores are reported as percentages, and "one BLEU point gain" usually means a BLEU increase of 0.01.
It has been argued that METEOR correlates better with human judgment due to higher weight on recall than precision (Banerjee and Lavie, 2005).
METEOR is based on the weighted harmonic mean of the precision and recall measured on unigram matches as follows

METEOR = \frac{10 P R}{R + 9 P} \left( 1 - 0.5 \left( \frac{c}{m} \right)^{3} \right)    (2)

where P = m/h and R = m/r, m is the total number of unigram matches, h is the hypothesis length, r is the reference length, and c is the minimum number of chunks of matching n-grams that covers the alignment.
The second term is a fragmentation penalty which penalizes the harmonic mean by a factor of up to 0.5 when c = m, i.e., when there are no matching n-grams higher than unigrams.
By default, the METEOR script counts the words that match exactly, and words that match after a simple Porter stemmer.
Additional matching modules including WordNet stemming and synonymy may also be used.
When multiple references are provided, the lowest score is reported.
Full test set scores are obtained by accumulating statistics over all test sentences.
The METEOR scores are also between 0 and 1, higher being better.
The scores in the results section are reported as percentages.
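A much-simplified sketch of Equation 2, assuming exact matches only and unique reference words (the real metric uses stemming and synonymy modules and a proper alignment):

```python
# Simplified METEOR sketch (exact matches, unique words assumed): the
# recall-weighted harmonic mean times a chunk-based fragmentation penalty.
def count_chunks(hyp, ref):
    """Crude chunk count: a new chunk starts whenever consecutive matched
    hypothesis words are not adjacent in the reference."""
    pos = {w: i for i, w in enumerate(ref)}
    matched = [pos[w] for w in hyp if w in pos]
    return 1 + sum(1 for a, b in zip(matched, matched[1:]) if b != a + 1)

def meteor(hyp, ref):
    m = sum(1 for w in hyp if w in set(ref))   # unigram matches
    if m == 0:
        return 0.0
    p, r = m / len(hyp), m / len(ref)
    f_mean = 10 * p * r / (r + 9 * p)          # recall-weighted harmonic mean
    penalty = 0.5 * (count_chunks(hyp, ref) / m) ** 3
    return f_mean * (1 - penalty)
```

Reversing a two-word match gives c = m, so the score is exactly halved, which illustrates the "factor of up to 0.5" behaviour of the fragmentation penalty.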
Figure 1: Example consensus network with votes on word arcs ("cat (2) sat (1) on (2) the (2) mat (3)", with alternative arcs "hat (1)", "sitting (1)" and "a (1)").
Each arc represents an alternative word at that position in the sentence and the number of votes for each word is marked in parentheses.
Confusion network decoding usually requires finding the path with the highest confidence in the network.
Based on vote counts, there are three alternatives in the example: "cat sat on the mat", "cat on the mat" and "cat sitting on the mat", each having accumulated 10 votes.
The alignment procedure plays an important role, as by switching the position of the word "sat" and the following NULL in the skeleton, there would be a single highest scoring path through the network; that is, "cat on the mat".

Translation edit rate (TER) (Snover et al., 2006) has been proposed as a more intuitive evaluation metric since it is based on the rate of edits required to transform the hypothesis into the reference.
The TER score is computed as follows

TER = \frac{\text{number of edits}}{r}    (3)

where r is the reference length.
The only difference to word error rate is that the TER allows shifts.
A shift of a sequence of words is counted as a single edit.
The minimum translation edit alignment is usually found through a beam search.
When multiple references are provided, the edits from the closest reference are divided by the average reference length.
Full test set scores are obtained by accumulating the edits and the average reference lengths.
The perfect TER score is 0, and otherwise higher than zero.
The TER score may also be higher than 1 due to insertions.
Also TER is reported as a percentage in the results section.
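A sketch of the edit-rate computation; note that full TER also counts block shifts as single edits, found via a beam search, which this minimal dynamic program omits:

```python
# TER-style score sketch (Equation 3): edits divided by reference length.
# Real TER additionally counts block shifts as single edits via a beam
# search; this minimal version covers insertions, deletions, substitutions.
def ter_no_shifts(hyp, ref):
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i
    for j in range(len(ref) + 1):
        d[0][j] = j
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,       # deletion
                          d[i][j - 1] + 1,       # insertion
                          d[i - 1][j - 1] + sub) # substitution or match
    return d[len(hyp)][len(ref)] / len(ref)
```

Without the shift operation this reduces to word error rate normalised by the reference length; insertions can push the score above 1, as noted above.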
SECTION 3: Confusion Network Decoding.
Confusion network decoding in MT has to pick one hypothesis as the skeleton which determines the word order of the combination.
The other hypotheses are aligned against the skeleton.
Either votes or some form of confidences are assigned to each word in the network.
For example, using "cat sat the mat" as the skeleton, aligning "cat sitting on the mat" and "hat on a mat" against it might yield the following alignments:

cat  sat      *   the  mat
cat  sitting  on  the  mat
hat  *        on  a    mat

where * represents a NULL word.
In graphical form, the resulting confusion network is shown in Figure 1 (the example consensus network with votes on word arcs).
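The vote accumulation in this example can be sketched from the aligned rows (the row encoding is ours, and ties between arcs are broken arbitrarily here, whereas the text notes the full paths are tied):

```python
# Sketch: accumulate votes per confusion-network position from the aligned
# hypotheses, then pick the highest-vote word at each position.
# None encodes a NULL link; per-position ties are broken by first insertion.
from collections import Counter

def consensus(aligned_hyps):
    """aligned_hyps: equal-length rows of words (None for a NULL link)."""
    positions = [Counter(col) for col in zip(*aligned_hyps)]
    best = [max(votes, key=votes.get) for votes in positions]
    return [w for w in best if w is not None]

rows = [["cat", "sat",     None, "the", "mat"],
        ["cat", "sitting", "on", "the", "mat"],
        ["hat", None,      "on", "a",   "mat"]]
```

On the example rows this yields "cat sat on the mat", one of the three tied 10-vote paths described above.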
Different alignment methods yield different confusion networks.
The modified Levenshtein alignment as used in TER is more natural than simple edit distance such as word error rate since machine translation hypotheses may have different word orders while having the same meaning.
As the skeleton determines the word order, the quality of the combination output also depends on which hypothesis is chosen as the skeleton.
Since the modified Levenshtein alignment produces TER scores between the skeleton and the other hypotheses, a natural choice for selecting the skeleton is the minimum average TER score.
The hypothesis resulting in the lowest average TER score when aligned against all other hypotheses is chosen as the skeleton:

\hat{E} = \arg\min_{E_i} \frac{1}{N_s} \sum_{j=1}^{N_s} \mathrm{TER}(E_i, E_j)    (4)

where N_s is the number of systems.
This is equivalent to minimum Bayes risk decoding with uniform posterior probabilities (Sim et al., 2007).
Other evaluation metrics may also be used as the MBR loss function.
For BLEU and METEOR, the loss functions would be 1 - BLEU and 1 - METEOR, respectively.
It has been found that multiple hypotheses from each system may be used to improve the quality of the combination output (Sim et al., 2007).
When using N-best lists from each system, the words may be assigned a different score based on the rank of the hypothesis.
In (Rosti et al., 2007), a simple rank-based score was assigned to a word coming from the nth-best hypothesis.
Due to the computational burden of the TER alignment, only the top hypotheses were considered as possible skeletons, and a limited number of hypotheses per system were aligned.
A similar approach to estimating word posteriors is adopted in this work.
System weights may be used to assign a system specific confidence on each word in the network.
The weights may be based on the systemsâ relative performance on a separate development set or they may be automatically tuned to optimize some evaluation metric on the development set.
In (Rosti et al., 2007), the total confidence of the kth-best confusion network hypothesis E_k, including NULL words, given the ith source sentence F_i was given by

S(E_k|F_i) = \sum_{n=1}^{N_n} \sum_{s=1}^{N_s} w_s c_s(n, w_n^k)    (5)

3.1 Discussion.
There are several problems with the previous confusion network decoding approaches.
First, the decoding can generate ungrammatical hypotheses due to alignment errors and phrases broken by the word-level decoding.
For example, two synonymous words may be aligned to other words not already aligned, which may result in repetitive output.
Second, the additive confidence scores in Equation 5 have no probabilistic meaning and cannot therefore be combined with language model scores.
Language model expansion and re-scoring may help by increasing the probability of more grammatical hypotheses in decoding.
Third, the system weights are independent of the skeleton selection.
Therefore, a hypothesis from a system with a low or zero weight may be chosen as the skeleton.
SECTION 4: Log-Linear Combination with Arbitrary Features.
To address the issue with ungrammatical hypotheses and allow language model expansion and re-scoring, the hypothesis confidence computation is modified.
Instead of summing arbitrary confidence scores as in Equation 5, word posterior probabilities are used as follows

\log S(E_k|F_i) = \sum_{n=1}^{N_n} \log\Big( \sum_{s=1}^{N_s} w_s p_s(n, w_n^k) \Big) + \lambda \log P_{LM}(E_k) + \theta N_{null}(E_k) + \nu \log N_w(E_k)    (6)

where N_n is the number of nodes in the confusion network for the source sentence F_i, N_s is the number of translation systems, w_s is the sth system weight, p_s(n, w) is the posterior of word w produced by system s between nodes n and n+1, λ is the language model weight, log P_LM(E_k) is the LM log-probability, θ is a weight for the number of NULL links N_null(E_k) along the hypothesis E_k, and N_w(E_k) is the number of words in the hypothesis E_k.
The accumulated confidence c_s(n, w) was increased by the rank-based score if the word aligns between nodes n and n+1 in the network.
If no word aligns between nodes n and n+1, the NULL word confidence at that position was increased instead.
The θ term controls the number of NULL words generated in the output and may be viewed as an insertion penalty.
The word posteriors p_s are estimated by scaling the confidences to sum to one for each system over all words between nodes n and n+1.
The system weights w_s are also constrained to sum to one.
Each arc in the confusion network carries the word label and its scores.
The decoder outputs the hypothesis with the highest S(E_k|F_i) given the current set of weights.
Equation 6 may be viewed as a log-linear sum of sentence-level features.
The first feature is the sum of word log-posteriors, the second is the LM log-probability, the third is the log-NULL score and the last is the log-length score.
The last two terms are not completely independent but seem to help based on experimental results.
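Read as a log-linear model, the hypothesis score can be sketched as follows; the exact feature definitions (for instance whether the NULL and length features use raw or log counts) are our reading of the text, not the authors' code:

```python
# Log-linear hypothesis score in the spirit of Equation 6 (a sketch; the
# feature definitions follow our reading of the text, not released code).
# word_posteriors: per-position combined posteriors of the hypothesis words.
import math

def hyp_score(word_posteriors, lm_logprob, n_nulls, n_words,
              lm_weight, null_weight, length_weight):
    score = sum(math.log(p) for p in word_posteriors)  # word log-posteriors
    score += lm_weight * lm_logprob                    # LM feature
    score += null_weight * n_nulls                     # NULL/insertion term
    score += length_weight * n_words                   # length term
    return score
```

The decoder would then output the hypothesis maximising this score under the current system and feature weights.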
The number of paths through a confusion network grows exponentially with the number of nodes.
Therefore expanding a network with an n-gram language model may result in huge lattices if n is high.
Instead of a high order n-gram with heavy pruning, a bi-gram may first be used to expand the lattice.
After optimizing one set of weights for the expanded confusion network, a second set of weights for N-best list re-scoring with a higher order n-gram model may be optimized.
On a test set, the first set of weights is used to generate an N-best list from the bi-gram expanded lattice.
This N-best list is then re-scored with the higher order n-gram.
The second set of weights is used to find the final 1-best from the re-scored N-best list.
SECTION 5: Multiple Confusion Network Decoding.
As discussed in Section 3, there is a disconnect between the skeleton selection and confidence estimation.
To prevent the 1-best from a system with a low or zero weight being selected as the skeleton, confusion networks are generated for each system, and the average TER score in Equation 4 is used to estimate a prior probability for the corresponding network.
All confusion networks are connected to a single start node with NULL arcs which contain the prior probability from the system used as the skeleton for that network.
All confusion networks are connected to a common end node with NULL arcs.
The final arcs have a probability of one.
The prior probabilities in the arcs leaving the first node will be multiplied by the corresponding system weights, which guarantees that a path through a network generated around the 1-best from a system with a zero weight will not be chosen.
The prior probabilities are estimated by viewing the negative average TER scores between the skeleton and other hypotheses as log-probabilities.
These log-probabilities are scaled so that the priors sum to one.
There is a concern that the prior probabilities estimated this way may be inaccurate.
Therefore, the priors may have to be smoothed by a tunable exponent.
However, the optimization experiments showed that the best performance was obtained by having a smoothing factor of 1 which is equivalent to the original priors.
Thus, no smoothing was used in the experiments presented later in this paper.
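The prior estimation, negative average TER treated as a log-probability, normalised to sum to one, with the tunable smoothing exponent just mentioned, can be sketched as:

```python
# Skeleton priors from negative average TER scores, normalised to sum to
# one, with the tunable smoothing exponent (1.0 = no smoothing, as used
# in the paper's experiments).
import math

def skeleton_priors(avg_ters, smooth=1.0):
    logps = [-t for t in avg_ters]              # -TER as a log-probability
    ps = [math.exp(smooth * lp) for lp in logps]
    z = sum(ps)
    return [p / z for p in ps]
```

Lower average TER (a skeleton closer to the other hypotheses) yields a higher prior, and `smooth=1.0` reproduces the unsmoothed priors.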
An example joint network with the priors is shown in Figure 2.
This example has three confusion networks with priors p_1, p_2 and p_3; the total number of nodes in the joint network is N_n.
Figure 2: Three confusion networks with prior probabilities.
Similar combination of multiple confusion networks was presented in (Matusov et al., 2006).
However, this approach did not include sentence-specific prior estimates or word posterior estimates, and did not allow joint optimization of the system and feature weights.
SECTION 6: Weights Optimization.
The optimization of the system and feature weights may be carried out using N-best lists as in (Ostendorf et al., 1991).
A confusion network may be represented by a word lattice and standard tools may be used to generate N-best hypothesis lists including word confidence scores, language model scores and other features.
The N-best list may be reordered using the sentence-level posteriors from Equation 6 for the ith source sentence F_i and the corresponding kth hypothesis E_ik.
The current 1-best hypothesis given a set of weights λ may be represented as follows

\hat{E}_i(\lambda) = \arg\max_k S(E_{ik}|F_i; \lambda)    (7)

The objective is to optimize the 1-best score on a development set given a set of reference translations.
For example, estimating weights which minimize TER between the 1-best hypotheses and the reference translations R_i can be written as

\hat{\lambda} = \arg\min_{\lambda} \sum_i \mathrm{TER}(\hat{E}_i(\lambda), R_i)    (8)
In this work, modified Powellâs method as proposed by (Brent, 1973) is used.
The algorithm explores better weights iteratively starting from a set of initial weights.
First, each dimension is optimized using a grid-based line minimization algorithm.
Then, a new direction based on the changes in the objective function is estimated to speed up the search.
To improve the chances of finding a global optimum, 19 random perturbations of the initial weights are used in parallel optimization runs.
Since the N-best list represents only a small portion of all hypotheses in the confusion network, the optimized weights from one iteration may be used to generate a new N-best list from the lattice for the next iteration.
Similarly, weights which maximize BLEU or METEOR may be optimized.
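The grid-based line minimization with random restarts described above can be sketched as follows (our simplification: a quadratic stand-in objective replaces actual TER/BLEU scoring of decoded hypotheses, and the direction-update step of Powell's method is omitted):

```python
import random

def grid_line_min(f, w, dim, lo=-1.0, hi=1.0, steps=21):
    """Minimize f along one coordinate by brute-force grid search."""
    best_w, best_v = list(w), f(w)
    for t in range(steps):
        cand = list(w)
        cand[dim] = lo + (hi - lo) * t / (steps - 1)
        v = f(cand)
        if v < best_v:
            best_w, best_v = cand, v
    return best_w

def optimize_weights(f, n_dims, restarts=5, sweeps=10, seed=0):
    """Coordinate-wise grid minimization from several random starts."""
    rng = random.Random(seed)
    best_w, best_v = None, float("inf")
    for _ in range(restarts):
        w = [rng.uniform(-1, 1) for _ in range(n_dims)]
        for _ in range(sweeps):
            for d in range(n_dims):
                w = grid_line_min(f, w, d)
        if f(w) < best_v:
            best_w, best_v = w, f(w)
    return best_w

# Invented stand-in objective; a real system would decode the N-best
# list under the weights and measure TER/BLEU against the references.
err = lambda w: (w[0] - 0.5) ** 2 + (w[1] + 0.3) ** 2
w = optimize_weights(err, 2)
print([round(x, 2) for x in w])  # the grid contains the exact optimum here
```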
The same Powell's method has been used to estimate feature weights of a standard feature-based phrasal MT decoder in (Och, 2003).
A more efficient algorithm for log-linear models was also proposed.
In this work, both the system and feature weights are jointly optimized, so the efficient algorithm for the log-linear models cannot be used.
SECTION 7: Results.
The improved system combination method was compared to a simple confusion network decoding without system weights and the method proposed in (Rosti et al., 2007) on the Arabic to English and Chinese to English NIST MT05 tasks.
Six MT systems were combined: three (A,C,E) were phrase- based similar to (Koehn, 2004), two (B,D) were hierarchical similar to (Chiang, 2005) and one (F) was syntax-based similar to (Galley et al., 2006).
All systems were trained on the same data and the outputs used the same tokenization.
The decoder weights for systems A and B were tuned to optimize TER, and others were tuned to optimize BLEU.
All decoder weight tuning was done on the NIST MT02 task.
The joint confusion network was expanded with a bi-gram language model and an N-best list was generated from the lattice for each tuning iteration.
The system and feature weights were tuned on the union of NIST MT03 and MT04 tasks.
All four reference translations available for the tuning and test sets were used.
A first set of weights with the bi-gram LM was optimized with three iterations.
A second set of weights was tuned for 5-gram N-best list re-scoring.
The bi-gram and 5-gram English language models were trained on about 7 billion words.
The final combination outputs were detokenized and cased before scoring.
The tuning set results on the Arabic to English NIST MT03+MT04 task are shown in Table 1.
Table 1: Mixed-case TER and BLEU, and lowercase METEOR scores on Arabic NIST MT03+MT04.

Arabic tuning   TER     BLEU    MTR
system A        44.93   45.71   66.09
system B        46.41   43.07   64.79
system C        46.10   46.41   65.33
system D        44.36   46.83   66.91
system E        45.35   45.44   65.69
system F        47.10   44.52   65.28
no weights      42.35   48.91   67.76
baseline        42.19   49.86   68.34
TER tuned       41.88   51.45   68.62
BLEU tuned      42.12   51.72   68.59
MTR tuned       54.08   38.93   71.42

Table 2: Mixed-case TER and BLEU, and lowercase METEOR scores on Arabic NIST MT05.

Arabic test     TER     BLEU    MTR
system A        42.98   49.58   69.86
system B        43.79   47.06   68.62
system C        43.92   47.87   66.97
system D        40.75   52.09   71.23
system E        42.19   50.86   70.02
system F        44.30   50.15   69.75
no weights      39.33   53.66   71.61
baseline        39.29   54.51   72.20
TER tuned       39.10   55.30   72.53
BLEU tuned      39.13   55.48   72.81
MTR tuned       51.56   41.73   74.79

The best score on each metric is shown in bold face fonts.
The row labeled as no weights corresponds to Equation 5 with uniform system weights and zero NULL weight.
The baseline corresponds to Equation 5 with TER tuned weights.
The following three rows correspond to the improved confusion network decoding with different optimization metrics.
As expected, the scores on the metric used in tuning are the best on that metric.
Also, the combination results are better than any single system on all metrics in the case of TER and BLEU tuning.
However, the METEOR tuning yields extremely high TER and low BLEU scores.
This must be due to the higher weight on the recall compared to precision in the harmonic mean used to compute the METEOR score.

Table 3: Mixed-case TER and BLEU, and lowercase METEOR scores on Chinese NIST MT03+MT04.

Chinese tuning  TER     BLEU    MTR
system A        56.56   29.39   54.54
system B        55.88   30.45   54.36
system C        58.35   32.88   56.72
system D        57.09   36.18   57.11
system E        57.69   33.85   58.28
system F        56.11   36.64   58.90
no weights      53.11   37.77   59.19
baseline        53.40   38.52   59.56
TER tuned       52.13   36.87   57.30
BLEU tuned      53.03   39.99   58.97
MTR tuned       70.27   28.60   63.10
Even though METEOR has been shown to be a good metric on a given MT output, tuning to optimize METEOR results in a high insertion rate and low precision.
The Arabic test set results are shown in Table 2.
The TER and BLEU optimized combination results beat all single system scores on all metrics.
The best results on a given metric are again obtained by the combination optimized for the corresponding metric.
It should be noted that the TER optimized combination has significantly higher BLEU score than the TER optimized baseline.
Compared to the baseline system which is also optimized for TER, the BLEU score is improved by 0.97 points.
Also, the METEOR score using the METEOR optimized weights is very high.
However, the other scores are worse, consistent with the tuning set results.
The tuning set results on the Chinese to English NIST MT03+MT04 task are shown in Table 3.
The baseline combination weights were tuned to optimize BLEU.
Again, the best scores on each metric are obtained by the combination tuned for that metric.
Only the METEOR score of the TER tuned combination is worse than the METEOR scores of systems E and F; the other combinations are better than any single system on all metrics, apart from the METEOR tuned combination.
The test set results (Table 4) clearly follow the tuning results again: the TER tuned combination is the best in terms of TER, the BLEU tuned in terms of BLEU, and the METEOR tuned in terms of METEOR.

Table 4: Mixed-case TER and BLEU, and lowercase METEOR scores on Chinese NIST MT05.
Compared to the baseline, the BLEU score of the BLEU tuned combination is improved by 1.47 points.
Again, the METEOR tuned weights hurt the other metrics significantly.
SECTION 8: Conclusions.
An improved confusion network decoding method combining the word posteriors with arbitrary features was presented.
This allows the addition of language model scores by expanding the lattices or re-scoring N-best lists.
The LM integration should result in more grammatical combination outputs.
Also, confusion networks generated by using the 1-best hypothesis from each system as the skeleton were used, with prior probabilities derived from the average TER scores.
This guarantees that the best path will not be found from a network generated for a system with zero weight.
Compared to the earlier system combination approaches, this method is fully automatic and requires very little additional information on top of the development set outputs from the individual systems to tune the weights.
The new method was evaluated on the Arabic to English and Chinese to English NIST MT05 tasks.
Compared to the baseline from (Rosti et al., 2007), the new method improves the BLEU scores significantly.
The combination weights were tuned to optimize three automatic evaluation metrics: TER, BLEU and METEOR.
The TER tuning seems to yield very good results on Arabic; the BLEU tuning seems to be better on Chinese.
It also seems like METEOR should not be used in tuning due to high insertion rate and low precision.
It would be interesting to know which tuning metric results in the best translations in terms of human judgment.
However, this would require time consuming evaluations such as human mediated TER post-editing (Snover et al., 2006).
The improved confusion network decoding approach allows arbitrary features to be used in the combination.
New features may be added in the future.
Hypothesis alignment is also very important in confusion network generation.
Better alignment methods which take synonymy into account should be investigated.
This method could also benefit from more sophisticated word posterior estimation.
SECTION: Acknowledgments
This work was supported by DARPA/IPTO Contract No. HR001106-C-0022 under the GALE program (approved for public release, distribution unlimited).
The authors would like to thank ISI and University of Edinburgh for sharing their MT system outputs.
|
5 | P06-2124 | BiTAM: Bilingual Topic AdMixture Models for Word Alignment | We propose a novel bilingual topical admixture (BiTAM) formalism for word alignment in statistical machine translation.
Under this formalism, the parallel sentence-pairs within a document-pair are assumed to constitute a mixture of hidden topics; each word-pair follows a topic-specific bilingual translation model.
Three BiTAM models are proposed to capture topic sharing at different levels of linguistic granularity (i.e., at the sentence or word levels).
These models enable the word-alignment process to leverage topical contents of document-pairs.
Efficient variational approximation algorithms are designed for inference and parameter estimation.
With the inferred latent topics, BiTAM models facilitate coherent pairing of bilingual linguistic entities that share common topical aspects.
Our preliminary experiments show that the proposed models improve word alignment accuracy, and lead to better translation quality. | Title: BiTAM: Bilingual Topic AdMixture Models for Word Alignment
ABSTRACT
We propose a novel bilingual topical admixture (BiTAM) formalism for word alignment in statistical machine translation.
Under this formalism, the parallel sentence-pairs within a document-pair are assumed to constitute a mixture of hidden topics; each word-pair follows a topic-specific bilingual translation model.
Three BiTAM models are proposed to capture topic sharing at different levels of linguistic granularity (i.e., at the sentence or word levels).
These models enable the word-alignment process to leverage topical contents of document-pairs.
Efficient variational approximation algorithms are designed for inference and parameter estimation.
With the inferred latent topics, BiTAM models facilitate coherent pairing of bilingual linguistic entities that share common topical aspects.
Our preliminary experiments show that the proposed models improve word alignment accuracy, and lead to better translation quality.
SECTION 1: Introduction
Parallel data has been treated as sets of unrelated sentence-pairs in state-of-the-art statistical machine translation (SMT) models.
Most current approaches emphasize within-sentence dependencies such as the distortion in (Brown et al., 1993), the dependency of alignment in HMM (Vogel et al., 1996), and syntax mappings in (Yamada and Knight, 2001).
Beyond the sentence level, corpus-level word-correlation and contextual-level topical information may help to disambiguate translation candidates and word-alignment choices.
For example, the most frequent source words (e.g., functional words) are likely to be translated into words which are also frequent on the target side; words of the same topic generally bear correlations and similar translations.
Extended contextual information is especially useful when translation models are vague due to their reliance solely on word-pair co-occurrence statistics.
For example, the word shot in "It was a nice shot." should be translated differently depending on the context of the sentence: a goal in the context of sports, or a photo within the context of sightseeing.
Nida (1964) stated that sentence-pairs are tied by the logic-flow in a document-pair; in other words, the document-pair should be word-aligned as one entity instead of being uncorrelated instances.
In this paper, we propose a probabilistic admixture model to capture latent topics underlying the context of document-pairs.
With such topical information, the translation models are expected to be sharper and the word-alignment process less ambiguous.
Previous works on topical translation models concern mainly explicit logical representations of semantics for machine translation.
These include knowledge-based (Nyberg and Mitamura, 1992) and interlingua-based (Dorr and Habash, 2002) approaches.
These approaches can be expensive, and they do not emphasize stochastic translation aspects.
Recent investigations along this line include using word-disambiguation schemes (Carpuat and Wu, 2005) and non-overlapping bilingual word-clusters (Wang et al., 1996; Och, 1999; Zhao et al., 2005) with particular translation models, which showed various degrees of success.
We propose a new statistical formalism: Bilingual Topic AdMixture model, or BiTAM, to facilitate topic-based word alignment in SMT.
Variants of admixture models have appeared in population genetics (Pritchard et al., 2000) and text modeling (Blei et al., 2003).
Statistically, an object is said to be derived from an admixture if it consists of a bag of elements, each sampled independently or coupled in some way, from a mixture model.
In a typical SMT setting, each document- pair corresponds to an object; depending on a chosen modeling granularity, all sentence-pairs or word-pairs in the document-pair correspond to the elements constituting the object.
Correspondingly, a latent topic is sampled for each pair from a prior topic distribution to induce topic-specific translations; and the resulting sentence-pairs and word-pairs are marginally dependent.
Generatively, this admixture formalism enables word translations to be instantiated by topic-specific bilingual models and/or monolingual models, depending on their contexts.
(Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 969-976, Sydney, July 2006. © 2006 Association for Computational Linguistics.)
In this paper we investigate three instances of the BiTAM model; they are data-driven and do not need handcrafted knowledge engineering.
The remainder of the paper is as follows: in section 2, we introduce notations and baselines; in section 3, we propose the topic admixture models; in section 4, we present the learning and inference algorithms; and in section 5 we show experiments of our models.
We conclude with a brief discussion in section 6.
SECTION 2: Notations and Baseline.
In statistical machine translation, one typically uses parallel data to identify entities such as "word-pair", "sentence-pair", and "document-pair".
Formally, we define the following terms[1]:
• A word-pair (fj, ei) is the basic unit for word alignment, where fj is a French word and ei is an English word; j and i are the position indices in the corresponding French sentence f and English sentence e.
• A sentence-pair (f, e) contains a source sentence f of length J and a target sentence e of length I; the two sentences f and e are translations of each other.
• A document-pair (F, E) refers to two documents which are translations of each other. Assuming sentences are one-to-one correspondent, a document-pair has a sequence of N parallel sentence-pairs {(fn, en)}, where (fn, en) is the n-th parallel sentence-pair.
• A parallel corpus C is a collection of M parallel document-pairs: {(Fd, Ed)}.
2.1 Baseline: IBM Model-1.
The translation process can be viewed as operations of word substitutions, permutations, and insertions/deletions (Brown et al., 1993) in a noisy-channel modeling scheme at the parallel sentence-pair level.
The translation lexicon p(f |e) is the key component in this generative process.
An efficient way to learn p(f|e) is IBM1 (Equation 1): IBM1 has a global optimum; it is efficient and easily scalable to large training data; and it is one of the most informative components for re-ranking translations (Och et al., 2004).
We start from IBM1 as our baseline model, while higher-order alignment models can be embedded similarly within the proposed framework.
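The IBM1 baseline can be trained with a few lines of EM; the following toy sketch (ours, not the paper's code) estimates the lexicon t(f|e) on two invented sentence-pairs, omitting the "Null" word for brevity:

```python
from collections import defaultdict

def train_ibm1(pairs, iterations=10):
    """EM training of the IBM Model-1 lexicon t(f|e) on toy sentence-pairs."""
    f_vocab = {f for fs, _ in pairs for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))   # uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for fs, es in pairs:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / norm          # E-step: expected counts
                    count[(f, e)] += c
                    total[e] += c
        for (f, e) in count:
            t[(f, e)] = count[(f, e)] / total[e]  # M-step: renormalize
    return t

pairs = [(["la", "maison"], ["the", "house"]),
         (["la", "fleur"], ["the", "flower"])]
t = train_ibm1(pairs)
print(t[("la", "the")] > t[("maison", "the")])  # -> True
```

The co-occurrence of "la" with "the" in both pairs concentrates t(la|the) toward 1, which is the global-optimum behavior the text refers to.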
SECTION 3: Bilingual Topic AdMixture Model.
Now we describe the BiTAM formalism that captures the latent topical structure and generalizes word alignments and translations beyond the sentence level via topic sharing across sentence-pairs:

Ê = argmax_{E} p(F|E) p(E),   (2)

where p(F|E) is a document-level translation model, generating the document F as one entity.
In a BiTAM model, a document-pair (F, E) is treated as an admixture of topics, which is induced by random draws of a topic, from a pool of topics, for each sentence-pair.
A unique normalized and real-valued vector θ, referred to as a topic-weight vector, which captures the contributions of different topics, is instantiated for each document-pair, so that the sentence-pairs with their alignments are generated from topics mixed according to these common proportions.
Marginally, a sentence-pair is word-aligned according to a unique bilingual model governed by the hidden topical assignments.
Therefore, the sentence-level translations are coupled, rather than being independent as assumed in the IBM models and their extensions.
Because of this coupling of sentence-pairs (via topic sharing across sentence-pairs according to a common topic-weight vector), BiTAM is likely to improve the coherency of translations by treating the document as a whole entity, instead of uncorrelated segments that have to be independently aligned and then assembled.
There are at least two levels at which the hidden topics can be sampled for a document-pair, namely the sentence-pair and the word-pair levels.
We propose three variants of the BiTAM model to capture the latent topics of bilingual documents at different levels.
p(f|e) = Π_{j=1..J} Σ_{i=1..I} p(fj|ei) · p(ei|e).   (1)

[1] We follow the notations in (Brown et al., 1993) for English-French, i.e., e → f, although our models are tested, in this paper, for English-Chinese. We use the end-user terminology for source and target languages.

3.1 BiTAM1: The Framework.
In the first BiTAM model, we assume that topics are sampled at the sentence-level.
Each document-pair is represented as a random mixture of latent topics.
Each topic, topic-k, is represented by a topic-specific word-translation table Bk, which is a translation lexicon: Bi,j,k = p(f = fj | e = ei, z = k), where z is an indicator variable to denote the choice of a topic.

Figure 1: BiTAM models for bilingual document- and sentence-pairs. A node in the graph represents a random variable, and a hexagon denotes a parameter. Un-shaded nodes are hidden variables. All the plates represent replicates. The outermost plate (M-plate) represents M bilingual document-pairs, while the inner N-plate represents the N repeated choices of topics for the sentence-pairs in each document; the inner J-plate represents J word-pairs within each sentence-pair. (a) BiTAM1 samples one topic (denoted by z) per sentence-pair; (b) BiTAM2 utilizes the sentence-level topics for both the translation model (i.e., p(f|e, z)) and the monolingual word distribution (i.e., p(e|z)); (c) BiTAM3 samples one topic per word-pair.
Given a specific topic-weight vector θd for a document-pair, each sentence-pair draws its conditionally independent topics from a mixture of topics.
This generative process, for a document-pair (Fd, Ed), is summarized as below:
1. Sample the sentence-number N from a Poisson(γ).
2. Sample the topic-weight vector θd from a Dirichlet(α).
3. For each sentence-pair (fn, en) in the d-th document-pair:
(a) Sample the sentence-length Jn from a Poisson(δ);
(b) Sample a topic zdn from a Multinomial(θd);
(c) Sample each ej from a monolingual model p(ej);
(d) Sample each word alignment link aj from a uniform model p(aj) (or an HMM);
(e) Sample each fj according to a topic-specific translation lexicon p(fj | e, aj, zn, B).
Note that the sentence-pairs are now connected by the node θd. Therefore, marginally, the sentence-pairs are not independent of each other as in traditional SMT models; instead, they are conditionally independent given the topic-weight vector θd. Specifically, BiTAM1 assumes that each sentence-pair has one single topic.
Thus, the word-pairs within this sentence-pair are conditionally independent of each other given the hidden topic index z of the sentence-pair.
The last two sub-steps (3.d and 3.e) in the BiTAM sampling scheme define a translation model, in which an alignment link aj is proposed and an observation of fj is generated according to the proposed distributions.
We simplify the alignment model of a, as in IBM1, by assuming that aj is sampled uniformly at random.
We assume that, in our model, there are K possible topics that a document-pair can bear.
For each document-pair, a K-dimensional Dirichlet random variable θd, referred to as the topic-weight vector of the document, can take values in the (K-1)-simplex following the probability density

p(θ|α) = ( Γ(Σ_{k=1..K} αk) / Π_{k=1..K} Γ(αk) ) · θ1^(α1-1) · · · θK^(αK-1),   (3)

where the hyperparameter α is a K-dimensional vector with each component αk > 0, and Γ(x) is the Gamma function.
The alignment is represented by a J-dimensional vector a = {a1, a2, · · · , aJ}; for each French word fj at position j, a position variable aj maps it to an English word e_aj at position aj in the English sentence.
The word-level translation lexicon probabilities are topic-specific, and they are parameterized by the matrix B = {Bk}.
For simplicity, in our current models we omit the modelings of the sentence-number N and the sentence-length Jn, and focus only on the bilingual translation model.
Figure 1 (a) shows the graphical model representation for the BiTAM generative scheme discussed so far.
Given the parameters α, B, and the English part E, the joint conditional distribution of the topic-weight vector θ, the topic indicators z, the alignment vectors A, and the document F can be written as:

p(F, A, θ, z | E, α, B) = p(θ|α) Π_{n=1..N} p(zn|θ) p(fn, an | en, α, Bzn),   (4)

where N is the number of sentence-pairs.
Marginalizing out θ and z, we can obtain the marginal conditional probability of generating F from E for each document-pair:

p(F, A | E, α, B) = ∫ p(θ|α) Π_{n=1..N} Σ_zn p(zn|θ) p(fn, an | en, Bzn) dθ,   (5)

where p(fn, an | en, Bzn) is a topic-specific sentence-level translation model.
For simplicity, we assume that the French words fj's are conditionally independent of each other; the alignment variables aj's are independent of other variables and are uniformly distributed a priori.
Therefore, the distribution for each sentence-pair is:

p(fn, an | en, Bzn) = p(fn | en, an, Bzn) p(an | en, Bzn) ∝ Π_{j=1..Jn} p(fnj | e_anj, Bzn).   (6)

A word "Null" is attached to every target sentence to align the source words which miss their translations.
Specifically, the latent Dirichlet allocation (LDA) in (Blei et al., 2003) can be viewed as a special case of BiTAM3, in which the target sentence contains only one word, "Null", and the alignment link a is no longer a hidden variable.
Thus, the conditional likelihood for the entire parallel corpus is given by taking the product of the marginal probabilities of each individual document-pair in Eqn. 5.
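As a numerical illustration of the marginal likelihood above (Eqs. 5-6), the following sketch (ours; the two topic lexicons and the alignment are invented) approximates the integral over θ by Monte Carlo sampling from the Dirichlet prior:

```python
import random

def sent_prob(f, e, a, B_k):
    """Eq. 6 up to the uniform alignment factor: product of
    topic-specific lexicon entries along the alignment a."""
    p = 1.0
    for j, fj in enumerate(f):
        p *= B_k[(fj, e[a[j]])]
    return p

def doc_prob(doc, alignments, alpha, B, samples=2000, seed=0):
    """Monte Carlo estimate of Eq. 5: average over theta ~ Dirichlet(alpha)
    of prod_n sum_k theta_k * p(f_n, a_n | e_n, B_k)."""
    rng = random.Random(seed)
    K = len(B)
    total = 0.0
    for _ in range(samples):
        g = [rng.gammavariate(a, 1.0) for a in alpha]   # Dirichlet sample
        theta = [x / sum(g) for x in g]
        prod = 1.0
        for (f, e), a in zip(doc, alignments):
            prod *= sum(theta[k] * sent_prob(f, e, a, B[k])
                        for k in range(K))
        total += prod
    return total / samples

# Invented 2-topic lexicons over a one-word toy sentence-pair.
B = [{("chat", "cat"): 0.9, ("chat", "hat"): 0.1},
     {("chat", "cat"): 0.2, ("chat", "hat"): 0.8}]
doc = [(["chat"], ["cat"])]
alignments = [[0]]
p = doc_prob(doc, alignments, alpha=[1.0, 1.0], B=B)
print(0.4 < p < 0.7)  # close to the analytic value 0.55
```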
3.2 BiTAM2: Monolingual Admixture.
In general, the monolingual model for English can also be a rich topic-mixture.
This is realized by using the same topic-weight vector θd and the same topic indicator zdn sampled according to θd, as described in §3.1, to introduce not only a topic-dependent translation lexicon, but also a topic-dependent monolingual model of the source language, English in this case, for generating each sentence-pair (Figure 1 (b)).
Now e is generated from a topic-based language model β, instead of a uniform distribution as in BiTAM1; we refer to this model as BiTAM2.
Unlike BiTAM1, where the information observed in ei is indirectly passed to z via the node of fj and the hidden variable aj, in BiTAM2 the topics of corresponding English and French sentences are also strictly aligned, so that the information observed in ei can be directly passed to z, in the hope of finding more accurate topics.
The topics are inferred more directly from the observed bilingual data and, as a result, improve alignment.

3.3 BiTAM3: Word-level Admixture.
It is straightforward to extend the sentence-level BiTAM1 to a word-level admixture model, by sampling a topic indicator zn,j for each word-pair (fj, e_aj) in the n-th sentence-pair, rather than once for all (words) in the sentence (Figure 1 (c)).
This gives rise to our BiTAM3.
The conditional likelihood functions can be obtained by extending the formulas in §3.1 to move the variable zn,j inside the loop over each of the fn,j.

3.4 Incorporation of Word "Null".
Similar to the IBM models, a "Null" word is used for the source words which have no translation counterparts in the target language.
For example, the Chinese words "de" (的), "ba" (把) and "bei" (被) generally do not have translations in English.

SECTION 4: Learning and Inference.
Due to the hybrid nature of the BiTAM models, exact posterior inference of the hidden variables A, z and θ is intractable.
A variational inference is used to approximate the true posteriors of these hidden variables.
The inference scheme is presented for BiTAM1; the algorithms for BiTAM2 and BiTAM3 are straightforward extensions and are omitted.

4.1 Variational Approximation.
To approximate the joint posterior p(θ, z, A | E, F, α, B), we use the fully factorized distribution over the same set of hidden variables:

q(θ, z, A) ≡ q(θ|γ, α) · Π_{n=1..N} q(zn|φn) Π_{j=1..Jn} q(anj, fnj | ϕnj, en, B),   (7)

where the Dirichlet parameter γ, the multinomial parameters (φ1, · · · , φn), and the parameters (ϕn1, · · · , ϕnJn) are known as variational parameters, and can be optimized with respect to the Kullback-Leibler divergence from q(·) to the original p(·) via an iterative fixed-point algorithm.
It can be shown that the fixed-point equations for the variational parameters in BiTAM1 are as follows:

γk = αk + Σ_{n=1..Nd} φdnk,   (8)

φdnk ∝ exp( Ψ(γk) - Ψ(Σ_{k'=1..K} γk') + Σ_{j=1..Jdn} Σ_{i=1..Idn} ϕdnji log B_{fj,ei,k} ),   (9)

ϕdnji ∝ exp( Σ_{k=1..K} φdnk log B_{fj,ei,k} ),   (10)

where Ψ(·) is a digamma function.
Note that in the above formulas φdnk is the variational parameter underlying the topic indicator zdn of the n-th sentence-pair in document d, and it can be used to predict the topic distribution of that sentence-pair.
A close check of {ϕdnji} in Eqn. 10 reveals that it is essentially an exponential model of weighted log probabilities from the individual topic-specific translation lexicons; it can also be viewed as a weighted geometric mean of the individual lexicons' strengths.
Following a variational EM scheme (Beal and Ghahramani, 2002), we estimate the model parameters α and B in an unsupervised fashion.
Essentially, Eqs. (8-10) above constitute the E-step, where the posterior estimations of the latent variables are obtained.
In the M-step, we update α and B so that they improve a lower bound of the log-likelihood defined below:

L(γ, φ, ϕ; α, B) = Eq[log p(θ|α)] + Eq[log p(z|θ)] + Eq[log p(a)] + Eq[log p(f|z, a, B)] - Eq[log q(θ)] - Eq[log q(z)] - Eq[log q(a)].   (11)

The closed-form iterative updating formula for B is:

B_{f,e,k} ∝ Σ_d Σ_{n=1..Nd} Σ_{j=1..Jdn} Σ_{i=1..Idn} δ(f, fj) δ(e, ei) φdnk ϕdnji.   (12)

For α, a closed-form update is not available, and we resort to gradient ascent as in (Sjölander et al., 1996), with restarts to ensure each updated αk > 0.

4.2 Data Sparseness and Smoothing.
The translation lexicons B_{f,e,k} have a potential size of V²K, assuming the vocabulary sizes for both languages are V.
The data sparsity (i.e., the lack of a large volume of document-pairs) poses a more serious problem in estimating B_{f,e,k} than in the monolingual case, for instance, in (Blei et al., 2003).
To reduce the data sparsity problem, we introduce two remedies in our models.
First: Laplace smoothing.
In this approach, the matrix set B, whose columns correspond to parameters of conditional multinomial distributions, is treated as a collection of random vectors all under a symmetric Dirichlet prior; the posterior expectation of these multinomial parameter vectors can be estimated using Bayesian theory.
Second: interpolation smoothing.
Empirically, we can employ a linear interpolation with IBM1 to avoid overfitting:

B_{f,e,k} = λ B_{f,e,k} + (1-λ) p(f|e).   (13)

As in Eqn. 1, p(f|e) is learned via IBM1; λ is estimated via EM on held-out data.

4.3 Retrieving Word Alignments.
Two word-alignment retrieval schemes are designed for BiTAMs: the uni-direction alignment (UDA) and the bi-direction alignment (BDA).
Both use the posterior mean of the alignment indicators adnji, captured by what we call the posterior alignment matrix ϕ ≡ {ϕdnji}.
UDA uses a French word fdnj (at the j-th position of the n-th sentence in the d-th document) to query ϕ to get the best aligned English word (by taking the maximum point in a row of ϕ):

adnj = argmax_{i ∈ [1, Idn]} ϕdnji.   (14)

BDA selects iteratively, for each f, the best aligned e, such that the word-pair (f, e) is the maximum of both row and column, or its neighbors have more aligned pairs than the other competing candidates.
SECTION 5: Experiments.
We evaluate BiTAM models on the word alignment accuracy and the translation quality.
For word alignment accuracy, F-measure is reported, i.e., the harmonic mean of precision and recall against a gold-standard reference set; for translation quality, Bleu (Papineni et al., 2002) and its variation of NIST scores are reported.
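The alignment F-measure mentioned here is simply the harmonic mean of link precision and recall against the gold standard; a small sketch (the link sets are invented):

```python
def alignment_f1(hyp_links, gold_links):
    """Precision/recall/F-measure of predicted alignment links against a
    gold-standard set; links are (source_pos, target_pos) tuples."""
    hyp, gold = set(hyp_links), set(gold_links)
    tp = len(hyp & gold)                      # correctly predicted links
    precision = tp / len(hyp) if hyp else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {(0, 0), (1, 1), (2, 2)}
hyp = {(0, 0), (1, 1), (2, 1)}
print(round(alignment_f1(hyp, gold), 3))  # -> 0.667
```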
Table 1: Training and Test Data Statistics.

Train       #Doc.   #Sent.  #Tokens (English)  #Tokens (Chinese)
Treebank    316     4,172   133K               105K
FBIS.BJ     6,111   105K    4.18M              3.54M
Sinorama    2,373   103K    3.81M              3.60M
XinHua      19,140  115K    3.85M              3.93M
Test        95      627     25,500             19,726
#S ent . #T ok en s En gli sh Ch ine se Tr ee b a n k F B IS . B J Si n or a m a Xi nH ua 31 6 6,1 11 2,3 73 19, 14 0 41 72 10 5K 10 3K 11 5K 13 3K 4.1 8M 3.8 1M 3.8 5M 10 5K 3.5 4M 3.6 0M 3.9 3M Tes t 95 62 7 25, 50 0 19, 72 6 We have two training data settings with different sizes (see Table 1).
The small one consists of 316 document-pairs from Treebank (LDC2002E17).
For the large training data setting, we collected additional document-pairs from FBIS (LDC2003E14, Beijing part), Sinorama (LDC2002E58), and Xinhua News (LDC2002E18; document boundaries are kept in our sentence-aligner (Zhao and Vogel, 2002)).
There are 27,940 document-pairs, containing 327K sentence-pairs or 12 million (12M) English tokens and 11M Chinese tokens.
To evaluate word alignment, we hand-labeled 627 sentence-pairs from 95 document-pairs sampled from the TIDES'01 dryrun data.
It contains 14,769 alignment-links.
To evaluate translation quality, the TIDES'02 Eval. test is used as the development set, and the TIDES'03 Eval. test is used as the unseen test data.
5.1 Model Settings.
First, we explore the effects of Null word and smoothing strategies.
Empirically, we find that adding the "Null" word is always beneficial to all models regardless of the number of topics selected.
Table 2: Topic-specific translation lexicons learned by a 3-topic BiTAM1.

Lexicons                   Topic1  Topic2  Topic3  Cooc.  IBM1    HMM     IBM4
p(ChaoXian (朝鲜)|Korean)   0.0612  0.2138  0.2254  38     0.2198  0.2157  0.2104
p(HanGuo (韩国)|Korean)     0.8379  0.6116  0.0243  46     0.5619  0.4723  0.4993

The third lexicon (Topic-3) prefers to translate the word Korean into ChaoXian (朝鲜: North Korean).
The co-occurrence (Cooc), IBM1&4 and HMM only prefer to translate into HanGuo (韩国: South Korean).
The two candidate translations may both fade out in the learned translation lexicons.
Table 3: Three most distinctive topics are displayed.

Unigram rank 1-10:
Topic A: foreign, china, u.s., development, trade, enterprises, technology, countries, year, economic
Topic B: chongqing, companies, takeovers, company, city, billion, more, economic, reached, yuan
Topic C: sports, disabled, team, people, cause, water, national, games, handicapped, members

The English words for each topic are ranked according to p(e|z) estimated from the topic-specific English sentences weighted by {φdnk}.
33 functional words were removed to highlight the main content of each topic.
Topic A is about US-China economic relationships; Topic B relates to Chinese companies' merging; Topic C shows the sports of handicapped people.

The interpolation smoothing in §4.2 is effective, and it gives slightly better performance than Laplace smoothing over different numbers of topics for BiTAM1.
However, the interpolation leverages the competing baseline lexicon, and this can blur the evaluation of BiTAM's contributions.
Laplace smoothing is therefore chosen to emphasize BiTAM's strength.
Without any smoothing, F-measure drops very quickly over two topics.
In all our following experiments, we use both Null word and Laplace smoothing for the BiTAM models.
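As a concrete illustration of the Laplace (add-one) smoothing used for the BiTAM lexicons, the sketch below smooths a single translation-probability estimate. The `laplace_prob` helper, the counts, and the alpha parameter are illustrative assumptions, not values from the paper.

```python
# Sketch of Laplace (add-alpha) smoothing of a lexicon probability p(f|e).
# All names and counts here are illustrative, not taken from the paper.

def laplace_prob(count, total, vocab_size, alpha=1.0):
    """Smoothed estimate: (count + alpha) / (total + alpha * vocab_size)."""
    return (count + alpha) / (total + alpha * vocab_size)

# An unseen word pair still receives non-zero probability.
assert laplace_prob(0, 100, 50) > 0.0

# Smoothed probabilities over a toy 3-word target vocabulary sum to one.
counts = {"a": 5, "b": 3, "c": 0}
total = sum(counts.values())
probs = [laplace_prob(c, total, len(counts)) for c in counts.values()]
assert abs(sum(probs) - 1.0) < 1e-12
```

The smoothing keeps every lexicon entry non-zero, which is what prevents the rapid F-measure drop reported above when no smoothing is used.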
We train, for comparison, IBM1&4 and HMM models with 8 iterations of IBM1, 7 for HMM and 3 for IBM4 (a 1^8 H^7 4^3 scheme) with a Null word and a maximum fertility of 3 for Chinese-English.
Choosing the number of topics is a model selection problem.
We performed a tenfold cross- validation, and a setting of three-topic is chosen for both the small and the large training data sets.
The overall computational complexity of BiTAM is linear in the number of hidden topics.
5.2 Variational Inference.
Under a non-symmetric Dirichlet prior, the hyperparameter α is initialized randomly; B (the K translation lexicons) is initialized uniformly as is done in IBM1.
Better initialization of B can help to avoid local optima, as shown in §5.5.
With the learned B and α fixed, the variational parameters to be computed in Eqn. (8-10) are initialized randomly; the fixed-point iterative updates stop when the change of the likelihood is smaller than 10^-5.
The convergent variational parameters, corresponding to the highest likelihood from 20 random restarts, are used for retrieving the word alignment for unseen document-pairs.
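The restart strategy above can be sketched as follows; `variational_inference` is a hypothetical stand-in for the actual fixed-point updates, returning a likelihood and the converged parameters.

```python
import random

# Sketch of the random-restart strategy: run inference from several random
# initializations and keep the run with the highest likelihood.
# `variational_inference` is a toy stand-in, NOT the paper's update equations.

def variational_inference(seed):
    """Toy stand-in: deterministic per seed, returns (likelihood, params)."""
    rng = random.Random(seed)
    likelihood = -100.0 - rng.random() * 10.0  # pretend local optimum
    params = {"seed": seed}
    return likelihood, params

def best_of_restarts(n_restarts=20):
    """Keep the restart that converged to the highest likelihood."""
    runs = [variational_inference(seed) for seed in range(n_restarts)]
    return max(runs, key=lambda r: r[0])

likelihood, params = best_of_restarts()
assert all(likelihood >= variational_inference(s)[0] for s in range(20))
```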
To estimate B, β (for BiTAM2) and α, at most eight variational EM iterations are run on the training data.
Figure 2 shows an absolute 2~3% better F-measure over iterations of variational EM using two and three topics of BiTAM1, compared with IBM1.
[Figure 2: Performance over eight variational EM iterations of BiTAM1 (two and three topics), using both the "Null" word and Laplace smoothing; IBM1 is shown over eight EM iterations for comparison.]
5.3 Topic-Specific Translation Lexicons.
The topic-specific lexicons Bk are smaller in size than IBM1, and, typically, they contain topic trends.
For example, in our training data, North Korean is usually related to politics and translated into "ChaoXian" (朝鲜); South Korean occurs more often with economics and is translated as "HanGuo" (韩国).
BiTAMs discriminate the two by considering the topics of the context.
Table 2 shows the lexicon entries for "Korean" learned by a 3-topic BiTAM1.
The values are relatively sharper, and each clearly favors one of the candidates.
The co-occurrence count, however, only favors "HanGuo", and this can easily dominate the decisions of the IBM and HMM models due to their ignorance of the topical context.
Monolingual topics learned by BiTAMs are, roughly speaking, fuzzy especially when the number of topics is small.
With proper filtering, we find that BiTAMs do capture some topics as illustrated in Table 3.
5.4 Evaluating Word Alignments.
We evaluate word alignment accuracies in various settings.
Notably, BiTAM allows testing alignments in two directions: English-to-Chinese (EC) and Chinese-to-English (CE).
Additional heuristics are applied to further improve the accuracies.
Inter takes the intersection of the two directions and generates high-precision alignments; the Union of the two directions gives high recall; Refined grows the intersection with the neighboring word-pairs seen in the union, and yields high-precision and high-recall alignments.

Setting      | IBM1  | HMM   | IBM4  | BiTAM1 UDA/BDA | BiTAM2 UDA/BDA | BiTAM3 UDA/BDA
CE (%)       | 36.27 | 43.00 | 45.00 | 40.13 / 48.26  | 40.26 / 48.63  | 40.47 / 49.02
EC (%)       | 32.94 | 44.26 | 45.96 | 36.52 / 46.61  | 37.35 / 46.30  | 37.54 / 46.62
Refined (%)  | 41.71 | 44.40 | 48.42 | 45.06 / 49.02  | 47.20 / 47.61  | 47.46 / 48.18
Union (%)    | 32.18 | 42.94 | 43.75 | 35.87 / 48.66  | 36.07 / 48.99  | 36.26 / 49.35
Inter (%)    | 39.86 | 44.87 | 48.65 | 43.65 / 43.85  | 44.91 / 45.18  | 45.13 / 45.48
NIST         | 6.458 | 6.822 | 6.926 | 6.937 / 6.954  | 6.904 / 6.976  | 6.967 / 6.962
BLEU         | 15.70 | 17.70 | 18.25 | 17.93 / 18.14  | 18.13 / 18.05  | 18.11 / 18.25

Table 4: Word Alignment Accuracy (F-measure) and Machine Translation Quality for BiTAM models, compared with IBM models and HMM trained with a 1^8 H^7 4^3 scheme on the Treebank data listed in Table 1. For each column, the highlighted alignment (the best one under that model setting) is picked to further evaluate the translation quality.
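A minimal sketch of the three symmetrization heuristics, treating each directional alignment as a set of (source, target) links; the `refined` growing procedure below is a simplified reading of the heuristic, not the paper's exact algorithm.

```python
# Sketch of the alignment-combination heuristics described above.
# Alignments are sets of (src_idx, tgt_idx) links from the two directions.

def inter(ce, ec):
    """Intersection: high-precision links agreed on by both directions."""
    return ce & ec

def union(ce, ec):
    """Union: high-recall links proposed by either direction."""
    return ce | ec

def refined(ce, ec):
    """Grow the intersection with neighboring links seen in the union
    (a simplified variant of the standard refined/grow heuristic)."""
    links = inter(ce, ec)
    candidates = union(ce, ec) - links
    added = True
    while added:
        added = False
        for (i, j) in sorted(candidates - links):
            # accept a union link if it is adjacent to an accepted link
            if any((i + di, j + dj) in links
                   for di in (-1, 0, 1) for dj in (-1, 0, 1)):
                links.add((i, j))
                added = True
    return links

ce = {(0, 0), (1, 1), (2, 3)}          # toy Chinese-to-English links
ec = {(0, 0), (1, 1), (2, 2), (3, 3)}  # toy English-to-Chinese links
assert inter(ce, ec) == {(0, 0), (1, 1)}
assert (2, 2) in refined(ce, ec)       # adjacent to (1, 1), grown in
```

This is why Refined tends to lift the UDA alignments the most: it recovers recall around the high-precision intersection, as the results above show.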
As shown in Table 4, the baseline IBM1 gives its best performance of 36.27% in the CE direction; the UDA alignments from BiTAM1~3 give 40.13%, 40.26%, and 40.47%, respectively, which are significantly better than IBM1.
A close look at the three BiTAMs does not yield significant difference.
BiTAM3 is slightly better in most settings; BiTAM1 is slightly worse than the other two, because the topics sampled at the sentence level are not very concentrated.
The BDA alignments of BiTAM1~3 yield 48.26%, 48.63% and 49.02%, which are even better than HMM and IBM4; their best performances are at 44.26% and 45.96%, respectively.
This is because BDA partially utilizes similar heuristics on the approximated posterior matrix {φ_dnji} instead of direct operations on the alignments of the two directions as in the Refined heuristic.
Practically, we also apply BDA together with heuristics for IBM1, HMM and IBM4, and the best achieved performances are at 40.56%, 46.52% and 49.18%, respectively.
Overall, BiTAM models achieve performances close to or higher than HMM, using only a very simple IBM1 style alignment model.
Similar improvements over the IBM models and HMM are preserved after applying the three kinds of heuristics described above.
As expected, since BDA already encodes some heuristics, it is only slightly improved by the Union heuristic; UDA, similar to the Viterbi-style alignment in IBM and HMM, benefits more from the Refined heuristic.
We also test BiTAM3 on the large training data, and similar improvements are observed over those of the baseline models (see Table 5).
5.5 Boosting BiTAM Models.
The translation lexicons of Bf,e,k are initialized uniformly in our previous experiments.
Better initializations can potentially lead to better performance because they help to avoid undesirable local optima in the variational EM iterations.
We use the lexicons from IBM Model-4 to initialize Bf,e,k to boost the BiTAM models.
This is one way of applying the proposed BiTAM models into current state-of-the-art SMT systems for further improvement.
The boosted alignments are denoted as BUDA and BBDA in Table 5, corresponding to the uni-direction and bi-direction alignments, respectively.
We see an improvement in alignment quality.
5.6 Evaluating Translations.
To further evaluate our BiTAM models, word alignments are used in a phrase-based decoder for evaluating translation qualities.
Similar to the Pharaoh package (Koehn, 2004), we extract phrase-pairs directly from the word alignment together with coherence constraints (Fox, 2002) to remove noisy ones.
We use the TIDES Eval'02 CE test set as development data to tune the decoder parameters; the Eval'03 data (919 sentences) is the unseen test data.
A trigram language model is built using 180 million English words.
Across all the reported comparative settings, the key difference is the bilingual n-gram identity of the phrase-pair, which is collected directly from the underlying word alignment.
Shown in Table 4 are results for the small-data track; the large-data track results are in Table 5.
For the small-data track, the baseline Bleu scores for IBM1, HMM and IBM4 are 15.70, 17.70 and 18.25, respectively.
The UDA alignment of BiTAM1 gives an improvement over the baseline IBM1 from 15.70 to 17.93, and it is close to HMM's performance, even though BiTAM doesn't exploit any sequential structure of words.
The proposed BiTAM2 and BiTAM3 are slightly better than BiTAM1.
Similar improvements are observed for the large-data track (see Table 5).
Setting      | IBM1  | HMM   | IBM4  | BiTAM3 UDA | BDA   | BUDA  | BBDA
CE (%)       | 46.73 | 49.12 | 54.17 | 50.55      | 56.27 | 55.80 | 57.02
EC (%)       | 44.33 | 54.56 | 55.08 | 51.59      | 55.18 | 54.76 | 58.76
Refined (%)  | 54.64 | 56.39 | 58.47 | 56.45      | 54.57 | 58.26 | 56.23
Union (%)    | 42.47 | 51.59 | 52.67 | 50.23      | 57.81 | 56.19 | 58.66
Inter (%)    | 52.24 | 54.69 | 57.74 | 52.44      | 52.71 | 54.70 | 55.35
NIST         | 7.59  | 7.77  | 7.83  | 7.64       | 7.68  | 8.10  | 8.23
BLEU         | 19.19 | 21.99 | 23.18 | 21.20      | 21.43 | 22.97 | 24.07

Table 5: Evaluating Word Alignment Accuracies and Machine Translation Qualities for BiTAM Models, IBM Models, HMMs, and boosted BiTAMs using all the training data listed in Table 1. Other experimental conditions are similar to Table 4.

Note that the boosted BiTAM3, using IBM4 as the seed lexicon, outperforms the Refined IBM4: from 23.18 to 24.07 on Bleu score, and from 7.83 to 8.23 on NIST.
This result suggests a straightforward way to leverage BiTAMs to improve statistical machine translation.
SECTION 6: Conclusion.
In this paper, we proposed a novel formalism for statistical word alignment based on bilingual topic admixture (BiTAM) models.
Three BiTAM models were proposed and evaluated on word alignment and translation qualities against state-of- the-art translation models.
The proposed models significantly improve the alignment accuracy and lead to better translation qualities.
The incorporation of within-sentence dependencies, such as alignment-jumps and distortions, and a better treatment of the source monolingual model are worth further investigation.
|
6 | H05-1115 | Using Random Walks for Question-focused Sentence Retrieval | We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time.
Annotators generated a list of questions central to understanding each story in our corpus.
Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. How many victims have been found?). Judges found sentences providing an answer to each question.
To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization.
Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap.
In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.
ABSTRACT
We consider the problem of question-focused sentence retrieval from complex news articles describing multi-event stories published over time.
Annotators generated a list of questions central to understanding each story in our corpus.
Because of the dynamic nature of the stories, many questions are time-sensitive (e.g. How many victims have been found?). Judges found sentences providing an answer to each question.
To address the sentence retrieval problem, we apply a stochastic, graph-based method for comparing the relative importance of the textual units, which was previously used successfully for generic summarization.
Currently, we present a topic-sensitive version of our method and hypothesize that it can outperform a competitive baseline, which compares the similarity of each sentence to the input question via IDF-weighted word overlap.
In our experiments, the method achieves a TRDR score that is significantly higher than that of the baseline.
SECTION 1: Introduction
Recent work has motivated the need for systems that support Information Synthesis tasks, in which a user seeks a global understanding of a topic or story (Amigo et al., 2004).
In contrast to the classical question answering setting (e.g. TREC-style Q&A (Voorhees and Tice, 2000)), in which the user presents a single question and the system returns a corresponding answer (or a set of likely answers), in this case the user has a more complex information need.
Similarly, when reading about a complex news story, such as an emergency situation, users might seek answers to a set of questions in order to understand it better.
For example, Figure 1 shows the interface to our Web-based news summarization system, which a user has queried for information about Hurricane Isabel.
Understanding such stories is challenging for a number of reasons.
In particular, complex stories contain many sub-events (e.g. the devastation of the hurricane, the relief effort, etc.). In addition, while some facts surrounding the situation do not change (such as Which area did the hurricane first hit?), others may change with time (How many people have been left homeless?).
Therefore, we are working towards developing a system for question answering from clusters of complex stories published over time.
As can be seen at the bottom of Figure 1, we plan to add a component to our current system that allows users to ask questions as they read a story.
They may then choose to receive either a precise answer or a question-focused summary.
Currently, we address the question-focused sentence retrieval task.
While passage retrieval (PR) is clearly not a new problem (e.g. (Robertson et al., 1992; Salton et al., 1993)), it remains important and yet often overlooked.
As noted by (Gaizauskas et al., 2004), while PR is the crucial first step for question answering, Q&A research has typically not emphasized it.
[Figure 1: Question tracking interface to a summarization system. The screenshot shows a question-focused summary of the Hurricane Isabel story together with an "Ask us" box where users can pose questions.]
The specific problem we consider differs from the classic task of PR for a Q&A system in interesting ways, due to the time-sensitive nature of the stories in our corpus.
For example, one challenge is that the answer to a user's question may be updated and reworded over time by journalists in order to keep a running story fresh, or because the facts themselves change.
Therefore, there is often more than one correct answer to a question. We aim to develop a method for sentence retrieval that goes beyond finding sentences that are similar to a single query.
To this end, we propose to use a stochastic, graph-based method.
Recently, graph-based methods have proved useful for a number of NLP and IR tasks such as document re-ranking in ad hoc IR (Kurland and Lee, 2005) and analyzing sentiments in text (Pang and Lee, 2004).
In (Erkan and Radev, 2004), we introduced the LexRank method and successfully applied it to generic, multi-document summarization.
Presently, we introduce topic-sensitive LexRank in creating a sentence retrieval system.
We evaluate its performance against a competitive baseline, which considers the similarity between each sentence and the question (using IDF-weighted word overlap).
We demonstrate that LexRank significantly improves question-focused sentence selection over the baseline.
SECTION 2: Formal description of the problem.
Our goal is to build a question-focused sentence retrieval mechanism using a topic-sensitive version of the LexRank method.
In contrast to previous PR systems such as Okapi (Robertson et al., 1992), which ranks documents for relevancy and then proceeds to find paragraphs related to a question, we address the finer-grained problem of finding sentences containing answers.
In addition, the input to our system is a set of documents relevant to the topic of the query that the user has already identified (e.g. via a search engine).
Our system does not rank the input documents, nor is it restricted in terms of the number of sentences that may be selected from the same document.
The output of our system, a ranked list of sentences relevant to the user's question, can be subsequently used as input to an answer selection system in order to find specific answers from the extracted sentences.
Alternatively, the sentences can be returned to the user as a question-focused summary.
This is similar to snippet retrieval (Wu et al., 2004).
However, in our system answers are extracted from a set of multiple documents rather than on a document-by-document basis.
SECTION 3: Our approach: topic-sensitive LexRank.
3.1 The LexRank method.
In (Erkan and Radev, 2004), the concept of graph-based centrality was used to rank a set of sentences in producing generic multi-document summaries.
To apply LexRank, a similarity graph is produced for the sentences in an input document set.
In the graph, each node represents a sentence.
There are edges between nodes for which the cosine similarity between the respective pair of sentences exceeds a given threshold.
The degree of a given node is an indication of how much information the respective sentence has in common with other sentences.
Therefore, sentences that contain the most salient information in the document set should be very central within the graph. Figure 2 shows an example of a similarity graph for a set of five input sentences, using a cosine similarity threshold of 0.15.
Once the similarity graph is constructed, the sentences are then ranked according to their eigenvector centrality.
As previously mentioned, the original LexRank method performed well in the context of generic summarization.
Below, we describe a topic-sensitive version of LexRank, which is more appropriate for the question-focused sentence retrieval problem.
In the new approach, the score of a sentence is determined by a mixture model of the relevance of the sentence to the query and the similarity of the sentence to other high-scoring sentences.
3.2 Relevance to the question.
In topic-sensitive LexRank, we first stem all of the sentences in a set of articles and compute word IDFs by the following formula:

idf_w = log( (N + 1) / (0.5 + sf_w) )    (1)

where N is the total number of sentences in the cluster, and sf_w is the number of sentences that the word w appears in.
We also stem the question and remove the stop words from it.
Then the relevance of a sentence s to the question q is computed by:

rel(s|q) = Σ_{w∈q} log(tf_{w,s} + 1) × log(tf_{w,q} + 1) × idf_w    (2)

where tf_{w,s} and tf_{w,q} are the number of times w appears in s and q, respectively.
This model has proven to be successful in query-based sentence retrieval (Allan et al., 2003), and is used as our competitive baseline in this study (e.g. Tables 4, 5 and 7).
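A minimal sketch of the baseline scorer defined by Equations 1 and 2 (stemming and stop-word removal omitted for brevity; the token lists and helper names are illustrative):

```python
import math

# Sketch of the baseline relevance scorer from Equations 1 and 2.
# Sentences and questions are represented as lists of (stemmed) tokens.

def idf_weights(sentences):
    """idf_w = log((N + 1) / (0.5 + sf_w)), sf_w = sentence frequency of w."""
    n = len(sentences)
    vocab = {w for s in sentences for w in s}
    return {w: math.log((n + 1) / (0.5 + sum(w in s for s in sentences)))
            for w in vocab}

def rel(sentence, question, idf):
    """rel(s|q): sum over question words of log(tf_ws+1)*log(tf_wq+1)*idf_w."""
    return sum(math.log(sentence.count(w) + 1)
               * math.log(question.count(w) + 1)
               * idf.get(w, 0.0)
               for w in set(question))

sentences = [["storm", "hit", "coast"],     # toy cluster
             ["rescue", "teams", "arrived"],
             ["coast", "evacuated"]]
idf = idf_weights(sentences)
q = ["coast", "storm"]
scores = [rel(s, q, idf) for s in sentences]
assert scores[0] > scores[2] > scores[1]    # sentence 0 matches both query words
```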
3.3 The mixture model.
The baseline system explained above does not make use of any inter-sentence information in a cluster. We hypothesize that a sentence that is similar to the high-scoring sentences in the cluster should also have a high score.
For instance, if a sentence that gets a high score in our baseline model is likely to contain an answer to the question, then a related sentence, which may not be similar to the question itself, is also likely to contain an answer. This idea is captured by the following mixture model, where p(s|q), the score of a sentence s given a question q, is determined as the sum of its relevance to the question (using the same measure as the baseline described above) and the similarity to the other sentences in the document cluster:

p(s|q) = d × rel(s|q) / Σ_{z∈C} rel(z|q) + (1 - d) × Σ_{v∈C} [ sim(s,v) / Σ_{z∈C} sim(z,v) ] × p(v|q)    (3)

where C is the set of all sentences in the cluster.
The value of d, which we will also refer to as the question bias, is a trade-off between the two terms in the equation and is determined empirically.
[Figure 2: LexRank example: sentence similarity graph with a cosine threshold of 0.15. Each vertex lists a sentence index, its salience, and the beginning of the sentence.]
For higher values of d, we give more importance to the relevance to the question compared to the similarity to the other sentences in the cluster.
The denominators in both terms are for normalization, and are described below.
We use the cosine measure weighted by word IDFs as the similarity between two sentences in a cluster:

sim(x,y) = Σ_{w∈x,y} tf_{w,x} tf_{w,y} (idf_w)^2 / ( sqrt(Σ_{xi∈x} (tf_{xi,x} idf_{xi})^2) × sqrt(Σ_{yi∈y} (tf_{yi,y} idf_{yi})^2) )    (4)

Equation 3 can be written in matrix notation as follows:

p = [dA + (1 - d)B]^T p    (5)

A is the square matrix such that, for a given index i, all the elements in the ith column are proportional to rel(i|q).
B is also a square matrix such that each entry B(i, j) is proportional to sim(i, j).
Both matrices are normalized so that row sums add up to 1. Note that as a result of this normalization, all rows of the resulting square matrix Q = [dA + (1 - d)B] also add up to 1.
Such a matrix is called stochastic and defines a Markov chain.
If we view each sentence as a state in a Markov chain, then Q(i, j) specifies the transition probability from state i to state j in the corresponding Markov chain.
The vector p we are looking for in Equation 5 is the stationary distribution of the Markov chain.
An intuitive interpretation of the stationary distribution can be understood by the concept of a random walk on the graph representation of the Markov chain. With probability d, a transition is made from the current node (sentence) to the nodes that are similar to the query.
With probability (1 - d), a transition is made to the nodes that are lexically similar to the current node.
Every transition is weighted according to the similarity distributions.
Each element of the vector p gives the asymptotic probability of ending up at the corresponding state in the long run, regardless of the starting state.
The stationary distribution of a Markov chain can be computed by a simple iterative algorithm, called the power method.1 A simpler version of Equation 5, where A is a uniform matrix and B is a normalized binary matrix, is known as PageRank (Brin and Page, 1998; Page et al., 1998) and is used to rank web pages by the Google search engine.
It was also the model used to rank sentences in (Erkan and Radev, 2004).
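The mixture model of Equation 3 and the power-method solution of Equation 5 can be sketched as follows; the toy relevance scores and similarity values are made up for illustration, and sim is assumed to be precomputed per Equation 4.

```python
# Sketch of topic-sensitive LexRank (Equations 3 and 5) via the power method
# on a toy 3-sentence cluster with made-up relevance and similarity values.

def lexrank(rel_scores, sim, d=0.8, tol=1e-8):
    n = len(rel_scores)
    rel_total = sum(rel_scores)
    # A: every column j proportional to rel(j|q); each row sums to 1.
    a = [[rel_scores[j] / rel_total for j in range(n)] for i in range(n)]
    # B: row-normalized similarity matrix.
    b = [[sim[i][j] / sum(sim[i]) for j in range(n)] for i in range(n)]
    # Q = dA + (1-d)B is row-stochastic.
    q = [[d * a[i][j] + (1 - d) * b[i][j] for j in range(n)] for i in range(n)]
    p = [1.0 / n] * n
    while True:  # power iteration: p <- Q^T p
        p_next = [sum(q[i][j] * p[i] for i in range(n)) for j in range(n)]
        if max(abs(x - y) for x, y in zip(p, p_next)) < tol:
            return p_next
        p = p_next

rel_scores = [0.9, 0.1, 0.5]        # toy rel(s|q) per sentence
sim = [[1.0, 0.2, 0.6],             # toy IDF-weighted cosine similarities
       [0.2, 1.0, 0.3],
       [0.6, 0.3, 1.0]]
p = lexrank(rel_scores, sim)
assert abs(sum(p) - 1.0) < 1e-6     # p is a probability distribution
assert p[0] > p[1]                  # query-relevant sentence ranks higher
```

With d close to 1 the ranking approaches the pure baseline relevance ordering; lowering d lets lexically central but query-dissimilar sentences rise, which is exactly the mixture behavior the text describes.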
3.4 Experiments with topic-sensitive LexRank.
We experimented with different values of d on our training data.
We also considered several threshold values for inter-sentence cosine similarities, where we ignored the similarities between the sentences that are below the threshold.
In the training phase of the experiment, we evaluated all combinations of LexRank with d in the range of [0, 1] (in increments of 0.10) and with a similarity threshold ranging from [0, 0.9] (in increments of 0.05).
We then found all configurations that outperformed the baseline.
These configurations were then applied to our development/test set.
Finally, our best sentence retrieval system was applied to our test data set and evaluated against the baseline.
The remainder of the paper will explain this process and the results in detail.
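The sweep described above amounts to a grid search; in this sketch, `avg_trdr` is a hypothetical stand-in for running one LexRank configuration over the training questions and averaging its TRDR.

```python
from itertools import product

# Sketch of the training-phase parameter sweep over the similarity
# threshold a and the question bias d. `avg_trdr` is a toy surrogate,
# NOT the paper's evaluation; it merely favors low a and high d.

def avg_trdr(a, d):
    return 0.80 + 0.05 * d - 0.02 * a

baseline = 0.8297  # baseline average TRDR on the training data

grid = product([x / 100 for x in range(0, 95, 5)],   # thresholds a: 0 .. 0.90
               [x / 10 for x in range(0, 11)])       # question bias d: 0 .. 1
winners = [(a, d) for a, d in grid if avg_trdr(a, d) > baseline]
assert len(winners) > 0  # configurations kept for the development/test phase
```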
SECTION 4: Experimental setup.
4.1 Corpus.
We built a corpus of 20 multi-document clusters of complex news stories, such as plane crashes, political controversies and natural disasters.
The data clusters and their characteristics are shown in Table 1.
1 The stationary distribution is unique and the power method is guaranteed to converge provided that the Markov chain is ergodic (Seneta, 1981). A non-ergodic Markov chain can be made ergodic by reserving a small probability for jumping to any other state from the current state (Page et al., 1998).
The news articles were collected from various sources.
Newstracker clusters were collected automatically by our Web-based news summarization system.
The number of clusters randomly assigned to the training, development/test and test data sets were 11, 3 and 6, respectively. Next, we assigned each cluster of articles to an annotator, who was asked to read all articles in the cluster.
He or she then generated a list of factual questions key to understanding the story.
Once we collected the questions for each cluster, two judges independently annotated nine of the training clusters.
For each sentence and question pair in a given cluster, the judges were asked to indicate whether or not the sentence contained a complete answer to the question.
Once an acceptable rate of inter-judge agreement was verified on the first nine clusters (Kappa (Carletta, 1996) of 0.68), the remaining 11 clusters were annotated by one judge each. In some cases, the judges did not find any sentences containing the answer for a given question. Such questions were removed from the corpus.
The final number of questions annotated for answers over the entire corpus was 341, and the distribution of questions per cluster can be found in Table 1.
4.2 Evaluation metrics and methods.
To evaluate our sentence retrieval mechanism, we produced extract files, which contain a list of sentences deemed to be relevant to the question, for the system and from human judgment.
To compare different configurations of our system to the baseline system, we produced extracts at a fixed length of 20 sentences.
While evaluations of question answering systems are often based on a shorter list of ranked sentences, we chose to generate longer lists for several reasons.
One is that we are developing a PR system, of which the output can then be input to an answer extraction system for further processing.
In such a setting, we would most likely want to generate a relatively longer list of candidate sentences.
As previously mentioned, in our corpus the questions often have more than one relevant answer, so ideally, our PR system would find many of the relevant sentences, sending them on to the answer component to decide which answer(s) should be returned to the user.
Each system's extract file lists the document and sentence numbers of the top 20 sentences. The gold standard extracts list the sentences judged as containing answers to a given question by the annotators (and therefore have variable sizes) in no particular order.2

Cluster | Sources | Articles | Questions | Data set | Sample question
Algerian terror threat | AFP, UPI | 2 | 12 | train | What is the condition under which GIA will take its action?
Milan plane crash | MSNBC, CNN, ABC, Fox, USAToday | 9 | 15 | train | How many people were in the building at the time of the crash?
Turkish plane crash | BBC, ABC, FoxNews, Yahoo | 10 | 12 | train | To where was the plane headed?
Moscow terror attack | UPI, AFP, AP | 7 | 7 | train | How many people were killed in the most recent explosion?
Rhode Island club fire | MSNBC, CNN, ABC, Lycos, Fox, BBC, Ananova | 10 | 8 | train | Who was to blame for the fire?
FBI most wanted | AFP, UPI | 3 | 14 | train | How much is the State Department offering for information leading to bin Laden's arrest?
Russia bombing | AP, AFP | 2 | 11 | train | What was the cause of the blast?
Bali terror attack | CNN, FoxNews, ABC, BBC, Ananova | 10 | 30 | train | What were the motivations of the attackers?
Washington DC sniper | FoxNews, Haaretz, BBC, Washington Times, CBS | 8 | 28 | train | What kinds of equipment or weapons were used in the killings?
GSPC terror group | Newstracker | 8 | 29 | train | What are the charges against the GSPC suspects?
China earthquake | Novelty 43 | 25 | 18 | train | What was the magnitude of the earthquake in Zhangjiakou?
Gulfair | ABC, BBC, CNN, USAToday, FoxNews, Washington Post | 11 | 29 | dev/test | How many people were on board?
David Beckham trade | AFP | 20 | 28 | dev/test | How long had Beckham been playing for MU before he moved to RM?
Miami airport evacuation | Newstracker | 12 | 15 | dev/test | How many concourses does the airport have?
US hurricane | DUC d04a | 14 | 14 | test | In which places had the hurricane landed?
EgyptAir crash | Novelty 4 | 25 | 29 | test | How many people were killed?
Kursk submarine | Novelty 33 | 25 | 30 | test | When did the Kursk sink?
Hebrew University bombing | Newstracker | 11 | 27 | test | How many people were injured?
Finland mall bombing | Newstracker | 9 | 15 | test | How many people were in the mall at the time of the bombing?
Putin visits England | Newstracker | 12 | 20 | test | What issue concerned British human rights groups?

Table 1: Corpus of complex news stories.
Thegold standard extracts list the sentences judged ascontaining answers to a given question by the annotators (and therefore have variable sizes) in no particular order.2 We evaluated the performance of the systems using two metrics - Mean Reciprocal Rank (MRR)(Voorhees and Tice, 2000) and Total ReciprocalDocument Rank (TRDR) (Radev et al., 2005).MRR, used in the TREC Q&A evaluations, is thereciprocal rank of the first correct answer (or sentence, in our case) to a given question.
This measuregives us an idea of how far down we must look in theranked list in order to find a correct answer.
To contrast, TRDR is the total of the reciprocal ranks of allanswers found by the system.
In the context of answering questions from complex stories, where thereis often more than one correct answer to a question,and where answers are typically time-dependent, weshould focus on maximizing TRDR, which gives us 2For clusters annotated by two judges, all sentences chosenby at least one judge were included.
a measure of how many of the relevant sentenceswere identified by the system.
However, we reportboth the average MRR and TRDR over all questionsin a given data set.
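A minimal sketch of the two metrics on a single question; `ranked` and `relevant` are toy inputs, and the paper averages these per-question values over all questions in a data set.

```python
# Sketch of the two evaluation metrics: MRR uses only the first relevant
# sentence, while TRDR sums reciprocal ranks over all relevant sentences
# found in the extract.

def reciprocal_ranks(ranked, relevant):
    """1-based reciprocal ranks of relevant items in the ranked extract."""
    return [1.0 / (i + 1) for i, s in enumerate(ranked) if s in relevant]

def mrr(ranked, relevant):
    rr = reciprocal_ranks(ranked, relevant)
    return rr[0] if rr else 0.0

def trdr(ranked, relevant):
    return sum(reciprocal_ranks(ranked, relevant))

ranked = ["s3", "s7", "s1", "s9"]   # toy system extract, best-ranked first
relevant = {"s7", "s9"}             # toy gold-standard answer sentences
assert mrr(ranked, relevant) == 0.5           # first hit at rank 2
assert trdr(ranked, relevant) == 0.5 + 0.25   # hits at ranks 2 and 4
```

TRDR rewards a system for finding every answer-bearing sentence, which is why it suits the multi-answer, time-dependent questions in this corpus.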
SECTION 5: LexRank versus the baseline system.
In the training phase, we searched the parameter space for the values of d (the question bias) and the similarity threshold in order to optimize the resulting TRDR scores.
For our problem, we expected that a relatively low similarity threshold paired with a high question bias would achieve the best results.
Table 2 shows the effect of varying the similarity threshold.3 The notation LR[a, d] is used, where a is the similarity threshold and d is the question bias.
The optimal range for the parameter a was between 0.14 and 0.20.
This is intuitive because if the threshold is too high, such that only the most lexically similar sentences are represented in the graph, the method does not find sentences that are related but are more lexically diverse (e.g. paraphrases).
3 A threshold of -1 means that no threshold was used, such that all sentences were included in the graph.

System | Ave. MRR | Ave. TRDR
LR[-1.0,0.65] | 0.5270 | 0.8117
LR[0.02,0.65] | 0.5261 | 0.7950
LR[0.16,0.65] | 0.5131 | 0.8134
LR[0.18,0.65] | 0.5062 | 0.8020
LR[0.20,0.65] | 0.5091 | 0.7944
LR[-1.0,0.80] | 0.5288 | 0.8152
LR[0.02,0.80] | 0.5324 | 0.8043
LR[0.16,0.80] | 0.5184 | 0.8160
LR[0.18,0.80] | 0.5199 | 0.8154
LR[0.20,0.80] | 0.5282 | 0.8152

Table 2: Training phase: effect of similarity threshold (a) on Ave. MRR and TRDR.

System | Ave. MRR | Ave. TRDR
LR[0.02,0.65] | 0.5261 | 0.7950
LR[0.02,0.70] | 0.5290 | 0.7997
LR[0.02,0.75] | 0.5299 | 0.8013
LR[0.02,0.80] | 0.5324 | 0.8043
LR[0.02,0.85] | 0.5322 | 0.8038
LR[0.02,0.90] | 0.5323 | 0.8077
LR[0.20,0.65] | 0.5091 | 0.7944
LR[0.20,0.70] | 0.5244 | 0.8105
LR[0.20,0.75] | 0.5285 | 0.8137
LR[0.20,0.80] | 0.5282 | 0.8152
LR[0.20,0.85] | 0.5317 | 0.8203
LR[0.20,0.90] | 0.5368 | 0.8265

Table 3: Training phase: effect of question bias (d) on Ave. MRR and TRDR.
Table 3 shows theeffect of varying the question bias at two differentsimilarity thresholds (0.02 and 0.20).
It is clear that a high question bias is needed. However, a small probability for jumping to a node that is lexically similar to the given sentence (rather than the question itself) is needed.
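The biased walk just described can be sketched as a power iteration. This is not the authors' implementation: the similarity function below is a plain cosine over term counts (the paper uses an IDF-modified cosine), and the parameter names a and d follow the LR[a, d] notation.

```python
from collections import Counter
import math

def overlap_sim(s1, s2):
    """Toy similarity: cosine over raw term counts (stand-in for IDF-modified cosine)."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def topic_lexrank(sentences, question, a=0.2, d=0.8, iters=50):
    n = len(sentences)
    # Question-bias distribution: normalized similarity of each sentence to the question.
    rel = [overlap_sim(s, question) for s in sentences]
    rel_total = sum(rel) or 1.0
    bias = [r / rel_total for r in rel]
    # Inter-sentence edges, kept only when similarity exceeds the threshold a.
    sim = [[overlap_sim(si, sj) if i != j else 0.0 for j, sj in enumerate(sentences)]
           for i, si in enumerate(sentences)]
    sim = [[w if w > a else 0.0 for w in row] for row in sim]
    row_sums = [sum(row) for row in sim]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            # With prob. d jump toward the question; with prob. 1-d follow similarity edges.
            walk = sum(scores[j] * sim[j][i] / row_sums[j]
                       for j in range(n) if row_sums[j] > 0)
            new.append(d * bias[i] + (1 - d) * walk)
        scores = new
    return scores
```

Sentences are then ranked by score; a high d keeps the ranking anchored to the question, while the (1 - d) term lets lexically diverse but well-connected sentences surface.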
Table 4 shows the configurations of LexRank that performed better than the baseline system on the training data, based on mean TRDR scores over the 184 training questions. We applied all four of these configurations to our unseen development/test data, in order to see if we could further differentiate their performances.
5.1 Development/testing phase.
The scores for the four LexRank systems and the baseline on the development/test data are shown in Table 5.

| System | Ave. MRR | Ave. TRDR |
|---|---|---|
| Baseline | 0.5518 | 0.8297 |
| LR[0.14,0.95] | 0.5267 | 0.8305 |
| LR[0.18,0.90] | 0.5376 | 0.8382 |
| LR[0.18,0.95] | 0.5421 | 0.8382 |
| LR[0.20,0.95] | 0.5404 | 0.8311 |

Table 4: Training phase: systems outperforming the baseline in terms of TRDR score.

| System | Ave. MRR | Ave. TRDR |
|---|---|---|
| Baseline | 0.5709 | 1.0002 |
| LR[0.14,0.95] | 0.5882 | 1.0469 |
| LR[0.18,0.90] | 0.5820 | 1.0288 |
| LR[0.18,0.95] | 0.5956 | 1.0411 |
| LR[0.20,0.95] | 0.6068 | 1.0601 |

Table 5: Development testing evaluation.

| Cluster | B-MRR | LR-MRR | B-TRDR | LR-TRDR |
|---|---|---|---|---|
| Gulfair | 0.5446 | 0.5461 | 0.9116 | 0.9797 |
| David Beckham trade | 0.5074 | 0.5919 | 0.7088 | 0.7991 |
| Miami airport evacuation | 0.7401 | 0.7517 | 1.7157 | 1.7028 |

Table 6: Average scores by cluster: baseline versus LR[0.20,0.95].
This time, all four LexRank systems outperformed the baseline, both in terms of average MRR and TRDR scores. An analysis of the average scores over the 72 questions within each of the three clusters for the best system, LR[0.20,0.95], is shown in Table 6.
While LexRank outperforms the baseline system on the first two clusters both in terms of MRR and TRDR, their performances are not substantially different on the third cluster. Therefore, we examined properties of the questions within each cluster in order to see what effect they might have on system performance.

We hypothesized that the baseline system, which compares the similarity of each sentence to the question using IDF-weighted word overlap, should perform well on questions that provide many content words. In contrast, LexRank might perform better when the question provides fewer content words, since it considers both similarity to the query and inter-sentence similarity. Out of the 72 questions in the development/test set, the baseline system outperformed LexRank on 22 of the questions. In fact, the average number of content words among these 22 questions was slightly, but not significantly, higher than the average on the remaining questions (3.63 words per question versus 3.46). Given this observation, we experimented with two mixed strategies, in which the number of content words in a question determined whether LexRank or the baseline system was used for sentence retrieval. We tried threshold values of 4 and 6 content words; however, this did not improve the performance over the pure strategy of system LR[0.20,0.95].
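The baseline scorer and the mixed-strategy dispatch described above might look as follows; the function names, the toy IDF table, and the content-word set are illustrative stand-ins, not the authors' code.

```python
def idf_overlap(question_words, sentence_words, idf):
    """Baseline: IDF-weighted word overlap between question and candidate sentence."""
    return sum(idf.get(w, 0.0) for w in set(question_words) & set(sentence_words))

def choose_system(question_words, content_words, threshold=4):
    """Mixed strategy: use the overlap baseline only when the question
    supplies at least `threshold` content words, else fall back to LexRank."""
    n_content = sum(1 for w in question_words if w in content_words)
    return "baseline" if n_content >= threshold else "lexrank"

idf = {"kursk": 2.0, "sink": 1.5, "the": 0.1}  # toy IDF values
q = ["what", "caused", "the", "kursk", "to", "sink"]
print(idf_overlap(q, ["the", "kursk", "sank"], idf))  # 2.1
```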
Therefore, we applied this system versus the baseline to our unseen test set of 134 questions.

| System | Ave. MRR | Ave. TRDR |
|---|---|---|
| Baseline | 0.5780 | 0.8673 |
| LR[0.20,0.95] | 0.6189 | 0.9906 |
| p-value | n/a | 0.0619 |

Table 7: Testing phase: baseline vs. LR[0.20,0.95].
5.2 Testing phase.
As shown in Table 7, LR[0.20,0.95] outperformed the baseline system on the test data both in terms of average MRR and TRDR scores.
The improvement in average TRDR score was statistically significant with a p-value of 0.0619.
Since we are interested in a passage retrieval mechanism that finds sentences relevant to a given question, providing input to the question answering component of our system, the improvement in average TRDR score is very promising. While we saw in Section 5.1 that LR[0.20,0.95] may perform better on some question or cluster types than others, we conclude that it beats the competitive baseline when one is looking to optimize mean TRDR scores over a large set of questions. However, in future work, we will continue to improve the performance, perhaps by developing mixed strategies using different configurations of LexRank.
SECTION 6: Discussion.
The idea behind using LexRank for sentence retrieval is that a system that considers only the similarity between candidate sentences and the input query, and not the similarity between the candidate sentences themselves, is likely to miss some important sentences. When using any metric to compare sentences and a query, there is always likely to be a tie between multiple sentences (or, similarly, there may be cases where fewer than the number of desired sentences have similarity scores above zero). LexRank effectively provides a means to break such ties. An example of such a scenario is illustrated in Tables 8 and 9, which show the top ranked sentences by the baseline and LexRank, respectively, for the question What caused the Kursk to sink? from the Kursk submarine cluster.
It can be seen that all top five sentences chosen by the baseline system have the same sentence score (similarity to the query), yet the top ranking two sentences are not actually relevant according to the judges. In contrast, LexRank achieved a better ranking of the sentences since it is better able to differentiate between them.

Table 8: Top ranked sentences using the baseline system on the question What caused the Kursk to sink?

1. (Score 4.2282, Relevant: N) The Russian governmental commission on the accident of the submarine Kursk sinking in the Barents Sea on August 12 has rejected 11 original explanations for the disaster, but still cannot conclude what caused the tragedy indeed, Russian Deputy Premier Ilya Klebanov said here Friday.
2. (Score 4.2282, Relevant: N) There has been no final word on what caused the submarine to sink while participating in a major naval exercise, but Defense Minister Igor Sergeyev said the theory that Kursk may have collided with another object is receiving increasingly concrete confirmation.
3. (Score 4.2282, Relevant: Y) Russian Deputy Prime Minister Ilya Klebanov said Thursday that collision with a big object caused the Kursk nuclear submarine to sink to the bottom of the Barents Sea.
4. (Score 4.2282, Relevant: Y) Russian Deputy Prime Minister Ilya Klebanov said Thursday that collision with a big object caused the Kursk nuclear submarine to sink to the bottom of the Barents Sea.
5. (Score 4.2282, Relevant: N) President Clinton's national security adviser, Samuel Berger, has provided his Russian counterpart with a written summary of what U.S. naval and intelligence officials believe caused the nuclear-powered submarine Kursk to sink last month in the Barents Sea, officials said Wednesday.
It should be noted that both for the LexRank and baseline systems, chronological ordering of the documents and sentences is preserved, such that in cases where two sentences have the same score, the one published earlier is ranked higher.
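This tie-breaking rule falls out naturally from a stable sort: order the pool chronologically first, then sort by descending score, so equal-scoring sentences keep the earlier publication on top. A minimal sketch (the tuple layout is an assumption):

```python
def rank_sentences(sents):
    """sents: list of (pub_date, score, text) tuples.
    Chronological order first, then a stable sort by descending score,
    so ties keep the earlier-published sentence on top."""
    chronological = sorted(sents, key=lambda s: s[0])
    return sorted(chronological, key=lambda s: -s[1])

ranked = rank_sentences([
    ("2000-08-18", 4.2282, "later tie"),
    ("2000-08-14", 4.2282, "earlier tie"),
    ("2000-08-15", 5.0,    "top score"),
])
print([t for _, _, t in ranked])  # ['top score', 'earlier tie', 'later tie']
```

Python's `sorted` guarantees stability, which is what makes the two-pass approach correct.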
SECTION 7: Conclusion.
We presented topic-sensitive LexRank and applied it to the problem of sentence retrieval. In a Web-based news summarization setting, users of our system could choose to see the retrieved sentences (as in Table 9) as a question-focused summary. As indicated in Table 9, each of the top three sentences was judged by our annotators as providing a complete answer to the respective question. While the first two sentences provide the same answer (a collision caused the Kursk to sink), the third sentence provides a different answer (an explosion caused the disaster).
While the last two sentences do not provide answers according to our judges, they do provide context information about the situation.
Alternatively, the user might prefer to see the extracted answers from the retrieved sentences.

Table 9: Top ranked sentences using the LR[0.20,0.95] system on the question What caused the Kursk to sink?

1. (Score 0.0133, Relevant: Y) Russian Deputy Prime Minister Ilya Klebanov said Thursday that collision with a big object caused the Kursk nuclear submarine to sink to the bottom of the Barents Sea.
2. (Score 0.0133, Relevant: Y) Russian Deputy Prime Minister Ilya Klebanov said Thursday that collision with a big object caused the Kursk nuclear submarine to sink to the bottom of the Barents Sea.
3. (Score 0.0125, Relevant: Y) The Russian navy refused to confirm this, but officers have said an explosion in the torpedo compartment at the front of the submarine apparently caused the Kursk to sink.
4. (Score 0.0124, Relevant: N) President Clinton's national security adviser, Samuel Berger, has provided his Russian counterpart with a written summary of what U.S. naval and intelligence officials believe caused the nuclear-powered submarine Kursk to sink last month in the Barents Sea, officials said Wednesday.
5. (Score 0.0123, Relevant: N) There has been no final word on what caused the submarine to sink while participating in a major naval exercise, but Defense Minister Igor Sergeyev said the theory that Kursk may have collided with another object is receiving increasingly concrete confirmation.
In this case, the sentences selected by our system would be sent to an answer identification component for further processing. As discussed in Section 2, our goal was to develop a topic-sensitive version of LexRank and to use it to improve a baseline system, which had previously been used successfully for query-based sentence retrieval (Allan et al., 2003). In terms of this task, we have shown that over a large set of unaltered questions written by our annotators, LexRank can, on average, outperform the baseline system, particularly in terms of TRDR scores.
SECTION 8: Acknowledgments.
We would like to thank the members of the CLAIR group at Michigan and in particular Siwei Shen and Yang Ye for their assistance with this project.
7 | W06-3909 | A Bootstrapping Algorithm for Automatically Harvesting Semantic Relations | "In this paper, we present Espresso, a weakly-supervised iterative algorithm combined with a web-bas(...TRUNCATED) | "Title: A Bootstrapping Algorithm for Automatically Harvesting Semantic Relations\n\nABSTRACT\nIn th(...TRUNCATED) |
8 | C10-1045 | Better Arabic Parsing: Baselines, Evaluations, and Analysis | "In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by a(...TRUNCATED) | "Title: Better Arabic Parsing: Baselines, Evaluations, and Analysis\n\nABSTRACT\nIn this paper, we o(...TRUNCATED) |
9 | C02-1025 | Named Entity Recognition: A Maximum Entropy Approach Using Global Information | "This paper presents a maximum entropy-based named entity recognizer (NER).\nIt differs from previou(...TRUNCATED) | "Title: Named Entity Recognition: A Maximum Entropy Approach Using Global Information\n\nABSTRACT\nT(...TRUNCATED) |
10 | P00-1025 | Finite-State Non-Concatenative Morphotactics | "Finite-state morphology in the general tradition of the Two-Level and Xerox implementations has pro(...TRUNCATED) | "Title: Finite-State Non-Concatenative Morphotactics\n\nABSTRACT\nFinite-state morphology in the gen(...TRUNCATED) |
Dataset Card for MIRA RAG Corpus
MIRA RAG Corpus is a dataset comprising documents derived from research papers in the ACL Anthology. The dataset is designed for tasks such as information retrieval and question answering.
The related datasets can be found at:
Licensing Information
The dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) (Creative Commons Attribution 4.0 International) license.
Source Information
These papers were acquired from the ACL Anthology and cover a wide range of topics in computational linguistics and natural language processing.
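A row of the corpus follows the five columns visible in the preview (doc_idx, paper_id, doc_title, doc_abs, doc_text). The Hub repo id is not shown in this excerpt, so rather than call `datasets.load_dataset`, the sketch below mimics the schema with an in-memory example and a toy keyword lookup.

```python
from dataclasses import dataclass

@dataclass
class CorpusDoc:
    doc_idx: int        # int64 row index
    paper_id: str       # 8-character ACL Anthology id, e.g. "D10-1083"
    doc_title: str
    doc_abs: str        # abstract (may be empty)
    doc_text: str       # full text, beginning "Title: ... ABSTRACT ..."

def keyword_hits(docs, term):
    """Toy retrieval over the corpus: paper ids whose full text mentions `term`."""
    return [d.paper_id for d in docs if term.lower() in d.doc_text.lower()]

corpus = [
    CorpusDoc(1, "D10-1083", "Simple Type-Level Unsupervised POS Tagging",
              "Part-of-speech (POS) tag distributions are known to exhibit sparsity ...",
              "Title: Simple Type-Level Unsupervised POS Tagging\n\nABSTRACT\n..."),
]
print(keyword_hits(corpus, "POS Tagging"))  # ['D10-1083']
```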