## **Word-Alignment-Based Segment-Level Machine Translation Evaluation using Word Embeddings**
**Junki Matsuo** and **Mamoru Komachi**
Graduate School of System Design,
Tokyo Metropolitan University, Japan
matsuo-junki@ed.tmu.ac.jp, komachi@tmu.ac.jp

**Katsuhito Sudoh** _∗_
NTT Communication Science Laboratories, Japan
sudoh@is.naist.jp

_∗_ The last author is currently affiliated with Nara Institute of Science and Technology, Japan.
**Abstract**
One of the most important problems in
machine translation (MT) evaluation is to
evaluate the similarity between translation
hypotheses with different surface forms
from the reference, especially at the segment level. We propose to use word
embeddings to perform word alignment
for segment-level MT evaluation. We
performed experiments with three types
of alignment methods using word embeddings. We evaluated our proposed
methods with various translation datasets.
Experimental results show that our proposed methods outperform previous word
embeddings-based methods.
**1** **Introduction**
Automatic evaluation of machine translation (MT)
systems without human intervention has gained
importance. For example, BLEU (Papineni et al., 2002) has driven progress in MT research over the last decade. However, BLEU correlates poorly with human judgment at the segment level because it was originally proposed for system-level evaluation. Segment-level evaluation is crucial for analyzing MT outputs to improve system accuracy, but few studies address the issue of segment-level evaluation of MT outputs.
Another issue in MT evaluation is evaluating MT hypotheses that are semantically equivalent to the reference but differ from it in surface form. For instance, BLEU ignores any word that does not match the reference at the surface level. METEOR-Universal (Denkowski and Lavie, 2014) handles word similarities better,
but it uses external resources that require time-consuming annotation. It is also not as simple as BLEU, and its score is difficult to interpret.
DREEM (Chen and Guo, 2015), another metric
that addresses the issue of word similarity, does
not require human annotations and uses distributed
representations for MT evaluation. It shows higher
accuracy than popular metrics such as BLEU and
METEOR.
Therefore, we follow the approach of DREEM
to propose a lightweight MT evaluation measure
that employs only a raw corpus as an external resource. We adopt sentence similarity measures
proposed by Song and Roth (2015) for a Semantic
Textual Similarity (STS) task. They use word embeddings to align words so that the sentence similarity score takes near-synonymous expressions
into account and propose three types of heuristics using m:n (average), 1:n (maximum) and 1:1
(Hungarian) alignments. It has been reported that
sentence similarity calculated with a word alignment based on word embeddings shows high accuracy on STS tasks.
We evaluated word-alignment-based sentence similarity for MT evaluation using the WMT12, WMT13, and WMT15 datasets of European–English translation and the WAT2015 and NTCIR8 datasets of Japanese–English translation.
Experimental results confirmed that the maximum alignment similarity outperforms previous
word embeddings-based methods in European–
English translation tasks and the average alignment similarity has the highest human correlation
in Japanese–English translation tasks.
**2** **Related Work**
Several studies have examined the automatic evaluation of MT systems. The de facto standard automatic MT evaluation metric, BLEU (Papineni et al., 2002), may assign an inappropriate score to a translation hypothesis that uses similar but different words, because it considers only word n-gram precision (Callison-Burch et al., 2006). METEOR-Universal (Denkowski and Lavie, 2014) alleviates the problem of surface mismatch by using a thesaurus and a stemmer, but it needs external resources such as WordNet.
In this work, we used a distributed word representation to evaluate semantic relatedness between
the hypothesis and reference sentences. This approach has the advantage that it can be implemented with only a raw monolingual corpus.
To address the problem of word n-gram precision, Wang and Merlo (2016) propose to smooth it with word embeddings. They also employ maximum alignment between the n-grams of the hypothesis and reference sentences, along with a threshold to cut off n-gram embeddings with low similarity. Their work is similar to our maximum alignment similarity method, but they experimented only on European–English datasets, where maximum alignment works better than average alignment.
The previous method most similar to ours is
DREEM (Chen and Guo, 2015). It has been shown to achieve state-of-the-art accuracy compared with popular metrics such as BLEU and METEOR. It uses multiple types of representations: word representations trained with a neural network and sentence representations trained with a recursive auto-encoder. DREEM uses cosine
similarity between distributed representations of
the hypothesis and the reference as a translation evaluation score. Both their method and ours employ word embeddings to compute a sentence similarity score, but our method differs in the use of alignment and length penalty. As for alignment, we set
a threshold to remove noisy alignments, whereas
they use a hyper-parameter to down-weight overall sentence similarity. As for length penalty,
we compared average, maximum, and Hungarian
alignments to compensate for the difference between the lengths of translation hypothesis and
reference, whereas they use an exponential penalty
to normalize the length.
Another way to improve the robustness of MT
evaluation is to use a character-based model.
CHRF (Popović, 2015) is one such metric that
uses character n-grams. It is a harmonic mean
of character n-gram precision and recall. It works
well for morphologically rich languages. We, instead, adopt a word-based approach because our
target language, English, is morphologically simple but etymologically complex.
**3** **Word-Alignment-Based Sentence**
**Similarity using Word Embeddings**
In this section, we introduce word-alignment-based sentence similarity (Song and Roth, 2015) applied as an MT evaluation metric. Song and
Roth (2015) propose to use word embeddings to
align words in a pair of sentences. Their approach
shows promising results in STS tasks.
In MT evaluation, a word in the source language aligns to either a word or a phrase in the target language; therefore, it is not likely for a word
to align with the whole sentence. Thus, we use
several heuristics to constrain word alignment between the hypothesis and reference sentences.
In the following subsections, we present three
sentence similarity measures. All of them use cosine similarity to calculate word similarity. To
avoid alignments between unrelated words, we cut off any word alignment whose similarity is below a threshold value.
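As a concrete illustration, the thresholded word-similarity function _φ_ can be sketched as follows. This is a minimal sketch, not the authors' implementation: it assumes each word is already mapped to a pre-trained embedding vector, and treating below-threshold pairs as zero similarity is our reading of the cutoff.

```python
# Minimal sketch of the thresholded word similarity phi(u, v).
# Assumption: words are already mapped to embedding vectors (e.g., via a
# pre-trained word2vec model); similarities below the threshold are zeroed.
import numpy as np

def phi(u: np.ndarray, v: np.ndarray, threshold: float = 0.2) -> float:
    """Cosine similarity between two word vectors, cut off below `threshold`."""
    sim = float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    return sim if sim >= threshold else 0.0
```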
**3.1** **Average Alignment Similarity**
First, the average alignment similarity (AAS) heuristic aligns a word with multiple words in a sentence pair. The similarity is computed between every word in the hypothesis sentence and every word in the reference sentence, and AAS is the average of these word similarity scores over all _|x||y|_ word pairs:

$$\mathrm{AAS}(x, y) = \frac{1}{|x||y|} \sum_{i=1}^{|x|} \sum_{j=1}^{|y|} \phi(x_i, y_j) \quad (1)$$
Here, _x_ is a hypothesis and _y_ is a reference; and _xi_
and _yj_ represent words in each sentence.
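A minimal sketch of AAS under the assumptions above (sentences given as lists of embedding vectors; below-threshold similarities counted as zero); the function names are ours:

```python
# Sketch of AAS (Eq. 1): average thresholded cosine similarity over all
# |x||y| word pairs. `x` and `y` are lists of word embedding vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def aas(x, y, threshold=0.2):
    sims = (cosine(xi, yj) for xi in x for yj in y)
    return sum(s if s >= threshold else 0.0 for s in sims) / (len(x) * len(y))
```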
**3.2** **Maximum Alignment Similarity**
Second, we propose the maximum alignment similarity (MAS) heuristic, which keeps, for each word, only the alignment with the maximum similarity score and averages those scores. By definition, MAS itself is an asymmetric score, so we symmetrize it by averaging the scores in both directions.
$$\mathrm{MAS_{asym}}(a, b) = \frac{1}{|a|} \sum_{i=1}^{|a|} \max_{j} \phi(a_i, b_j) \quad (2)$$

$$\mathrm{MAS}(x, y) = \frac{1}{2} \left( \mathrm{MAS_{asym}}(x, y) + \mathrm{MAS_{asym}}(y, x) \right) \quad (3)$$

Here, _a_ and _b_ denote a hypothesis and a reference sentence (in either order), and _ai_ and _bj_ represent their words.
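A corresponding sketch of MAS (Eqs. 2 and 3), under the same conventions; applying the threshold to each word's best match is our reading of the cutoff, and the `cosine` helper is repeated so the snippet stands alone:

```python
# Sketch of MAS (Eqs. 2 and 3): each word keeps only its best-matching word
# on the other side; the asymmetric scores are averaged in both directions.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def mas_asym(a, b, threshold=0.2):
    best = [max(cosine(ai, bj) for bj in b) for ai in a]
    return sum(s if s >= threshold else 0.0 for s in best) / len(a)

def mas(x, y, threshold=0.2):
    return 0.5 * (mas_asym(x, y, threshold) + mas_asym(y, x, threshold))
```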
**3.3** **Hungarian Alignment Similarity**

Third, we introduce the Hungarian alignment similarity (HAS) to restrict word alignment to 1:1. HAS formulates word alignment as bipartite graph matching, where the words in a hypothesis and a reference are represented as nodes and the edge between _xi_ and _yj_ has weight _φ_ ( _xi, yj_ ). A one-to-one word alignment is obtained by computing a maximum-weight matching of the complete bipartite graph. For each word _xi_ in a hypothesis sentence, HAS chooses the word _h_ ( _xi_ ) in the reference sentence _y_ using the Hungarian method (Kuhn, 1955).

$$\mathrm{HAS}(x, y) = \frac{1}{\min(|x|, |y|)} \sum_{i=1}^{|x|} \phi(x_i, h(x_i)) \quad (4)$$
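A sketch of HAS using SciPy's Hungarian-method solver; maximizing total similarity by negating the similarity matrix and zeroing below-threshold entries are our implementation choices, not details from the paper:

```python
# Sketch of HAS (Eq. 4): 1:1 word alignment via the Hungarian method.
import numpy as np
from scipy.optimize import linear_sum_assignment

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def has(x, y, threshold=0.2):
    # Similarity matrix between hypothesis words (rows) and reference words.
    sim = np.array([[cosine(xi, yj) for yj in y] for xi in x])
    sim[sim < threshold] = 0.0                # cut off noisy alignments
    rows, cols = linear_sum_assignment(-sim)  # maximum-weight 1:1 matching
    return sim[rows, cols].sum() / min(len(x), len(y))
```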
**4** **Experiment**
Figure 1: Correlation of each word-alignment-based method with varying threshold values for the WMT datasets.

Figure 2: Correlation of each word-alignment-based method with varying threshold values for the WAT2015 and NTCIR8 datasets.
We report MT evaluation results on the European–English translation tasks of the WMT12, WMT13, and WMT15 datasets and on the Japanese–English tasks of the WAT2015 and NTCIR8 datasets. For the WMT datasets, we compare our metrics with BLEU and DREEM, whose scores are taken from the official results of the WMT15 metrics task (Stanojević et al., 2015). For the WAT2015 and NTCIR8 datasets, we compare the three proposed methods.
**4.1** **Experimental Setting**
We used the WMT12, WMT13, and WMT15
datasets containing a total of 137,007 sentences
in French, Finnish, German, Czech, and Russian
translated to English. As Japanese–English translation datasets, WAT2015 includes 600 sentences
and NTCIR8 includes 1,200 sentences. We measured the correlation between human adequacy scores and each of the evaluation metrics, using Kendall's _τ_ for segment-level evaluation. To calculate word similarity in our proposed methods, we used a word2vec model pre-trained on the Google News corpus (https://code.google.com/archive/p/word2vec/).
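For reference, a minimal sketch of this setup with gensim and SciPy; the model path and the per-segment score lists are placeholders for illustration, not values from our experiments:

```python
# Sketch of the evaluation setup: pre-trained Google News word2vec vectors
# for word similarity, and Kendall's tau against human adequacy scores.
from gensim.models import KeyedVectors
from scipy.stats import kendalltau

# Path to the pre-trained binary model is an assumption.
w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

def sentence_vectors(sentence):
    """Embedding vectors for the in-vocabulary tokens of a sentence."""
    return [w2v[w] for w in sentence.split() if w in w2v]

# Toy per-segment scores for illustration only.
metric_scores = [0.42, 0.31, 0.58]
human_scores = [3.0, 2.0, 4.0]
tau, _ = kendalltau(metric_scores, human_scores)
print(f"Kendall's tau: {tau:.3f}")
```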
**4.2** **Result**
Table 1 shows a breakdown of correlation scores
for each language pair in WMT15. MAS shows
the best accuracy among all the proposed metrics
for all language pairs. Its accuracy is better than
that of DREEM for all language pairs except for
Czech–English. This result shows that the removal of noisy word alignments, by using either a threshold or 1:n alignment, is important for the European–English datasets.
Figure 1 shows the correlation of the word-alignment-based methods on the WMT datasets with varying threshold values. For the WMT datasets, MAS has the highest correlation scores among the three word-alignment-based methods. A threshold value of 0.2 gives the maximum correlation for MAS on all WMT datasets.

Figure 2 shows the correlation of the word-alignment-based methods on the two Japanese–English datasets with varying threshold values.
| Evaluation Metrics | Fr-En | Fi-En | De-En | Cs-En | Ru-En | Average |
|---|---|---|---|---|---|---|
| Average Alignment Similarity | 0.324 | 0.247 | 0.304 | 0.288 | 0.273 | 0.287 |
| Maximum Alignment Similarity | **0.368** | **0.355** | **0.392** | 0.400 | **0.349** | **0.373** |
| Hungarian Alignment Similarity | 0.223 | 0.211 | 0.259 | 0.251 | 0.239 | 0.237 |
| BLEU (Stanojević et al., 2015) | 0.358 | 0.308 | 0.360 | 0.391 | 0.329 | 0.349 |
| DREEM (Chen and Guo, 2015) | 0.362 | 0.340 | 0.368 | **0.423** | 0.348 | 0.368 |

Table 1: Kendall's _τ_ correlations of automatic evaluation metrics and official human judgements for the WMT15 dataset. (Fr: French, Fi: Finnish, De: German, Cs: Czech, Ru: Russian, En: English)
| Evaluation Metrics | WMT12 | WMT13 | WMT15 | WAT2015 | NTCIR8 |
|---|---|---|---|---|---|
| Average Alignment Similarity | 0.211 | 0.312 | 0.287 | **0.332** | **0.343** |
| Maximum Alignment Similarity | **0.353** | **0.381** | **0.373** | 0.235 | 0.171 |
| Hungarian Alignment Similarity | 0.106 | 0.272 | 0.237 | 0.092 | 0.075 |

Table 2: Kendall's _τ_ correlations of word-alignment-based methods and the official human judgements for each dataset. (WMT12, WMT13, and WMT15: European–English datasets; WAT2015 and NTCIR8: Japanese–English datasets)
Although MAS
has the highest correlation for the WMT datasets,
AAS has the highest correlation for the WAT2015
and NTCIR8 datasets.
Table 2 summarizes segment-level correlation results for the WMT, WAT2015, and NTCIR8 datasets.
MAS has the highest correlation score for the
WMT datasets, whereas AAS has the highest correlation score for WAT2015 and NTCIR8 datasets.
**5** **Discussion**
Figure 1 shows that MAS and AAS are more stable than HAS for the European–English datasets. This may be because it is relatively easy for AAS and MAS to perform word alignment using word embeddings for translation pairs of similar languages, whereas HAS suffers from alignment sparsity more than the other methods. In European–English translation, all the word-alignment-based methods perform poorly when using no word embeddings.
Unlike the European–English translation task, the Japanese–English translation task exhibits a different tendency. Figure 2 compares the three word-alignment-based methods at each threshold value. This difference is partly because word embeddings help evaluate lexically similar word pairs but fail to model syntactic variations.
We also note that AAS achieved the highest correlation on the Japanese–English datasets. We suppose that this is because, in Japanese–English translation, it is difficult to cover all the source information in the target language, resulting in the misalignment of inadequate words by HAS and MAS.
Table 2 shows that MAS performs stably on the
WMT datasets. In particular, Kendall’s _τ_ score of
HAS in WMT12 exhibits very low correlation. It
seems that the 1:1 alignment is too strict for calculating sentence similarity in MT evaluation, while the 1:n (MAS) alignment performs well, possibly
because of the removal of noisy word alignment.
On the other hand, AAS is more stable than MAS
and HAS for the WAT2015 and NTCIR8 datasets. As a rule of thumb, AAS with high threshold values (0.6–0.9) shows consistently high correlation across all language pairs; however, if development data are available for tuning, MAS with a tuned threshold should be considered.
**6** **Conclusion**
In this paper, we presented word-alignment-based
MT evaluation metrics using distributed word representations. In our experiments, MAS showed
higher correlation with human evaluation than
other automatic MT metrics such as BLEU and
DREEM for European–English datasets. On the
other hand, for Japanese–English datasets, AAS
showed higher correlation with human evaluation
than the other metrics. These results indicate that appropriate word alignment using word embeddings is helpful for evaluating MT output.
**References**
Chris Callison-Burch, Miles Osborne, and Philipp Koehn. 2006. Re-evaluating the Role of BLEU in Machine Translation Research. In _Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics_, pages 249–256.

Boxing Chen and Hongyu Guo. 2015. Representation Based Translation Evaluation Metrics. In _Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)_, pages 150–155.

Michael Denkowski and Alon Lavie. 2014. Meteor Universal: Language Specific Translation Evaluation for Any Target Language. In _Proceedings of the Ninth Workshop on Statistical Machine Translation_, pages 376–380.

Harold W. Kuhn. 1955. The Hungarian Method for the Assignment Problem. _Naval Research Logistics Quarterly_, 2:83–97.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_, pages 311–318.

Maja Popović. 2015. ChrF: Character n-gram F-score for Automatic MT Evaluation. In _Proceedings of the Tenth Workshop on Statistical Machine Translation_, pages 392–395.

Yangqiu Song and Dan Roth. 2015. Unsupervised Sparse Vector Densification for Short Text Similarity. In _Proceedings of the 2015 Annual Conference of the North American Chapter of the ACL_, pages 1275–1280.

Miloš Stanojević, Amir Kamran, Philipp Koehn, and Ondřej Bojar. 2015. Results of the WMT15 Metrics Shared Task. In _Proceedings of the Tenth Workshop on Statistical Machine Translation_, pages 256–273.

Haozhou Wang and Paola Merlo. 2016. Modifications of Machine Translation Evaluation Metrics by Using Word Embeddings. In _Proceedings of the Sixth Workshop on Hybrid Approaches to Translation (HyTra6)_, pages 33–41.