---
license: mit
task_categories:
- translation
language:
- zh
- en
size_categories:
- 1K<n<10K
---
# Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation
We introduce MENT (Meta-Evaluation dataset of Non-Literal Translation), a human-annotated meta-evaluation dataset for systematically assessing how well MT evaluation metrics handle non-literal translations.
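
A minimal sketch of how the dataset could be loaded with the Hugging Face `datasets` library; the repo id `user/MENT` below is a placeholder, not the actual Hub path.

```python
# Minimal loading sketch; "user/MENT" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("user/MENT")

# Inspect the available splits and features.
print(ds)
```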
