---
license: mit
task_categories:
- translation
language:
- zh
- en
size_categories:
- 1K<n<10K
---
|
|
# Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation
|
|
|
|
|
> [!NOTE]
> 📄 Paper: [arXiv](https://arxiv.org/abs/2601.07338) | 💻 Code: [GitHub](https://github.com/BITHLP/RATE)
|
|
|
|
|
|
|
|
We introduce **MENT** (<u>M</u>eta-<u>E</u>valuation dataset of <u>N</u>on-Literal <u>T</u>ranslation), a human-annotated meta-evaluation dataset for systematically assessing MT evaluation metrics on non-literal translation.
|
|
|
|
|
<p align="center"> <img src="assets/data.png" width="800"> </p>