This dataset is gated: to access its files and content, you must agree to share your contact information and accept the access conditions on Hugging Face.

Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation

📄 Paper (arXiv) | 💻 Code (GitHub)

We introduce MENT (Meta-Evaluation dataset of Non-Literal Translation), a human-annotated meta-evaluation dataset for systematically assessing machine translation (MT) evaluation metrics on non-literal translations.
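Because the repository is gated, you need to accept the conditions on the Hub and authenticate (e.g. via `huggingface-cli login`) before downloading. Below is a minimal loading sketch using the 🤗 `datasets` library; the repo id `yztian/MENT` is taken from this page, but the split names and column schema are assumptions, so check the dataset viewer for the actual structure:

```python
# Minimal sketch: load the gated MENT dataset with the `datasets` library.
# Prerequisite: accept the access conditions on the Hub and authenticate,
# e.g. by running `huggingface-cli login` beforehand.
from datasets import load_dataset

# "yztian/MENT" is the repo id shown on this page. Split names and the
# column schema are not documented here -- inspect them after loading.
ds = load_dataset("yztian/MENT")

print(ds)                     # list available splits and row counts
first_split = next(iter(ds))  # name of the first split
print(ds[first_split][0])     # inspect one annotated example
```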
