---
license: cc-by-4.0
task_categories:
- translation
language:
- zh
- en
size_categories:
- n<1K
---
# DiscoX Translation Benchmark
DiscoX is a benchmark for the evaluation of LLMs on discourse- and expert-level translation tasks.
## Dataset At A Glance
- Languages: English ⇄ Chinese (100 English→Chinese tasks, 100 Chinese→English tasks)
- Total samples: 200 discourse- and expert-level translation items
- Average passage length: ~1.7k characters (min 0.73k, max 3.04k)
- Meta fields: primary & secondary domain labels, structured rubrics, prompt IDs, etc.
- Reference Rubrics: every task ships with multiple rubrics annotated by experts, capturing key points for evaluating translation quality
Primary domain coverage:

| Primary Domain | Samples | Share |
|---|---|---|
| 学术论文 (Academic papers) | 121 | 60.5% |
| 非学术论文 (Non-academic tasks) | 79 | 39.5% |
Secondary domain highlights include Social Sciences (社会科学), Natural Sciences (自然科学), Humanities (人文科学), Applied Disciplines (应用学科), News & Information (新闻资讯), Domain-Specific Scenarios (垂类场景), and Literature & Arts (文学艺术).
## File Structure
`discox.json`: the core dataset. Each record contains:
- `ori_text`: the source text to be translated
- `prompt`: text adding translation instructions
- `reference_list`: rubrics designed for evaluating translation results
- `Primary_Domain`, `Secondary_Domain`: high-level topic labels
- `prompt_id`, `__internal_uuid__`: identifiers for specific tasks
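A minimal sketch of loading the file and pulling out the fields above. The field names match the schema listed here; the sample record values and the helper names (`load_tasks`, `task_summary`) are illustrative, not part of the dataset.

```python
import json

# Illustrative record mirroring the discox.json schema.
# All values here are placeholders, not real dataset content.
sample_record = {
    "ori_text": "待翻译的源文本",
    "prompt": "请将以下文本翻译成英文",
    "reference_list": ["术语检查点", "语气检查点"],
    "Primary_Domain": "学术论文",
    "Secondary_Domain": "自然科学",
    "prompt_id": "zh2en-001",
    "__internal_uuid__": "0000-0000",
}

def load_tasks(path):
    """Load all task records from a discox.json-style file."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def task_summary(record):
    """Extract the fields most evaluation loops need from one record."""
    return {
        "id": record["prompt_id"],
        "domain": (record["Primary_Domain"], record["Secondary_Domain"]),
        "source": record["ori_text"],
        "n_rubrics": len(record["reference_list"]),
    }

print(task_summary(sample_record))
```

In practice you would replace `sample_record` with records from `load_tasks("discox.json")`.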
## Notes & Recommendations
- The `reference_list` entries are designed to enable targeted verification of translation fidelity: converting them into structured checks (e.g., terminology, tone, and named entities) allows fine-grained, pointwise assessment of key translation aspects.
- The translation instructions in `prompt` specify the desired output language in Chinese.
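The rubric-to-check idea above can be sketched as follows. This is a toy example: real rubrics are free-text expert annotations, and the helper names (`check_terminology`, `rubric_score`) and the substring matching are simplifying assumptions, not the benchmark's official scoring.

```python
def check_terminology(translation, required_terms):
    """Pointwise check: does the translation contain each required term?"""
    return {term: term in translation for term in required_terms}

def rubric_score(translation, required_terms):
    """Fraction of pointwise checks satisfied (0.0 if no checks)."""
    hits = check_terminology(translation, required_terms)
    return sum(hits.values()) / len(hits) if hits else 0.0

# Toy translation and a terminology-style rubric converted to checks.
translation = "The transformer architecture relies on self-attention."
terms = ["transformer", "self-attention", "encoder"]
print(rubric_score(translation, terms))  # 2 of 3 terms present
```

Analogous checks can be written for tone or named entities; aggregating them per record yields the fine-grained, pointwise assessment described above.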
## License
Our data is released under the CC-BY-4.0 license.