Add model card information and metadata

#1
by nielsr HF Staff - opened

Files changed (1): README.md (+96 -3)

README.md CHANGED
---
license: cc-by-nc-sa-4.0
library_name: transformers
pipeline_tag: translation
language:
- en
- zh
---

# ExTrans: Multilingual Deep Reasoning Translation via Exemplar-Enhanced Reinforcement Learning

This model was presented in the paper [ExTrans: Multilingual Deep Reasoning Translation via Exemplar-Enhanced Reinforcement Learning](https://arxiv.org/abs/2505.12996).

## Abstract

In recent years, the emergence of large reasoning models (LRMs), such as OpenAI-o1 and DeepSeek-R1, has shown impressive capabilities on complex problems, e.g., mathematics and coding. Some pioneering studies attempt to bring this success to neural machine translation (MT): they build LRMs with deep reasoning MT ability via reinforcement learning (RL). Despite some progress, these attempts generally focus on a few high-resource languages, e.g., English and Chinese, leaving the performance on other languages unclear. Besides, the reward modeling methods in previous work do not fully unleash the potential of RL in MT. In this work, we first design a new reward modeling method that compares the translation results of the policy MT model with those of a strong LRM (i.e., DeepSeek-R1-671B), and quantifies the comparisons to provide rewards. Experimental results demonstrate the superiority of this reward modeling method. Using Qwen2.5-7B-Instruct as the backbone, the trained model achieves new state-of-the-art performance in literary translation and outperforms strong LRMs, including OpenAI-o1 and DeepSeek-R1. Furthermore, we extend our method to multilingual settings with 11 languages. With a carefully designed lightweight reward modeling in RL, we can transfer the strong MT ability from a single direction into multiple (i.e., 90) translation directions and achieve impressive multilingual MT performance.

# Deep Reasoning Translation (DRT) Project

This repository contains the resources for our work:
- [**ExTrans**: Multilingual Deep Reasoning Translation via Exemplar-Enhanced Reinforcement Learning](https://arxiv.org/abs/2505.12996) (arXiv preprint 2025)
- [**DeepTrans**: Deep Reasoning Translation via Reinforcement Learning](https://arxiv.org/abs/2504.10187) (arXiv preprint 2025)
- [**DRT**: Deep Reasoning Translation via Long Chain-of-Thought](https://arxiv.org/abs/2412.17498) (ACL 2025 Findings)

### Updates:
- *2025.05.27*: We released the full data of our DRT work, including the synthesized thought contents and translations. Please refer to `data/MetaphorTrans_*.jsonl`.
- *2025.05.19*: We released the [ExTrans paper](https://arxiv.org/abs/2505.12996) with 🤗 <a href="https://huggingface.co/Krystalan/ExTrans-7B">ExTrans-7B</a> and 🤗 <a href="https://huggingface.co/Krystalan/mExTrans-7B">mExTrans-7B</a>. Check it out!
- *2025.05.16*: Our [DRT paper](https://arxiv.org/abs/2412.17498) was accepted to **ACL 2025 Findings**.
- *2025.04.14*: We released the [DeepTrans paper](https://arxiv.org/abs/2504.10187) with 🤗 <a href="https://huggingface.co/Krystalan/DeepTrans-7B">DeepTrans-7B</a>. Check it out!
- *2024.12.31*: We updated the [DRT paper](https://arxiv.org/abs/2412.17498) with more details and analyses. Check it out!
- *2024.12.24*: We released the [DRT paper](https://arxiv.org/abs/2412.17498) with 🤗 <a href="https://huggingface.co/Krystalan/DRT-7B">DRT-7B</a>, 🤗 <a href="https://huggingface.co/Krystalan/DRT-8B">DRT-8B</a> and 🤗 <a href="https://huggingface.co/Krystalan/DRT-14B">DRT-14B</a>. Check it out!

If you find this work useful, please consider citing our paper:
```
@article{wang2025extrans,
  title={ExTrans: Multilingual Deep Reasoning Translation via Exemplar-Enhanced Reinforcement Learning},
  author={Wang, Jiaan and Meng, Fandong and Zhou, Jie},
  journal={arXiv preprint arXiv:2505.12996},
  year={2025}
}
```

## ExTrans

![](./images/extrans-reward-framework.png)

In this work, we propose ExTrans-7B, which aims at enhancing the free translation ability of deep reasoning LLMs via **exemplar-enhanced** RL. In detail, for each training MT sample, we use DeepSeek-R1 (671B) to generate an exemplar translation, and compare the translation results of the policy model with the exemplar translation to provide rewards for the policy model.

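The exemplar-based reward can be illustrated with a toy sketch. This is not the paper's implementation: the real method quantifies a comparative judgment of the policy output against the DeepSeek-R1 exemplar, whereas here a simple token-overlap F1 stands in for that comparison step.

```python
def compare_score(candidate: str, exemplar: str) -> float:
    """Toy stand-in for the comparison step: token-overlap F1 between the
    policy model's candidate translation and the exemplar translation."""
    cand_tokens = candidate.split()
    ref_tokens = exemplar.split()
    if not cand_tokens or not ref_tokens:
        return 0.0
    overlap = len(set(cand_tokens) & set(ref_tokens))
    precision = overlap / len(cand_tokens)
    recall = overlap / len(ref_tokens)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def exemplar_reward(policy_output: str, exemplar: str) -> float:
    """Reward for the policy model: how well its translation compares
    against the strong-LRM exemplar (higher is better)."""
    return compare_score(policy_output, exemplar)


reward = exemplar_reward("the cat sat on the mat", "the cat sat on a mat")
```

In the actual RL loop, this scalar would be fed back to the policy optimizer for each sampled translation; any comparison function of the same shape (candidate, exemplar) → reward could be plugged in.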
The model checkpoint can be accessed via the following link:

| | Backbone | Model Access |
| :--: | :--: | :--: |
| ExTrans-7B | 🤗 <a href="https://huggingface.co/Qwen/Qwen2.5-7B-Instruct">Qwen2.5-7B-Instruct</a> | 🤗 <a href="https://huggingface.co/Krystalan/ExTrans-7B">ExTrans-7B</a> |

Deploying the model with vLLM:
```bash
python3 -m vllm.entrypoints.openai.api_server --model [model_ckpt] --served-model-name [model_name]
```

Querying the deployed model:
```python
from openai import OpenAI

# Point the OpenAI client at vLLM's OpenAI-compatible API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

# Chinese instruction prompt. In English it says roughly: "You are a translation
# expert, skilled at translating English into Chinese. You think carefully before
# translating. Your output format is: <think>[thinking process]</think>[translation].
# After </think>, give only the final translation, without any explanation or
# description. Now translate the following English sentence:"
prompt = """你是一个翻译专家,擅长将英文翻译成中文。你在翻译过程中非常擅长思考,会先进行思考再给出翻译结果。你的输出格式为:
<think>
[思考过程]
</think>[翻译结果]

在你思考完之后,也就是</think>之后,你会给出最终的翻译即“[翻译结果]”,且[翻译结果]中不需要给出任何解释和描述,只需要提供英文的翻译结果。
现在请你翻译以下这句英语:
""" + "The mother, with her feet propped up on a stool, seemed to be trying to get to the bottom of that answer, whose feminine profundity had struck her all of a heap."

chat_response = client.chat.completions.create(
    model=[model_name],
    messages=[
        {"role": "user", "content": prompt},
    ],
    temperature=0.1,
    top_p=0.8,
    max_tokens=2048,
    extra_body={
        "repetition_penalty": 1.05,
    },
)
print("Chat response:", chat_response)
```
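The response text follows the `<think>...</think>[translation]` format requested in the prompt above. A small helper (hypothetical, not part of this repository) can separate the reasoning trace from the final translation:

```python
import re


def split_think_output(text: str) -> tuple[str, str]:
    """Split a model response into (reasoning, translation), assuming the
    '<think>...</think>[translation]' output format used in the prompt."""
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        # No reasoning block found; treat the whole text as the translation.
        return "", text.strip()
    reasoning = match.group(1).strip()
    translation = text[match.end():].strip()
    return reasoning, translation


reasoning, translation = split_think_output(
    "<think>\nFirst consider the metaphor and register.\n</think>最终译文在这里。"
)
```

In practice you would pass `chat_response.choices[0].message.content` from the example above into this helper.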