---
license: mit
---

**[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon [ALMA models](https://arxiv.org/abs/2309.11674) with further LoRA fine-tuning via our proposed **Contrastive Preference Optimization (CPO)**, as opposed to the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT competition winners!
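
For intuition, the CPO objective combines a DPO-style preference term, computed without a reference model, with a negative log-likelihood term on the preferred translation. The sketch below is a minimal PyTorch rendering under those assumptions; the tensor names and the `beta` value are illustrative, not our training code.
```python
# A minimal sketch of the CPO objective, NOT the actual training code.
# Assumes logp_w / logp_l are per-sequence log-probabilities of the
# preferred / dis-preferred translations under the current policy.
import torch
import torch.nn.functional as F

def cpo_loss(logp_w: torch.Tensor, logp_l: torch.Tensor, beta: float = 0.1) -> torch.Tensor:
    # DPO-style preference term, with the reference model dropped
    prefer = -F.logsigmoid(beta * (logp_w - logp_l))
    # Behavior-cloning (NLL) term on the preferred translation
    nll = -logp_w
    return (prefer + nll).mean()
```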

# Download ALMA(-R) Models and Dataset 🚀

We release six translation models presented in the paper:
- ALMA-7B
- ALMA-7B-LoRA
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- ALMA-13B
- ALMA-13B-LoRA
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization (BEST MODEL!).

*We have also provided the WMT'22 and WMT'23 translation outputs from ALMA-13B-LoRA and ALMA-13B-R in the `outputs` directory. These outputs also include our baseline outputs and can be directly accessed and used for subsequent evaluations.*

Model checkpoints are released on Hugging Face:
| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-R](https://huggingface.co/haoranxu/ALMA-7B-R) |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-R](https://huggingface.co/haoranxu/ALMA-13B-R) |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They have only been through stage 1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model) and must be used in conjunction with their LoRA models.**
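
If you prefer a single standalone checkpoint over loading a base-plus-LoRA pair, PEFT can fold the adapter weights into the base model. A minimal sketch, assuming a recent `peft` version; the output path is hypothetical.
```python
# A minimal sketch: fold the LoRA adapter into the base model with PEFT's
# merge_and_unload(), then save a standalone checkpoint.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "haoranxu/ALMA-13B-R")
merged = model.merge_and_unload()  # returns a plain transformers model
merged.save_pretrained("./alma-13b-r-merged")  # hypothetical output path
```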

Datasets used by ALMA and ALMA-R are now also released on Hugging Face (NEW!):
| Datasets | Train / Validation | Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |
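
Both datasets can be pulled with the standard `datasets` library. A minimal sketch; the "zh-en" config name is an assumption (the data appears to be organized per language pair), so check the dataset cards linked above for the exact configs and fields.
```python
# A minimal sketch, assuming the standard Hugging Face `datasets` API.
# The "zh-en" config name is an assumption; see the dataset cards for
# the exact per-language-pair config names and record fields.
from datasets import load_dataset

preference_data = load_dataset("haoranxu/ALMA-R-Preference", "zh-en")
wmt22_test = load_dataset("haoranxu/WMT22-Test", "zh-en")

print(preference_data)
print(wmt22_test)
```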

A quick start for using our best system (ALMA-13B-R) for translation, with an example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, LlamaTokenizer

# Load the base model and the ALMA-13B-R LoRA weights
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, "haoranxu/ALMA-13B-R")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side='left')

# Add the source sentence into the prompt template
prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```
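
The standalone checkpoints (e.g. ALMA-13B) should not need the PEFT step. A minimal sketch under that assumption, reusing the same prompt template:
```python
# A minimal sketch for the standalone ALMA-13B checkpoint; assumes it can
# be used directly, without loading any LoRA weights.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B", torch_dtype=torch.float16, device_map="auto")
tokenizer = LlamaTokenizer.from_pretrained("haoranxu/ALMA-13B", padding_side='left')

prompt = "Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True).input_ids.to(model.device)

with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```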