---
license: mit
---
**[ALMA-R](https://arxiv.org/abs/2401.08417)** builds upon the [ALMA models](https://arxiv.org/abs/2309.11674), adding further LoRA fine-tuning with our proposed **Contrastive Preference Optimization (CPO)** in place of the Supervised Fine-tuning used in ALMA. CPO fine-tuning requires our [triplet preference data](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) for preference learning. ALMA-R can now match or even exceed GPT-4 and the WMT competition winners!
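For intuition, here is a minimal sketch (not the authors' implementation) of the CPO objective on a batch of preference pairs: a DPO-style preference term computed without a frozen reference model, plus a negative log-likelihood term on the preferred translation. The function name, tensor shapes, and `beta` value are illustrative assumptions.
```
import torch
import torch.nn.functional as F

def cpo_loss(logp_chosen, logp_rejected, beta=0.1):
    """Sketch of the CPO objective for a batch of preference pairs.

    logp_chosen / logp_rejected: log-probabilities of the preferred and
    dispreferred translations under the current policy, shape [batch].
    """
    # Preference term: DPO-like, but without a frozen reference model
    prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected)).mean()
    # Negative log-likelihood term on the preferred translation
    nll = -logp_chosen.mean()
    return prefer + nll

# Toy usage with made-up log-probabilities
print(cpo_loss(torch.tensor([-10.0, -12.0]), torch.tensor([-14.0, -13.0])))
```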
```
@misc{xu2024contrastive,
      title={Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation},
      author={Haoran Xu and Amr Sharaf and Yunmo Chen and Weiting Tan and Lingfeng Shen and Benjamin Van Durme and Kenton Murray and Young Jin Kim},
      year={2024},
      eprint={2401.08417},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
```
@misc{xu2023paradigm,
      title={A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models},
      author={Haoran Xu and Young Jin Kim and Amr Sharaf and Hany Hassan Awadalla},
      year={2023},
      eprint={2309.11674},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
# Download ALMA(-R) Models and Dataset 🚀

We release six translation models presented in the paper:
- ALMA-7B
- ALMA-7B-LoRA
- **ALMA-7B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-7B-LoRA with contrastive preference optimization.
- ALMA-13B
- ALMA-13B-LoRA
- **ALMA-13B-R (NEW!)**: Further LoRA fine-tuning upon ALMA-13B-LoRA with contrastive preference optimization (BEST MODEL!).

Model checkpoints are released on Hugging Face:

| Models | Base Model Link | LoRA Link |
|:-------------:|:---------------:|:---------:|
| ALMA-7B | [haoranxu/ALMA-7B](https://huggingface.co/haoranxu/ALMA-7B) | - |
| ALMA-7B-LoRA | [haoranxu/ALMA-7B-Pretrain](https://huggingface.co/haoranxu/ALMA-7B-Pretrain) | [haoranxu/ALMA-7B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-7B-Pretrain-LoRA) |
| **ALMA-7B-R (NEW!)** | [haoranxu/ALMA-7B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-7B-R) | - |
| ALMA-13B | [haoranxu/ALMA-13B](https://huggingface.co/haoranxu/ALMA-13B) | - |
| ALMA-13B-LoRA | [haoranxu/ALMA-13B-Pretrain](https://huggingface.co/haoranxu/ALMA-13B-Pretrain) | [haoranxu/ALMA-13B-Pretrain-LoRA](https://huggingface.co/haoranxu/ALMA-13B-Pretrain-LoRA) |
| **ALMA-13B-R (NEW!)** | [haoranxu/ALMA-13B-R (LoRA merged)](https://huggingface.co/haoranxu/ALMA-13B-R) | - |

**Note that `ALMA-7B-Pretrain` and `ALMA-13B-Pretrain` are NOT translation models. They have only undergone stage-1 monolingual fine-tuning (20B tokens for the 7B model and 12B tokens for the 13B model) and must be used in conjunction with their LoRA models, as sketched below.**
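A minimal sketch of pairing a Pretrain base with its LoRA adapter, assuming the `peft` library is installed; the official repository may load the adapters slightly differently:
```
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stage-1 base model: monolingual fine-tuning only, not a translation model on its own
base = AutoModelForCausalLM.from_pretrained(
    "haoranxu/ALMA-13B-Pretrain", torch_dtype=torch.float16, device_map="auto"
)
# Attach the stage-2 LoRA adapter to obtain the ALMA-13B-LoRA translation model
model = PeftModel.from_pretrained(base, "haoranxu/ALMA-13B-Pretrain-LoRA")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-Pretrain", padding_side="left")
```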
Datasets used by ALMA and ALMA-R are also released on Hugging Face now (NEW!):

| Datasets | Train / Validation | Test |
|:-------------:|:---------------:|:---------:|
| Human-Written Parallel Data (ALMA) | [train and validation](https://huggingface.co/datasets/haoranxu/ALMA-Human-Parallel) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) |
| Triplet Preference Data | [train](https://huggingface.co/datasets/haoranxu/ALMA-R-Preference) | [WMT'22](https://huggingface.co/datasets/haoranxu/WMT22-Test) and [WMT'23](https://huggingface.co/datasets/haoranxu/WMT23-Test) |
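A hedged sketch of loading these datasets with the `datasets` library; the configuration names below (e.g. `zh-en`) are assumptions, so check each dataset card for the exact ones:
```
from datasets import load_dataset

# Triplet preference data used for CPO training (default configuration assumed)
preference_data = load_dataset("haoranxu/ALMA-R-Preference")

# WMT'22 test set; the "zh-en" configuration name is an assumption
wmt22_test = load_dataset("haoranxu/WMT22-Test", "zh-en")

print(preference_data)
print(wmt22_test)
```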

A quick start for using our best system (ALMA-13B-R) for translation, with an example of translating "我爱机器翻译。" ("I love machine translation.") into English:
```
import torch
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

# Load the LoRA-merged ALMA-13B-R model and its tokenizer (left padding for decoder-only generation)
model = AutoModelForCausalLM.from_pretrained("haoranxu/ALMA-13B-R", torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("haoranxu/ALMA-13B-R", padding_side='left')

# Add the source sentence into the prompt template
prompt="Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"
input_ids = tokenizer(prompt, return_tensors="pt", padding=True, max_length=40, truncation=True).input_ids.cuda()

# Translation
with torch.no_grad():
    generated_ids = model.generate(input_ids=input_ids, num_beams=5, max_new_tokens=20, do_sample=True, temperature=0.6, top_p=0.9)
# The decoded output contains the prompt followed by the generated English translation
outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(outputs)
```

Please find more details in our [GitHub repository](https://github.com/fe1ixxu/ALMA).