Commit 2eb5497
1 Parent(s): f23e579
Update README.md
README.md CHANGED
@@ -76,29 +76,14 @@ python3 -m fastchat.serve.cli --model-path LLM360/AmberSafe
 ## DataMix
 | Subset | Number of rows | License |
 | ----------- | ----------- | ----------- |
-| PKU-Alignment/PKU-SafeRLHF |
-| Total |
-
-##
-
-
-
-
-| Intermediate Size (MLPs) | 11008 |
-| Number of Attention Heads | 32 |
-| Number of Hidden Lyaers | 32 |
-| RMSNorm ɛ | 1e^-6 |
-| Max Seq Length | 2048 |
-| Vocab Size | 32000 |
-
-| Training Hyperparameter | Value |
-| ----------- | ----------- |
-| learning_rate | 2e-5 |
-| num_train_epochs | 3 |
-| per_device_train_batch_size | 2 |
-| gradient_accumulation_steps | 16 |
-| warmup_ratio | 0.04 |
-| model_max_length | 2048 |
+| [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) | 330k | cc-by-nc-4.0 |
+| Total | 330k | |
+
+## Method
+We followed the instructions in the [dpo repo](https://github.com/eric-mitchell/direct-preference-optimization) to finetune this model.
+
+1. Run supervised fine-tuning (SFT) on the dataset(s) of interest.
+2. Run preference learning on the model from step 1, using preference data (ideally from the same distribution as the SFT examples).


 # Evaluation

@@ -107,7 +92,7 @@ python3 -m fastchat.serve.cli --model-path LLM360/AmberSafe
 |------------------------------------------------------|------------------------------------------------------------|
 | LLM360/Amber 359 | 2.48750 |
 | LLM360/AmberChat | 5.428125 |
-| **LLM360/AmberSafe** | **
+| **LLM360/AmberSafe** | **4.971264** |

 # Citation
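The Method section added by this commit describes the usual two-step DPO recipe: SFT first, then preference learning against the frozen SFT model as a reference. As a rough illustration of what step 2 optimizes, here is a minimal sketch of the DPO objective in PyTorch. The function name, the beta value, and the dummy log-probabilities are assumptions made for illustration; this is not the dpo repo's actual code nor LLM360's training script.

```python
# Minimal sketch of the DPO objective (assumed PyTorch; illustrative only).
# Inputs are summed log-probabilities of the chosen/rejected responses under
# the policy being trained and under the frozen SFT reference model (step 1).
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Log-ratios of policy vs. reference for the preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps
    # DPO loss: -log sigmoid(beta * (chosen_logratio - rejected_logratio)), batch-averaged.
    return -F.logsigmoid(beta * (chosen_logratios - rejected_logratios)).mean()

# Toy usage with dummy log-probabilities for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -10.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.0, -10.1]))
```

The dpo repo referenced in the commit handles tokenization, per-token log-probability extraction, and the reference model itself; the snippet above only shows the loss that the preference-learning step minimizes.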