Transformers
Apel-sin committed on
Commit 0ace837 · 1 Parent(s): cf3c9ef

add measurement.json

Files changed (2)
  1. README.md +65 -0
  2. measurement.json +0 -0
README.md ADDED
@@ -0,0 +1,65 @@
+ ---
+ license: gemma
+ library_name: transformers
+ datasets:
+ - jondurbin/gutenberg-dpo-v0.1
+ ---
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # ifable/gemma-2-Ifable-9B
+ This model ranked first on the Creative Writing Benchmark (https://eqbench.com/creative_writing.html) on September 10, 2024.
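A usage sketch, not part of the original card: since the front matter declares `library_name: transformers`, the model should load through the standard `transformers` causal-LM API. The prompt and function name below are illustrative; running this requires accepting the Gemma license on the Hub and hardware that fits a 9B-parameter model.

```python
# Usage sketch (not from the original card): load the model via the standard
# transformers causal-LM API. Assumes the Gemma license has been accepted on
# the Hugging Face Hub and that a 9B-parameter model fits in memory (bfloat16).
MODEL_ID = "ifable/gemma-2-Ifable-9B"

def generate_story_opening(
    prompt: str = "Write the opening paragraph of a gothic short story.",
) -> str:
    # Heavy imports kept inside the function so the module loads without GPU deps.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(out[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate_story_opening())
```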
+
+ ## Training and evaluation data
+
+ - Gutenberg: https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1
+ - A carefully curated proprietary creative writing dataset
+
+ ## Training procedure
+
+ Training method: SimPO ([princeton-nlp/SimPO](https://github.com/princeton-nlp/SimPO): Simple Preference Optimization with a Reference-Free Reward)
+
+ It achieves the following results on the evaluation set:
+ - Loss: 1.0163
+ - Rewards/chosen: -21.6822
+ - Rewards/rejected: -47.8754
+ - Rewards/accuracies: 0.9167
+ - Rewards/margins: 26.1931
+ - Logps/rejected: -4.7875
+ - Logps/chosen: -2.1682
+ - Logits/rejected: -17.0475
+ - Logits/chosen: -12.0041
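SimPO's reward is reference-free: β times the length-normalized (average per-token) log-probability of a response, so no frozen reference model is needed. Notably, the Rewards/* values above are 10× the corresponding Logps/*, consistent with β = 10. A minimal sketch of the per-pair loss; β here is inferred from that ratio and the target margin γ is an illustrative assumption, not a value stated in the card:

```python
# Sketch of the SimPO per-pair objective: -log sigmoid(beta*logp_w - beta*logp_l - gamma),
# where logp_* are length-normalized log-probabilities. beta=10 matches the
# rewards/logps ratio above; gamma is an illustrative assumption.
import math

def simpo_loss(avg_logp_chosen: float, avg_logp_rejected: float,
               beta: float = 10.0, gamma: float = 1.0) -> float:
    # Reference-free rewards: beta * average per-token log-probability.
    r_chosen = beta * avg_logp_chosen
    r_rejected = beta * avg_logp_rejected
    margin = r_chosen - r_rejected - gamma
    # -log sigmoid(margin) = log(1 + exp(-margin)), computed stably.
    if margin > 0:
        return math.log1p(math.exp(-margin))
    return -margin + math.log1p(math.exp(margin))

# Rewards reconstructed from the reported (rounded) logps:
print(round(10.0 * -2.1682, 4))  # -21.682, ~ Rewards/chosen
print(round(10.0 * -4.7875, 4))  # -47.875, ~ Rewards/rejected up to rounding
```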
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 8e-07
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1.0
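The effective batch sizes above follow from the per-device batch size, device count, and gradient accumulation; a quick arithmetic check:

```python
# Effective batch-size arithmetic from the hyperparameters above.
train_batch_size = 1             # per-device micro-batch
num_devices = 8                  # multi-GPU
gradient_accumulation_steps = 16

total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
print(total_train_batch_size)    # 128, matching total_train_batch_size

eval_batch_size = 1
total_eval_batch_size = eval_batch_size * num_devices  # no accumulation at eval
print(total_eval_batch_size)     # 8, matching total_eval_batch_size
```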
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen | Sft Loss |
+ |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|:--------:|
+ | 1.4444 | 0.9807 | 35 | 1.0163 | -21.6822 | -47.8754 | 0.9167 | 26.1931 | -4.7875 | -2.1682 | -17.0475 | -12.0041 | 0.0184 |
+
+ ### Framework versions
+
+ - Transformers 4.43.4
+ - Pytorch 2.3.0a0+ebedce2
+ - Datasets 2.20.0
+ - Tokenizers 0.19.1
+
+ We are looking for product managers and operations managers to build applications on top of our model, and for AI engineers to join us; we are also open to business cooperation. Contact: contact@ifable.ai
measurement.json ADDED
The diff for this file is too large to render.