Delta-Vector committed
Commit a6052dc · verified · 1 parent: 7345203

Update README.md

Files changed (1): README.md +193 -20
README.md CHANGED
@@ -1,35 +1,208 @@
  ---
- base_model:
- - NewEden/MistralAI-Nemo-Instruct-ChatML
- - NewEden/daring-mango-r1
  library_name: transformers
  tags:
- - mergekit
- - merge
-
  ---
- # mag-se
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the passthrough merge method, with [NewEden/MistralAI-Nemo-Instruct-ChatML](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML) + [NewEden/daring-mango-r1](https://huggingface.co/NewEden/daring-mango-r1) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
-
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
  ```yaml
- base_model: NewEden/MistralAI-Nemo-Instruct-ChatML+NewEden/daring-mango-r1
- dtype: bfloat16
- merge_method: passthrough
- models:
- - model: NewEden/MistralAI-Nemo-Instruct-ChatML+NewEden/daring-mango-r1
  ```
  ---
+ language:
+ - en
  library_name: transformers
  tags:
+ - chat
+ pipeline_tag: text-generation
+ datasets:
+ - AquaV/c2-sharegpt-advanced-prefills-filtered
+ - AquaV/c1-sharegpt-advanced-prefills-filtered
+ - AquaV/rainy-sharegpt-advanced-prefills-filtered
+ - anthracite-core/Gryphe-Opus-Charcard-Roleplay
+ - anthracite-org/kalo-opus-instruct-22k-no-refusal
+ - lodrick-the-lafted/kalo-opus-instruct-3k-filtered
+ - anthracite-org/nopm_claude_writing_fixed
+ - anthracite-org/kalo_opus_misc_240827
+ - anthracite-org/kalo_misc_part2
+ - NewEden/Claude-Instruct-2.7K
+ - NewEden/Claude-Instruct-5K
  ---
 
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/66c26b6fb01b19d8c3c2467b/nqMkoIsmScaTFHCFirGsc.png" width="500px" />
+
+ This model is designed to replicate the prose quality of the Claude 3 series of models, specifically Sonnet and Opus. It was made with a prototype magnum V5 datamix.
+
+ This model is fine-tuned on top of [Mistral-Nemo-Instruct (ChatML'ified)](https://huggingface.co/NewEden/MistralAI-Nemo-Instruct-ChatML).
+ ## Quants

+ EXL2:
+
+ GGUF:
+
+ ## Prompting
+ A typical input would look like this:
+
+ ```py
+ """<|im_start|>user
+ Hi there!<|im_end|>
+ <|im_start|>assistant
+ Nice to meet you!<|im_end|>
+ <|im_start|>user
+ Can I ask a question?<|im_end|>
+ <|im_start|>assistant
+ """
+ ```
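The turn format above can be reproduced programmatically. A minimal sketch in plain Python follows (the helper `build_chatml_prompt` is hypothetical, not a library function); in practice, `tokenizer.apply_chat_template` from `transformers` should produce the same ChatML layout for this model:

```python
# Minimal sketch: format role/content messages into the ChatML layout shown above.
# The trailing "<|im_start|>assistant\n" cues the model to begin its reply.
def build_chatml_prompt(messages):
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"
    return prompt

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]
print(build_chatml_prompt(messages))
```

Note the trailing `<|im_start|>assistant` line with no `<|im_end|>`: generation continues from there.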
 
+ I would highly recommend using Euryale's system prompt with the model.
+
+ <details><summary>See Sao10k's Euryale System Prompt</summary>
+
+ ```
+ Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.
+ <Guidelines>
+ • Maintain the character persona but allow it to evolve with the story.
+ • Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.
+ • All types of outputs are encouraged; respond accordingly to the narrative.
+ • Include dialogues, actions, and thoughts in each response.
+ • Utilize all five senses to describe scenarios within {{char}}'s dialogue.
+ • Use emotional symbols such as "!" and "~" in appropriate contexts.
+ • Incorporate onomatopoeia when suitable.
+ • Allow time for {{user}} to respond with their own input, respecting their agency.
+ • Act as secondary characters and NPCs as needed, and remove them when appropriate.
+ • When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.
+ </Guidelines>
+
+ <Forbidden>
+ • Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.
+ • Writing for, speaking, thinking, acting, or replying as {{user}} in your response.
+ • Repetitive and monotonous outputs.
+ • Positivity bias in your replies.
+ • Being overly extreme or NSFW when the narrative context is inappropriate.
+ </Forbidden>
+ ```
+
+ </details><br>
+
+ ## Axolotl config
+
+ <details><summary>See axolotl config</summary>

  ```yaml
+ ## model
+ base_model: NewEden_nemo-chatml
+ model_type: AutoModelForCausalLM
+ tokenizer_type: AutoTokenizer
+
+ ## qlora COPE
+ load_in_8bit: false
+ load_in_4bit: false
+ strict: false
+
+ ## data
+ datasets:
+   - path: AquaV/c2-sharegpt-advanced-prefills-filtered
+     type: sharegpt
+   - path: AquaV/c1-sharegpt-advanced-prefills-filtered
+     type: sharegpt
+   - path: AquaV/rainy-sharegpt-advanced-prefills-filtered
+     type: sharegpt
+   - path: anthracite-core/Gryphe-Opus-Charcard-Roleplay
+     type: sharegpt
+   - path: anthracite-org/kalo-opus-instruct-22k-no-refusal
+     type: sharegpt
+   - path: lodrick-the-lafted/kalo-opus-instruct-3k-filtered
+     type: sharegpt
+   - path: anthracite-org/nopm_claude_writing_fixed
+     type: sharegpt
+   - path: anthracite-org/kalo_opus_misc_240827
+     type: sharegpt
+   - path: anthracite-org/kalo_misc_part2
+     type: sharegpt
+   - path: NewEden/Claude-Instruct-2.7K
+     type: sharegpt
+   - path: NewEden/Claude-Instruct-5K
+     type: sharegpt
+ shuffle_merged_datasets: true
+ dataset_prepared_path: dataset_prepared
+ val_set_size: 0.02
+ output_dir: 12b-out-rslora-SE
+
+ ## LIGGER
+ plugins:
+   - axolotl.integrations.liger.LigerPlugin
+ liger_rope: true
+ liger_rms_norm: true
+ liger_layer_norm: true
+ liger_glu_activation: true
+ liger_fused_linear_cross_entropy: true
+
+ ## CTX settings
+ sequence_len: 16384
+ sample_packing: true
+ eval_sample_packing: true
+ pad_to_sequence_len: true
+
+ ## Lora
+ adapter: lora
+ lora_model_dir:
+ lora_r: 128
+ lora_alpha: 16
+ lora_dropout: 0.05
+ lora_target_linear: true
+ lora_fan_in_fan_out:
+ peft_use_rslora: true
+ lora_modules_to_save:
+   - embed_tokens
+   - lm_head
+
+ ## WandB
+ wandb_project: rei
+ wandb_entity:
+ wandb_watch:
+ wandb_name: daring-mango
+ wandb_log_model:
+
+ ## evals
+ evals_per_epoch: 4
+ eval_table_size:
+ eval_max_new_tokens: 128
+
+ ## hoe params
+ gradient_accumulation_steps: 4
+ micro_batch_size: 1
+ num_epochs: 2
+ optimizer: paged_ademamix_8bit
+ # optimizer: paged_adamw_8bit
+ lr_scheduler: cosine
+ learning_rate: 2.83e-5
+
+ train_on_inputs: false
+ group_by_length: false
+ bf16: auto
+ fp16:
+ tf32: false
+
+ gradient_checkpointing: unsloth
+ early_stopping_patience:
+ resume_from_checkpoint:
+ local_rank:
+ logging_steps: 1
+ xformers_attention:
+ flash_attention: true
+ s2_attention:
+
+ warmup_steps: 40
+ saves_per_epoch: 2
+ debug:
+ ## for ademamix
+ deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16_cpuoffload_params.json
+ ## for adamw
+ # deepspeed: ./deepspeed_configs/zero3_bf16.json
+ weight_decay: 0.01
+ fsdp:
+ fsdp_config:
+ special_tokens:
+   pad_token: <pad>
+
  ```
+ </details><br>
+
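One choice in the config above worth calling out: `peft_use_rslora: true` together with `lora_r: 128` and `lora_alpha: 16`. Standard LoRA scales the adapter update by `alpha / r`, which shrinks toward zero at high ranks; rank-stabilized LoRA (rsLoRA) scales by `alpha / sqrt(r)` instead, keeping the update magnitude usable at a rank as high as 128. A quick sketch of the difference for the values used here:

```python
import math

# LoRA adapter scaling for the hyperparameters in the config above.
lora_r, lora_alpha = 128, 16

standard_scale = lora_alpha / lora_r           # classic LoRA: alpha / r
rslora_scale = lora_alpha / math.sqrt(lora_r)  # peft_use_rslora: alpha / sqrt(r)

print(f"standard LoRA scale: {standard_scale}")    # 0.125
print(f"rsLoRA scale:        {rslora_scale:.4f}")  # 1.4142
```

At r=128 the classic scaling would multiply the adapter output by 0.125, while rsLoRA keeps it at roughly 1.41, which is why rsLoRA is typically paired with large ranks like this one.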
+ ## Training
+ The training was done for 2 epochs. We used 4x [RTX 3090](https://www.nvidia.com/en-us/geforce/graphics-cards/30-series/rtx-3090-3090ti/) GPUs, graciously provided by @intervitens, for the fine-tuning of the model.
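For context, the global batch size implied by the run follows from the config values and the GPU count (micro batch size × gradient accumulation steps × number of GPUs):

```python
# Effective (global) batch size for this run, from the axolotl config above.
micro_batch_size = 1             # micro_batch_size
gradient_accumulation_steps = 4  # gradient_accumulation_steps
num_gpus = 4                     # the 4x RTX 3090 setup described above

effective_batch_size = micro_batch_size * gradient_accumulation_steps * num_gpus
print(effective_batch_size)  # 16
```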
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ ## Safety
+
+ But why?