---
base_model: SanjiWatsuki/Sonya-7B
inference: false
language:
- en
license: cc-by-4.0
model_creator: Sanji Watsuki
model_name: Sonya 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Sonya 7B - AWQ
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [Sonya 7B](https://huggingface.co/SanjiWatsuki/Sonya-7B)

<!-- description start -->
## Description

This repo contains AWQ model files for [Sanji Watsuki's Sonya 7B](https://huggingface.co/SanjiWatsuki/Sonya-7B).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later, which supports all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code (see the sketch below)
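
For the direct AutoAWQ route, a minimal loading sketch (assumes AutoAWQ 0.1.6 or later and a CUDA GPU; a fuller Transformers example is given later in this README):

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Sonya-7B-AWQ"

# fuse_layers fuses attention/MLP modules for faster inference on supported GPUs
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
```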

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Sonya-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Sonya-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Sonya-7B-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/Sonya-7B)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Sonya-7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB |
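
These parameters (4-bit, group size 128, GEMM kernel) correspond to an AutoAWQ quantisation config along the following lines. This is an illustrative sketch of how such files are produced, not the exact script used for this repo:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

# Settings matching the table above: 4-bit weights, group size 128, GEMM kernel
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model_path = "SanjiWatsuki/Sonya-7B"
quant_path = "Sonya-7B-AWQ"

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Quantise against a calibration dataset, then save the quantised model
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```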

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click installers unless you're sure you know how to make a manual install. (If you'd rather fetch the files from the command line first, see the sketch after these steps.)

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Sonya-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Sonya-7B-AWQ`.
7. Select **Loader: AutoAWQ**.
8. Click **Load**; the model will load and be ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
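
As an alternative to step 2's in-app download, you can fetch the files with `huggingface-cli` first (a sketch; assumes `huggingface_hub` 0.17.0 or later, and text-generation-webui's `models/` directory as the target):

```shell
pip3 install "huggingface_hub>=0.17.0"
huggingface-cli download TheBloke/Sonya-7B-AWQ --local-dir models/Sonya-7B-AWQ --local-dir-use-symlinks False
```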
<!-- README_AWQ.md-text-generation-webui end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.

For example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Sonya-7B-AWQ --quantization awq --dtype auto
```
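
Once the server is running, you can query it over HTTP. A minimal sketch against the demo `/generate` endpoint of `vllm.entrypoints.api_server`, assuming the default port 8000:

```shell
curl http://localhost:8000/generate \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Tell me about AI", "max_tokens": 128, "temperature": 0.8}'
```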

- When using vLLM from Python code, again set `quantization=awq`.

For example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]

sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Sonya-7B-AWQ", quantization="awq", dtype="auto")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Sonya-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
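
Putting those parameters together, a full `docker run` invocation might look like this (a sketch; the GPU flags, shared-memory size and volume mount are typical TGI settings, not values prescribed by this repo):

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v $PWD/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/Sonya-7B-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```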

Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers

### Install the necessary packages

- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.

```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```

Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.

If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:

```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### Transformers example code (requires Transformers 4.35.0 and later)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Sonya-7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donors!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Sanji Watsuki's Sonya 7B

<div style="display: flex; justify-content: center; align-items: center">
    <img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/Sonya.jpg">
</div>

<p align="center">
<big><b>Top 1 Performer MT-bench 🤪</b></big>
</p>

## WTF is This?

Sonya-7B is, at the time of writing, the **#1 performing model in MT-Bench first turn, ahead of GPT-4, and overall the #2 model in MT-Bench**, to the best of my knowledge. Sonya-7B should be a good all-purpose model for all tasks including assistant, RP, etc.

Sonya-7B has a similar structure to my previous model, [Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B), and uses a very similar merge. It's a merge of [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1), [Jan-Ai's Stealth v1.2](https://huggingface.co/jan-hq/stealth-v1.2), [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b), [NeverSleep/Noromaid-7B-v0.2](https://huggingface.co/NeverSleep/Noromaid-7b-v0.2), and [athirdpath/NSFW_DPO_vmgb-7b](https://huggingface.co/athirdpath/NSFW_DPO_vmgb-7b). Sauce is below. Somehow, by combining these pieces, it substantially outscores any of its parents on MT-Bench.

I picked these models because:
* MT-Bench normally correlates well with real-world model quality, and xDAN performs well on it.
* Almost all models in the mix were Alpaca prompt formatted, which gives prompt consistency.
* Stealth v1.2 has been a magic sprinkle that seems to increase my MT-Bench scores.
* I added RP models because they boosted the Writing and Roleplay benchmarks 👀

Based on the parent models, I expect this model to be used with an 8192 context window. Please use an NTK scaling alpha of 2.6 to experimentally try out a 16384 context.

**Let me be candid:** Despite the test scores, this model is **NOT a GPT killer**. I think it's a very sharp model **for a 7B**, it probably punches way above its weight **for a 7B**, but it's still a 7B model. Even for a 7B model, I think **it's quirky and has some weird outputs**, probably due to how Frankenstein this merge is. Keep your expectations in check 😉

**MT-Bench Average Turn**

| model | score | size |
|--------------------|-----------|--------|
| gpt-4 | 8.99 | - |
| **Sonya-7B** | **8.52** | **7b** |
| xDAN-L1-Chat-RL-v1 | 8.34 | 7b |
| Starling-7B | 8.09 | 7b |
| Claude-2 | 8.06 | - |
| *Silicon-Maid* | *7.96* | *7b* |
| *Loyal-Macaroni-Maid* | *7.95* | *7b* |
| gpt-3.5-turbo | 7.94 | 20b? |
| Claude-1 | 7.90 | - |
| OpenChat-3.5 | 7.81 | - |
| vicuna-33b-v1.3 | 7.12 | 33b |
| wizardlm-30b | 7.01 | 30b |
| Llama-2-70b-chat | 6.86 | 70b |

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-gpt.png">

<img src="https://huggingface.co/SanjiWatsuki/Sonya-7B/resolve/main/assets/mt-bench-comparison.png">

### The Sauce

```
models:
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
      weight: 1
      density: 1
  - model: chargoddard/piano-medley-7b
    parameters:
      weight: 0.3
  - model: jan-hq/stealth-v1.2
    parameters:
      weight: 0.2
  - model: NeverSleep/Noromaid-7b-v0.2
    parameters:
      weight: 0.2
  - model: athirdpath/NSFW_DPO_vmgb-7b
    parameters:
      weight: 0.2
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  density: 0.4
  int8_mask: true
  normalize: true
dtype: bfloat16
```
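
This looks like a [mergekit](https://github.com/cg123/mergekit) TIES config. If you wanted to reproduce the merge, the invocation would be roughly the following (a sketch; `sonya.yml` is a hypothetical filename for the config above):

```shell
pip3 install mergekit
mergekit-yaml sonya.yml ./Sonya-7B --cuda
```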

**There was no additional training, finetuning, or DPO.** This is a straight merger.

### Prompt Template (Alpaca)

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```

I found that this model **performed worse** with the xDAN prompt format, so, despite the heavy weight of xDAN in this merger, I recommend *against* its use.
467
+
468
+ ### Other Benchmark Stuff
469
+
470
+ **########## First turn ##########**
471
+ | model | turn | score | size
472
+ |--------------------|------|----------|--------
473
+ | **Sonya-7B** | 1 | **9.06875** | **7b**
474
+ | gpt-4 | 1 | 8.95625 | -
475
+ | xDAN-L1-Chat-RL-v1 | 1 | *8.87500* | *7b*
476
+ | xDAN-L2-Chat-RL-v2 | 1 | 8.78750 | 30b
477
+ | claude-v1 | 1 | 8.15000 | -
478
+ | gpt-3.5-turbo | 1 | 8.07500 | 20b
479
+ | vicuna-33b-v1.3 | 1 | 7.45625 | 33b
480
+ | wizardlm-30b | 1 | 7.13125 | 30b
481
+ | oasst-sft-7-llama-30b | 1 | 7.10625 | 30b
482
+ | Llama-2-70b-chat | 1 | 6.98750 | 70b
483
+
484
+
485
+ ########## Second turn ##########
486
+ | model | turn | score | size
487
+ |--------------------|------|-----------|--------
488
+ | gpt-4 | 2 | 9.025000 | -
489
+ | xDAN-L2-Chat-RL-v2 | 2 | 8.087500 | 30b
490
+ | **Sonya-7B** | 2 | **7.962500** | **7b**
491
+ | xDAN-L1-Chat-RL-v1 | 2 | 7.825000 | 7b
492
+ | gpt-3.5-turbo | 2 | 7.812500 | 20b
493
+ | claude-v1 | 2 | 7.650000 | -
494
+ | wizardlm-30b | 2 | 6.887500 | 30b
495
+ | vicuna-33b-v1.3 | 2 | 6.787500 | 33b
496
+ | Llama-2-70b-chat | 2 | 6.725000 | 70b
497
+

If you'd like to replicate the MT-Bench run, please ensure that the Alpaca prompt template is applied to the model. I did this by putting "alpaca" in the model path to trigger the `AlpacaAdapter`.
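
For example, with FastChat's `llm_judge` scripts, something like the following should work (a sketch; the directory name containing "alpaca" is what triggers the adapter, and the local paths are illustrative):

```shell
git clone https://github.com/lm-sys/FastChat
cd FastChat
pip3 install -e ".[model_worker,llm_judge]"
cd fastchat/llm_judge

# The model path includes "alpaca" so FastChat selects the Alpaca conversation template
python gen_model_answer.py --model-path /models/sonya-7b-alpaca --model-id sonya-7b-alpaca

# Judging with GPT-4 requires OPENAI_API_KEY to be set
python gen_judgment.py --model-list sonya-7b-alpaca
python show_result.py
```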