JosephusCheung committed
Commit e4a0d4d · 1 Parent(s): 4d49d73

Update README.md

Files changed (1): README.md +25 -1
README.md CHANGED
@@ -90,6 +90,18 @@ Hard acc:48.03
 
 **Zero-shot ACC 0.5921152388172858** (Outperforms WizardMath-7B and Qwen-7B)
 
+## MT-Bench on the DPO Version
+| Model | MT-Bench |
+| ------------------------- | ------------ |
+| GPT-4 | 8.99 |
+| GPT-3.5-Turbo | 7.94 |
+| | |
+| Zephyr-7b-β (Overfitting) | 7.34 |
+| Zephyr-7b-α | 6.88 |
+| | |
+| **[CausalLM/14B-DPO-α](https://huggingface.co/CausalLM/14B-DPO-alpha)** | **7.618868** |
+| **[CausalLM/7B-DPO-α](https://huggingface.co/CausalLM/7B-DPO-alpha)** | **7.038125** |
+
 # Causal Language Model 7B - Fully Compatible with Meta LLaMA 2
 Load the model with the transformers library without any remote/external code: use AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the model and GPT2Tokenizer for the tokenizer). Model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.
 
@@ -144,4 +156,16 @@ STEM accuracy: 61.67
 
 ## GSM8K
 
-**Zero-shot accuracy 0.5921152388172858** (outperforms WizardMath-7B and Qwen-7B)
+**Zero-shot accuracy 0.5921152388172858** (outperforms WizardMath-7B and Qwen-7B)
+
+## MT-Bench on the DPO Version
+| Model | MT-Bench |
+| ------------------------- | ------------ |
+| GPT-4 | 8.99 |
+| GPT-3.5-Turbo | 7.94 |
+| | |
+| Zephyr-7b-β (Overfitting) | 7.34 |
+| Zephyr-7b-α | 6.88 |
+| | |
+| **[CausalLM/14B-DPO-α](https://huggingface.co/CausalLM/14B-DPO-alpha)** | **7.618868** |
+| **[CausalLM/7B-DPO-α](https://huggingface.co/CausalLM/7B-DPO-alpha)** | **7.038125** |
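The loading procedure the README describes (stock transformers classes, no remote/external code, with AutoModelForCausalLM/AutoTokenizer or the manually pinned LlamaForCausalLM/GPT2Tokenizer) can be sketched as follows; the hub repo id `CausalLM/7B` and the sample prompt are assumptions for illustration, not part of the diff:

```python
# Sketch: load the model with stock transformers classes only,
# i.e. without trust_remote_code. Repo id assumed from context.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/7B"  # hypothetical; substitute the actual hub repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alternatively, pin the concrete classes the README names:
# from transformers import LlamaForCausalLM, GPT2Tokenizer
# tokenizer = GPT2Tokenizer.from_pretrained(model_id)
# model = LlamaForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint keeps the LLaMA 2 architecture, the same weights also load into the quantized toolchains the README lists (GGUF via llama.cpp, GPTQ, AWQ) without conversion code beyond each tool's standard export step.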