Typo #1
by otomoto - opened

README.md CHANGED
@@ -17,7 +17,7 @@ OpenChat is a series of open-source language models fine-tuned on a diverse and
 - **🤗 Only used 6K data for finetuning!!!**
 - OpenChat-8192: based on LLaMA-13B (extended to 8192 context length)
   - **106.6%** of ChatGPT score on Vicuna GPT-4 evaluation
-  - **79.5%**
+  - **79.5%** Win-rate on AlpacaEval

 **Code models:**