Commit e23a160 · Parent: daf77b7 · Update README.md
README.md CHANGED
```diff
@@ -38,13 +38,13 @@ pipeline_tag: text-generation
 
 | Model                       | HumanEval+ |
 |-----------------------------|------------|
-| WizardCoder-Python-34B-V1.0 | 64.6       |
 | GPT-3.5 (December 2023)     | 64.6       |
 | **OpenChat 3.5 1210**       | **63.4**   |
+| GPT-3.5 (March 2023)        | 64.6       |
 | OpenHermes 2.5              | 41.5       |
 
 <div align="center" style="justify-content: center; align-items: center; ">
-<img src="https://github.com/alpayariyak/openchat/blob/master/assets/
+<img src="https://github.com/alpayariyak/openchat/blob/master/assets/3.5-benchmarks.png?raw=true" style="width: 100%; border-radius: 0.5em">
 </div>
 
 OpenChat is an innovative library of open-source language models, fine-tuned with [C-RLFT](https://arxiv.org/pdf/2309.11235.pdf) - a strategy inspired by offline reinforcement learning. Our models learn from mixed-quality data without preference labels, delivering exceptional performance on par with ChatGPT, even with a 7B model. Despite our simple approach, we are committed to developing a high-performance, commercially viable, open-source large language model, and we continue to make significant strides toward this vision.
```
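For readers curious what the C-RLFT recipe named in the closing paragraph amounts to, here is a minimal sketch of the core idea from the paper (arXiv:2309.11235): treat the coarse quality of each data source as a reward, condition the policy on that source class, and weight the fine-tuning loss by the reward. The function names, template strings, and weight values below are illustrative assumptions, not OpenChat's actual training code.

```python
# Minimal sketch of the C-RLFT idea (arXiv:2309.11235): treat data-source
# quality as a coarse reward, condition the policy on that class, and
# weight the fine-tuning loss accordingly. All names, templates, and
# weight values here are illustrative assumptions, not OpenChat's code.
import torch
import torch.nn.functional as F

# Coarse source-level rewards, e.g. expert (GPT-4-quality) vs. generic data.
SOURCE_WEIGHT = {"expert": 1.0, "generic": 0.1}

def conditioned_prompt(source: str, user_msg: str) -> str:
    """Prefix the prompt with a class tag so the policy is conditioned
    on the data source, in the spirit of OpenChat's per-source templates."""
    tag = "GPT4 Correct User" if source == "expert" else "User"
    return f"{tag}: {user_msg}<|end_of_turn|>Assistant:"

def crlft_loss(logits: torch.Tensor, targets: torch.Tensor, source: str) -> torch.Tensor:
    """Class-conditioned fine-tuning loss: standard token-level
    cross-entropy scaled by the coarse reward of the example's source."""
    ce = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    return SOURCE_WEIGHT[source] * ce
```

In effect, reinforcement fine-tuning with coarse source-level rewards reduces to weighted SFT on class-conditioned prompts, which is why no per-example preference labels are required.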