---
license: llama2
---

This is the **Full-Weight** of the WizardLM-70B V1.0 model, trained from **Llama-2 70b**.

## WizardLM: Empowering Large Pre-Trained Language Models to Follow Complex Instructions

| <sup>WizardCoder-15B-V1.0</sup> | <sup> 🤗 <a href="https://huggingface.co/WizardLM/WizardCoder-15B-V1.0" target="_blank">HF Link</a></sup> | <sup>📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a></sup> | | | |<sup> 57.3 pass@1 </sup> | <sup> <a href="https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement" target="_blank">OpenRAIL-M</a></sup> |
</font>

- 🔥🔥🔥 [08/09/2023] We released the **WizardLM-70B-V1.0** model.

**Github Repo**: https://github.com/nlpxucan/WizardLM

**Twitter**:

**Discord**: https://discord.gg/bpmeZD7V

**Demo**:

❗<b>Note for model system prompts usage:</b>

<b>WizardLM</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: hello, who are you? ASSISTANT:
```
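For multi-turn conversation, further `USER:`/`ASSISTANT:` pairs are appended to the same string. A minimal sketch of how such a prompt can be assembled — the `build_prompt` helper and the `</s>` end-of-turn separator are illustrative assumptions, not part of the official WizardLM repo:

```python
# Sketch of the Vicuna-style prompt format shown above.
# Assumptions: the build_prompt helper is hypothetical, and "</s>" as the
# end-of-turn separator after each assistant reply is an assumption; check
# the WizardLM repo for the exact convention.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the "
    "user's questions."
)

def build_prompt(turns):
    """Render (user, assistant) turns into one prompt string.

    The assistant entry may be None for the final, not-yet-answered turn.
    """
    prompt = SYSTEM
    for user, assistant in turns:
        prompt += f" USER: {user} ASSISTANT:"
        if assistant is not None:
            prompt += f" {assistant}</s>"
    return prompt

# Single-turn prompt, matching the template above:
print(build_prompt([("hello, who are you?", None)]))
```

The model's generation is then appended after the trailing `ASSISTANT:` and the cycle repeats for each new user turn.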