Quantizations of https://huggingface.co/WizardLMTeam/WizardLM-13B-V1.2

From the original README:

Note on model system prompt usage:

WizardLM adopts the prompt format from Vicuna and supports multi-turn conversation. The prompt should be formatted as follows:

A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am WizardLM.</s>......
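The format above can be assembled programmatically. The following is a minimal sketch; the `build_prompt` helper and its `(user, assistant)` turn structure are illustrative names, not part of the model card, but the string it produces follows the template shown above (assistant replies terminated with `</s>`, the final turn left open at `ASSISTANT:` for the model to complete).

```python
# Sketch of a helper that assembles the Vicuna-style prompt WizardLM expects.
# Function name and message structure are illustrative, not from the model card.

SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(turns, system=SYSTEM):
    """turns: list of (user_message, assistant_reply_or_None) pairs.

    Completed assistant replies are terminated with </s>; a None reply
    leaves the prompt open at "ASSISTANT:" for the model to continue.
    """
    prompt = system + " "
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT:"
        if assistant_msg is None:
            break
        prompt += f" {assistant_msg}</s>"
    return prompt

prompt = build_prompt([("Hi", "Hello."), ("Who are you?", None)])
print(prompt)
```

Feeding the model this open-ended prompt and stopping generation at `</s>` yields the next assistant reply, which can then be appended as a completed turn.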

WizardLM Inference Demo Script

We provide the WizardLM inference demo code here.

Please cite the paper if you use the data or code from WizardLM.

Model format: GGUF
Model size: 13B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
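The bit widths above translate into rough file sizes via bytes = params × bits / 8. This is only a back-of-envelope sketch: real GGUF quantization types mix bit widths per tensor and add metadata, so actual files differ somewhat from these figures.

```python
# Rough size estimate for an n-bit quantization of a 13B-parameter model.
# Real GGUF quant types mix precisions per tensor and carry metadata, so
# actual file sizes will deviate from this simple formula.

PARAMS = 13e9  # 13B parameters

def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Bytes = params * bits / 8, reported in decimal gigabytes."""
    return params * bits_per_weight / 8 / 1e9

for bits in (1, 2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

For example, a 4-bit quant of a 13B model comes out around 6.5 GB by this estimate, and an 8-bit quant around 13 GB.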
