Instructions for using openchat/openchat_8192 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use openchat/openchat_8192 with Transformers:
Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="openchat/openchat_8192")
```

Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_8192")
model = AutoModelForCausalLM.from_pretrained("openchat/openchat_8192")
```

- Notebooks
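The model card further down notes that the pretrained weights should be loaded in *bfloat16*. As a sketch tying that note to the direct-loading snippet: `load_openchat` below is a hypothetical helper (not part of the repository), the dtype is passed as a string so `torch` need not be imported, and the heavy import and weight download only happen when the function is called.

```python
MODEL_ID = "openchat/openchat_8192"

def load_openchat(model_id: str = MODEL_ID, dtype: str = "bfloat16"):
    """Load the tokenizer and model; the model card asks for bfloat16 weights.

    The transformers import is deferred so merely defining this helper
    stays cheap; calling it triggers the actual download.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype)
    return tokenizer, model
```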
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use openchat/openchat_8192 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "openchat/openchat_8192"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_8192",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker:

```shell
docker model run hf.co/openchat/openchat_8192
```
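The same OpenAI-compatible `/v1/completions` call can be made from Python with only the standard library. A minimal sketch, assuming the server is running locally on the port shown above; the helper names are illustrative, and the payload mirrors the curl example field for field:

```python
import json
import urllib.request

def build_payload(prompt, model="openchat/openchat_8192",
                  max_tokens=512, temperature=0.5):
    """Build the JSON body for an OpenAI-compatible /v1/completions call."""
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, base_url="http://localhost:8000", **kwargs):
    """POST the payload to the running server and return the first completion."""
    req = urllib.request.Request(
        f"{base_url}/v1/completions",
        data=json.dumps(build_payload(prompt, **kwargs)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["text"]
```

Because the API shape is the same, the identical client should work against the SGLang server in the next section with `base_url="http://localhost:30000"`.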
- SGLang
How to use openchat/openchat_8192 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "openchat/openchat_8192" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_8192",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "openchat/openchat_8192" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_8192",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use openchat/openchat_8192 with Docker Model Runner:
```shell
docker model run hf.co/openchat/openchat_8192
```
Update README

README.md (updated):
# OpenChat: Less is More for Open-source Models

OpenChat is a series of open-source language models fine-tuned on a diverse and high-quality dataset of multi-round conversations. With only ~6K GPT-4 conversations filtered from the ~90K ShareGPT conversations, OpenChat is designed to achieve high performance with limited data.

**Generic models:**

- OpenChat: based on LLaMA-13B (2048 context length)
  - **🚀 105.7%** of ChatGPT score on Vicuna GPT-4 evaluation
  - **🔥 80.9%** Win-rate on AlpacaEval
  - **🤗 Only used 6K data for finetuning!!!**
- OpenChat-8192: based on LLaMA-13B (extended to 8192 context length)
  - **106.6%** of ChatGPT score on Vicuna GPT-4 evaluation
  - **79.5%** Win-rate on AlpacaEval

**Code models:**

- OpenCoderPlus: based on StarCoderPlus (native 8192 context length)
  - **102.5%** of ChatGPT score on Vicuna GPT-4 evaluation
  - **78.7%** Win-rate on AlpacaEval

*Note:* Please load the pretrained models using *bfloat16*.

## Code and Inference Server

We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.

## Web UI

OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.

## Conversation Template