Instructions to use openchat/openchat_3.5 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use openchat/openchat_3.5 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="openchat/openchat_3.5")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_3.5")
model = AutoModelForCausalLM.from_pretrained("openchat/openchat_3.5")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use openchat/openchat_3.5 with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "openchat/openchat_3.5"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_3.5",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```
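The curl call above can equally be made from Python. Below is a minimal standard-library sketch (no extra client package) that builds the same OpenAI-compatible request; it assumes the vLLM server from the previous step is running on `localhost:8000`:

```python
import json
import urllib.request

def build_chat_request(base_url, model, messages):
    """Build (but do not send) an OpenAI-compatible chat completion request."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000",
    "openchat/openchat_3.5",
    [{"role": "user", "content": "What is the capital of France?"}],
)

# Send only once the server above is actually running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The helper name `build_chat_request` is illustrative, not part of vLLM; the endpoint path and payload shape match the curl example above.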
- SGLang
How to use openchat/openchat_3.5 with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "openchat/openchat_3.5" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_3.5",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "openchat/openchat_3.5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "openchat/openchat_3.5",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
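Whichever server you call, the JSON that comes back follows the OpenAI chat-completions shape. As a sketch, here is how the assistant's reply is typically extracted; the sample response below is hand-written for illustration (field values made up), not actual server output:

```python
import json

# Hand-written sample of an OpenAI-compatible /v1/chat/completions response,
# trimmed to the fields most often inspected:
sample = json.loads("""
{
  "model": "openchat/openchat_3.5",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Paris."},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 3, "total_tokens": 15}
}
""")

# The reply text lives under choices[0].message.content:
reply = sample["choices"][0]["message"]["content"]
print(reply)  # prints "Paris."
```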
How to use openchat/openchat_3.5 with Docker Model Runner:
docker model run hf.co/openchat/openchat_3.5
Question about openchat_3.5 gsm8k score on the Open LLM Leaderboard.
First of all, this model is amazing: it seems to speak Japanese and write rhyming poetry in English, and it gave great code and technical advice. It feels smarter than even the Llama 30B models I have interacted with. But it has a surprisingly low score on the Open LLM Leaderboard despite this figure reporting near parity with ChatGPT. One source of the gap is that gsm8k seems to be reported as 26.84 on the best run on the Open LLM Leaderboard, whereas on your chart I think it is reported as 77.3 (or at least greater than 62.4).
What's the story here? Based on my interactions, I'm ready to believe it's better than the leaderboard score would indicate, but I'm curious why there might be a mismatch.
gsm8k is usually evaluated with CoT (chain-of-thought) prompting, but the Open LLM Leaderboard does not use any CoT; see details here. When you look at the MMLU results (HF has previously corrected the MMLU evaluation), the results are better than reported and within the 70B range.
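To make the CoT point concrete, here is an illustrative pair of prompt styles (a sketch, not the leaderboard's actual evaluation harness). gsm8k is graded on the final number, so a model that relies on writing out intermediate steps scores far lower when forced to answer directly:

```python
# A gsm8k-style word problem (this one is the well-known first example
# from the public gsm8k training set):
question = (
    "Natalia sold clips to 48 of her friends in April, and then she sold "
    "half as many clips in May. How many clips did Natalia sell altogether?"
)

# Non-CoT prompting (Open LLM Leaderboard style): the model must emit
# the answer immediately after "Answer:".
direct_prompt = f"Question: {question}\nAnswer:"

# CoT prompting (the usual gsm8k setup): the model is invited to write
# intermediate reasoning before the final number.
cot_prompt = f"Question: {question}\nAnswer: Let's think step by step."

# The target answer is the same either way; only the allowed path differs.
answer = 48 + 48 // 2
print(answer)  # prints 72
```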
Ah, thanks for clarifying. Excited to see future openchat models!
