Instructions to use Open-Orca/Mistral-7B-SlimOrca with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Open-Orca/Mistral-7B-SlimOrca with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Open-Orca/Mistral-7B-SlimOrca")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Open-Orca/Mistral-7B-SlimOrca")
model = AutoModelForCausalLM.from_pretrained("Open-Orca/Mistral-7B-SlimOrca")
```
- Notebooks
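OpenOrca models are trained on ChatML-style conversations, so prompts generally work best when formatted with the system/user/assistant turn markers. A minimal sketch of building such a prompt by hand (the exact template should be confirmed against the model card or the tokenizer's chat template; this helper is illustrative, not part of the library):

```python
# Sketch: build a ChatML-style prompt for Mistral-7B-SlimOrca.
# Assumption: the model follows the ChatML turn format used by the
# OpenOrca series; verify against the model card before relying on it.

def build_chatml_prompt(system: str, user: str) -> str:
    """Format one system + user turn, ending with the open assistant tag."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "Explain gravity in one sentence.",
)
# Pass `prompt` to pipe(prompt, max_new_tokens=128) or tokenize it directly.
```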
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Open-Orca/Mistral-7B-SlimOrca with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Open-Orca/Mistral-7B-SlimOrca"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Open-Orca/Mistral-7B-SlimOrca",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
Use Docker
docker model run hf.co/Open-Orca/Mistral-7B-SlimOrca
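The same completion request as the curl call above can be built in Python with only the standard library. This is a sketch assuming a vLLM server is already running on localhost:8000 and exposing its OpenAI-compatible `/v1/completions` endpoint:

```python
import json

# Build the URL and JSON body for an OpenAI-compatible completions call.
# Sending the request requires a running vLLM server on localhost:8000.

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Return (url, json_body) matching the curl example above."""
    url = "http://localhost:8000/v1/completions"
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    })
    return url, body

url, body = build_completion_request(
    "Open-Orca/Mistral-7B-SlimOrca", "Once upon a time,"
)
# To send it: urllib.request.urlopen(urllib.request.Request(
#     url, data=body.encode(), headers={"Content-Type": "application/json"}))
```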
- SGLang
How to use Open-Orca/Mistral-7B-SlimOrca with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
	--model-path "Open-Orca/Mistral-7B-SlimOrca" \
	--host 0.0.0.0 \
	--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Open-Orca/Mistral-7B-SlimOrca",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
Use Docker images
```shell
docker run --gpus all \
	--shm-size 32g \
	-p 30000:30000 \
	-v ~/.cache/huggingface:/root/.cache/huggingface \
	--env "HF_TOKEN=<secret>" \
	--ipc=host \
	lmsysorg/sglang:latest \
	python3 -m sglang.launch_server \
		--model-path "Open-Orca/Mistral-7B-SlimOrca" \
		--host 0.0.0.0 \
		--port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "Open-Orca/Mistral-7B-SlimOrca",
		"prompt": "Once upon a time,",
		"max_tokens": 512,
		"temperature": 0.5
	}'
```
- Docker Model Runner
How to use Open-Orca/Mistral-7B-SlimOrca with Docker Model Runner:
docker model run hf.co/Open-Orca/Mistral-7B-SlimOrca
finetuning parameters
hi, can you share the finetuning hyper-parameters?
I have finetuned https://huggingface.co/mistralai/Mistral-7B-v0.1 with your dataset, but the ARC and HellaSwag metrics decrease significantly during training.
Here is some information about my hyper-parameters:
- full parameters finetuning
- learning rate = 5e-6
- batch_size=64
- epoch=3
The batch_size is too large; you should set it to 4 or 6 instead.
I would prefer setting the learning rate to 2e-5.
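Putting the suggestions above next to the original run, the advice amounts to a smaller batch size and a higher learning rate. A sketch of the suggested settings as a plain config dict (the key names mimic common Trainer-style arguments; this is not the authors' actual training script):

```python
# Illustrative hyper-parameter set reflecting the thread's advice:
# full-parameter finetuning for 3 epochs, batch size reduced from 64,
# learning rate raised from 5e-6. Key names are an assumption.
suggested_hparams = {
    "learning_rate": 2e-5,             # raised from 5e-6 as suggested
    "per_device_train_batch_size": 4,  # reduced from 64; 4-6 recommended
    "num_train_epochs": 3,             # unchanged from the original run
}
```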
Thanks for your response~
I will try!
@xDAN2099 and others, I'm trying to finetune Mistral 7B with SlimOrca, and the MT-Bench score is consistently well below the OpenOrca-benchmarked 6.84. Could you please share the exact training script, along with any other details on the hardware used, to reach an on-par MT-Bench score? Any help in that regard is much appreciated, thank you!