Instructions for using QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF",
    filename="Llama-3-Instruct-8B-SimPO-ExPO.Q2_K.gguf",
)

llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ]
)
```
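`create_chat_completion` returns a dict in the OpenAI chat-completion format; a minimal sketch of reading the reply:

```python
# The response follows the OpenAI chat-completion schema;
# the assistant's reply is nested under choices[0]["message"]["content"].
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
print(response["choices"][0]["message"]["content"])
```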
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with llama.cpp:
Install from brew (macOS/Linux)
```sh
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
Install from WinGet (Windows)
```sh
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
Use pre-built binary
```sh
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
Build from source code
```sh
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
Use Docker
```sh
docker model run hf.co/QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
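Once `llama-server` is running via any of the install paths above, it exposes an OpenAI-compatible HTTP API, by default on port 8080. A minimal sketch of calling it from Python, assuming the `requests` package is installed and the default port:

```python
# A minimal sketch: query the llama-server OpenAI-compatible endpoint.
# Assumes llama-server from above is running on its default port 8080.
# The server serves the single loaded model, so no "model" field is needed.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```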
- LM Studio
- Jan
- vLLM
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with vLLM:
Install from pip and serve model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF",
        "messages": [
            { "role": "user", "content": "What is the capital of France?" }
        ]
    }'
```
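Because vLLM speaks the OpenAI-compatible API, the same server can also be called with the official `openai` Python client. A minimal sketch, assuming the server above is running on port 8000:

```python
# A minimal sketch: call the vLLM server via the OpenAI-compatible API.
# Assumes `pip install openai` and the vLLM server started above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(resp.choices[0].message.content)
```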
Use Docker

```sh
docker model run hf.co/QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
- Ollama
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with Ollama:
```sh
ollama run hf.co/QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
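Ollama also exposes a local API, and the model can be driven programmatically once pulled. A minimal sketch using the `ollama` Python package (an assumption; any OpenAI-compatible client pointed at Ollama's local endpoint works too):

```python
# A minimal sketch: chat with the model through Ollama's Python client.
# Assumes `pip install ollama` and that the `ollama run` command above
# has already pulled the model under this name.
import ollama

response = ollama.chat(
    model="hf.co/QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response["message"]["content"])
```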
- Unsloth Studio
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```sh
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF to start chatting
```
Use Hugging Face Spaces for Unsloth
No setup is required: open https://huggingface.co/spaces/unsloth/studio in your browser and search for QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF to start chatting.
- Docker Model Runner
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with Docker Model Runner:
```sh
docker model run hf.co/QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
- Lemonade
How to use QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF with Lemonade:
Pull the model
```sh
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF:Q4_K_M
```
Run and chat with the model
```sh
lemonade run user.Llama-3-Instruct-8B-SimPO-ExPO-GGUF-Q4_K_M
```
List all available models
```sh
lemonade list
```
Llama-3-Instruct-8B-SimPO-ExPO-GGUF
This is a quantized version of chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO, created using llama.cpp.
Model Description
This is the extrapolated (ExPO) model based on princeton-nlp/Llama-3-Instruct-8B-SimPO and meta-llama/Meta-Llama-3-8B-Instruct, as described in the paper "Weak-to-Strong Extrapolation Expedites Alignment".
Specifically, we obtain this model by extrapolating (alpha = 0.3) from the weights of the SFT and DPO/RLHF checkpoints, achieving better alignment with human preferences.
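For intuition, the extrapolation is a simple linear operation on the two checkpoints' weights. A minimal sketch of the idea (an illustration, not the authors' released code; the function and parameter names are hypothetical):

```python
# Illustrative sketch of ExPO weight extrapolation (not the authors' code).
# Treats meta-llama/Meta-Llama-3-8B-Instruct as the weaker SFT checkpoint and
# princeton-nlp/Llama-3-Instruct-8B-SimPO as the aligned checkpoint, then moves
# further along the alignment direction with alpha = 0.3.
def expo_extrapolate(sft_weights, aligned_weights, alpha=0.3):
    extrapolated = {}
    for name, w_aligned in aligned_weights.items():
        w_sft = sft_weights[name]
        # theta_expo = theta_aligned + alpha * (theta_aligned - theta_sft)
        extrapolated[name] = w_aligned + alpha * (w_aligned - w_sft)
    return extrapolated
```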
This extrapolated model achieves a 40.6% win rate and a 45.8% LC win rate on AlpacaEval 2.0, outperforming the original Llama-3-Instruct-8B-SimPO's 40.5% and 44.7%, respectively.
Evaluation Results
Evaluation results on the AlpacaEval 2.0 benchmark (you can find the evaluation outputs on the official GitHub repo):
| Model | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
|---|---|---|---|---|
| HuggingFaceH4/zephyr-7b-alpha | 6.7% | 10.0% | 10.6% | 13.6% |
| HuggingFaceH4/zephyr-7b-beta | 10.2% | 13.2% | 11.1% | 14.0% |
| berkeley-nest/Starling-LM-7B-alpha | 15.0% | 18.3% | 18.2% | 19.5% |
| Nexusflow/Starling-LM-7B-beta | 26.6% | 25.8% | 29.6% | 26.4% |
| snorkelai/Snorkel-Mistral-PairRM | 24.7% | 24.0% | 28.8% | 26.4% |
| RLHFlow/LLaMA3-iterative-DPO-final | 29.2% | 36.0% | 32.7% | 37.8% |
| internlm/internlm2-chat-1.8b | 3.8% | 4.0% | 5.2% | 4.3% |
| internlm/internlm2-chat-7b | 20.5% | 18.3% | 28.1% | 22.7% |
| internlm/internlm2-chat-20b | 36.1% | 24.9% | 46.2% | 27.2% |
| allenai/tulu-2-dpo-7b | 8.5% | 10.2% | 11.5% | 11.7% |
| allenai/tulu-2-dpo-13b | 11.2% | 15.5% | 15.6% | 17.6% |
| allenai/tulu-2-dpo-70b | 15.4% | 21.2% | 23.0% | 25.7% |
Evaluation results on the MT-Bench benchmark (you can find the evaluation outputs on the official GitHub repo):
| Model | Original | + ExPO |
|---|---|---|
| HuggingFaceH4/zephyr-7b-alpha | 6.85 | 6.87 |
| HuggingFaceH4/zephyr-7b-beta | 7.02 | 7.06 |
| berkeley-nest/Starling-LM-7B-alpha | 7.82 | 7.91 |
| Nexusflow/Starling-LM-7B-beta | 8.10 | 8.18 |
| snorkelai/Snorkel-Mistral-PairRM | 7.63 | 7.69 |
| RLHFlow/LLaMA3-iterative-DPO-final | 8.08 | 8.45 |
| internlm/internlm2-chat-1.8b | 5.17 | 5.26 |
| internlm/internlm2-chat-7b | 7.72 | 7.80 |
| internlm/internlm2-chat-20b | 8.13 | 8.26 |
| allenai/tulu-2-dpo-7b | 6.35 | 6.38 |
| allenai/tulu-2-dpo-13b | 7.00 | 7.26 |
| allenai/tulu-2-dpo-70b | 7.79 | 8.03 |
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
Model tree for QuantFactory/Llama-3-Instruct-8B-SimPO-ExPO-GGUF
Base model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO