Related paper: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (arXiv:2203.05482)
How to use ddh0/EstopianOrcaMaid-13b with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="ddh0/EstopianOrcaMaid-13b")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ddh0/EstopianOrcaMaid-13b")
model = AutoModelForCausalLM.from_pretrained("ddh0/EstopianOrcaMaid-13b")

How to use ddh0/EstopianOrcaMaid-13b with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "ddh0/EstopianOrcaMaid-13b"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ddh0/EstopianOrcaMaid-13b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
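The same OpenAI-compatible completions endpoint can also be called from Python. The sketch below uses only the standard library and assumes the vLLM server started above is listening on localhost:8000; `post_completion` is a hypothetical helper and is not executed here, since it needs a running server.

```python
import json
import urllib.request

# Request body matching the curl example above
payload = {
    "model": "ddh0/EstopianOrcaMaid-13b",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

def post_completion(payload, url="http://localhost:8000/v1/completions"):
    """POST the payload to the OpenAI-compatible completions endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Requires a running server, so it is left commented out:
# result = post_completion(payload)
# print(result["choices"][0]["text"])
```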
How to use ddh0/EstopianOrcaMaid-13b with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "ddh0/EstopianOrcaMaid-13b" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "ddh0/EstopianOrcaMaid-13b",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

Alternatively, run the SGLang server with Docker:

docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "ddh0/EstopianOrcaMaid-13b" \
--host 0.0.0.0 \
--port 30000
How to use ddh0/EstopianOrcaMaid-13b with Docker Model Runner:
docker model run hf.co/ddh0/EstopianOrcaMaid-13b
This is a merge of pre-trained language models created using mergekit.
The goal of this merge is to create an unusually intelligent and human-like model, especially for roleplay (RP).
The prompt format is Alpaca. You can use the standard format as shown, but for best results you should customize the system prompt to your specific needs.
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{YOUR MESSAGE HERE}
### Response:
{BOT MESSAGE HERE}
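As a minimal sketch, the template above can be filled in programmatically. `format_alpaca` is a hypothetical helper, not something shipped with the model; it simply reproduces the Alpaca layout shown above.

```python
# Default Alpaca system prompt from the template above
DEFAULT_SYSTEM = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)

def format_alpaca(instruction, system=DEFAULT_SYSTEM):
    """Build an Alpaca-style prompt; the model's reply follows '### Response:'."""
    return f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:\n"

print(format_alpaca("Say hello."))
```

The returned string can be passed directly to the Transformers pipeline shown earlier.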
This model was merged using the linear merge method.
The following models were included in the merge:

- KatyTheCutie/EstopianMaid-13B
- Orca-2-13b
The following YAML configuration was used to produce this model:
models:
- model: /Volumes/Sabrent/LLMs/EstopianMaid-13B
parameters:
weight: 0.8
- model: "/Volumes/SanDisk/LLM Archive/Orca-2-13b"
parameters:
weight: 0.2
merge_method: linear
dtype: float16
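For intuition, a linear merge is just a weighted average of the models' parameters, normalized by the total weight. The sketch below operates on plain Python lists of floats standing in for checkpoint tensors; it is an illustration of the technique, not mergekit's actual implementation.

```python
def linear_merge(state_dicts, weights):
    """Weighted average of parameter 'tensors' (lists of floats), per parameter name."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for sd, w in zip(state_dicts, weights)) / total
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Toy parameters standing in for EstopianMaid (weight 0.8) and Orca-2 (weight 0.2)
a = {"layer.weight": [1.0, 2.0]}
b = {"layer.weight": [3.0, 4.0]}
print(linear_merge([a, b], [0.8, 0.2]))  # averages to approximately [1.4, 2.4]
```

With weights 0.8 and 0.2 summing to 1.0, each merged parameter is simply 0.8 times the EstopianMaid value plus 0.2 times the Orca-2 value, which is why the merge keeps EstopianMaid as its dominant component.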
Base model: KatyTheCutie/EstopianMaid-13B