Merge method paper: Editing Models with Task Arithmetic (arXiv:2212.04089)
How to use grimjim/Magnolia-v10-12B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="grimjim/Magnolia-v10-12B")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("grimjim/Magnolia-v10-12B")
model = AutoModelForCausalLM.from_pretrained("grimjim/Magnolia-v10-12B")
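For a quick check that the model loads and generates, a minimal call with the pipeline created above might look like the following sketch (the sampling settings are illustrative, not tuned recommendations):
# Minimal usage sketch (not part of the original card); `pipe` is the pipeline created above.
output = pipe(
    "Once upon a time,",
    max_new_tokens=128,
    do_sample=True,
    temperature=0.5,
)
print(output[0]["generated_text"])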
How to use grimjim/Magnolia-v10-12B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "grimjim/Magnolia-v10-12B"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "grimjim/Magnolia-v10-12B",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
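The same server can also be queried from Python. This is a minimal sketch assuming the openai client package is installed and the vLLM server started above is running on localhost:8000; vLLM does not validate the API key by default, so a placeholder value is passed.
# Sketch: the same completion request as the curl call above, sent via the openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key
completion = client.completions.create(
    model="grimjim/Magnolia-v10-12B",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)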
How to use grimjim/Magnolia-v10-12B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "grimjim/Magnolia-v10-12B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "grimjim/Magnolia-v10-12B",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "grimjim/Magnolia-v10-12B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "grimjim/Magnolia-v10-12B",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
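The SGLang endpoint can also be called from Python. Below is a minimal sketch using the requests library against the server started above on port 30000.
# Sketch: the same completion request as the curl call above, sent with requests.
import requests

payload = {
    "model": "grimjim/Magnolia-v10-12B",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}
resp = requests.post("http://localhost:30000/v1/completions", json=payload)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])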
How to use grimjim/Magnolia-v10-12B with Docker Model Runner:
docker model run hf.co/grimjim/Magnolia-v10-12B
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Task Arithmetic merge method, with grimjim/mistralai-Mistral-Nemo-Base-2407 as the base.
The following models were included in the merge:
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
- grimjim/magnum-consolidatum-v1-12b
- grimjim/magnum-twilight-12b
- nbeerbower/Mistral-Nemo-Prism-12B
The following YAML configuration was used to produce this model:
base_model: grimjim/mistralai-Mistral-Nemo-Base-2407
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: true
models:
  - model: grimjim/mistralai-Mistral-Nemo-Base-2407
  - model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
    parameters:
      weight: 0.875
  - model: grimjim/magnum-consolidatum-v1-12b
    parameters:
      weight: 0.015625
  - model: grimjim/magnum-twilight-12b
    parameters:
      weight: 0.001953125
  - model: nbeerbower/Mistral-Nemo-Prism-12B
    parameters:
      weight: 0.0625
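For intuition, task_arithmetic treats each fine-tuned model as a "task vector" (its weights minus the base model's weights) and adds a weighted sum of those vectors back onto the base. The sketch below is a simplified per-tensor illustration of that idea, not mergekit's implementation; the helper name is invented for this example, and the exact effect of normalize: true (treated here as rescaling by the total weight) is an assumption.
# Simplified illustration of task arithmetic on a single tensor (hypothetical helper, not mergekit code).
import torch

def task_arithmetic_merge(
    base: torch.Tensor,
    finetunes: list[torch.Tensor],
    weights: list[float],
    normalize: bool = True,
) -> torch.Tensor:
    # Task vector for each fine-tune: how far its weights moved from the base.
    task_vectors = [ft - base for ft in finetunes]
    # Weighted combination of the task vectors (weights as in the YAML above).
    mixed = sum(w * tv for w, tv in zip(weights, task_vectors))
    if normalize:
        # Assumption: normalize: true rescales by the total weight, so the
        # YAML weights behave like relative proportions.
        mixed = mixed / sum(weights)
    return base + mixed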