Paper: Model Stock: All we need is just a few fine-tuned models (arXiv:2403.19522)
Install SGLang from pip:

```shell
pip install sglang
```

Start the SGLang server:

```shell
python3 -m sglang.launch_server \
  --model-path "TareksLab/Thinker-R1-LLaMa-70B" \
  --host 0.0.0.0 \
  --port 30000
```

Call the server using curl (OpenAI-compatible API):

```shell
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "TareksLab/Thinker-R1-LLaMa-70B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Alternatively, run the server with Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "TareksLab/Thinker-R1-LLaMa-70B" \
    --host 0.0.0.0 \
    --port 30000
```
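Since the endpoint is OpenAI-compatible, you can also call it from Python with the `openai` client. A minimal sketch, assuming the server above is running on `localhost:30000` (the `api_key` value is a dummy placeholder; SGLang does not validate it by default):

```python
from openai import OpenAI

# Point the OpenAI client at the local SGLang server.
# The api_key is a dummy value; SGLang does not check it by default.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="TareksLab/Thinker-R1-LLaMa-70B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```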
This is a merge of pre-trained language models created using mergekit.

This model was merged using the Model Stock merge method (sketched after the list below), with huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated as the base.

The following models were included in the merge:
- watt-ai/watt-tool-70B
- Daemontatox/Llama3.3-70B-CogniLink
- deepcogito/cogito-v1-preview-llama-70B
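For intuition, Model Stock merges each layer by averaging the fine-tuned weights and then interpolating back toward the pre-trained weights, with a ratio derived from the angle between the fine-tuned weight deltas. Below is a minimal per-layer NumPy sketch of the formula from the paper; it is an illustration only, not mergekit's actual implementation, and `model_stock_layer`, `w_base`, and `w_models` are hypothetical names:

```python
import numpy as np

def model_stock_layer(w_base, w_models):
    """Merge one layer's weights with the Model Stock rule.

    w_base: pre-trained weights for this layer, shape (d,)
    w_models: list of N >= 2 fine-tuned weights for this layer, each shape (d,)
    """
    n = len(w_models)
    deltas = [w - w_base for w in w_models]
    # Average pairwise cosine between fine-tuned deltas; the paper assumes a
    # roughly common angle theta between any two fine-tuned deltas.
    cosines = [
        float(np.dot(deltas[i], deltas[j])
              / (np.linalg.norm(deltas[i]) * np.linalg.norm(deltas[j])))
        for i in range(n) for j in range(i + 1, n)
    ]
    cos_theta = float(np.mean(cosines))
    # Interpolation ratio from the paper: t = N*cos(theta) / (1 + (N-1)*cos(theta)).
    # Identical deltas give t = 1 (pure average); orthogonal deltas give t = 0
    # (fall back to the base weights).
    t = n * cos_theta / (1 + (n - 1) * cos_theta)
    w_avg = np.mean(w_models, axis=0)
    # Move the fine-tuned average back toward the base weights.
    return t * w_avg + (1 - t) * w_base
```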
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
  - model: watt-ai/watt-tool-70B
  - model: Daemontatox/Llama3.3-70B-CogniLink
  - model: deepcogito/cogito-v1-preview-llama-70B
base_model: huihui-ai/DeepSeek-R1-Distill-Llama-70B-abliterated
merge_method: model_stock
parameters:
  int8_mask: true
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
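To reproduce the merge, save the configuration above as `config.yaml` and run it through mergekit. A minimal sketch using mergekit's documented Python API (`run_merge` and `MergeOptions`); exact options vary by mergekit version, and `./merged` is a placeholder output path:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to ./merged.
# cuda=True runs the merge on GPU; omit it to merge on CPU.
run_merge(merge_config, "./merged", options=MergeOptions(cuda=True))
```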
Note: to download gated source models, log in with a Hugging Face token that has gated-access permission: `hf auth login`.