This is a merge of pre-trained language models created using mergekit.
Following up on tomasmcm/sky-t1-coder-32b-flash, this experiment tries to merge two reasoning models based on Qwen 32B with a Coder model. However, the merge appears to have caused the model to lose its thinking abilities, even when `<think>` is added to the prompt.
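As a concrete illustration of that kind of probe, here is a minimal sketch that force-appends `<think>` after the chat template's assistant header. The example question, generation settings, and probing approach are illustrative assumptions, not the original experiment:

```python
# Minimal sketch (not the original experiment): probe for chain-of-thought
# behaviour by appending "<think>\n" at the start of the assistant turn.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomasmcm/QwQ-Coder-R1-Distill-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are below 30?"}]
# add_generation_prompt=True ends the string at the start of the assistant
# turn; appending "<think>\n" tries to force the model into a reasoning trace.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
) + "<think>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```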
This model was merged using the SCE merge method (introduced in FuseChat: Knowledge Fusion of Chat Models, arXiv:2408.07990), with Qwen/Qwen2.5-32B as the base.
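For intuition, here is a rough NumPy sketch of the SCE (Select, Calculate, Erase) idea applied to a single parameter tensor. This is only an illustration of the method described in the paper, not mergekit's actual implementation; the function name and normalisation details are assumptions:

```python
import numpy as np

def sce_fuse(base, targets, select_topk=1.0):
    """Fuse one parameter tensor from several fine-tuned `targets` into `base`."""
    deltas = np.stack([t - base for t in targets])  # one task vector per model
    # Select: keep only the top-k fraction of elements by variance across
    # models (select_topk=1.0 keeps every element).
    variance = deltas.var(axis=0)
    k = max(1, int(round(select_topk * variance.size)))
    cutoff = np.sort(variance.ravel())[-k]
    deltas = deltas * (variance >= cutoff)
    # Calculate: one merging coefficient per model, proportional to the
    # sum of squares of its selected delta.
    energy = (deltas ** 2).reshape(len(targets), -1).sum(axis=1)
    coeffs = energy / max(energy.sum(), 1e-12)
    # Erase: zero out elements whose sign disagrees with the weighted
    # majority direction, then combine.
    merged = np.tensordot(coeffs, deltas, axes=1)
    mask = np.sign(deltas) == np.sign(merged)
    return base + np.tensordot(coeffs, deltas * mask, axes=1)

# Toy usage on a small random tensor:
rng = np.random.default_rng(0)
base = np.zeros((4, 4))
targets = [0.1 * rng.standard_normal((4, 4)) for _ in range(3)]
print(sce_fuse(base, targets, select_topk=1.0))
```

Note that with `select_topk: 1.0`, as in the configuration below, the Select step keeps every element, so the merge is shaped by the Calculate and Erase steps alone.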
The following models were included in the merge:

- Qwen/Qwen2.5-Coder-32B-Instruct
- deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
- Qwen/QwQ-32B
The following YAML configuration was used to produce this model:
```yaml
models:
  # Pivot model
  - model: Qwen/Qwen2.5-32B
  # Target models
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
  - model: Qwen/QwQ-32B
merge_method: sce
base_model: Qwen/Qwen2.5-32B
parameters:
  select_topk: 1.0
dtype: bfloat16
```
Install vLLM from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tomasmcm/QwQ-Coder-R1-Distill-32B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tomasmcm/QwQ-Coder-R1-Distill-32B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
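Since the server exposes an OpenAI-compatible API, it can also be queried from Python with the `openai` client. A small sketch, assuming the `vllm serve` command above is already running (the `api_key` value is a placeholder; vLLM does not check it unless one is configured):

```python
# Query the locally served model via vLLM's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tomasmcm/QwQ-Coder-R1-Distill-32B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```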