The DARE merge method is introduced in the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) (arXiv:2311.03099).
This is a merge of pre-trained language models created using mergekit.
This model was merged using the DARE TIES merge method, with Qwen/Qwen2.5-3B-Instruct as the base.
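For intuition, `dare_ties` treats each fine-tuned model as a "delta" from the base: DARE randomly drops a fraction (1 - `density`) of each delta's entries and rescales the survivors by 1/`density`, then TIES elects a per-parameter sign and keeps only the deltas that agree with it before adding the weighted sum back to the base. The sketch below is conceptual only, not mergekit's actual implementation, and `dare_ties_merge` is a hypothetical helper:

```python
import torch

def dare_ties_merge(base, finetuned, weights, densities):
    """Conceptual DARE TIES merge of a single parameter tensor.

    base:      parameter tensor from the base model
    finetuned: corresponding tensors from each fine-tuned model
    weights:   per-model mixing weights (`weight` in the config below)
    densities: per-model keep probabilities (`density` in the config below)
    """
    deltas = []
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                                  # task vector vs. the base
        keep = torch.bernoulli(torch.full_like(delta, d))  # DARE: drop ~(1 - d) of entries...
        deltas.append(w * keep * delta / d)                # ...and rescale survivors by 1/d

    stacked = torch.stack(deltas)
    sign = torch.sign(stacked.sum(dim=0))                  # TIES: per-parameter sign election
    agree = (torch.sign(stacked) == sign).float()          # keep deltas matching the elected sign
    merged = (stacked * agree).sum(dim=0)

    # `normalize: true` rescales by the weight mass that agreed for each parameter
    w_col = torch.tensor(weights, dtype=stacked.dtype).view(-1, *([1] * base.dim()))
    mass = (agree * w_col).sum(dim=0).clamp(min=1e-8)
    return base + merged / mass
```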
The following models were included in the merge:

- [PowerInfer/SmallThinker-3B-Preview](https://huggingface.co/PowerInfer/SmallThinker-3B-Preview)
- [prithivMLmods/QwQ-LCoT-3B-Instruct](https://huggingface.co/prithivMLmods/QwQ-LCoT-3B-Instruct)
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
dtype: bfloat16
normalize: true
base_model: Qwen/Qwen2.5-3B-Instruct
models:
  - model: PowerInfer/SmallThinker-3B-Preview   # CoT 2 - Detailed & Nuanced
    parameters:
      weight: 0.4
      density: 0.7
  - model: prithivMLmods/QwQ-LCoT-3B-Instruct   # CoT 3 - Structured & Focused
    parameters:
      weight: 0.6
      density: 0.8
```
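To reproduce the merge, the configuration above can be passed to mergekit's `mergekit-yaml` entry point (a sketch assuming the config is saved as `config.yaml`; the output directory name is arbitrary, and `--cuda` is optional):

```shell
pip install mergekit
mergekit-yaml config.yaml ./StructuredThinker --cuda
```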
Install vLLM from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Ahdoot/StructuredThinker"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Ahdoot/StructuredThinker",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
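Because vLLM exposes an OpenAI-compatible API, the same request can also be made from Python with the official `openai` client (a minimal sketch; vLLM ignores the API key by default, so any placeholder string works):

```python
from openai import OpenAI

# Point the client at the local vLLM server; the key is a dummy placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Ahdoot/StructuredThinker",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```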