# quantum-keek-v2-7b

quantum-keek-v2-7b is a merge of the following models, created with [mergekit](https://github.com/arcee-ai/mergekit):
- Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO
- lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
- huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated
- deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
## 🧩 Configuration
```yaml
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
tokenizer_source: base
dtype: bfloat16
models:
  - model: Dongwei/DeepSeek-R1-Distill-Qwen-7B-GRPO
    parameters:
      density: 0.40
      weight: 0.20
  - model: lightblue/DeepSeek-R1-Distill-Qwen-7B-Multilingual
    parameters:
      density: 0.35
      weight: 0.15
  - model: huihui-ai/DeepSeek-R1-Distill-Qwen-7B-abliterated
    parameters:
      density: 0.12
      weight: 0.05
  - model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
    parameters:
      density: 0.30
      weight: 0.10
parameters:
  normalize: true
```
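For orientation: in mergekit's density-based methods, `density` is the fraction of each model's task vector (fine-tuned weights minus base weights) that is kept after random sparsification, and `weight` scales that model's contribution to the merged sum. The toy sketch below illustrates this mechanic on a four-element "weight vector" with made-up numbers mirroring the first two entries of the config; it is a conceptual illustration, not mergekit's actual implementation.

```python
import random

def sparsify(delta, density, rng):
    # DARE-style drop: keep each delta entry with probability `density`,
    # then rescale survivors by 1/density to preserve the expected value.
    return [d / density if rng.random() < density else 0.0 for d in delta]

rng = random.Random(0)

# Toy task vectors (fine-tuned minus base), with the density/weight pairs
# taken from the config above; the delta values themselves are invented.
base = [0.0, 0.0, 0.0, 0.0]
contributions = [
    ([1.0, -2.0, 0.5, 3.0], 0.40, 0.20),  # GRPO model
    ([0.5, 1.0, -1.0, 2.0], 0.35, 0.15),  # Multilingual model
]

merged = list(base)
for delta, density, weight in contributions:
    sparse = sparsify(delta, density, rng)
    merged = [m + weight * s for m, s in zip(merged, sparse)]

print(merged)
```

With `normalize: true`, mergekit additionally rescales the per-model weights so they sum to 1 before combining, which the sketch omits for brevity.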