---
base_model:
- Qwen/Qwen2.5-32B
- Qwen/QwQ-32B
- trashpanda-org/QwQ-32B-Snowdrop-v0
- ArliAI/QwQ-32B-ArliAI-RpR-v1
- deepcogito/cogito-v1-preview-qwen-32B
library_name: transformers
tags:
- mergekit
- merge
---
# SnowDrogito-RpR-32B

**Download IQ4_XS imatrix GGUF**
## Overview
SnowDrogito-RpR-32B is a QwQ roleplay-reasoning merge built to add smarts to the popular Snowdrop roleplay model, with a little ArliAI RpR and Deep Cogito mixed in for the brains. Built with the TIES merge method, it attempts to combine the strengths of several fine-tuned QwQ-32B models. I'm uploading it because its perplexity came in lower and I've been getting more varied, longer, and more creative responses, though it may trade away some of Snowdrop's contextual awareness; I'm not sure.
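For intuition, here's a minimal numpy sketch of what TIES does to each tensor: trim every fine-tune's task vector down to its largest-magnitude entries, elect a per-parameter sign by majority mass, then average only the deltas that agree with that sign. This is an illustrative sketch, not mergekit's actual implementation; the function name is made up, and the `weights`/`density` arguments just mirror the roles those fields play in the config further down.
```
import numpy as np

def ties_merge_tensor(base, finetuned, weights, density=0.5):
    """Toy per-tensor TIES merge: trim, elect sign, disjoint-average."""
    # Task vectors: weighted differences from the shared base model.
    deltas = [w * (ft - base) for ft, w in zip(finetuned, weights)]

    # Trim: keep only the top `density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(d.size * density))
        cutoff = np.sort(np.abs(d), axis=None)[-k]
        trimmed.append(np.where(np.abs(d) >= cutoff, d, 0.0))

    # Elect sign: per-parameter sign of the summed trimmed deltas.
    stacked = np.stack(trimmed)
    elected = np.sign(stacked.sum(axis=0))

    # Disjoint merge: average only deltas whose sign matches the election.
    agree = np.sign(stacked) == elected
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + merged_delta
```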
## Setup for Reasoning and ChatML
- **ChatML Formatting**: Use ChatML with `<|im_start|>role\ncontent<|im_end|>\n` (e.g., `<|im_start|>user\nHello!<|im_end|>\n`); see the prompt sketch after this list.
- **Reasoning Settings**: Set "include names" to "never." Start the reply with `<think>\n` to enable reasoning.
- **Sampler Settings (from Snowdrop)**: temperature 0.9, min_p 0.05, top_a 0.3, TFS 0.75, repetition_penalty 1.03, plus DRY if available.
- **My Settings**:
  - Response (tokens): 2048
  - Context (tokens): 40960
  - Temperature: 3.25
  - Top P: 0.98
  - Min P: 0.04
  - Top N-Sigma: 2.5
  - Repetition Penalty: 1.03
  - XTC Threshold: 0.3
  - XTC Probability: 0.3
  - DRY Multiplier: 0.8
  - DRY Base: 1.75
  - DRY Allowed Length: 4
  - DRY Penalty Range: 1024
For more details, see the setup guides and SillyTavern master import on the Snowdrop model card, plus the additional information on the ArliAI RpR card.
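If you're wiring this up by hand rather than through SillyTavern, a minimal Python sketch of the prompt layout above looks like this (the system prompt and greeting are placeholders; the trailing `<think>\n` prefill on the assistant turn is what triggers reasoning):
```
def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt matching the format described above."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n<think>\n"  # prefill to enable reasoning
    )

print(chatml_prompt("You are a creative roleplay partner.", "Hello!"))
```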
## Performance
- Perplexity under identical conditions (IQ4_XS, 40,960-token context, Q8_0 KV cache, 150K-token chat dataset; lower is better):
```
SnowDrogito-RpR-32B:  4.5597 ± 0.02554
QwQ-32B-Snowdrop-v0:  4.6779 ± 0.02671
```
- The IQ4_XS quant fits the full 40,960-token context in 24 GB of VRAM with a Q8_0 KV cache and full GPU offload.
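If you load the GGUF through llama-cpp-python rather than a frontend, a rough sketch looks like this (the filename is hypothetical, and Q8_0 KV-cache quantization is left to your build's cache-type options):
```
from llama_cpp import Llama

llm = Llama(
    model_path="SnowDrogito-RpR-32B-IQ4_XS.gguf",  # hypothetical filename
    n_ctx=40960,      # full advertised context window
    n_gpu_layers=-1,  # offload all layers to the GPU
)

out = llm(
    "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n<think>\n",
    max_tokens=2048,
    temperature=0.9,       # Snowdrop-style baseline sampler values
    min_p=0.05,
    repeat_penalty=1.03,
)
print(out["choices"][0]["text"])
```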
## Model Details
- Base Model: Qwen/Qwen2.5-32B
- Architecture: Qwen 2.5 (32B parameters)
- Context Length: 40,960 tokens
## Merge Configuration
This model was created using mergekit with the following TIES merge configuration:
```
models:
  - model: trashpanda-org/QwQ-32B-Snowdrop-v0
    parameters:
      weight: 0.75
      density: 0.5
  - model: deepcogito/cogito-v1-preview-qwen-32B
    parameters:
      weight: 0.15
      density: 0.5
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: Qwen/Qwen2.5-32B-Instruct
dtype: bfloat16
```
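To reproduce the merge, this YAML can be saved to a file and run through mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./SnowDrogito-RpR-32B` (the output path is illustrative).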
## Acknowledgments
- mergekit for the merge tooling.
- llama.cpp for quantization.
- Original model creators: Qwen, trashpanda-org, deepcogito, ArliAI.