---
base_model:
- Qwen/Qwen2.5-32B
- Qwen/QwQ-32B
- trashpanda-org/QwQ-32B-Snowdrop-v0
- ArliAI/QwQ-32B-ArliAI-RpR-v1
- deepcogito/cogito-v1-preview-qwen-32B
library_name: transformers
tags:
- mergekit
- merge
---
<h1 align="center">
<span style="color: #ADD8E6; font-weight: bold;">SnowDr</span><span style="color: #00FF00; font-weight: bold; font-style: italic;">ogito</span><span style="color: #FFFFFF; font-weight: bold;">-</span><span style="color: #FF9999; font-weight: bold;">RpR</span>-32B
</h1>
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/633e3b4136e87ddc64ad584d/XriPrqbrwSAju1XrNoxLK.png" alt="SnowDrogito-RpR-32B Banner" width="600"/>
</p>
<p align="center">
<a href="https://huggingface.co/skatardude10/SnowDrogito-RpR-32B_IQ4-XS" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Download IQ4_XS IMATRIX GGUF</a>
</p>
## <span style="color: #CCFFCC;">Overview</span>
SnowDrogito-RpR-32B is a QwQ roleplay-reasoning merge that adds smarts to the popular <span style="color: #ADD8E6;">Snowdrop</span> roleplay model, blending in a little <span style="color: #FF9999;">ArliAI RpR</span> and <span style="color: #00FF00;">Deepcogito</span> for the extra brains. Built with the TIES merge method, it combines strengths from multiple fine-tuned QwQ-32B models. Uploading because perplexity came out lower than Snowdrop's, and responses have been more varied, longer, and more creative, though it may give up some of Snowdrop's contextual awareness.
## <span style="color: #CCFFCC;">Setup for Reasoning and ChatML</span>
- **ChatML Formatting**: Use ChatML with `<|im_start|>role\ncontent<|im_end|>\n` (e.g., `<|im_start|>user\nHello!<|im_end|>\n`).
- **Reasoning Settings**: Set "include names" to "never." Start reply with `<think>\n` to enable reasoning.
- **Sampler Settings**: From Snowdrop: try temperature 0.9, min_p 0.05, top_a 0.3, TFS 0.75, repetition_penalty 1.03, and DRY if available.
- **My Settings**:
  - Response (tokens): 2048
  - Context (tokens): 40960
  - Temperature: 3.25
  - Top P: 0.98
  - Min P: 0.04
  - Top N-sigma: 2.5
  - Repetition Penalty: 1.03
  - XTC Threshold: 0.3
  - XTC Probability: 0.3
  - DRY Multiplier: 0.8
  - DRY Base: 1.75
  - DRY Allowed Length: 4
  - DRY Penalty Range: 1024
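The settings above map roughly onto a llama.cpp invocation. This is a sketch assuming a recent `llama-server` build: sampler flag names (especially the DRY, XTC, and top N-sigma ones) have changed across versions, so check `llama-server --help` for your build; response length is set per request rather than on the command line.

```shell
llama-server -m SnowDrogito-RpR-32B_IQ4-XS.gguf \
  -c 40960 -ngl 99 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --temp 3.25 --top-p 0.98 --min-p 0.04 --top-nsigma 2.5 \
  --repeat-penalty 1.03 \
  --xtc-threshold 0.3 --xtc-probability 0.3 \
  --dry-multiplier 0.8 --dry-base 1.75 \
  --dry-allowed-length 4 --dry-penalty-last-n 1024
```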
For more details, see the setup guides and SillyTavern master import for <a href="https://huggingface.co/trashpanda-org/QwQ-32B-Snowdrop-v0" style="color: #ADD8E6; text-decoration: none;" onmouseover="this.style.color='#E6E6FA'" onmouseout="this.style.color='#ADD8E6'">Snowdrop</a>, and other info on <a href="https://huggingface.co/ArliAI/QwQ-32B-ArliAI-RpR-v1" style="color: #FF9999; text-decoration: none;" onmouseover="this.style.color='#E6E6FA'" onmouseout="this.style.color='#FF9999'">ArliAI RpR</a>.
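The ChatML layout and `<think>` prefill described above can be sketched in a few lines of Python. This is a minimal illustration (the `chatml_prompt` helper and the message contents are hypothetical, not part of any library):

```python
# Build a ChatML prompt: each turn is <|im_start|>role\ncontent<|im_end|>\n,
# and the assistant turn is prefilled with "<think>\n" to trigger reasoning.
def chatml_prompt(messages, prefill_think=True):
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>\n"
             for role, content in messages]
    # Open the assistant turn; the model continues from here.
    parts.append("<|im_start|>assistant\n")
    if prefill_think:
        parts.append("<think>\n")
    return "".join(parts)

prompt = chatml_prompt([("system", "You are a roleplay partner."),
                        ("user", "Hello!")])
print(prompt)
```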
## <span style="color: #CCFFCC;">Performance</span>
- Perplexity under identical conditions (IQ4_XS, 40,960 context, Q8_0 KV cache, on a 150K-token chat dataset), SnowDrogito-RpR-32B vs. <span style="color: #ADD8E6;">QwQ-32B-Snowdrop-v0</span>:
```
SnowDrogito-RpR-32B: 4.5597 ± 0.02554
QwQ-32B-Snowdrop-v0: 4.6779 ± 0.02671
```
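For scale, the gap between the two perplexity values above works out to roughly a 2.5% relative reduction (assuming, per the overview, that the lower value belongs to SnowDrogito-RpR-32B):

```python
snowdrogito, snowdrop = 4.5597, 4.6779  # PPL values from above
improvement = (snowdrop - snowdrogito) / snowdrop * 100
print(f"{improvement:.2f}% lower perplexity")  # ≈ 2.53%
```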
- IQ4_XS fits 40,960 tokens of context in 24 GB VRAM with Q8_0 KV cache and full GPU offload.
## <span style="color: #CCFFCC;">Model Details</span>
- Base Model: <a href="https://huggingface.co/Qwen/Qwen2.5-32B" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Qwen/Qwen2.5-32B</a>
- Architecture: Qwen 2.5 (32B parameters)
- Context Length: 40,960 tokens
## <span style="color: #CCFFCC;">Merge Configuration</span>
This model was created using mergekit with the following TIES merge configuration:
```yaml
models:
  - model: trashpanda-org/QwQ-32B-Snowdrop-v0
    parameters:
      weight: 0.75
      density: 0.5
  - model: deepcogito/cogito-v1-preview-qwen-32B
    parameters:
      weight: 0.15
      density: 0.5
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: Qwen/Qwen2.5-32B-Instruct
dtype: bfloat16
```
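As a rough intuition for what the TIES method in this config does, here is a toy pure-Python sketch of its three steps: trim each model's delta from the base by density, elect a sign per parameter, and merge only the deltas that agree. The `ties_merge` function and values are illustrative, not mergekit's actual tensor implementation:

```python
# Toy sketch of TIES merging on flat lists of parameters.
def ties_merge(base, finetuned, weights, density):
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = [f - b for f, b in zip(ft, base)]
        # Trim: keep only the top `density` fraction by magnitude.
        k = max(1, int(len(delta) * density))
        keep = sorted(range(len(delta)), key=lambda i: -abs(delta[i]))[:k]
        trimmed = [d if i in keep else 0.0 for i, d in enumerate(delta)]
        deltas.append((trimmed, w))
    merged = list(base)
    for i in range(len(base)):
        # Elect the dominant sign by weighted mass, then average
        # only the deltas that agree with it.
        signed = sum(w * d[i] for d, w in deltas)
        sign = 1.0 if signed >= 0 else -1.0
        agree = [(d[i], w) for d, w in deltas if d[i] * sign > 0]
        if agree:
            merged[i] += sum(w * v for v, w in agree) / sum(w for _, w in agree)
    return merged
```

On a toy example where two models pull a parameter in opposite directions, only the sign-winning delta survives, which is what lets TIES keep conflicting fine-tunes from canceling each other out.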
## <span style="color: #CCFFCC;">Acknowledgments</span>
- <a href="https://github.com/arcee-ai/mergekit" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">mergekit</a> for merging.
- <a href="https://github.com/ggerganov/llama.cpp" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">llama.cpp</a> for quantization.
- Original model creators: <a href="https://huggingface.co/Qwen" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">Qwen</a>, <a href="https://huggingface.co/trashpanda-org" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">trashpanda-org</a>, <a href="https://huggingface.co/deepcogito" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">deepcogito</a>, <a href="https://huggingface.co/ArliAI" style="color: #E6E6FA; text-decoration: none;" onmouseover="this.style.color='#ADD8E6'" onmouseout="this.style.color='#E6E6FA'">ArliAI</a>.