ConicCat/Qwen3.5-27B-Writer
A writing & roleplay finetune of Qwen3.5 27B. The primary emphasis is on writing quality, since it generalizes strongly across both domains. The model is trained from ConicCat/Qwen3.5-Antirep-27B to mitigate repetition issues.
The basic idea is to use a curriculum learning setup to overcome the lack of high-quality roleplay data: first train on lower-quality roleplay data, then on higher-quality writing data. Starting from ConicCat/Qwen3.5-Antirep-27B, the model was trained on a roughly equal mixture of instruct / roleplay / writing data for three epochs, then for eleven epochs on a smaller dataset of short story anthologies by critically acclaimed authors.
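In transformers terms, the two-stage curriculum might look roughly like the sketch below. This is an illustration, not the actual training code: the `render` step, the hyperparameters other than the epoch counts, and the `anthologies.jsonl` path are placeholders (the real anthology set is unreleased; see Datasets).

```python
# Rough sketch of the two-stage curriculum. render() and anthologies.jsonl
# are placeholders; only the base model, the public dataset names, and the
# epoch counts come from this card.
from datasets import concatenate_datasets, load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "ConicCat/Qwen3.5-Antirep-27B"
tok = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE)
collator = DataCollatorForLanguageModeling(tokenizer=tok, mlm=False)

def render(example):
    # Placeholder: flatten each sample's chat/instruct fields into a single
    # ChatML-formatted string. The real column names vary per dataset.
    return {"text": example.get("text", "")}

def tokenize(batch):
    return tok(batch["text"], truncation=True, max_length=4096)

# Stage 1: roughly equal instruct / roleplay / writing mixture, three epochs.
parts = []
for name in ["internlm/Condor-SFT-20K",              # instruct
             "PJMixers-Dev/C2-Logs-Sonnet-4.5-all",  # roleplay
             "ConicCat/Gutenberg-SFT"]:              # writing
    ds = load_dataset(name, split="train")
    parts.append(ds.map(render, remove_columns=ds.column_names))
stage1 = (concatenate_datasets(parts).shuffle(seed=42)
          .map(tokenize, batched=True, remove_columns=["text"]))
Trainer(model=model, data_collator=collator, train_dataset=stage1,
        args=TrainingArguments(output_dir="stage1", num_train_epochs=3,
                               bf16=True)).train()

# Stage 2: smaller, higher-quality anthology set, eleven epochs.
# anthologies.jsonl stands in for the unreleased copyrighted set and is
# assumed to contain a single "text" column.
stage2 = (load_dataset("json", data_files="anthologies.jsonl", split="train")
          .map(tokenize, batched=True, remove_columns=["text"]))
Trainer(model=model, data_collator=collator, train_dataset=stage2,
        args=TrainingArguments(output_dir="stage2", num_train_epochs=11,
                               bf16=True)).train()
```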
Recommended Settings
- ChatML template with a `<think>\n\n</think>` or `{{char}}:` prefill (see the request sketch after this list). Only non-thinking was trained, but thinking probably still works.
- temperature = `0.7`
- top_p = `0.95`
- I do not recommend the high rep pen values Qwen suggests for the base model; rep_pen = `1.05` or a moderate DRY setting should suffice.
- For quants, Q4_K_M runs well with ~100k context on 24GB VRAM.
- IQ4_XS should fit on 16GB VRAM with about 20-24k context on the Vulkan backend, although it's pretty tight and may require some fiddling with which programs are open, etc.
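As a concrete example, here is a minimal sketch of those settings sent to a local llama.cpp `llama-server` completion endpoint. The port, prompt, and `n_predict` value are arbitrary choices, and the `{{char}}:` prefill style would instead be handled by a frontend like SillyTavern.

```python
# Minimal sketch: recommended samplers + empty-think prefill against a local
# llama.cpp llama-server. Port, prompt, and n_predict are arbitrary.
import requests

prompt = (
    "<|im_start|>user\n"
    "Write a short scene set in a lighthouse.<|im_end|>\n"
    "<|im_start|>assistant\n"
    "<think>\n\n</think>"  # empty think block keeps the model non-thinking
)

resp = requests.post("http://localhost:8080/completion", json={
    "prompt": prompt,
    "temperature": 0.7,
    "top_p": 0.95,
    "repeat_penalty": 1.05,  # keep rep pen low, per the notes above
    "n_predict": 512,
})
print(resp.json()["content"])
```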
Datasets
- ConicCat/AntiRep to mitigate repetition.
- internlm/Condor-SFT-20K for instruct; even though instruct capabilities are not the primary focus, adding some instruct data helps mitigate forgetting and maintains general intellect and instruction-following capabilities.
- PJMixers-Dev/C2-Logs-Sonnet-4.5-all for roleplay. Pretty much exactly what it says on the tin: the venerable C2 logs with the last turn regenerated by Sonnet 4.5 and refusals removed.
- ConicCat/Gutenberg-SFT. A reformatted version of jondurbin's original Gutenberg DPO dataset, converted for SFT with some slight augmentation to address many of the samples being overly long.
- A dataset of short story anthologies. Unfortunately, I am unable to release this set as all of the data is under copyright.