Does 27B-v2 Jackrong suffer from dead vision experts?
If I use the Jackrong 27B v2 model with the original vision weights from Qwen, will there still be a dead-experts issue unless this kind of merge is performed on the dense model? Or does that only affect the MoE Qwen model?
No, dead experts are a MoE-only issue. Dense models don't have experts, so there's nothing to "die."
The Jackrong 27B is a dense model: all 27B parameters activate on every forward pass. Dead experts only occur in MoE architectures, where the router fails to route tokens to certain experts, so those experts receive no gradient signal and never recover.
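To make the failure mode concrete, here is a minimal sketch of how dead experts show up in routing statistics. The router and expert counts are toy values, not taken from any real model: under top-k routing, a "dead" expert is simply one that no token is ever dispatched to.

```python
import numpy as np

def expert_load(router_logits, top_k=2):
    """Count how many tokens each expert receives under top-k routing."""
    num_experts = router_logits.shape[-1]
    # Indices of the top_k highest-scoring experts per token.
    topk = np.argsort(router_logits, axis=-1)[:, -top_k:]
    return np.bincount(topk.ravel(), minlength=num_experts)

# Toy router over 6 experts: logits for experts 4 and 5 are pushed far
# down, so top-2 routing never selects them -- the dead-expert signature.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 6))
logits[:, 4:] -= 100.0
counts = expert_load(logits, top_k=2)
dead = [e for e, c in enumerate(counts) if c == 0]
print(dead)  # -> [4, 5]
```

In a dense model there is no router and no `counts` vector to go to zero, which is why the question doesn't arise there.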
However, if you're trying to graft Qwen's original vision weights onto the Jackrong 27B (which was fine-tuned as text-only), you may face a vision-text alignment issue. That's not dead experts: the vision encoder's projections may simply no longer align well with the SFT-modified text layers. A short vision-QA fine-tuning pass should fix that.
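The graft itself amounts to copying the vision-tower tensors from one checkpoint into the other while leaving the fine-tuned text layers untouched. The sketch below uses toy dictionaries in place of real checkpoints, and the `visual.` key prefix is an assumption about the checkpoint layout, not something verified against these specific models:

```python
import numpy as np

# Toy state dicts standing in for the real checkpoints. The "visual."
# prefix is a hypothetical naming convention -- inspect the actual
# checkpoint keys before copying anything.
donor = {                                      # original Qwen (has vision)
    "visual.patch_embed.w": np.ones((4, 4)),
    "model.layers.0.w":     np.zeros((4, 4)),
}
target = {                                     # Jackrong text-only SFT
    "model.layers.0.w":     np.full((4, 4), 2.0),
}

# Graft: copy only the vision tensors; never overwrite the SFT'd text
# layers, since those carry the fine-tuning you want to keep.
for key, tensor in donor.items():
    if key.startswith("visual."):
        target[key] = tensor

print(sorted(target))  # -> ['model.layers.0.w', 'visual.patch_embed.w']
```

Even with shapes matching, the copied vision projections were trained against the original text stack, which is exactly why a short alignment fine-tune is advisable afterwards.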
Or you can simply use our Darwin-35B-A3B-Opus directly: it's built on Qwen3.5-35B-A3B, which has native vision support out of the box, so there's no need to graft anything.