KorReason-35B-Darwin

A Darwin v4 evolutionary merge, combining Qwen3.5-35B-A3B Instruct with a Claude Opus 4.6 reasoning-distilled variant via DARE-TIES.

Merge Config

  • Model A: Qwen/Qwen3.5-35B-A3B (Instruct, weight=0.5)
  • Model B: Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled (weight=0.5)
  • Method: DARE-TIES
  • Genome: [global=0.5, attn=0.5, ffn=0.5, embed=0.5, density_a=0.85, density_b=0.85]
  • Architecture: qwen3_5_moe (35.95B total parameters, 3B active, MoE)
  • Size: 71.9GB (bfloat16)
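The config above describes a DARE-TIES merge: each model's delta from the shared base is randomly pruned at the listed density and rescaled (DARE), then per-parameter sign conflicts between the two deltas are resolved before the weighted, sign-agreeing deltas are summed (TIES). A minimal toy sketch on Python lists of scalars (the function names and list-based representation are illustrative, not mergekit's actual API):

```python
import random

def dare_prune(delta, density, rng):
    """DARE: keep each delta entry with probability `density`,
    rescale survivors by 1/density so the expected value is preserved."""
    return [d / density if rng.random() < density else 0.0 for d in delta]

def ties_merge(deltas, weights):
    """TIES-style resolution: per coordinate, elect a sign from the
    weighted sum, then sum only the weighted deltas that agree with it."""
    merged = []
    for i in range(len(deltas[0])):
        vals = [w * d[i] for w, d in zip(weights, deltas)]
        sign = 1.0 if sum(vals) >= 0 else -1.0
        agree = [v for v in vals if v * sign > 0]
        merged.append(sum(agree) if agree else 0.0)
    return merged

def dare_ties(base, deltas, weights, densities, seed=0):
    """Full pipeline: prune each delta, resolve signs, add to the base."""
    rng = random.Random(seed)
    pruned = [dare_prune(d, dens, rng) for d, dens in zip(deltas, densities)]
    resolved = ties_merge(pruned, weights)
    return [b + m for b, m in zip(base, resolved)]
```

With both weights at 0.5 and both densities at 0.85, as in the genome above, roughly 15% of each model's delta entries are dropped per merge pass, and coordinates where the two deltas disagree in sign keep only the dominant direction.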

Benchmark (vs Base)

  Benchmark        Merged    Base
  GPQA Diamond     0.4747    0.4798
  MMLU             0.8330    0.8353
  ARC-Challenge    0.6600    0.6300

Darwin v4

Produced by the Darwin v4 A2AP evolutionary LLM merging platform.


Model: Be2Jay/KorReason-35B-Darwin