a0108982779

a0108982

AI & ML interests

None yet

Recent Activity

reacted to SeaWolf-AI's post with 🔥 about 14 hours ago
🧬 Darwin Family: Zero Gradient Steps, GPQA Diamond 88.89%

How far can we push LLM reasoning *without* training? Our team at VIDRAFT submitted this paper to Daily Papers yesterday, and it's currently #3. Huge thanks to everyone who upvoted - sharing the core ideas below.

🔗 Paper: https://huggingface.co/papers/2605.14386
🔗 arXiv: https://arxiv.org/abs/2605.14386
🔗 Model: https://huggingface.co/FINAL-Bench/Darwin-28B-Opus

---

TL;DR: Darwin Family is a training-free evolutionary merging framework. By recombining the weight spaces of existing LLM checkpoints - with zero gradient-based training - it reaches frontier-level reasoning.

- 🏆 Darwin-28B-Opus: GPQA Diamond 88.89%
- 💸 Zero gradient steps - not a single B200 or H200 hour needed
- 🧬 Consistent gains across the 4B → 35B scale range
- 🔀 Cross-architecture breeding between Transformer and Mamba families
- 🔁 Stable recursive multi-generation evolution

# Three Core Mechanisms

① 14-dim Adaptive Merge Genome - fine-grained recombination at both the component level (Attention / FFN / MLP / LayerNorm / Embedding) and the block level, expanding the prior evolutionary-merge search space.

② MRI-Trust Fusion - we diagnose each layer's reasoning contribution via an **MRI (Model Reasoning Importance)** signal and fuse it with evolutionary search through a **learnable trust parameter**. Trust the diagnostic too much and search collapses; ignore it and search becomes inefficient - Darwin learns the balance from data.

③ Architecture Mapper - weight-space breeding across heterogeneous families. Attention × SSM crossover actually works.

Why It Matters

> Diagnose latent capabilities already encoded in open checkpoints,
> and recombine them - no gradients required.

Replies and critiques welcome 🙌
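The core loop the post describes - a per-component "merge genome" searched evolutionarily, with a benchmark score as fitness and no gradient steps - can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the Darwin implementation: the component list, the random-search strategy, and the toy `fitness` stand-in (a placeholder for a real reasoning benchmark like GPQA) are all hypothetical.

```python
# Hypothetical sketch of training-free evolutionary weight merging.
# Each candidate is a "genome": one interpolation coefficient per model
# component, applied to two parent checkpoints. No gradients are computed.
import random
import numpy as np

# Simplified component taxonomy (the post uses a richer 14-dim genome).
COMPONENTS = ["attn", "ffn", "embed"]

def make_checkpoint(seed):
    """Mock checkpoint: maps component name -> weight matrix."""
    rng = np.random.default_rng(seed)
    return {c: rng.normal(size=(4, 4)) for c in COMPONENTS}

def merge(parent_a, parent_b, genome):
    """Interpolate each component with its own genome coefficient."""
    return {c: genome[c] * parent_a[c] + (1.0 - genome[c]) * parent_b[c]
            for c in COMPONENTS}

def fitness(model):
    """Placeholder for a reasoning benchmark score (e.g. GPQA accuracy).
    Here: prefer merges with a small total weight norm, just for demo."""
    return -float(sum(np.abs(w).sum() for w in model.values()))

def evolve(parent_a, parent_b, generations=10, pop=8, seed=0):
    """Random-search evolution over genomes; keeps the best candidate."""
    rng = random.Random(seed)
    best_genome, best_score = None, float("-inf")
    for _ in range(generations):
        for _ in range(pop):
            genome = {c: rng.random() for c in COMPONENTS}
            score = fitness(merge(parent_a, parent_b, genome))
            if score > best_score:
                best_genome, best_score = genome, score
    return best_genome

a, b = make_checkpoint(1), make_checkpoint(2)
genome = evolve(a, b)
merged = merge(a, b, genome)
```

A real system would replace the random genome proposals with a proper evolutionary algorithm (mutation and crossover over a population) and the toy fitness with held-out benchmark evaluation, but the key property survives even in this sketch: only forward evaluations are needed, never a gradient step.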
liked a model about 1 month ago
ginigen-ai/Rogue-27B-KR

Organizations

None yet