leoa (laaaaaaaaaaaaaaaaaaaa)
0 followers · 7 following
AI & ML interests
None yet
Recent Activity
reacted to SeaWolf-AI's post with 🔥 about 11 hours ago
🧬 Darwin-27B-Opus: 86.9% on GPQA Diamond, World #5, Zero Training

We are excited to share Darwin-27B-Opus, a 27B model that achieved 86.9% on GPQA Diamond, ranking #5 globally on the Hugging Face leaderboard, without a single gradient update.

How? Darwin breeds pretrained models through evolutionary FFN crossbreeding. The father (Qwen3.5-27B) provides the reasoning architecture; the mother (Claude 4.6 Opus Reasoning Distilled) contributes structured chain-of-thought knowledge. CMA-ES automatically discovers optimal per-layer blending ratios; no human tuning required.

The result surpasses the original Qwen3.5-27B (85.5%), GLM-5.1 (744B, 86.2%), and Qwen3.5-122B (86.6%). A 27B model outperforming 744B, with zero training, zero data, one GPU, and ~2 hours.

We also confirmed hybrid vigor on Korean benchmarks: Darwin-27B-KR (a 2nd-generation offspring) surpassed both parents on CLIcK, winning 7 out of 11 categories. The evolutionary optimizer independently assigned 93% of the FFN weights from the Korean-specialized mother while preserving 93% of the attention from the reasoning-specialized father, autonomously validating our core principle: FFN carries knowledge, attention carries reasoning.

Public release, 10 days in: 300+ community derivatives, 120K+ downloads.

Links:
Darwin-27B-Opus: https://huggingface.co/FINAL-Bench/Darwin-27B-Opus
Article: https://huggingface.co/blog/FINAL-Bench/darwin-gpqa
Darwin Family Collection: https://huggingface.co/collections/FINAL-Bench/darwin-family

If foundation models are raw ore, Darwin is the forge. We are just getting started. 🔥
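The recipe described in the post (per-layer FFN interpolation between two parents, with the ratios searched by CMA-ES instead of gradient descent) can be sketched in a few lines. The following is a minimal toy illustration, not SeaWolf-AI's actual code: the FFN stand-ins, dimensions, and the made-up fitness function are all assumptions for the sketch; only the `cma` and `torch` APIs are real.

```python
# Toy sketch of evolutionary per-layer FFN crossbreeding, as described above.
# Assumption: merging = per-layer linear interpolation of parent weights.
import cma          # pip install cma
import torch
import torch.nn as nn

N_LAYERS, DIM = 4, 16

def make_ffn_stack(seed: int) -> nn.ModuleList:
    """Stand-in for a pretrained model's FFN layers (toy assumption)."""
    torch.manual_seed(seed)
    return nn.ModuleList(nn.Linear(DIM, DIM) for _ in range(N_LAYERS))

father = make_ffn_stack(0)   # "reasoning" parent, in the post's terms
mother = make_ffn_stack(1)   # "knowledge" parent

def blend(ratios) -> nn.ModuleList:
    """Child whose layer i is (1 - r_i) * father_i + r_i * mother_i."""
    child = make_ffn_stack(2)
    with torch.no_grad():
        for r, c, f, m in zip(ratios, child, father, mother):
            r = float(min(max(r, 0.0), 1.0))      # clamp ratio to [0, 1]
            c.weight.copy_((1 - r) * f.weight + r * m.weight)
            c.bias.copy_((1 - r) * f.bias + r * m.bias)
    return child

def fitness(ratios) -> float:
    """Toy objective standing in for a benchmark score (CMA-ES minimizes)."""
    x = torch.ones(1, DIM)
    for layer in blend(ratios):
        x = torch.tanh(layer(x))
    return float(x.abs().mean())

# CMA-ES searches one blending ratio per layer; the weights themselves
# never receive a gradient update, matching the "zero training" claim.
es = cma.CMAEvolutionStrategy(N_LAYERS * [0.5], 0.2, {"maxiter": 50})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [fitness(c) for c in candidates])
print("best per-layer ratios:", es.result.xbest)
```

In a real run the toy fitness would be replaced by an actual benchmark evaluation of the merged checkpoint (e.g. negated GPQA Diamond accuracy, since CMA-ES minimizes), which is why no gradients are needed: only forward-pass scores feed back into the search.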
reacted to SeaWolf-AI's post with 🤯 about 11 hours ago (same post as above)
new activity in unsloth/MiniMax-M2.7-GGUF about 21 hours ago: "Comparison of different Quants?"
Organizations
None yet
models (1)

laaaaaaaaaaaaaaaaaaaa/Qwen3-4b-Finetune-Opus4.6-Gpt5Codex
4B params • Updated Feb 28 • 24 downloads • 1 like
datasets (1)

laaaaaaaaaaaaaaaaaaaa/dataset
Viewer • Updated 25 days ago • 36.2k downloads • 16 likes