Hugging Face
Demian L. P.
very-cooluser
4 followers · 17 following
AI & ML interests
Anything that can run on ~3GB of memory is an instant thumbs up to me
Recent Activity
reacted to kelsend's post with 👍 · 20 days ago
Tencent Open-Sources Hunyuan 3D World Model 2.0: Generate Editable 3D Game Worlds with One Sentence, Compatible with Unity/UE

Tencent has officially released and open-sourced Hunyuan 3D World Model 2.0 (HY-World 2.0), enabling AI to evolve from video generation to creating playable, editable 3D worlds.

Core Highlights
- Text / Image / Video → directly generate exportable 3D assets (Mesh / 3DGS / Point Cloud)
- Seamlessly integrates with Unity / Unreal Engine for game maps and level prototyping
- One-click reconstruction of digital-twin scenes from single images/videos, no camera parameters required
- Spatial Agent for intelligent navigation trajectories: no wall penetration, consistent spatial height
- All-new HY-Pano-2.0 + WorldMirror 2.0 architecture, achieving SOTA in 3D reconstruction and novel view synthesis

Key Breakthrough
Unlike Genie 3 and Hunyuan 1.5, which only output videos, HY-World 2.0 generates re-editable 3D worlds that support collision, interaction, and engine import.

Application Scenarios
Game development, indoor preview, urban planning, digitization of cultural heritage, embodied AI simulation
reacted to DedeProGames's post with 🚀 · 21 days ago
🔥 GRM-2.5: the most POWERFUL model for local inference

GRM-2.5 is the newest model from Orion LLM Labs. It has consistent raw reasoning and generates very precise responses, similar to large models, while staying at a 4B parameter size.

The GRM-2.5 family consists of these models:
- https://huggingface.co/OrionLLM/GRM-2.5 (4B)
- https://huggingface.co/OrionLLM/GRM-2.5-Air (0.8B)

Furthermore, GRM-2.5 is a strong option for local agentic environments, performing well at code, terminal-agent tasks, and more. It can generate 1,000 lines of consistent code and program like large models. GRM-2.5 is also the best base for fine-tuning to date, and it has vision, meaning it can interpret images and videos.
reacted to SeaWolf-AI's post with 🔥 · 23 days ago
🧬 Darwin-27B-Opus: 86.9% on GPQA Diamond, World #5, Zero Training

We are excited to share Darwin-27B-Opus, a 27B model that achieved 86.9% on GPQA Diamond, ranking #5 globally on the Hugging Face leaderboard, without a single gradient update.

How? Darwin breeds pretrained models through evolutionary FFN crossbreeding. The father (Qwen3.5-27B) provides the reasoning architecture; the mother (Claude 4.6 Opus Reasoning Distilled) contributes structured chain-of-thought knowledge. CMA-ES automatically discovers optimal per-layer blending ratios, with no human tuning required.

The result surpasses the original Qwen3.5-27B (85.5%), GLM-5.1 (744B, 86.2%), and Qwen3.5-122B (86.6%). A 27B model outperforming 744B, with zero training, zero data, one GPU, ~2 hours.

We also confirmed hybrid vigor on Korean benchmarks: Darwin-27B-KR (a second-generation offspring) surpassed both parents on CLIcK, winning 7 of 11 categories. The evolutionary optimizer independently assigned 93% of the FFN from the Korean-specialized mother while preserving 93% of the attention from the reasoning-specialized father, autonomously validating our core principle: FFN carries knowledge, attention carries reasoning.

📊 Public release: 10 days → 300+ community derivatives, 120K+ downloads.

🔗 Links:
- Darwin-27B-Opus: https://huggingface.co/FINAL-Bench/Darwin-27B-Opus
- Article: https://huggingface.co/blog/FINAL-Bench/darwin-gpqa
- Darwin Family Collection: https://huggingface.co/collections/FINAL-Bench/darwin-family

If foundation models are raw ore, Darwin is the forge. We are just getting started. 🔥
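The merging scheme the post describes (per-layer FFN blending between two parents, with attention kept from the reasoning-specialized parent) can be sketched in a few lines. This is a minimal illustration, not the authors' code: the key layout, the `blend_ffn` name, and the toy weight lists are all assumptions; in the real method the per-layer ratios would come from a CMA-ES search rather than being hand-picked.

```python
def blend_ffn(father, mother, alphas):
    """Blend two models' weights layer by layer (illustrative sketch).

    father, mother: dicts mapping "layer{i}.ffn" / "layer{i}.attn" to weight lists
    alphas: per-layer blend ratios in [0, 1]; alpha=1 keeps the father's FFN.
    """
    child = {}
    for i, a in enumerate(alphas):
        # FFN: convex combination, the quantity the evolutionary search tunes.
        f, m = father[f"layer{i}.ffn"], mother[f"layer{i}.ffn"]
        child[f"layer{i}.ffn"] = [a * x + (1 - a) * y for x, y in zip(f, m)]
        # Attention: taken wholesale from the reasoning-specialized parent.
        child[f"layer{i}.attn"] = list(father[f"layer{i}.attn"])
    return child


father = {"layer0.ffn": [1.0, 2.0], "layer0.attn": [5.0]}
mother = {"layer0.ffn": [3.0, 4.0], "layer0.attn": [9.0]}
child = blend_ffn(father, mother, [0.25])  # 25% father FFN, 75% mother FFN
```

A low alpha like 0.25 mirrors the post's Korean result, where the optimizer drew most of the FFN from the knowledge-specialized mother while keeping the father's attention intact.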
Organizations
None yet
very-cooluser's activity
liked 2 models · 3 months ago
z-lab/Qwen3-Coder-30B-A3B-DFlash
Text Generation · 0.5B · Updated 30 days ago · 1.88k downloads · 29 likes
Qwen/Qwen3-TTS-Tokenizer-12Hz
Audio-to-Audio · Updated Jan 29 · 65.4k downloads · 62 likes