Built a WeChat Mini Program in 20 minutes flat with Hy3 Preview + WorkBuddy…
and I didn’t type a single line of code. Not even a semicolon.
This Coding Agent is on steroids. Its comprehension in long back-and-forths is night and day better, and that 256K context window swallows the entire project structure whole.
Tell it what you want, and it actually gets the full picture: no confused blank stares from the AI.
And we’re not messing around with dinky little code snippets here. It spits out a fully functional project:
app.json, every page’s wxml/wxss/js/json, even mock data pre-packed. Import it into WeChat Dev Tools and it runs on the first try.
Only one tiny visual nitpick, zero logic bugs. Point out the flaw and it fixes it instantly: no new bugs, no passive-aggressive code breaks, no headaches.
The entire vibe: tell it your idea → get a complete working project → mention a tiny flaw → AI polishes it.
No coding, no endless edits, no soul-crushing debugging that makes you want to throw your laptop. Absolute game-changer.
Our lab recently released a paper where we introduce ShadowPEFT, a new Parameter-Efficient Fine-Tuning (PEFT) paradigm tailored for edge computing scenarios.
Unlike traditional approaches such as LoRA and its variants, which inject trainable parameters directly into the Transformer's weights and therefore require tight coupling with the backbone,
ShadowPEFT instead enhances the frozen large base model by adding a lightweight, centralized, pretrainable, and detachable Shadow network. This shadow network operates in parallel with the base model, delivering learned corrections to each decoder layer. Because the shadow module is architecturally decoupled from the backbone, it can be independently trained, stored, and deployed, which benefits edge computing scenarios and edge-cloud collaborative computing.
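To make the idea concrete, here is a minimal NumPy sketch of the "parallel shadow" pattern described above: a frozen stack of base layers plus a small, detachable side network whose per-layer heads add learned corrections. All names and shapes (shadow_trunk, shadow_heads, the bottleneck rank R) are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L = 16, 4   # hidden size, number of decoder layers (toy values)
R = 2          # shadow bottleneck rank (illustrative)

# Frozen base model: L decoder layers, each a fixed linear map (stand-in).
base_layers = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(L)]

# Detachable shadow network: one small shared trunk plus a tiny per-layer
# head that emits a correction for each decoder layer. Only these weights
# would be trained; the base layers stay frozen.
shadow_trunk = rng.standard_normal((D, R)) / np.sqrt(D)
shadow_heads = [np.zeros((R, D)) for _ in range(L)]  # zero-init: no-op at start

def forward(x, use_shadow=True):
    h = x
    s = x @ shadow_trunk if use_shadow else None  # shadow runs in parallel
    for layer, head in zip(base_layers, shadow_heads):
        h = np.tanh(h @ layer)                    # frozen base computation
        if use_shadow:
            h = h + s @ head                      # learned per-layer correction
    return h

x = rng.standard_normal((1, D))
out = forward(x)
```

Because the heads are zero-initialized, the shadow starts as a no-op, and because it touches the base model only through additive corrections, the whole shadow module can be dropped, swapped, or shipped separately from the backbone.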
✅ 295B total / 21B active / 256K context ✅ Fused fast-and-slow thinking in a single model ✅ First model trained on Hunyuan's rebuilt pretraining + RL infra (Feb → Apr)
Benchmarks: 👉 SWE-Bench Verified, Terminal-Bench 2.0, BrowseComp, WideSearch — competitive results, particularly strong on agentic tool use 👉 Top score on Tsinghua's 2026 Spring math PhD qualifying exam 👉 Strong context learning and instruction following on Tencent's CL-bench / CL-bench-Life