Liquid AI, You NEED to Make a 16B MoE Next!

#5
by tanyiades - opened

I’m really impressed with the 8B A1B MoE and 2.6B dense models from Liquid AI: they’re lightweight, fast, smart, and remarkably stable.

If Liquid AI eventually releases a larger model, like a 16B MoE with a ~1.5B active-parameter inference path, I believe it could easily become a strong on-device alternative to Gemini 2.5 Flash, GPT-5 Nano, or even Claude 4.5 Haiku.

So I’d like to ask: is there any plan for a 16B MoE model? It feels like the perfect next step, especially since modern hardware like Ryzen AI Max, Ryzen AI HX 370, the Mac mini, and even CPU-only systems can now handle more advanced MoE routing.

A 16B MoE would be the most realistic and impactful upgrade while staying aligned with Liquid AI’s focus on speed and efficiency.

Liquid AI org

Thanks for your message! We're currently exploring different model sizes. Please stay tuned :)

@mlabonne this model is awesome, fast, and really stable! Thanks for this incredible work!! Kudos!

Totally agreed. I see this model as an ideal main expert to build a sparse MoE around.

@mlabonne +1 here. The 8B A1B MoE is legit impressive and super fast, but in my experiments and case studies the 2.6B dense model actually worked better overall. So if we’re talking about the next MoE step, jumping straight to a 16B MoE feels like the right move. Especially if it can still run on my 24GB Mac mini 😅
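As a rough sanity check, here’s a back-of-envelope sketch in plain Python of why that could work. The bits-per-weight figures are assumed GGUF-style quantization levels, not anything Liquid AI has announced, and this only counts the weights (KV cache and activations come on top):

```python
# Back-of-envelope weight-memory estimate for a hypothetical 16B-total MoE.
# Quantization bit widths below are rough GGUF-style assumptions, not official specs.

def weight_memory_gb(total_params_billions: float, bits_per_weight: float) -> float:
    """Approximate memory needed to hold the weights, in GB (1 GB = 1e9 bytes)."""
    return total_params_billions * 1e9 * bits_per_weight / 8 / 1e9

for name, bits in [("FP16", 16.0), ("~8-bit quant", 8.5), ("~4-bit quant", 4.8)]:
    print(f"16B weights @ {name}: ~{weight_memory_gb(16, bits):.1f} GB")
```

At FP16 the weights alone (~32 GB) wouldn’t fit in 24 GB, but at 4-8 bits they would, with room to spare; and since only ~1.5B parameters would be active per token, decode speed should feel closer to a small dense model.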

Absolutely! A small step up from the 8B A1B / 2.6B dense models wouldn’t change much, but a much bigger 16B MoE, especially one rocking a 1.5B inference path, would really pop. 14B or even 20B could work too. Fast, efficient, and perfect for on-device, low-resource, or enterprise use.
