A flagship 73B model, like the ones we remember from the past three years, lands here as an MoE.
This model, based on Qwen-Coder-Next (my favorite), was the winner for everything below 397B with my system prompt, until Qwen3.6 27B changed the game.
What can I say? That 27B model is too good, I cannot deny it. But look at my poor Arc 770, which seems to have been abandoned, so I decided to continue the journey with my little 'old' Queen.
So here is the model.
For all the GPU-poor.
Inference: ik_llama.cpp
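Below is a minimal sketch of how the quants could be served locally, assuming an ik_llama.cpp build whose `llama-server` binary keeps the upstream llama.cpp flags (`-m`, `-c`, `-ngl`, `--port`) and OpenAI-compatible `/v1/chat/completions` endpoint. The GGUF filename, port, context size, and offload settings are placeholders, not part of this release.

```python
# Minimal sketch: launch an ik_llama.cpp server and query it over its
# OpenAI-compatible HTTP endpoint. Paths, the GGUF filename, port, and
# flag values below are assumptions, not part of this model card.
import json
import subprocess
import time
import urllib.request

MODEL_PATH = "Qwen-Coder-NX-73B-Q4.gguf"  # hypothetical quant filename
PORT = 8080

# llama-server is the server binary built from ik_llama.cpp (same name as
# upstream llama.cpp); -c sets context length, -ngl offloads layers to GPU.
server = subprocess.Popen([
    "./llama-server",
    "-m", MODEL_PATH,
    "-c", "8192",
    "-ngl", "99",
    "--port", str(PORT),
])
time.sleep(30)  # crude wait for the model to load; poll the server in practice

payload = {
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.7,
}
req = urllib.request.Request(
    f"http://127.0.0.1:{PORT}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])

server.terminate()
```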
Thanks:
Qwen Team for bringing the base model;
Kimi for assisting;
llama.cpp for making local inference possible;
ik_llama.cpp for SOTA quants and faster CPU inference.
Base model: Qwen/Qwen3-Coder-Next