A flagship 73B model, like the ones we remember from the past 3 years, lands as an MoE.


This model is based on Qwen-Coder-Next, which is my favorite; with my system prompt it was the winner against everything below 397B, until Qwen3.6 27B changed the game.

What can I say? That 27B model is too good, I can't deny it. But looking at my poor Arc 770, which seemed abandoned, I decided to continue the journey with my little 'old' Queen.

So here is the model.

For all the GPU-poor.


Inference: ik_llama.cpp
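A minimal sketch of running the GGUF with ik_llama.cpp. The build steps and flags follow the standard llama.cpp workflow, and the quant filename below is a placeholder; substitute the file you actually downloaded and tune `-c` (context) and `-t` (threads) for your machine:

```shell
# Build ik_llama.cpp for CPU inference (assumes a standard CMake toolchain)
git clone https://github.com/ikawrakow/ik_llama.cpp
cd ik_llama.cpp
cmake -B build
cmake --build build --config Release -j

# Serve the model over the built-in HTTP server.
# The .gguf filename is a placeholder -- use the quant you downloaded.
./build/bin/llama-server -m Qwen-Coder-NX-73B-Q4_K_M.gguf -c 8192 -t 16
```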

Thanks:

Qwen Team, for bringing us the base model;

Kimi, my assistant;

llama.cpp, for making local inference possible;

ik_llama.cpp, for SOTA quants and faster CPU inference.


Downloads last month: 1,202
Format: GGUF
Model size: 73B params
Architecture: qwen3next


Model tree for Jahaz/Qwen-Coder-NX-73B
