Why not compare it with Qwen3-4B or Qwen3-7B?
Either would make for a stronger comparison than Qwen2.5-3B.
InclusionAI’s LLaDA-MoE-7B-A1B-Base benchmarks include comparisons against Qwen2.5-3B, a smaller, previous-generation model.
While this shows that LLaDA-MoE-7B offers a clear step up in reasoning and language quality, the comparison doesn’t fully reflect how it performs against next-generation models of similar size, such as Qwen3-4B or Qwen3-7B.
A fairer evaluation would involve models closer in scale and architecture, especially since the Qwen3 series brings significant improvements in tokenization, training efficiency, and multi-round dialogue capabilities.
This note is not a critique, but an observation:
👉 Comparing a 7B MoE model to a 3B dense model naturally creates an advantage, and future benchmarks would ideally include Qwen3-4B or Qwen3-7B for a more balanced performance analysis.
Thank you for this insightful observation. We completely agree that a direct comparison against similarly sized, next-generation models like Qwen3-4B and Qwen3-7B is essential for a balanced analysis.
This is precisely what we have planned for our forthcoming LLaDA 2.0 series. As a first glimpse, we’ve just released a mini preview, which you can explore here:
👉 https://huggingface.co/inclusionAI/LLaDA2.0-mini-preview
The comprehensive benchmarks you suggested will be featured in subsequent releases within the LLaDA 2.0 series. For all future updates, we encourage you to follow our main InclusionAI profile. Thanks again for the excellent suggestion, and we hope you'll follow our progress!