I believe your large-scale model is not well trained
First, DeepSeek v3.2 and Kimi K2.5 are unified models that support both reasoning and instruct modes, which makes their training cost lower than training two separate models.
However, their instruct-mode performance on leaderboards is predictably inferior to that of a dedicated instruct model. Your model's total and activated parameter counts are significantly larger than those of DeepSeek v3.2 and Kimi K2.5.
Many metrics for other models are not directly available, but judging from the leaderboards we can see, your model does not perform very well, especially compared with dedicated instruct models.
The relevant results can be found on the other models' pages.
- Although a unified model's instruct-mode results are not as good as a standalone instruct model's, you could compare your 1T instruct model with the latest Qwen3.5-397B-A17B in instruct mode. (This might actually be unfair to Qwen, since it has fewer total and activated parameters, and a unified model's instruct-mode results may not be high.) But their smaller model may still perform better, which can be verified since Qwen3.5 provides metrics for its thinking mode.
I noticed that you have also released a reasoning model, but similarly, its performance on the metrics you provided is not good, especially considering its very large number of parameters.
Looking forward to your reply.
I hope a smaller and better model will be available in the future for easier testing.

