DeepseekV3ForCausalLM
The diff reflects that most differences between modeling_glm4_moe_lite.py and modeling_deepseek_v3.py are just naming changes.
Even the TODO is copied: https://github.com/huggingface/transformers/blob/main/src/transformers/models/glm4_moe_lite/modeling_glm4_moe_lite.py#L187-L188
Question: can we simply use DeepseekV3ForCausalLM here?
Right. For transformers I just tried it too, and using DeepseekV3ForCausalLM for a simple conversation works. But sglang and vLLM use different hooks, which could cause errors (especially sglang, whose kernel is different and still in progress).
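For reference, a minimal sketch of the kind of smoke test described above. The checkpoint id here is hypothetical; substitute the actual GLM-4 MoE Lite repo:

```python
from transformers import AutoTokenizer, DeepseekV3ForCausalLM

# Hypothetical repo id, used only for illustration.
ckpt = "org/glm4-moe-lite"

tokenizer = AutoTokenizer.from_pretrained(ckpt)
# Loading a glm4_moe_lite checkpoint with the DeepseekV3 class directly;
# transformers will warn about the model_type mismatch but still load.
model = DeepseekV3ForCausalLM.from_pretrained(ckpt)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```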
As for why even the TODOs are the same: the attention implementation in transformers is indeed completely identical to DeepseekV3Attention, and the modular mechanism copies all of that code from DeepseekV3 into the generated file.
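For context, this is roughly the modular-transformers pattern at work; a minimal sketch, not the actual modular_glm4_moe_lite.py. When a class inherits from a DeepseekV3 class without overriding anything, the code generator inlines the parent's body verbatim into the generated modeling file, comments included:

```python
# Sketch of a modular file (assumed class names, for illustration only).
from transformers.models.deepseek_v3.modeling_deepseek_v3 import (
    DeepseekV3Attention,
    DeepseekV3ForCausalLM,
)

class Glm4MoeLiteAttention(DeepseekV3Attention):
    # No overrides: the generated modeling_glm4_moe_lite.py gets
    # DeepseekV3Attention's code copied in, TODO comments and all.
    pass

class Glm4MoeLiteForCausalLM(DeepseekV3ForCausalLM):
    pass
```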