Kunpeng-4x7B-mistral

Architecture: Mixture of Experts (MoE)

A Mixture of Experts (MoE) model combining four experts: Mistral-7B-Instruct-v0.2, Mistral-7B-v0.1, Starling-LM-7B-alpha, and Mistral-7B-Instruct-v0.1. The merged model was then fine-tuned on the WizardLM_evol_instruct_70k dataset, targeting the q_proj, v_proj, and gate modules.
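
Targeting specific modules like this suggests a LoRA-style adapter setup. Below is a minimal sketch using the peft library; the hyperparameters and the base-model path are illustrative assumptions, since the card names only the target modules and the dataset:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Hypothetical path to the merged 4x7B MoE before fine-tuning.
base = AutoModelForCausalLM.from_pretrained("path/to/merged-4x7b-moe")

lora_config = LoraConfig(
    r=16,                     # assumed rank; the card does not state one
    lora_alpha=32,            # assumed scaling factor
    lora_dropout=0.05,        # assumed dropout
    target_modules=["q_proj", "v_proj", "gate"],  # modules named in the card
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# The adapted model would then be trained on WizardLM_evol_instruct_70k.
```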

Model tree for mzbac/Kunpeng-4x7B-mistral

Quantizations: 2 models
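
Usage

A minimal inference sketch, assuming the model loads through the standard transformers AutoModel classes and follows the Mistral [INST] ... [/INST] prompt format (an assumption; verify against the tokenizer's chat template):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mzbac/Kunpeng-4x7B-mistral"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision; the 4x7B weights are large
    device_map="auto",          # requires the accelerate package
)

prompt = "[INST] Explain what a Mixture of Experts model is. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```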