SEAD-14B: Self-Evolving Agent for Multi-Turn Service Dialogue

SEAD (Self-Evolving Agent for Service Dialogue) is a co-evolutionary reinforcement learning framework designed for training dialogue agents that adapt to diverse user scenarios without requiring large-scale human annotations. This model is a 14B parameter agent based on Qwen2.5-14B-Instruct, fine-tuned using the SEAD framework.

Model Description

Large Language Models often exhibit suboptimal performance in service dialogues due to data scarcity and the difficulty of simulating authentic user behaviors. SEAD addresses these issues by decoupling user modeling into two components:

  1. Profile Controller: Generates diverse user states to manage the training curriculum.
  2. User Role-play Model: Focuses solely on realistic, in-character user simulation.

This design ensures the training environment provides adaptive scenarios rather than acting as an adversary, allowing the agent to learn effective strategies through self-evolution.
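The decoupled design above can be sketched as a minimal training loop. This is an illustrative sketch only: `ProfileController`, `UserRolePlay`, and `ServiceAgent` are hypothetical stand-ins with dummy logic, not the released API, and the RL update is left as a stub.

```python
import random

class ProfileController:
    """Samples user states to shape the training curriculum (illustrative)."""
    def __init__(self, profiles):
        self.profiles = profiles

    def sample(self):
        return random.choice(self.profiles)

class UserRolePlay:
    """Plays the sampled user profile turn by turn (dummy logic)."""
    def respond(self, profile, agent_utterance):
        return f"[{profile}] reply to: {agent_utterance}"

class ServiceAgent:
    """The trainable dialogue agent (dummy policy)."""
    def act(self, user_utterance):
        return f"agent handles '{user_utterance}'"

    def update(self, trajectory, reward):
        pass  # an RL update (e.g., policy gradient) would go here

def run_episode(controller, user, agent, turns=3):
    # Controller picks the scenario; the role-play model stays in character;
    # the agent learns from the resulting multi-turn dialogue.
    profile = controller.sample()
    trajectory, user_msg = [], f"[{profile}] opening request"
    for _ in range(turns):
        agent_msg = agent.act(user_msg)
        trajectory.append((user_msg, agent_msg))
        user_msg = user.respond(profile, agent_msg)
    return trajectory
```

The point of the separation is visible in the loop: curriculum control (which user appears) and behavioral realism (how that user talks) are handled by different components, so neither has to act adversarially.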

Performance

Experiments demonstrate that SEAD significantly outperforms open-source foundation models and commercial closed-source models. It improves task completion rate (CR) by 17.6% and dialogue efficiency by 11.1% compared to baselines.

| Method | Params | Completion Rate (CR) |
| --- | --- | --- |
| Qwen2.5-14B-Instruct | 14B | 38.7% |
| GPT-4o | -- | 44.2% |
| SEAD (Ours) | 14B | 52.0% |
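A quick arithmetic check on the table, under the assumption that the reported 17.6% CR gain is relative to the strongest baseline (GPT-4o) rather than absolute percentage points:

```python
# Assumption: the 17.6% figure is a relative improvement over GPT-4o.
sead_cr, gpt4o_cr = 52.0, 44.2
relative_gain = (sead_cr - gpt4o_cr) / gpt4o_cr
print(f"{relative_gain:.1%}")
```

Under that reading, (52.0 − 44.2) / 44.2 ≈ 17.6%, consistent with the table.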

Citation

If you find this model or the SEAD framework useful, please cite:

@article{SEADv1,
  title={SEAD: Self-Evolving Agent for Multi-Turn Service Dialogue},
  author={Dai, Yuqin and Gao, Ning and Zhang, Wei and Wang, Jie and Luo, Zichen and Wang, Jinpeng and Wang, Yujie and Wu, Ruiyuan and Wang, Chaozheng},
  journal={arXiv preprint arXiv:2602.03548},
  year={2026}
}