# EFT-CoT: A Multi-Agent Chain-of-Thought Framework for Emotion-Focused Therapy
This repository hosts the official LoRA adapter weights for the paper "EFT-CoT: A Multi-Agent Chain-of-Thought Framework for Emotion-Focused Therapy".
## Introduction
The EFT-CoT model is fine-tuned from Qwen2.5-7B-Instruct using the EFT-Instruct dataset. It is specifically designed to implement Emotion-Focused Therapy (EFT) principles within a structured Multi-Agent Chain-of-Thought framework. The model excels at organizing intervention strategies (e.g., emotional exploration, evocative responding, and empathetic attunement) into a coherent therapeutic flow.
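The staged flow described above can be sketched as a simple agent pipeline. This is an illustrative toy, not the paper's implementation: the agent functions, their prompts, and the chaining scheme are all hypothetical, and stand in for LLM-backed agents.

```python
# Hypothetical sketch of the multi-agent chain-of-thought flow: each "agent"
# handles one EFT intervention strategy, and each stage's reasoning step is
# passed forward so the stages form one coherent therapeutic chain of thought.
# In the real framework, each agent would be an LLM call; here they are stubs.

def emotional_exploration(context: str) -> str:
    return f"[Exploration] Identify the primary emotion in: {context!r}"

def evocative_responding(context: str) -> str:
    return f"[Evocation] Deepen and symbolize the felt experience in: {context!r}"

def empathic_attunement(context: str) -> str:
    return f"[Attunement] Reflect and validate the emotion in: {context!r}"

PIPELINE = [emotional_exploration, evocative_responding, empathic_attunement]

def eft_cot(utterance: str) -> list[str]:
    steps, context = [], utterance
    for agent in PIPELINE:
        step = agent(context)
        steps.append(step)
        context = step  # feed the enriched reasoning into the next stage
    return steps

steps = eft_cot("I feel like nobody hears me.")
```

The point of the sketch is the ordering constraint: exploration output conditions evocation, which in turn conditions attunement, rather than the three strategies being applied independently.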
## System Demonstration
For reviewers to evaluate the framework's interactive capabilities and multi-agent synergy, a full walkthrough video is provided: Watch EFT-CoT.mp4
## Model Specifications
- Base Model: Qwen/Qwen2.5-7B-Instruct
- Adapter Type: LoRA (Low-Rank Adaptation)
- Training Data: EFT-Instruct (Domain-specific therapeutic dialogue dataset)
- Training Framework: LLaMA-Factory (PEFT)
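For context, a LLaMA-Factory LoRA SFT run of this kind is typically driven by a YAML config. The fragment below is a hedged sketch only: the hyperparameter values are placeholders, not the paper's reported settings, and the dataset name assumes EFT-Instruct has been registered in LLaMA-Factory's `dataset_info.json`.

```yaml
# Illustrative LLaMA-Factory config for LoRA fine-tuning of Qwen2.5-7B-Instruct.
# All hyperparameters are placeholders, not the values used in the paper.
model_name_or_path: Qwen/Qwen2.5-7B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: eft_instruct          # hypothetical dataset key
template: qwen
output_dir: saves/eft-cot-lora
per_device_train_batch_size: 1
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```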
## Files Included
- `adapter_model.safetensors`: core fine-tuned LoRA parameters.
- `adapter_config.json`: configuration for the LoRA adapter.
- `tokenizer_config.json` and related files: essential for consistent tokenization.
## Quick Start (Inference)
You can load this adapter with the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

model_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "EFT-CoT-Review/EFT-CoT"  # path to your HF repo

# Load the base model and tokenizer, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)
print("EFT-CoT model successfully loaded.")
```
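Once the model is loaded, inference follows the standard Qwen2.5 chat workflow. The snippet below is a minimal sketch that continues from the loading code above (it requires the downloaded weights); the system prompt, user message, and sampling parameters are illustrative choices, not values prescribed by the paper.

```python
# Continues from the loading snippet above; prompt contents are illustrative.
messages = [
    {"role": "system", "content": "You are an Emotion-Focused Therapy assistant."},
    {"role": "user", "content": "I feel overwhelmed and nobody seems to notice."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids, max_new_tokens=512, do_sample=True, temperature=0.7
    )

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(response)
```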