EFT-CoT: A Multi-Agent Chain-of-Thought Framework for Emotion-Focused Therapy

This repository hosts the official LoRA adapter weights for the paper "EFT-CoT: A Multi-Agent Chain-of-Thought Framework for Emotion-Focused Therapy".

🌟 Introduction

The EFT-CoT model is fine-tuned from Qwen2.5-7B-Instruct using the EFT-Instruct dataset. It is specifically designed to implement Emotion-Focused Therapy (EFT) principles within a structured Multi-Agent Chain-of-Thought framework. The model excels at organizing intervention strategies (e.g., emotional exploration, evocative responding, and empathetic attunement) into a coherent therapeutic flow.
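The multi-agent chain described above can be sketched as a sequential pipeline in which each agent stage reads the running transcript and appends its contribution. This is a minimal illustrative sketch, not the paper's actual implementation; `call_llm`, `AGENT_PROMPTS`, and `eft_cot_turn` are all hypothetical names, and the stub stands in for a real model call.

```python
# Hypothetical sketch of a multi-agent chain-of-thought pipeline in the
# spirit of EFT-CoT: each "agent" is a prompt-specialized stage.
# `call_llm` is a stub; a real system would query the fine-tuned model.

def call_llm(system_prompt: str, transcript: str) -> str:
    # Stub: echoes the prompt and the latest transcript line.
    return f"[{system_prompt}] response to: {transcript.splitlines()[-1]}"

# Illustrative intervention strategies, ordered into a therapeutic flow.
AGENT_PROMPTS = {
    "exploration": "Explore the client's underlying emotion.",
    "evocation": "Respond evocatively to deepen the felt experience.",
    "attunement": "Reflect the emotion with empathetic attunement.",
}

def eft_cot_turn(client_utterance: str) -> list[str]:
    """Run one therapeutic turn through the agent chain."""
    transcript = f"Client: {client_utterance}"
    steps = []
    for name, prompt in AGENT_PROMPTS.items():
        step = call_llm(prompt, transcript)
        transcript += f"\n{name}: {step}"  # each agent sees prior agents' work
        steps.append(step)
    return steps

steps = eft_cot_turn("I feel stuck and alone lately.")
```

Each stage conditions on the output of the previous stages, which is what gives the chain its coherent, ordered therapeutic flow.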

🎥 System Demonstration

A full walkthrough video is provided so reviewers can evaluate the framework's interactive capabilities and multi-agent synergy: 👉 Watch EFT-CoT.mp4

🔧 Model Specifications

  • Base Model: Qwen/Qwen2.5-7B-Instruct
  • Adapter Type: LoRA (Low-Rank Adaptation)
  • Training Data: EFT-Instruct (Domain-specific therapeutic dialogue dataset)
  • Training Framework: LLaMA-Factory (PEFT)
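For reference, a LLaMA-Factory LoRA fine-tuning run of this kind is typically driven by a YAML config. The following is an illustrative sketch only: the hyperparameter values and the `eft_instruct` dataset name are placeholders, not the settings used in the paper.

```yaml
# Illustrative LLaMA-Factory SFT config; values are placeholders,
# not the paper's actual training settings.
model_name_or_path: Qwen/Qwen2.5-7B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_rank: 8
lora_target: all
dataset: eft_instruct        # hypothetical name registered in dataset_info.json
template: qwen
output_dir: saves/eft-cot-lora
per_device_train_batch_size: 2
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
```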

📦 Files Included

  • adapter_model.safetensors: Core fine-tuned parameters.
  • adapter_config.json: Configuration for LoRA adapter.
  • tokenizer_config.json & others: Essential files for consistent tokenization.

🚀 Quick Start (Inference)

You can load the adapter and run a short inference with the following snippet:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

model_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "EFT-CoT-Review/EFT-CoT"  # Path to your HF repo

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Generate a response using the Qwen chat template
messages = [{"role": "user", "content": "I feel overwhelmed and I don't know why."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))