
Model Card for SaitejaJate/GRPO_AdaptersforCBT

Model Details

Model Description

  • Developed by: Saiteja Gudidevini
  • Mentored by: Travis Somerville
  • Model type: PEFT/LoRA adapter for Cognitive Behavioral Therapy (CBT) assistance
  • Language(s) (NLP): English
  • Finetuned from model: Qwen/Qwen2.5-14B-Instruct

Model Sources

[More Information Needed]

Uses

Direct Use

This adapter can be used to create a conversational AI assistant that provides CBT-based guidance. It's particularly useful for:

  1. Providing basic CBT techniques and explanations
  2. Helping users identify cognitive distortions in their thinking
  3. Guiding users through cognitive restructuring exercises 
  4. Offering supportive responses based on CBT principles
  

Example use cases:

 • Mental health education and awareness                                                                                                                                                      
 • Supplementary tool for individuals learning CBT techniques                                                                                                                                 
 • Research on AI applications in mental health support 

Downstream Use

The adapter can be integrated into:
 • Mental health education platforms                                                                                                                                                          
 • Self-help applications                                                                                                                                                                     
 • Research tools for studying AI in therapeutic contexts

Out-of-Scope Use

This model is NOT intended for:
   • Replacing professional mental health treatment or therapy                                                                                                                                  
   • Diagnosing mental health conditions                                                                                                                                                        
   • Providing crisis intervention or emergency mental health support                                                                                                                           
   • Making medical recommendations or prescribing treatments                                                                                                                                   
   • Providing advice for severe mental health conditions

Bias, Risks, and Limitations

 • Not a replacement for professional help: This model should never be used as a substitute for professional mental health services.                                                          
 • Limited training data: The model was trained on a synthetic dataset of CBT conversations, which may not cover all possible scenarios or therapeutic approaches.                            
 • Potential biases: The model may reflect biases present in its training data or base model.                                                                                                 
 • No emotional intelligence: Despite appearing conversational, the model does not have real empathy or emotional understanding.                                                              
 • Hallucinations: Like all large language models, it may generate incorrect or fabricated information.                                                                                       
 • Limited context awareness: The model has a fixed context window and cannot remember very long conversations. 

Recommendations

 • Always make users aware that they are interacting with an AI, not a human therapist.                                                                                                       
 • Implement clear disclaimers about the limitations of AI in mental health contexts.                                                                                                         
 • Consider human oversight when deploying in sensitive contexts.                                                                                                                             
 • Provide clear pathways to professional help for users who may need it.                                                                                                                     
 • Regularly evaluate the model's outputs for harmful or misleading content.

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

System_prompt = """
You are an expert Cognitive Behavioral Therapy (CBT) therapist. When responding to patients, follow these guidelines:

1. Address all three domains of CBT:
   - Cognitive domain (thoughts, beliefs, interpretations)
   - Behavioral domain (observable behaviors, habits, physiological responses)
   - Emotional domain (emotional awareness, regulation, feelings)

2. Connect symptoms to relevant mental health issues when appropriate (anxiety disorders, depression, OCD, etc.)

3. Apply specific CBT techniques such as cognitive restructuring, behavioral activation, or exposure therapy

4. Respond in the following format:
<reasoning>
" "
</reasoning>
<answer>
" "
</answer>
"""
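
Responses in this format can be split into their sections with a small helper. A minimal sketch, assuming the model emits the tags literally (the helper name `parse_cbt_response` and the sample text are illustrative, not from the model):

```python
import re

def parse_cbt_response(text):
    """Extract the <reasoning> and <answer> sections from a model response.

    Returns (reasoning, answer); either may be None if its tag is missing.
    """
    def section(tag):
        match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
        return match.group(1).strip() if match else None
    return section("reasoning"), section("answer")

sample = """<reasoning>
The user is catastrophizing about a single setback.
</reasoning>
<answer>
Let's examine the evidence for and against that thought.
</answer>"""

reasoning, answer = parse_cbt_response(sample)
```

Parsing out the `<answer>` section also makes it easy to show users only the final response while logging the reasoning separately.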

How to Get Started with the Model

Use the code below to get started with the model:

from huggingface_hub import snapshot_download
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel

# Base model and LoRA adapter
base_model = "unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit"
adapter_path = snapshot_download("SaitejaJate/GRPO_AdaptersforCBT")

def load_model():
    # Load the base model onto the GPU, then attach the LoRA adapter
    model = AutoModelForCausalLM.from_pretrained(
        base_model,
        device_map="cuda",
        # quantization_config=quantization_config,  # optional explicit 4-bit config
    )
    tokenizer = AutoTokenizer.from_pretrained(base_model)
    model = PeftModel.from_pretrained(model, adapter_path)
    return model, tokenizer

def generate_response(model, tokenizer, messages):
    # Render the conversation with the model's chat template
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

    # Tokenize and generate
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.5,
        top_p=0.9,
    )

    # Decode and return only the new tokens (the response)
    response_ids = outputs[0][inputs["input_ids"].shape[-1]:]
    return tokenizer.decode(response_ids, skip_special_tokens=True)

def interactive_chat():
    model, tokenizer = load_model()

    # Initialize the conversation with the system message
    messages = [{"role": "system", "content": System_prompt}]

    print("CBT Assistant is ready! Type 'quit' to exit.")
    print("_" * 50)

    while True:
        user_input = input("You: ")
        if user_input.lower() in ["quit", "exit", "bye"]:
            print("Goodbye")
            break

        # Add the user message to the conversation history
        messages.append({"role": "user", "content": user_input})

        # Generate and display the response
        response = generate_response(model, tokenizer, messages)
        print(f"Assistant: {response}")

        # Add the assistant response to the conversation history
        messages.append({"role": "assistant", "content": response})

        print("_" * 50)

if __name__ == "__main__":
    interactive_chat()
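
If you prefer to quantize explicitly rather than rely on the pre-quantized checkpoint, a 4-bit configuration could be passed to `from_pretrained`. This is a sketch, assuming bitsandbytes is installed; the specific NF4 settings shown are common defaults, not values confirmed by this card:

```python
import torch
from transformers import BitsAndBytesConfig

# Hypothetical explicit 4-bit quantization setup (NF4 with double quantization)
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
```

Pass `quantization_config=quantization_config` to `AutoModelForCausalLM.from_pretrained` to activate it.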

Training Details

Training Data

The model was trained on a synthetic dataset of CBT conversations. The dataset includes:

 • Simulated therapeutic dialogues focusing on cognitive distortions                                                                                                                          
 • Conversations demonstrating cognitive restructuring techniques                                                                                                                             
 • Examples of CBT principles applied to common mental health concerns                                                                                                                        
 • Various scenarios covering different cognitive distortions and CBT interventions                                                                                                           

The training data was generated to cover a diverse range of CBT applications while maintaining therapeutic accuracy and ethical guidelines.

Training Procedure

Preprocessing

 • Conversations were formatted according to the chat template of the base model                                                                                                              
 • System prompts were added to guide the model toward appropriate CBT responses                                                                                                              
 • Data was tokenized using the base model's tokenizer.
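 
For reference, Qwen2.5's chat template renders each turn in ChatML style. The sketch below approximates that rendering with plain string formatting; the authoritative template ships with the tokenizer (`tokenizer.apply_chat_template`), so treat this as an illustration only:

```python
def format_chatml(messages):
    """Approximate ChatML rendering of a message list, as used by Qwen models."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    # Trailing assistant header acts as the generation prompt
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([
    {"role": "system", "content": "You are a CBT therapist."},
    {"role": "user", "content": "I always fail at everything."},
])
```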

Training Hyperparameters

  • Training regime:
    • LoRA fine-tuning with the following parameters:
      • r: 16 (LoRA attention dimension)
      • alpha: 32 (LoRA alpha parameter)
      • dropout: 0.05
      • Learning rate: 2e-4
      • Batch size: 4
      • Training steps: 156
      • Optimizer: AdamW
      • Weight decay: 0.01
      • LR scheduler: Cosine with warmup
      • Warmup steps: 10
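
For intuition, the trainable parameter count these settings imply can be estimated per adapted weight matrix: a rank-r LoRA pair adds r·(d_in + d_out) parameters on top of the frozen weight. A rough sketch (the 5120 hidden size and the choice of adapted projection are assumptions for illustration, not read from the training config):

```python
def lora_params(d_in, d_out, r=16):
    """Trainable parameters added by one LoRA pair: A is (r, d_in), B is (d_out, r)."""
    return r * d_in + d_out * r

# e.g. a single square attention projection in a hidden-size-5120 model
per_matrix = lora_params(5120, 5120, r=16)
```

Summing this over every adapted matrix in every layer gives the total trainable parameter count, which is a tiny fraction of the 14B frozen base weights.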

Speeds, Sizes, Times [optional]

 • Training time: Approximately 10 hours                                                                                                                                                       
 • Hardware used: NVIDIA L4 24GB PCIe Gen4 Passive GPU                                                                                                                                  
 • Adapter size: ~20GB

Evaluation

Testing Data, Factors & Metrics

Testing Data

The model was evaluated on:

 • A held-out set of synthetic CBT conversations                                                                                                                                              
 • Real-world examples of cognitive distortions and appropriate CBT responses                                                                                                                 
 • Challenging scenarios designed to test the model's adherence to CBT principles 

Factors

The evaluation considered:

 • Accuracy of CBT principles in responses                                                                                                                                                    
 • Appropriateness of therapeutic language                                                                                                                                                    
 • Ability to identify cognitive distortions                                                                                                                                                  
 • Quality of cognitive restructuring suggestions                                                                                                                                             
 • Safety and ethical considerations in responses 

Metrics

 • Qualitative assessment: Expert review of responses for CBT accuracy                                                                                                                        
 • Response relevance: Evaluation of how directly the model addresses the user's concerns                                                                                                     
 • Safety: Assessment of potentially harmful or inappropriate responses    

Results

The adapter demonstrates strong capabilities in:

 • Identifying common cognitive distortions                                                                                                                                                   
 • Providing basic cognitive restructuring techniques                                                                                                                                         
 • Maintaining a supportive and non-judgmental tone                                                                                                                                           
 • Explaining CBT concepts in accessible language

Areas for improvement include:

 • Handling complex or overlapping cognitive distortions                                                                                                                                      
 • Maintaining consistency in therapeutic approach throughout long conversations                                                                                                              
 • Avoiding occasional generic responses to specific concerns 

Summary

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: [More Information Needed]
  • Hours used: [More Information Needed]
  • Cloud Provider: [More Information Needed]
  • Compute Region: [More Information Needed]
  • Carbon Emitted: [More Information Needed]

Technical Specifications [optional]

Model Architecture and Objective

This model uses Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA) to adapt the Qwen/Qwen2.5-14B-Instruct base model for CBT assistance. LoRA works by adding trainable rank
decomposition matrices to existing weights, allowing for efficient adaptation with minimal additional parameters.
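
Concretely, the adapted weight is W' = W + (α/r)·BA, where A and B are the low-rank factors. A toy sketch with plain Python lists (the dimensions and values are illustrative only):

```python
def matmul(a, b):
    """Naive matrix multiply for small illustrative matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def lora_forward(W, A, B, alpha, r):
    """Form the adapted weight W' = W + (alpha / r) * B @ A."""
    scale = alpha / r
    BA = matmul(B, A)
    return [[W[i][j] + scale * BA[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# 2x2 base weight with a rank-1 adapter
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # shape (r=1, d_in=2)
B = [[0.5], [0.5]]        # shape (d_out=2, r=1)
W_adapted = lora_forward(W, A, B, alpha=2, r=1)
```

Because only A and B are trained, the number of updated parameters stays small even for large base models.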

The training objective was to fine-tune the model to:

 1. Understand and apply CBT principles in conversation
 2. Identify cognitive distortions in user statements
 3. Provide appropriate cognitive restructuring techniques
 4. Maintain a supportive and therapeutic conversational style

Compute Infrastructure

[More Information Needed]

Hardware

[More Information Needed]

Software

 • Python 3.10                                                                                                                                                                                
 • PyTorch 2.1.0                                                                                                                                                                              
 • Transformers 4.35.0                                                                                                                                                                        
 • PEFT 0.15.2                                                                                                                                                                                
 • Accelerate 0.23.0  

Citation [optional]

Model Card Authors

Saiteja Jate

Model Card Contact

[More Information Needed]

Framework versions

 • PEFT 0.15.2                                                                                                                                                                                
 • Transformers 4.35.0                                                                                                                                                                        
 • PyTorch 2.1.0                                                                                                                                                                              
 • Accelerate 0.23.0
