---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
datasets:
- mattwesney/CoT_Heartbreak_and_Breakups
pipeline_tag: text-generation
library_name: peft
---

# Model Description

- **Developed by:** khazarai
- **License:** apache-2.0
- **Fine-tuned from model:** unsloth/qwen3-14b-unsloth-bnb-4bit

This Qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).


This model is a QLoRA fine-tune of unsloth/qwen3-14b-unsloth-bnb-4bit, which is based on the Qwen3-14B architecture developed by the Qwen Team.
It was fine-tuned on the Chain of Thought – Heartbreak & Breakups dataset (MIT-licensed), consisting of 9.8k high-quality Q&A pairs focused on emotional processing, coping strategies, and relationship dynamics after breakups.
The goal of this fine-tuning is to enhance:

- Emotional reasoning capability
- Structured chain-of-thought generation
- Empathetic and psychologically grounded responses
- Relationship pattern analysis
- Identity reconstruction & self-esteem rebuilding guidance


# 🧠 Base Model

- Base architecture: Qwen3-14B
- Variant: unsloth/qwen3-14b-unsloth-bnb-4bit
- Quantization: 4-bit (bitsandbytes)
- Fine-tuning method: QLoRA
- Adapter type: LoRA
- Training precision: 4-bit base + 16-bit adapters


# 🎯 Intended Use

This model is intended for:

- Mental health–adjacent AI assistants
- Relationship guidance systems
- Emotional reasoning research
- Chain-of-thought alignment experiments
- NLP research on structured reasoning in affective domains

The model aims to produce:

- Step-by-step reasoning
- Balanced perspectives
- Reduced reactive or extreme advice

# ⚠️ Limitations

- Not a substitute for licensed therapy
- May generate plausible but non-clinically validated advice
- Trained on synthetic / curated emotional scenarios
- Chain-of-thought exposure may increase verbosity
- Emotional nuance outside breakup domain may be limited

This model should not be used for crisis intervention or high-risk mental health scenarios.

# How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the tokenizer and the 4-bit quantized base model.
tokenizer = AutoTokenizer.from_pretrained("unsloth/qwen3-14b-unsloth-bnb-4bit")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/qwen3-14b-unsloth-bnb-4bit",
    device_map={"": 0},
)

# Attach the fine-tuned LoRA adapters to the base model.
model = PeftModel.from_pretrained(base_model, "khazarai/Med-R1-14B")

question = """
How can someone work through and move past deeply painful memories associated with trauma, understanding that "moving past" doesn't mean forgetting but rather integrating the experience in a healthy way?
"""

messages = [
    {"role": "user", "content": question}
]

# Build the chat prompt; enable_thinking=True turns on Qwen3's
# chain-of-thought ("thinking") mode.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

# Generate and stream tokens to stdout as they are produced.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=2048,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
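
The sampling settings above (`temperature=0.6`, `top_p=0.95`, `top_k=20`) match the values the Qwen team recommends for Qwen3's thinking mode. With `enable_thinking=True`, the model emits its chain of thought inside a `<think>...</think>` block before the final answer. The hypothetical helper below (not part of this repository) shows one way to separate the two, which matters if you want the reasoning hidden from end users; to use it, generate without a streamer and decode the new tokens with `tokenizer.decode` first.

```python
import re

def split_thinking(output_text: str) -> tuple[str, str]:
    """Separate Qwen3's <think>...</think> reasoning from the visible answer."""
    match = re.search(r"<think>(.*?)</think>", output_text, re.DOTALL)
    reasoning = match.group(1).strip() if match else ""
    # Everything outside the think block is the user-facing answer.
    answer = re.sub(r"<think>.*?</think>", "", output_text, flags=re.DOTALL).strip()
    return reasoning, answer
```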

# 🧪 Future Work

- Domain expansion to broader emotional intelligence tasks
- Controlled reasoning output (hidden CoT vs visible CoT)
- Evaluation via human annotation
- Cross-cultural emotional adaptation