library_name: peft
---

# Model Description

- **Developed by:** khazarai
- **License:** apache-2.0

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth).

This model is a QLoRA fine-tuned version of unsloth/qwen3-14b-unsloth-bnb-4bit, originally based on the Qwen3-14B architecture developed by the Qwen Team.

The model has been fine-tuned on the Chain of Thought – Heartbreak & Breakups Dataset (MIT licensed), consisting of 9.8k high-quality Q&A pairs focused on emotional processing, coping strategies, and relationship dynamics following breakups.

The goal of this fine-tuning is to enhance:

- Emotional reasoning capability
- Structured chain-of-thought generation
- Empathetic and psychologically grounded responses
- Relationship pattern analysis
- Identity reconstruction and self-esteem rebuilding guidance

# 🧠 Base Model

- Base architecture: Qwen3-14B
- Variant: unsloth/qwen3-14b-unsloth-bnb-4bit
- Quantization: 4-bit (bitsandbytes)
- Fine-tuning method: QLoRA
- Adapter type: LoRA
- Training precision: 4-bit base + 16-bit adapters
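
Since the stack above is a 4-bit bitsandbytes base plus LoRA adapters, a minimal loading sketch may be useful. The base model id comes from this card; the adapter repo id below is a placeholder, not a real repository name, and the calls follow the standard `transformers`/`peft` APIs.

```python
# Minimal sketch: attach this card's LoRA adapters to the 4-bit base model.
# BASE_ID is taken from the card; ADAPTER_ID is a placeholder to replace
# with this model's actual Hub repository id.
BASE_ID = "unsloth/qwen3-14b-unsloth-bnb-4bit"
ADAPTER_ID = "khazarai/<adapter-repo>"  # placeholder

def load_model():
    # Imports kept local so the sketch reads without the libraries installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
    base = AutoModelForCausalLM.from_pretrained(BASE_ID, device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)  # LoRA weights on top
    return tokenizer, model
```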

# 🎯 Intended Use

This model is intended for:

- Mental health–adjacent AI assistants
- Relationship guidance systems
- Emotional reasoning research
- Chain-of-thought alignment experiments
- NLP research on structured reasoning in affective domains

The model aims to produce:

- Step-by-step reasoning
- Balanced perspectives
- Reduced reactive or extreme advice
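
To elicit this step-by-step style, inputs should follow the base model's chat format. A small sketch of building the message list (the example question is illustrative, not from the dataset):

```python
# Sketch of a chat-format input for the fine-tuned model.
# The question text is illustrative only.
def build_messages(question: str) -> list:
    return [
        {"role": "user", "content": question},
    ]

messages = build_messages("How do I stop replaying the breakup in my head?")
# With a loaded tokenizer, the prompt would then be rendered via
# tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```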

# ⚠️ Limitations

- Not a substitute for licensed therapy
- May generate plausible but not clinically validated advice
- Trained on synthetic / curated emotional scenarios
- Chain-of-thought exposure may increase verbosity
- Emotional nuance outside the breakup domain may be limited

This model should not be used for crisis intervention or high-risk mental health scenarios.