---
language:
- en
license: apache-2.0
tags:
- mental-health
- psychiatry
- psychology
- qwen3
- medical
- therapy
- gguf
base_model: Qwen/Qwen3-4B-Thinking-2507
library_name: transformers
pipeline_tag: text-generation
---

# Luna 1.0 - Psychiatric AI Companion

## Model Description

Luna is a 4B-parameter psychiatric AI assistant built on Qwen3-4B-Thinking-2507 through eight-stage curriculum learning. Created by Dr. Suvadeep.

**Training Stages:**
- Stages 1-2: Psychiatric knowledge (DSM-5, medications, CBT, counseling)
- Stages 3-7: Empathy & conversation skills (30,000+ dialogues)
- Stage 8: Identity & anti-refusal training

**Capabilities:**
- DSM-5/ICD-11 diagnoses
- Medication recommendations with dosages
- CBT/DBT/ACT psychotherapy
- Crisis support without deflection
- Empathetic conversation

## Files

| File | Size | Intended use |
|------|------|--------------|
| model.safetensors | 8 GB | Training/fine-tuning |
| Luna-4B-thinking-Q4_K_M.gguf | 2.5 GB | GTX 1050 Ti |
| Luna-4B-thinking-Q3_K_M.gguf | 1.8 GB | iPhone 15 (recommended) |
| Luna-4B-thinking-Q2_K.gguf | 1.3 GB | iPhone 15 (compact) |
| Luna-4B-thinking-Q8_0.gguf | 4.5 GB | High-end GPUs |
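
As a rough sanity check, file size divided by parameter count gives the effective bits per weight for each quantization. This is a back-of-the-envelope sketch only: the 4e9 parameter count is assumed from the model name, and exact GGUF sizes vary with metadata and per-layer quantization mixes.

```python
# Approximate bits per weight: file bytes * 8 / parameter count.
# Sizes are the rounded values from the table above; 4e9 params assumed.
PARAMS = 4e9

def bits_per_weight(size_gb: float) -> float:
    return size_gb * 1e9 * 8 / PARAMS

for name, gb in [("Q8_0", 4.5), ("Q4_K_M", 2.5), ("Q3_K_M", 1.8), ("Q2_K", 1.3)]:
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

The results line up roughly with the quantization names (Q8_0 near 8-9 bits, Q4_K_M near 5, Q2_K near 2-3), which is a quick way to confirm a download is the file you expect.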

## Usage

### iPhone 15

Download `Luna-4B-thinking-Q3_K_M.gguf` and load it in LM Studio for iOS.

### Desktop (GTX 1050 Ti)

```python
from llama_cpp import Llama

# Load the Q4_K_M quantization; n_gpu_layers=35 offloads most of the
# model to the GPU, which fits in the 1050 Ti's 4 GB VRAM at this size.
llm = Llama(
    model_path="Luna-4B-thinking-Q4_K_M.gguf",
    n_ctx=2048,        # context window in tokens
    n_gpu_layers=35    # set to 0 for CPU-only inference
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "I feel depressed"}],
    max_tokens=1024
)
print(response["choices"][0]["message"]["content"])
```

## Training

- 8-stage curriculum learning
- LoRA (r=64, alpha=16)
- ~60,000 mental health conversations
- 20% replay buffers to prevent catastrophic forgetting
- Kaggle dual T4 GPUs
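
For readers unfamiliar with the LoRA hyperparameters above: `r` is the rank of the low-rank adapter matrices, and the adapter update is scaled by `alpha/r` (here 16/64 = 0.25). A minimal numpy sketch of the idea follows; the hidden size is illustrative, not the model's actual dimension, and this is not the training code.

```python
import numpy as np

d, r, alpha = 2560, 64, 16   # d is illustrative; r and alpha are from the card
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection; zero-init
                                        # so the adapter starts as a no-op

def adapted(x: np.ndarray) -> np.ndarray:
    # Effective weight W + (alpha / r) * B @ A, applied to x without
    # ever materializing the full d x d update matrix.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d)
assert np.allclose(adapted(x), W @ x)   # before training, output is unchanged
```

Because only `A` and `B` are trained, the adapter adds `2 * d * r` parameters per adapted matrix instead of `d * d`, which is what makes fine-tuning a 4B model feasible on a pair of T4s.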

## Disclaimer

Research model only. Not a replacement for professional medical advice.

## License

Apache 2.0