---
library_name: transformers
tags:
- gemma
- psychology
- mental-health
- lora
- instruction-tuning
- neuroquestai
license: mit
datasets:
- jkhedri/psychology-dataset
language:
- en
metrics:
- perplexity
base_model:
- google/gemma-2b-it
pipeline_tag: text-generation
---

# Model Card for Gemma-2b-it-Psych

## Model Summary

**Gemma-2b-it-Psych** is a domain-adapted version of `google/gemma-2b-it`, fine-tuned using LoRA on an instruction-based psychology dataset.  
The model is optimized to generate **empathetic, supportive, and professionally aligned psychological responses**, primarily for educational and research purposes.

This repository contains **LoRA adapters only**. The base model must be loaded separately.

---

## Model Details

### Model Description

- **Author:** Ederson Corbari (e@NeuroQuest.ai)
- **Date:** February 01, 2026
- **Model type:** Causal Language Model (LLM)
- **Language(s):** English
- **License:** Same as base model (`google/gemma-2b-it`)
- **Finetuned from model:** `google/gemma-2b-it`
- **Fine-tuning method:** LoRA / QLoRA (parameter-efficient fine-tuning)

This model was fine-tuned using instruction–response pairs focused on psychological support.  
Only empathetic and therapeutically appropriate responses were retained during training, while judgmental or aggressive alternatives were excluded.

---

### Model Sources

- **Hugging Face Repository:** https://huggingface.co/ecorbari/Gemma-2b-it-Psych  
- **GitHub Repository:** https://github.com/edersoncorbari/fine-tune-llm 
- **Base Model:** https://huggingface.co/google/gemma-2b-it  

---

## Uses

### Direct Use

This model is intended for:

- Research and experimentation with instruction-tuned LLMs
- Educational demonstrations of LoRA fine-tuning
- Prompt engineering and behavioral analysis in psychology-related domains

The model requires the base Gemma-2B weights to be loaded together with the LoRA adapters.

---

### Downstream Use

- Further fine-tuning on related domains
- Adapter merging to create a standalone model
- Quantization for efficient local inference (e.g., GGUF formats)

---

### Out-of-Scope Use

- Clinical diagnosis or treatment
- Real-world mental health interventions without professional supervision
- High-stakes decision-making
- Autonomous counseling systems

---

## Bias, Risks, and Limitations

- The model may generate inaccurate or incomplete information.
- It does not replace licensed mental health professionals.
- Responses may reflect biases present in the training data.
- Empathy does not guarantee correctness or safety in all contexts.

---

### Recommendations

Users should apply human oversight, especially in sensitive scenarios.  
This model is best suited for **research, learning, and proof-of-concept applications**.

---

## How to Get Started with the Model

```python
import torch

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "google/gemma-2b-it"
adapter_model = "ecorbari/Gemma-2b-it-Psych"

# Load the base model in half precision, then apply the LoRA adapters on top.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_model)

# gemma-2b-it is chat-tuned, so wrap the prompt in its chat template.
messages = [
    {"role": "user", "content": "How can I cope with anxiety during stressful situations?"}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Evaluation

### Testing Data

The model was evaluated on a held-out validation split of the psychology instruction dataset used during fine-tuning.

---

### Metrics

The following metrics were used to evaluate the model during training:

- Cross-entropy loss (per token)
- Perplexity (exp(loss))

---

### Results

| Metric     | Value (approx.) |
|------------|-----------------|
| Eval Loss  | 0.60 – 0.70     |
| Perplexity | 1.8 – 2.0       |

Perplexity was computed as the exponential of the evaluation loss. Lower values indicate higher confidence in next-token prediction.
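As a quick sanity check, the relationship between the two reported metrics can be reproduced directly; the loss value used here is an illustrative point inside the reported range:

```python
import math

# Perplexity is exp(mean per-token cross-entropy loss).
eval_loss = 0.65  # illustrative value inside the reported 0.60-0.70 range
perplexity = math.exp(eval_loss)
print(round(perplexity, 2))  # about 1.92, consistent with the table above
```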

These metrics reflect convergence and generalization within the target domain, but do not directly assess clinical correctness or psychological safety.