---
license: apache-2.0
language:
- en
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
datasets:
- ShenLab/MentalChat16K
tags:
- unsloth
- lora
- peft
- mental-health
---
# TinyLlama MentalChat LoRA
This repository contains a **LoRA adapter** fine-tuned on the
[ShenLab/MentalChat16K](https://huggingface.co/datasets/ShenLab/MentalChat16K) dataset
for **mental health–related supportive dialogue**.
⚠️ **This is not a full model.**
It is a lightweight **LoRA adapter** that must be used together with the base model.
---
## 🔍 Model Overview
- **Base Model**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Fine-tuning Method**: LoRA (PEFT)
- **Domain**: Mental health supportive conversations
- **Language**: English
- **Adapter Size**: ~50 MB
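Because this repository ships only the PEFT adapter weights and a small config file, you can inspect the adapter without downloading the base model. A minimal sketch using PEFT's `PeftConfig` (the printed values depend on how the adapter was trained):

```python
from peft import PeftConfig

# Fetch only the adapter's config file from the Hub
config = PeftConfig.from_pretrained("BEncoderRT/tinyllama-mentalchat-lora")

print(config.peft_type)                # LORA
print(config.base_model_name_or_path)  # TinyLlama/TinyLlama-1.1B-Chat-v1.0
```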
---
## 📚 Training Data
The adapter was fine-tuned on the **MentalChat16K** dataset, which consists of
mental health–related conversations between users and assistants.
- **Dataset**: `ShenLab/MentalChat16K`
- **Language**: English
- **Task**: Supportive, empathetic responses in mental health contexts
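To take a quick look at the data yourself, the dataset can be loaded with the 🤗 `datasets` library. A minimal sketch (the `train` split name is an assumption; check the dataset card for the exact splits and columns):

```python
from datasets import load_dataset

# Download MentalChat16K from the Hugging Face Hub
ds = load_dataset("ShenLab/MentalChat16K", split="train")

print(ds)     # row count and column names
print(ds[0])  # one example record
```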
---
## 🚀 Usage
### Load Base Model + LoRA Adapter
```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch

# Load the 4-bit quantized base model and its tokenizer
base_model, tokenizer = FastLanguageModel.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach the LoRA adapter. Note that PEFT injects the adapter into the
# base model's modules in place, so `base_model` is modified as well.
lora_model = PeftModel.from_pretrained(
    base_model,
    "BEncoderRT/tinyllama-mentalchat-lora",
)
FastLanguageModel.for_inference(lora_model)

def generate(model, prompt, max_new_tokens=200):
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
        )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = """### Instruction:
I feel empty and hopeless lately. Nothing seems meaningful.
### Response:
"""

# Because the adapter is injected in place, generate the base-model
# comparison with the adapter temporarily disabled.
print("=== Base Model ===")
with lora_model.disable_adapter():
    print(generate(lora_model, prompt))

print("\n=== LoRA Model ===")
print(generate(lora_model, prompt))
```
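### Merge the Adapter into the Base Model (Optional)
If you want a standalone checkpoint (for example, to serve the model without `peft` at inference time), the adapter can be folded into the base model with PEFT's `merge_and_unload()`. A sketch, assuming the full-precision base model is loaded (merging is typically done on unquantized weights); the output directory name is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Load the full-precision base model and tokenizer
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the adapter, then fold its weights into the base model
merged = PeftModel.from_pretrained(base, "BEncoderRT/tinyllama-mentalchat-lora")
merged = merged.merge_and_unload()

# Save a standalone checkpoint (the directory name is illustrative)
merged.save_pretrained("tinyllama-mentalchat-merged")
tokenizer.save_pretrained("tinyllama-mentalchat-merged")
```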