BEncoderRT committed (verified)
Commit e2d4828 · 1 parent: b6114f8

Update README.md

Files changed (1): README.md (+22 -19)
README.md CHANGED
```diff
@@ -8,28 +8,30 @@ pipeline_tag: text-generation
 datasets:
 - ShenLab/MentalChat16K
 tags:
-- usloth
-metrics:
-- accuracy
+- unsloth
+- lora
+- peft
+- mental-health
 ---
+
 # TinyLlama MentalChat LoRA
 
-This repository contains a **LoRA adapter** fine-tuned on the
-[ShenLab/MentalChat16K](https://huggingface.co/datasets/ShenLab/MentalChat16K)
-dataset for **mental health–related supportive dialogue**.
+This repository contains a **LoRA adapter** fine-tuned on the
+[ShenLab/MentalChat16K](https://huggingface.co/datasets/ShenLab/MentalChat16K) dataset
+for **mental health–related supportive dialogue**.
 
-⚠️ This is **not a full model**. It is a lightweight LoRA adapter that must be
-used together with the base model.
+⚠️ **This is not a full model.**
+It is a lightweight **LoRA adapter** that must be used together with the base model.
 
 ---
 
 ## 🔍 Model Overview
 
-- **Base Model**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
-- **Fine-tuning Method**: LoRA (PEFT)
-- **Domain**: Mental health supportive conversations
-- **Language**: English
-- **Parameter Size (Adapter)**: ~50MB
+- **Base Model**: TinyLlama/TinyLlama-1.1B-Chat-v1.0
+- **Fine-tuning Method**: LoRA (PEFT)
+- **Domain**: Mental health supportive conversations
+- **Language**: English
+- **Adapter Size**: ~50 MB
 
 ---
 
@@ -38,9 +40,9 @@ used together with the base model.
 The model was fine-tuned using the **MentalChat16K** dataset, which consists of
 mental health–related conversations between users and assistants.
 
-- Dataset: `ShenLab/MentalChat16K`
-- Language: English
-- Task: Supportive, empathetic responses in mental health contexts
+- **Dataset**: `ShenLab/MentalChat16K`
+- **Language**: English
+- **Task**: Supportive, empathetic responses in mental health contexts
 
 ---
 
@@ -48,18 +50,19 @@ mental health–related conversations between users and assistants.
 
 ### Load Base Model + LoRA Adapter
 
+```python
 from unsloth import FastLanguageModel
 from peft import PeftModel
 import torch
 
-# Base model
+# Load base model
 base_model, tokenizer = FastLanguageModel.from_pretrained(
     "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
     max_seq_length=2048,
     load_in_4bit=True,
 )
 
-# LoRA model
+# Load LoRA adapter
 lora_model = PeftModel.from_pretrained(
     base_model,
     "BEncoderRT/tinyllama-mentalchat-lora",
@@ -82,6 +85,7 @@ def generate(model, prompt, max_new_tokens=200):
 
     return tokenizer.decode(outputs[0], skip_special_tokens=True)
 
+
 prompt = """### Instruction:
 I feel empty and hopeless lately. Nothing seems meaningful.
 
@@ -93,4 +97,3 @@ print(generate(base_model, prompt))
 
 print("\n=== LoRA Model ===")
 print(generate(lora_model, prompt))
-
```
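The hunks above show only fragments of the README's usage example: the `generate` signature and its final `decode` line, the start of the prompt, and the two comparison `print` calls. For readers who want to run the example outside the diff view, here is a minimal sketch of the elided parts. The tokenization call, the generation settings, the `### Response:` section, and the base-model label are not visible in the commit and are assumptions; `base_model`, `lora_model`, and `tokenizer` come from the loading code shown in the diff.

```python
import torch

def generate(model, prompt, max_new_tokens=200):
    # Tokenize the prompt and move it to the model's device (assumed; not shown in the diff)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generation settings are assumptions; the README's actual values are not visible here
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # This return line is the one visible in the diff
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Alpaca-style prompt from the diff; the "### Response:" block is assumed
prompt = """### Instruction:
I feel empty and hopeless lately. Nothing seems meaningful.

### Response:
"""

print("=== Base Model ===")  # label assumed for symmetry; not visible in the diff
print(generate(base_model, prompt))

print("\n=== LoRA Model ===")
print(generate(lora_model, prompt))
```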
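The updated README stresses that the adapter must be loaded on top of the base model. A related option not covered by this commit: PEFT can fold the LoRA weights into the base weights to produce a standalone checkpoint. A minimal sketch, assuming the adapter repo id from the diff and a full-precision (non-4-bit) base load, since merging into quantized weights is not straightforward:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in full precision so the LoRA deltas can be merged cleanly
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")

# Attach the LoRA adapter, then fold its weights into the base model
model = PeftModel.from_pretrained(base, "BEncoderRT/tinyllama-mentalchat-lora")
merged = model.merge_and_unload()

# Save a standalone checkpoint that no longer needs the adapter at load time
merged.save_pretrained("tinyllama-mentalchat-merged")
tokenizer.save_pretrained("tinyllama-mentalchat-merged")
```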