maliksaad committed on
Commit f6cdeca · verified · 1 Parent(s): 0b7118e

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +67 -36

README.md CHANGED
@@ -1,58 +1,89 @@
  ---
  base_model: HuggingFaceTB/SmolLM2-135M-Instruct
- library_name: transformers
- model_name: empathLM
  tags:
- - generated_from_trainer
- - trl
- - sft
- licence: license
  ---

- # Model Card for empathLM

- This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="maliksaad/empathLM", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
 
28
- ## Training procedure
29
 
30
-
 
31
 
 
32
 
 
33
 
34
- This model was trained with SFT.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
35
 
36
- ### Framework versions
37
 
38
- - TRL: 0.29.1
39
- - Transformers: 5.0.0
40
- - Pytorch: 2.10.0+cu128
41
- - Datasets: 4.8.3
42
- - Tokenizers: 0.22.2
 
 
 
 
43
 
44
- ## Citations
45
 
 
46
 
 
47
 
48
- Cite TRL as:
49
-
50
  ```bibtex
51
- @software{vonwerra2020trl,
52
- title = {{TRL: Transformers Reinforcement Learning}},
53
- author = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
54
- license = {Apache-2.0},
55
- url = {https://github.com/huggingface/trl},
56
- year = {2020}
57
  }
58
- ```
 
1
  ---
2
+ license: mit
3
  base_model: HuggingFaceTB/SmolLM2-135M-Instruct
 
 
4
  tags:
5
+ - empathy
6
+ - mental-health
7
+ - motivational-interviewing
8
+ - cognitive-behavioral-therapy
9
+ - fine-tuned
10
+ - emotional-support
11
+ - empathLM
12
+ language:
13
+ - en
14
  ---
15
 
+ # 🧠 EmpathLM

+ **Fine-tuned for Psychologically Safe & Persuasive Emotional Support**

+ EmpathLM is a fine-tuned version of [SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct)
+ trained to generate responses that combine **Motivational Interviewing (MI)** and **Cognitive Behavioral Therapy (CBT)** principles.

+ ## What Makes EmpathLM Unique

+ Unlike general-purpose language models, EmpathLM is specifically optimized to:
+ - ✅ **Validate emotions** without judgment
+ - ✅ **Reflect feelings** back to the person warmly
+ - ✅ **Gently shift perspective** without being manipulative
+ - ✅ **Ask powerful open questions** that encourage self-reflection
+ - ❌ **Never give unsolicited advice**

+ ## Benchmark Results

+ EmpathLM was benchmarked against GPT-4o-mini and a Groq baseline on 20 unseen test situations,
+ scored on four criteria: emotional_validation, advice_avoidance, perspective_shift, and overall_empathy.

+ *See the [GitHub repository](https://github.com/maliksaad/empathLM) for full benchmark results.*
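
A scoring loop over those four criteria can be sketched as follows. This is only an illustration of the rubric's shape: the `score_response` function and its keyword heuristics are hypothetical stand-ins, not the judge actually used in the benchmark (which lives in the GitHub repository).

```python
# Illustrative sketch of the four-criterion benchmark rubric.
# The keyword heuristics below are hypothetical, not the project's real judge.

def score_response(response: str) -> dict:
    """Score a reply on the benchmark's four criteria (0.0 or 1.0 each)."""
    text = response.lower()
    scores = {
        # Did the reply acknowledge the feeling?
        "emotional_validation": 1.0 if any(w in text for w in ("sounds", "feel", "understand")) else 0.0,
        # Did it avoid directive advice?
        "advice_avoidance": 0.0 if any(w in text for w in ("you should", "you need to")) else 1.0,
        # Did it invite reflection with a question?
        "perspective_shift": 1.0 if "?" in text else 0.0,
    }
    scores["overall_empathy"] = sum(scores.values()) / 3
    return scores

print(score_response("That sounds really painful. What would passing mean to you?"))
# → all four criteria score 1.0
```

In the actual benchmark each model's reply to the 20 test situations would be passed through a scorer like this and the per-criterion scores averaged.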
 
+ ## Usage

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("maliksaad/empathLM")
+ model = AutoModelForCausalLM.from_pretrained("maliksaad/empathLM")
+
+ SYSTEM_PROMPT = """You are EmpathLM — an emotionally intelligent AI trained in Motivational Interviewing
+ and Cognitive Behavioral Therapy. When someone shares emotional pain:
+ - Validate their feelings without judgment
+ - Reflect their emotions back to them
+ - Ask one powerful open-ended question
+ - NEVER give unsolicited advice"""
+
+ messages = [
+     {"role": "system", "content": SYSTEM_PROMPT},
+     {"role": "user", "content": "I failed my exam again. I feel like I'm just not smart enough."},
+ ]
+
+ # Render the chat template and generate; do_sample=True is needed for temperature to take effect.
+ inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
+ outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
+ # Decode only the newly generated tokens, skipping the prompt.
+ print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
+ ```

+ ## Training Details

+ | Parameter | Value |
+ |-----------|-------|
+ | Base Model | SmolLM2-135M-Instruct |
+ | Training Examples | ~180 (90% of 200) |
+ | Epochs | 3 |
+ | Batch Size | 8 |
+ | Learning Rate | 2e-5 |
+ | Max Sequence Length | 512 |
+ | Training Platform | Kaggle (Free GPU) |
 

+ ## Dataset

+ Trained on [maliksaad/empathLM-dataset](https://huggingface.co/datasets/maliksaad/empathLM-dataset).

+ ## Citation

  ```bibtex
+ @misc{saad2025empathLM,
+     title  = {EmpathLM: A Psychologically-Grounded Empathetic Response Model},
+     author = {Saad, Muhammad},
+     year   = {2025},
+     url    = {https://huggingface.co/maliksaad/empathLM}
  }
+ ```