ecorbari committed · verified
Commit dd4fafc · Parent(s): 656ab66

Update README.md

Files changed (1): README.md (+21 -2)

README.md CHANGED
@@ -30,7 +30,7 @@ The model is optimized to generate **empathetic, supportive, and professionally
 
 ### Model Description
 
-- **Author:** Ederson Corbari (NeuroQuest.ai)
+- **Author:** Ederson Corbari (e@NeuroQuest.ai)
 - **Date:** February 01, 2026
 - **Model type:** Causal Language Model (LLM)
 - **Language(s):** English
@@ -82,4 +82,23 @@ pipe = pipeline(
 prompt = "I feel anxious and overwhelmed lately. What should I do?"
 result = pipe(prompt, max_new_tokens=200)
 
-print(result[0]["generated_text"])
+print(result[0]["generated_text"])
+```
+
+---
+
+## Bias, Risks, and Limitations
+
+**Safety Disclaimer:** The model may generate inaccurate information. Empathy in text generation does not imply clinical safety or medical correctness.
+
+**Data Bias:** Responses may reflect biases inherent in the `jkhedri/psychology-dataset`.
+
+**Human Oversight:** Users should apply human judgment, especially in sensitive conversational settings.
+
+---
+
+## Training and Merge Process
+
+The workflow involved loading the `google/gemma-2b-it` model in float16 precision, attaching the LoRA adapters trained on
+preference-based psychological data, and merging the weights into a single model for downstream use. This ensures compatibility with
+environments that do not support PEFT or that require lower inference latency.
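
The merge step the new section describes (base model in float16, LoRA adapters attached, weights folded in) can be sketched roughly as below. This is a hypothetical illustration, not the author's actual script: only `google/gemma-2b-it` comes from the README, and the adapter path and output directory are placeholder arguments.

```python
def merge_lora_into_base(base_id: str, adapter_path: str, out_dir: str) -> None:
    """Load a base model in float16, attach LoRA adapters, merge, and save.

    Hypothetical sketch of the workflow described above; paths are placeholders.
    Requires `torch`, `transformers`, and `peft` to be installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    # Load the base model in half precision.
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

    # Attach the trained LoRA adapters on top of the base weights.
    model = PeftModel.from_pretrained(base, adapter_path)

    # Fold the LoRA deltas into the base weights, yielding a plain
    # checkpoint that can be loaded without PEFT installed.
    merged = model.merge_and_unload()

    merged.save_pretrained(out_dir)
    AutoTokenizer.from_pretrained(base_id).save_pretrained(out_dir)


# Example invocation (downloads weights; not executed here):
# merge_lora_into_base("google/gemma-2b-it", "path/to/lora-adapters", "merged-model")
```

The resulting directory is a standard `transformers` checkpoint, which is why the merged model can be served by the `pipeline` snippet earlier in the README without any PEFT dependency.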