Update README.md
…and directly outputs a **structured JSON** containing a professional risk evaluation.
### Output Format

```json
{
  "intent": ["passive_suicide_ideation", "mild_distress", ...],
  "risk": "low" | "medium" | "high" | "ambiguous",
  ...
  "recommended_action": ["empathize", "deep_assessment", ...]
}
```
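Downstream code can sanity-check this structure before acting on it. A minimal sketch in Python — the field names and allowed risk levels are taken from the example above; the full schema, including the elided fields, is assumed:

```python
import json

# Allowed values taken from the Output Format example above; any
# additional fields in the real schema are not covered here.
RISK_LEVELS = {"low", "medium", "high", "ambiguous"}

def check_assessment(raw: str) -> dict:
    """Parse a model response and verify the core risk-assessment fields."""
    data = json.loads(raw)
    if not isinstance(data.get("intent"), list):
        raise ValueError("'intent' must be a list of labels")
    if data.get("risk") not in RISK_LEVELS:
        raise ValueError(f"unknown risk level: {data.get('risk')!r}")
    if not isinstance(data.get("recommended_action"), list):
        raise ValueError("'recommended_action' must be a list")
    return data

example = '{"intent": ["mild_distress"], "risk": "low", "recommended_action": ["empathize"]}'
```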

### Key Capabilities

- Accurately detects subtle and indirect expressions of psychological distress common in Chinese (e.g., “活着没意思” “living feels meaningless”, “快受不了了” “I can’t take it anymore”, “不如解脱” “it would be better to be released from it all”)

- Adapter type: LoRA (r=16, alpha=32, targeting q/k/v/o_proj)
- Dataset: Custom high-quality Chinese mental health risk assessment data (single-turn + multi-turn)
- Training objective: Supervised fine-tuning with strict JSON output formatting and EOS enforcement for clean generation
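The adapter hyperparameters listed above map directly onto a `peft` `LoraConfig`. A sketch of how such an adapter could be declared — the r, alpha, and target-module values come from the README, while everything else is an assumption:

```python
from peft import LoraConfig

# r, lora_alpha, and target_modules follow the training details above;
# task_type is an assumed (typical) choice for a causal LM, and all
# other LoraConfig fields are left at their defaults.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```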

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# The model/adapter loading lines are elided in this excerpt; a typical
# PEFT pattern ("<adapter-repo>" is a placeholder, not the real id):
# config = PeftConfig.from_pretrained("<adapter-repo>")
# base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
# model = PeftModel.from_pretrained(base, "<adapter-repo>")
# tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# "任务指令" = "Task instruction"; the prompt body is elided in this excerpt.
prompt = """### 任务指令:
...
"""

inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
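Because `skip_special_tokens=True` still leaves the echoed prompt in front of the model's reply, a small helper can isolate the JSON object before parsing it. A sketch, assuming the reply contains one top-level `{...}` object (as the strict-JSON training objective implies) and no braces inside string values:

```python
import json

def extract_json(decoded: str) -> dict:
    """Pull the first balanced top-level JSON object out of decoded text.

    Brace counting is a simplification: it would miscount braces that
    appear inside JSON string values.
    """
    start = decoded.find("{")
    if start == -1:
        raise ValueError("no JSON object in model output")
    depth = 0
    for i, ch in enumerate(decoded[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return json.loads(decoded[start : i + 1])
    raise ValueError("unbalanced JSON object in model output")
```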