# User_Intent_Risk_Triage (Qwen2.5-0.5B LoRA)
This is a **mental health risk assessment model** based on **Qwen/Qwen2.5-0.5B-Instruct**, fine-tuned with LoRA on a Chinese psychological risk dataset containing both single-turn and multi-turn conversations.
### Model Purpose
The model analyzes user messages or full conversation histories to detect potential mental health risks, especially depression and suicidal ideation expressed in natural, often indirect Chinese language (e.g., "活着没意思" / "life feels pointless", "快受不了了" / "I can't take it anymore", "想解脱" / "I want release").
Given input text or a conversation, it directly outputs a structured JSON with:
- `intent`: Detected psychological states (e.g., "depression", "suicide_ideation", "mild_distress")
- `risk`: Risk level ("low", "medium", "high")
- `strategy` / `recommended_action`: Suggested response strategies (e.g., "empathize", "support", "clarify", "escalate", "provide_resources")
- `uncertainty`: Confidence in the assessment
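A downstream service will usually want to validate this output before acting on it. Below is a minimal sketch of such a check; the field names come from the list above, but the required-key set and value checks are illustrative assumptions, not part of the model:

```python
import json

# Allowed values taken from the examples above; real outputs may use
# additional labels, so treat these sets as illustrative.
RISK_LEVELS = {"low", "medium", "high"}
REQUIRED_KEYS = {"intent", "risk", "uncertainty"}

def parse_assessment(raw: str) -> dict:
    """Parse and sanity-check one model response.

    Raises ValueError on malformed output so callers can fall back to
    a safe default (e.g. routing the conversation to human review).
    """
    data = json.loads(raw)
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    if data["risk"] not in RISK_LEVELS:
        raise ValueError(f"unknown risk level: {data['risk']!r}")
    return data

# Example response in the schema described above:
raw = '{"intent": "suicide_ideation", "risk": "high", "strategy": "escalate", "uncertainty": 0.12}'
assessment = parse_assessment(raw)
print(assessment["risk"])  # -> high
```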
### Use Cases
- Safety layer in mental health chatbots or counseling apps
- Early detection of suicidal ideation in user conversations
- Automated triage for online psychological support platforms
- Research on mental health trends in Chinese text data
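For the triage use case, the `risk` and `uncertainty` fields could drive routing decisions. A hypothetical sketch follows; the queue names and the uncertainty threshold are invented for illustration and should be tuned for a real deployment:

```python
def route(assessment: dict) -> str:
    """Map one risk assessment to a handling queue (names are illustrative)."""
    # Fail safe: treat a missing or unparseable risk level as "high".
    risk = assessment.get("risk", "high")
    # Escalate when the model is unsure, even at a lower risk level.
    uncertain = assessment.get("uncertainty", 1.0) > 0.5
    if risk == "high" or (risk == "medium" and uncertain):
        return "human_crisis_team"
    if risk == "medium":
        return "priority_bot_with_resources"
    return "standard_bot"

print(route({"risk": "high", "uncertainty": 0.1}))  # -> human_crisis_team
print(route({"risk": "low", "uncertainty": 0.2}))   # -> standard_bot
```

Note that the unknown/missing case deliberately escalates rather than defaulting to the cheapest path, in line with the human-oversight caveat below.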
### Limitations
- Trained primarily on Chinese data — performance on other languages is not guaranteed
- Model size is small (0.5B); may miss very subtle or context-heavy cases
- Always combine with human oversight in real-world crisis intervention
### Base Model
- Qwen/Qwen2.5-0.5B-Instruct
### Fine-tuning Method
- LoRA (r=16, alpha=32, targeting the attention projections `q_proj`, `k_proj`, `v_proj`, `o_proj`)
- Supervised fine-tuning focused on clean JSON output with EOS enforcement
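Because the fine-tuning targets clean JSON terminated by EOS, a caller can additionally guard against decoding overrun (stray tokens after the payload) by extracting only the first balanced JSON object. A minimal, dependency-free sketch:

```python
def first_json_object(text: str) -> str:
    """Return the first balanced {...} object in `text`.

    Useful as a defensive guard in case generation runs past the
    intended EOS and appends extra text after the JSON payload.
    """
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    in_string = False
    escaped = False
    for i, ch in enumerate(text[start:], start):
        if in_string:
            # Skip braces that occur inside string values.
            if escaped:
                escaped = False
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                return text[start : i + 1]
    raise ValueError("unbalanced JSON object")

print(first_json_object('{"risk": "low"} extra tokens'))  # -> {"risk": "low"}
```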
Feel free to modify or extend this card. Your work contributes meaningfully to mental health AI safety — thank you for building it!
### adapter_config.json

```json
{
  "alora_invocation_tokens": null,
  "alpha_pattern": {},
  "arrow_config": null,
  "auto_mapping": null,
  "base_model_name_or_path": "Qwen/Qwen2.5-0.5B-Instruct",
  "bias": "none",
  "corda_config": null,
  "ensure_weight_tying": false,
  "eva_config": null,
  "exclude_modules": null,
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 32,
  "lora_bias": false,
  "lora_dropout": 0.05,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "peft_version": "0.18.0",
  "qalora_group_size": 16,
  "r": 16,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "q_proj",
    "v_proj",
    "k_proj",
    "o_proj"
  ],
  "target_parameters": null,
  "task_type": "CAUSAL_LM",
  "trainable_token_indices": null,
  "use_dora": false,
  "use_qalora": false,
  "use_rslora": false
}
```