---
language: ko
license: mit
library_name: transformers
tags:
- text-classification
- korean
- mental-health
- depression-detection
- bert
pipeline_tag: text-classification
---

# Korean Depression/Anxiety Detection Model

A depression/anxiety detection model based on Korean text.

## Model Description

- **Model Type:** BERT for Sequence Classification
- **Language:** Korean (ko)
- **Task:** Binary Classification (Normal vs. Depression/Anxiety)
- **Base Model:** BERT (Korean)

## Labels

| Label | Description |
|-------|-------------|
| 0 | 정상 (Normal) |
| 1 | 우울/불안 (Depression/Anxiety) |

## Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/final_depression_model")
model = AutoModelForSequenceClassification.from_pretrained("YOUR_USERNAME/final_depression_model")
model.eval()

# Prediction helper
def predict(text):
    # max_length matches the model's 300 position embeddings (see Model Details)
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=300)
    with torch.no_grad():
        outputs = model(**inputs)
        probs = torch.softmax(outputs.logits, dim=-1)
        prediction = torch.argmax(probs, dim=-1).item()
    return {
        "label": prediction,  # 0 = normal (정상), 1 = depression/anxiety (우울/불안)
        "confidence": probs[0][prediction].item()
    }

# Usage example ("Lately everything feels so hard and I don't want to do anything")
result = predict("요즘 너무 힘들고 아무것도 하기 싫어요")
print(result)
```
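The label/confidence step inside `predict` can be checked in isolation, without downloading the model weights. A minimal sketch with a made-up logits tensor (the values are illustrative, not real model output):

```python
import torch

# Mirrors the softmax/argmax step of `predict` above.
def logits_to_prediction(logits):
    probs = torch.softmax(logits, dim=-1)          # logits -> class probabilities
    prediction = torch.argmax(probs, dim=-1).item()  # index of the most likely class
    return {"label": prediction, "confidence": probs[0][prediction].item()}

# Dummy logits: class 1 has the larger logit, so it wins.
result = logits_to_prediction(torch.tensor([[0.2, 1.5]]))
print(result)  # label 1, confidence ≈ 0.79
```

Because `softmax` is monotonic, the predicted label is simply the class with the larger logit; the softmax only matters for the reported confidence.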

## Model Details

- **Architecture:** BertForSequenceClassification
- **Hidden Size:** 768
- **Attention Heads:** 12
- **Hidden Layers:** 12
- **Vocab Size:** 30,000
- **Max Position Embeddings:** 300

## Intended Use

This model was developed to help assess a user's emotional state in mental-health research and chatbot services.

## Limitations

- This model is not a professional medical diagnostic tool.
- For an actual diagnosis of depression or an anxiety disorder, always consult a qualified medical professional.
- The model's predictions should be treated as a reference signal only.

## License

MIT License