Korean Depression/Anxiety Detection Model

ํ•œ๊ตญ์–ด ํ…์ŠคํŠธ ๊ธฐ๋ฐ˜ ์šฐ์šธ/๋ถˆ์•ˆ ๊ฐ์ง€ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค.

Model Description

  • Model Type: BERT for Sequence Classification
  • Language: Korean (ko)
  • Task: Binary Classification (Normal vs. Depression/Anxiety)
  • Base Model: BERT (Korean)

Labels

Label  Description
0      ์ •์ƒ (Normal)
1      ์šฐ์šธ/๋ถˆ์•ˆ (Depression/Anxiety)
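
A minimal sketch of mapping the integer prediction back to a readable label. The LABEL_MAP dict below is a local convenience for illustration, not necessarily the id2label mapping stored in the repo's config.json:

LABEL_MAP = {0: "์ •์ƒ (Normal)", 1: "์šฐ์šธ/๋ถˆ์•ˆ (Depression/Anxiety)"}

def label_name(prediction: int) -> str:
    # prediction is the integer class index returned by the model (0 or 1)
    return LABEL_MAP[prediction]

print(label_name(1))  # ์šฐ์šธ/๋ถˆ์•ˆ (Depression/Anxiety)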

Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# ๋ชจ๋ธ ๋กœ๋“œ
tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/final_depression_model")
model = AutoModelForSequenceClassification.from_pretrained("YOUR_USERNAME/final_depression_model")
model.eval()  # switch to inference mode (disables dropout)

# Predict
def predict(text):
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
        probs = torch.softmax(outputs.logits, dim=-1)
        prediction = torch.argmax(probs, dim=-1).item()
    return {
        "label": prediction,  # 0=์ •์ƒ, 1=์šฐ์šธ/๋ถˆ์•ˆ
        "confidence": probs[0][prediction].item()
    }

# Example usage
result = predict("์š”์ฆ˜ ๋„ˆ๋ฌด ํž˜๋“ค๊ณ  ์•„๋ฌด๊ฒƒ๋„ ํ•˜๊ธฐ ์‹ซ์–ด์š”")  # "Lately everything is so hard and I don't want to do anything"
print(result)
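
Alternatively, the same check can be done with the transformers pipeline API. This is a minimal sketch; "YOUR_USERNAME/final_depression_model" remains the placeholder repo id from above, and unless id2label is set in the repo's config, the returned labels default to LABEL_0/LABEL_1:

from transformers import pipeline

# One-line alternative to the manual predict() above; the repo id is a placeholder.
classifier = pipeline("text-classification", model="YOUR_USERNAME/final_depression_model")
print(classifier("์š”์ฆ˜ ๋„ˆ๋ฌด ํž˜๋“ค๊ณ  ์•„๋ฌด๊ฒƒ๋„ ํ•˜๊ธฐ ์‹ซ์–ด์š”"))
# Returns a list like [{'label': ..., 'score': ...}]; label names follow the
# repo's id2label config (LABEL_0 / LABEL_1 by default).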

Model Details

  • Architecture: BertForSequenceClassification
  • Hidden Size: 768
  • Attention Heads: 12
  • Hidden Layers: 12
  • Vocab Size: 30,000
  • Max Position Embeddings: 300
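
Because Max Position Embeddings is 300, inputs longer than 300 tokens must be truncated before they reach the model. A minimal sketch passing the limit explicitly (the repo id is again the placeholder from the Usage section):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("YOUR_USERNAME/final_depression_model")

# Truncate explicitly at the model's 300-token positional limit.
long_text = "์š”์ฆ˜ ๋„ˆ๋ฌด ํž˜๋“ค์–ด์š”. " * 200
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=300)
print(inputs["input_ids"].shape)  # torch.Size([1, 300])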

Intended Use

์ด ๋ชจ๋ธ์€ ์ •์‹ ๊ฑด๊ฐ• ๊ด€๋ จ ์—ฐ๊ตฌ ๋ฐ ์ฑ—๋ด‡ ์„œ๋น„์Šค์—์„œ ์‚ฌ์šฉ์ž์˜ ๊ฐ์ • ์ƒํƒœ๋ฅผ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•œ ๋ชฉ์ ์œผ๋กœ ๊ฐœ๋ฐœ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

Limitations

  • ์ด ๋ชจ๋ธ์€ ์ „๋ฌธ์ ์ธ ์˜๋ฃŒ ์ง„๋‹จ ๋„๊ตฌ๊ฐ€ ์•„๋‹™๋‹ˆ๋‹ค.
  • ์‹ค์ œ ์šฐ์šธ์ฆ/๋ถˆ์•ˆ์žฅ์•  ์ง„๋‹จ์€ ๋ฐ˜๋“œ์‹œ ์ „๋ฌธ ์˜๋ฃŒ์ง„๊ณผ ์ƒ๋‹ดํ•˜์„ธ์š”.
  • ๋ชจ๋ธ์˜ ์˜ˆ์ธก ๊ฒฐ๊ณผ๋Š” ์ฐธ๊ณ ์šฉ์œผ๋กœ๋งŒ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.

License

MIT License
