# KoBERT Classification Model

This model is a fine-tuned version of KoBERT for text classification.
## Model Information

- Base model: beomi/kcbert-base
- Number of classes: 12
- Usage: see the code below
## Usage Example
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the model and tokenizer
model_name = "rmsdud/kobert-classifier"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()

# Inference
text = "분류할 텍스트를 입력하세요."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(-1).item()
print(f"Predicted class: {predicted_class}")
```
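If you need class probabilities rather than just the argmax, apply a softmax to the logits. In a real pipeline this would be `torch.softmax(outputs.logits, dim=-1)`; the sketch below uses plain Python with hypothetical logits for a 12-class model to show what the transformation does.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a 12-class model (illustration only, not real model output)
logits = [0.1, 2.3, -1.0, 0.5, 0.0, 1.1, -0.2, 0.4, 0.9, -0.5, 0.2, 0.3]
probs = softmax(logits)
predicted_class = max(range(len(probs)), key=probs.__getitem__)
```

The probabilities sum to 1, and the predicted class is the index with the highest probability, matching `logits.argmax(-1).item()` in the example above.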