# KoBERT Classification Model

This model is fine-tuned on top of KoBERT for text classification.
## Model Information

- Base model: beomi/kcbert-base
- Number of classes: 12
- Usage: see the code below
## Usage Example
```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the model and tokenizer
model_name = "rmsdud/kobert-classifier"
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)

# Inference
text = "Enter the text to classify."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)
with torch.no_grad():  # no gradients needed for inference
    outputs = model(**inputs)
logits = outputs.logits
predicted_class = logits.argmax(-1).item()
print(f"Predicted class: {predicted_class}")
```
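The example above prints only the index of the highest-scoring class. If you also want a confidence score, you can apply softmax to the logits. The sketch below is a minimal illustration using a dummy logits tensor shaped `[1, 12]` (matching the 12 classes), so it runs without downloading the model; in practice `logits` would come from `model(**inputs).logits`:

```python
import torch

# Dummy logits standing in for model(**inputs).logits (batch of 1, 12 classes)
logits = torch.tensor([[0.1, 2.5, -0.3, 0.0, 1.2, -1.0,
                        0.4, 0.9, -0.2, 0.3, 0.8, -0.5]])

# Softmax turns raw logits into a probability distribution over the 12 classes
probs = torch.softmax(logits, dim=-1)
predicted_class = int(probs.argmax(dim=-1))
confidence = float(probs[0, predicted_class])

print(f"Predicted class: {predicted_class} (confidence: {confidence:.3f})")
```

Because the repository does not document an `id2label` mapping, the printed value is a raw class index; map it to a human-readable label yourself if you know the label set used during fine-tuning.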