๋ชจ๋ธ ์ด๋ฆ
khamrai/nsmc-sentiment-lora
๋ชจ๋ธ ์ค๋ช
NSMC ๋ฐ์ดํฐ์ ์ผ๋ก ํ์ธํ๋๋ ํ๊ตญ์ด ๊ฐ์ ๋ถ์ ๋ชจ๋ธ์ ๋๋ค.
๋ชจ๋ธ ์์ธ
- ๋ฒ ์ด์ค ๋ชจ๋ธ : klue/bert-base
- ํ์ธํ๋ ๋ฐฉ๋ฒ : LoRA
- ์ธ์ด : ํ๊ตญ์ด
LoRA ์ค์
- ํ์ต ํ๋ผ๋ฏธํฐ: ์ฝ 0.3% (~300K)
- ๋ฐ์ดํฐ์ : NSMC (๋ค์ด๋ฒ ์ํ ๋ฆฌ๋ทฐ)
- Task: ์ด์ง ๋ถ๋ฅ (๊ธ์ /๋ถ์ )
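For reference, NSMC can be loaded from the Hugging Face Hub. The sketch below is minimal and makes two assumptions: the dataset ID `e9t/nsmc` (the corpus has been hosted under that namespace), and that your `datasets` version can load it directly (recent releases may require `trust_remote_code=True` for script-based datasets).

```python
from datasets import load_dataset

# Dataset ID is an assumption; NSMC has been hosted as "e9t/nsmc".
# Add trust_remote_code=True if your datasets version requires it.
dataset = load_dataset("e9t/nsmc")

# Each example is a movie review with a binary label:
# {"id": ..., "document": "<review text>", "label": 0 or 1}
# where 0 = negative and 1 = positive.
print(dataset["train"][0])
```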
ํ์ต ์ค์
- LoRA Rank (r): 8
- LoRA Alpha: 16
- Target Modules: query, value
- Dropout: 0.1
- ํ์ต ์ํญ: 5
- ํ์ต๋ฅ : 5e-4
ํ์ต ๊ฒฐ๊ณผ
=== ์ฑ๋ด ํ ์คํธ ===
์ง๋ฌธ: ์๋ ํ์ธ์? ๋ต๋ณ: ์ ๋ ๋ ์ฌ๋์ด์์.
์ง๋ฌธ: ์ค๋ ๋ ์จ๊ฐ ์ด๋? ๋ต๋ณ: ๋ ์จ๊ฐ ์ฐธ ๋ฐ๋ปํ์ฃ .
์ง๋ฌธ: ๋ฐฐ๊ณ ํ๋ฐ ๋ญ ๋จน์๊น? ๋ต๋ณ: ์ปคํผ๋ ๋์ธ์.
์ง๋ฌธ: ์ฃผ๋ง์ ๋ญํ์ง? ๋ต๋ณ: ๋ฐ์ดํธ ์๊ฐ ๋นผ๊ณกํ๊ฒ ์ฐจ๋ ค๋ณด์ธ์.
์ฌ์ฉ ๋ฐฉ๋ฒ
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel
import torch
# ๋ฒ ์ด์ค ๋ชจ๋ธ ๋ก๋
base_model = AutoModelForSequenceClassification.from_pretrained(
"klue/bert-base",
num_labels=2
)
# LoRA ์ด๋ํฐ ๋ก๋
model = PeftModel.from_pretrained(base_model, "khamrai/nsmc-sentiment-lora")
tokenizer = AutoTokenizer.from_pretrained("khamrai/nsmc-sentiment-lora")
# ์ถ๋ก
text = "์ด ์ํ ์ ๋ง ์ฌ๋ฏธ์์ด์!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=-1)
pred = torch.argmax(probs, dim=-1).item()
label = "๊ธ์ " if pred == 1 else "๋ถ์ "
confidence = probs[0][pred].item()
print(f"๊ฒฐ๊ณผ: {label} (ํ์ ๋: {confidence:.2%})")
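Continuing from the snippet above: if you prefer to deploy without a runtime dependency on `peft`, the adapter can be folded into the base weights with PEFT's `merge_and_unload()`. A minimal sketch; the output directory name is illustrative.

```python
# Fold the LoRA deltas into the base weights; the result is a plain
# transformers model that no longer needs the peft library at inference.
merged = model.merge_and_unload()

# Save the merged model and tokenizer ("nsmc-sentiment-merged" is an
# example path, not an official checkpoint).
merged.save_pretrained("nsmc-sentiment-merged")
tokenizer.save_pretrained("nsmc-sentiment-merged")
```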
๋ชจ๋ธ ์ ๋ณด
- ๋ฒ ์ด์ค ๋ชจ๋ธ: klue/bert-base
- ํ์ธํ๋ ๋ฐฉ๋ฒ: LoRA (Low-Rank Adaptation)
- ํ์ต ํ๋ผ๋ฏธํฐ: ์ฝ 0.3% (~300K)
- ๋ฐ์ดํฐ์ : NSMC (๋ค์ด๋ฒ ์ํ ๋ฆฌ๋ทฐ)
- Task: ์ด์ง ๋ถ๋ฅ (๊ธ์ /๋ถ์ )