---
license: mit
base_model: beomi/open-llama-2-ko-7b
tags:
- llama
- lora
- korean
- text-generation
language:
- ko
---
# Korean Chatbot (LoRA Fine-tuned)

This is a Korean conversational model, built by applying LoRA fine-tuning to beomi/open-llama-2-ko-7b.

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "beomi/open-llama-2-ko-7b",
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "JINIIII/korean-chatbot-lora")
tokenizer = AutoTokenizer.from_pretrained("JINIIII/korean-chatbot-lora")

# Inference
prompt = "์ง๋ฌธ: ์ธ๊ณต์ง๋ฅ์ด๋ ๋ฌด์์ธ๊ฐ์?\n๋ต๋ณ:"  # "Question: What is artificial intelligence?\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# do_sample=True is required for temperature to take effect;
# max_new_tokens bounds the generated continuation rather than the total length.
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
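
For deployment you may prefer to fold the adapter weights into the base model so that inference no longer needs the `peft` wrapper. Below is a minimal sketch continuing from the snippet above; `merge_and_unload()` is PEFT's API for this, while the save path `./korean-chatbot-merged` is just an illustrative choice:

```python
# Merge the LoRA weights into the base model and drop the PEFT wrapper.
# Continues from the `model` and `tokenizer` created in the snippet above.
merged_model = model.merge_and_unload()

# Optionally persist the merged weights; the path is an arbitrary example.
merged_model.save_pretrained("./korean-chatbot-merged")
tokenizer.save_pretrained("./korean-chatbot-merged")
```

The merged checkpoint can then be loaded directly with `AutoModelForCausalLM.from_pretrained`, at the cost of storing a full copy of the 7B weights.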
## Example Use Cases

- Sentiment analysis of movie reviews (see the sketch after this list)
- Social media (SNS) sentiment monitoring
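
As a concrete illustration of the first use case, the sketch below reuses the `model` and `tokenizer` loaded in the Usage section to classify a review's sentiment via prompting. The question/answer wording is an assumption for illustration, not a format the adapter is known to have been trained on:

```python
# Prompt-based sentiment classification; continues from the Usage snippet.
# The prompt wording below is illustrative, not a trained-in format.
review = "๋ฐฐ์ฐ๋ค์ ์ฐ๊ธฐ๊ฐ ์ ๋ง ์ธ์์ ์ด์์ด์."  # "The acting was truly impressive."
prompt = (
    "์ง๋ฌธ: ๋ค์ ์ํ ๋ฆฌ๋ทฐ์ ๊ฐ์ ์ ๊ธ์  ๋๋ ๋ถ์ ์ผ๋ก ๋ถ๋ฅํ์ธ์.\n"
    f"๋ฆฌ๋ทฐ: {review}\n๋ต๋ณ:"
)  # "Question: Classify the sentiment of the following movie review as positive or negative."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding keeps the short classification output deterministic.
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```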
## License

MIT License
## Author

Jini (JINIIII)

**Note**: This model was created for educational purposes.