# Korean Chatbot (LoRA Fine-tuned)
This is a Korean conversational model fine-tuned with LoRA.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "beomi/open-llama-2-ko-7b",
    device_map="auto",
    torch_dtype=torch.float16,
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "JINIIII/korean-chatbot-lora")
tokenizer = AutoTokenizer.from_pretrained("JINIIII/korean-chatbot-lora")

# Inference
prompt = "질문: 인공지능이란 무엇인가요?\n답변:"  # "Question: What is AI?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=200,  # cap newly generated tokens rather than total length
    temperature=0.7,
    do_sample=True,      # temperature only takes effect when sampling is enabled
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
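The model is prompted with a 질문/답변 ("question/answer") template. A small helper pair, hypothetical and not shipped with this repo, keeps that template in one place and strips the echoed prompt from the decoded output:

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the 질문/답변 template the model was tuned on."""
    return f"질문: {question}\n답변:"


def extract_answer(decoded: str) -> str:
    """Causal LMs echo the prompt; keep only the text after the last 답변: marker."""
    return decoded.rsplit("답변:", 1)[-1].strip()
```

With these helpers, the inference snippet above becomes `extract_answer(tokenizer.decode(outputs[0], skip_special_tokens=True))` applied to a prompt built with `build_prompt(...)`.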
## Use Cases
- Movie review sentiment analysis
- Product review analysis
- Social media (SNS) sentiment monitoring
## License
MIT License
## Author
JINIIII
Note: This model was created for educational purposes.