Korean Chatbot (LoRA Fine-tuned)

이 λͺ¨λΈμ€ ν•œκ΅­μ–΄ λŒ€ν™” λͺ¨λΈμž…λ‹ˆλ‹€.

Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "beomi/open-llama-2-ko-7b",
    device_map="auto",
    torch_dtype=torch.float16
)

# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "JINIIII/korean-chatbot-lora")
tokenizer = AutoTokenizer.from_pretrained("JINIIII/korean-chatbot-lora")

# Inference
prompt = "질문: 인공지능이란 무엇인가요?\n답변:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
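Because the decoded output echoes the input prompt, it is convenient to wrap the 질문/답변 ("question/answer") template in small helpers. A minimal sketch; `build_prompt` and `extract_answer` are hypothetical names, not part of this repository:

```python
def build_prompt(question: str) -> str:
    # Format a question in the 질문/답변 template the model expects.
    return f"질문: {question}\n답변:"

def extract_answer(decoded: str) -> str:
    # The decoded sequence repeats the prompt; keep only the text
    # that follows the "답변:" marker.
    return decoded.split("답변:", 1)[-1].strip()

decoded = "질문: 인공지능이란 무엇인가요?\n답변: 인간의 지능을 모방하는 기술입니다."
print(extract_answer(decoded))
```

Passing the result of `build_prompt` to the tokenizer, and the decoded generation to `extract_answer`, keeps the inference call shown above unchanged.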

Example Use Cases

  • μ˜ν™” 리뷰 감정 뢄석
  • μƒν’ˆ 리뷰 뢄석
  • SNS 감정 λͺ¨λ‹ˆν„°λ§
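For classification-style tasks such as these, the chat template can carry an instruction instead of an open question. A minimal sketch under the assumption that the model follows Korean instructions in the 질문/답변 format; `sentiment_prompt` is a hypothetical helper:

```python
def sentiment_prompt(review: str) -> str:
    # Ask the model to label a review as positive (긍정) or negative (부정),
    # reusing the same 질문/답변 template used for open-ended questions.
    return (
        f"질문: 다음 리뷰의 감정을 긍정 또는 부정으로 분류해 주세요: {review}\n"
        "답변:"
    )

print(sentiment_prompt("배송이 빠르고 품질이 좋아요."))
```

The returned string is passed to the tokenizer exactly like the prompt in the usage example above.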

λΌμ΄μ„ μŠ€

MIT License

Author

Jinsu (JINIIII)

Note: This model was created for educational purposes.
