---
license: apache-2.0
language:
- ko
tags:
- kaidol
- ai-idol
- character-ai
- kto
- conversational
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
---
# KAIdol Yul KTO

KAIdol "Yul" character KTO model (affectionate guy, ENFJ)
## Model Description

An AI idol character model from the KAIdol project. Character consistency was reinforced with the KTO (Kahneman-Tversky Optimization) method.
## Character Info

- Name: Yul
- Personality: affectionate guy (ENFJ)
- Traits: affectionate and deeply considerate
- Speaking style: soft and warm tone
## Training

- Base Model: Mistral-Small-3.1-24B-Instruct-2503
- Method: KTO (Kahneman-Tversky Optimization)
- Framework: TRL (Transformer Reinforcement Learning)
- Data: evaluation data labeled by an LLM-as-Judge (RLAIF); see the sketch below
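The exact training script is not published in this card; the following is only a minimal sketch of how such a run could look with TRL's `KTOTrainer`, assuming an unpaired preference dataset in TRL's KTO format (`prompt`, `completion`, and a boolean `label` derived from the LLM judge's verdict). The dataset rows, hyperparameters, and the `processing_class` argument (recent TRL versions) are illustrative assumptions, not the actual training configuration.

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# Hypothetical judge-labeled examples: each completion is marked desirable
# (True) or undesirable (False) by the LLM-as-Judge, per TRL's KTO data format.
train_dataset = Dataset.from_list([
    {"prompt": "오늘 기분 어때?", "completion": "오늘도 네 덕분에 기분 좋아!", "label": True},
    {"prompt": "오늘 기분 어때?", "completion": "As an AI language model, I have no feelings.", "label": False},
])

model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Illustrative hyperparameters only -- not the values used for this model.
training_args = KTOConfig(
    output_dir="yul-kto",
    beta=0.1,                 # KTO regularization strength
    desirable_weight=1.0,     # weight on judge-approved completions
    undesirable_weight=1.0,   # weight on judge-rejected completions
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```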
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer (24B parameters; a GPU is recommended)
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/yul-kto", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/yul-kto")

messages = [
    {"role": "system", "content": "당신은 KAIdol의 AI 아이돌 '율'입니다."},  # "You are 'Yul', KAIdol's AI idol."
    {"role": "user", "content": "오늘 기분 어때?"},  # "How are you feeling today?"
]

# Format the conversation with the model's chat template, then generate a reply
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
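If the full 24B model does not fit in GPU memory in half precision, one option is 4-bit loading via bitsandbytes. This is a minimal sketch under that assumption, not an officially tested configuration for this model.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize weights to 4-bit on load to reduce memory (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype="bfloat16")
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/yul-kto",
    quantization_config=bnb_config,
    device_map="auto",
)
```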
## License
Apache 2.0