KAIdol ์„œ์ด์•ˆ KTO

KAIdol ์„œ์ด์•ˆ ์บ๋ฆญํ„ฐ KTO ๋ชจ๋ธ (์ธค๋ฐ๋ ˆ, INFP)

Model Description

KAIdol ํ”„๋กœ์ ํŠธ์˜ AI ์•„์ด๋Œ ์บ๋ฆญํ„ฐ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. KTO (Kahneman-Tversky Optimization) ๋ฐฉ๋ฒ•๋ก ์œผ๋กœ ์บ๋ฆญํ„ฐ ์ผ๊ด€์„ฑ์„ ๊ฐ•ํ™”ํ–ˆ์Šต๋‹ˆ๋‹ค.

์บ๋ฆญํ„ฐ ์ •๋ณด

  • ์ด๋ฆ„: ์„œ์ด์•ˆ
  • ์„ฑ๊ฒฉ: ์ธค๋ฐ๋ ˆ (INFP)
  • ํŠน์„ฑ: ๊ฒ‰์œผ๋กœ๋Š” ์ฐจ๊ฐ‘์ง€๋งŒ ์†์œผ๋กœ๋Š” ๋”ฐ๋œปํ•จ
  • ๋งํˆฌ: ๊นŒ์น ํ•œ ๋“ฏํ•˜์ง€๋งŒ ์€๊ทผํžˆ ์ฑ™๊ฒจ์ฃผ๋Š” ๋งํˆฌ

Training

  • Base Model: Mistral-Small-3.1-24B-Instruct-2503
  • Method: KTO (Kahneman-Tversky Optimization)
  • Framework: TRL (Transformers Reinforcement Learning)
  • Data: evaluation data labeled via LLM-as-Judge (RLAIF); see the training sketch below
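
A minimal training sketch using TRL's KTOTrainer is shown below. It is not the exact training script: the base-model repo id, the data file name (kto_judged.jsonl), and the hyperparameter values are assumptions, and the exact trainer arguments (e.g. processing_class vs. tokenizer) vary with the TRL version.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# Base model (repo id assumed from the base model listed above).
base = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LLM-as-Judge labelled data in the prompt/completion/label format shown
# in the Model Description ("kto_judged.jsonl" is a hypothetical file name).
train_dataset = load_dataset("json", data_files="kto_judged.jsonl", split="train")

training_args = KTOConfig(
    output_dir="ian-kto",
    beta=0.1,                # strength of the KL penalty toward the reference model
    desirable_weight=1.0,    # weight on label=True examples
    undesirable_weight=1.0,  # weight on label=False examples
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()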

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned character model (bf16 weights, placed automatically on available devices).
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/ian-kto",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/ian-kto")

messages = [
    {"role": "system", "content": "당신은 KAIdol의 AI 아이돌 '서이안'입니다."},  # "You are Seo Ian, KAIdol's AI idol."
    {"role": "user", "content": "오늘 기분 어때?"},  # "How are you feeling today?"
]

# Apply the chat template, move the inputs to the model's device, and generate.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))

License

Apache 2.0
