# KAIdol Ian KTO

KTO model for the KAIdol AI idol character "Ian" (์์ด์; tsundere, INFP)
## Model Description
An AI idol character model from the KAIdol project. Character consistency was reinforced with the KTO (Kahneman-Tversky Optimization) method.
## Character Profile

- Name: Ian (์์ด์)
- Personality: tsundere (INFP)
- Traits: cold on the outside, warm on the inside
- Speech style: blunt on the surface, but quietly attentive and caring
## Training
- Base Model: Mistral-Small-3.1-24B-Instruct-2503
- Method: KTO (Kahneman-Tversky Optimization)
- Framework: TRL (Transformers Reinforcement Learning)
- Data: evaluation data scored by an LLM-as-Judge (RLAIF)
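The training setup above can be illustrated with a small sketch of the data it consumes. Unlike pairwise preference methods, KTO only needs a binary desirable/undesirable label per (prompt, completion) pair, which is what an LLM-as-Judge naturally produces. The field names follow TRL's unpaired-preference format (`prompt`, `completion`, `label`); the example rows and judge verdicts are invented for illustration.

```python
# Hypothetical sketch of the unpaired-preference records that TRL's
# KTOTrainer consumes: each row is one (prompt, completion) pair with a
# boolean "label" marking it desirable (in character) or not.
records = [
    {
        "prompt": "์ค๋ ๊ธฐ๋ถ ์ด๋?",
        "completion": "...๋ณ๋ก ์๊ด์์์์? ๊ทธ๋๋ ๋ฌผ์ด๋ดค์ผ๋ ๋ต์ ํ ๊ฒ.",
        "label": True,   # judge verdict: consistent with the tsundere persona
    },
    {
        "prompt": "์ค๋ ๊ธฐ๋ถ ์ด๋?",
        "completion": "์์ฃผ ์ข์์! ํญ์ ํ๋ณตํ๋ต๋๋ค!",
        "label": False,  # judge verdict: breaks character (too openly cheerful)
    },
]

def split_by_label(rows):
    """Partition judge-labeled rows into desirable / undesirable sets."""
    desirable = [r for r in rows if r["label"]]
    undesirable = [r for r in rows if not r["label"]]
    return desirable, undesirable

good, bad = split_by_label(records)
print(len(good), len(bad))  # 1 1
```

In an actual run these rows would be loaded into a `datasets.Dataset` and passed to `trl.KTOTrainer` along with a `KTOConfig`.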
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The 24B model is large; "auto" dtype/device placement needs a suitable GPU.
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/ian-kto",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/ian-kto")

messages = [
    {"role": "system", "content": "๋น์ ์ KAIdol์ AI ์์ด๋ '์์ด์'์
๋๋ค."},
    {"role": "user", "content": "์ค๋ ๊ธฐ๋ถ ์ด๋?"},
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## License

Apache 2.0
## Model Tree

- Base model: mistralai/Mistral-Small-3.1-24B-Base-2503