---
license: apache-2.0
language:
- ko
tags:
- kaidol
- ai-idol
- character-ai
- kto
- conversational
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
---
# KAIdol Jihu KTO
KTO character model for Jihu of KAIdol (a pure-hearted guy, ESTP)
## Model Description
An AI idol character model from the KAIdol project.
Character consistency was reinforced with KTO (Kahneman-Tversky Optimization).
### Character Profile
- **Name**: Jihu (이지후)
- **Personality**: pure-hearted guy (ESTP)
- **Traits**: pure-hearted and warm, proactive in expressing himself
- **Speech style**: lively and direct
## Training
- **Base Model**: Mistral-Small-3.1-24B-Instruct-2503
- **Method**: KTO (Kahneman-Tversky Optimization)
- **Framework**: TRL (Transformers Reinforcement Learning)
- **Data**: evaluation data labeled via LLM-as-Judge (RLAIF)
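Unlike DPO, KTO does not require paired preference data: each training example is a single prompt/completion with a binary desirability label, which is exactly what an LLM-as-Judge pipeline can emit directly. A minimal sketch of the unpaired record format that TRL's `KTOTrainer` consumes (the example texts are illustrative placeholders, not taken from the actual training data):

```python
# KTO training records: unpaired examples, each with a boolean "label"
# marking the completion as desirable (True) or undesirable (False).
kto_records = [
    {
        "prompt": "How are you feeling today?",
        "completion": "Feeling great! Ready to take on anything today.",
        "label": True,   # judge scored this reply as in-character
    },
    {
        "prompt": "How are you feeling today?",
        "completion": "As a language model, I do not have feelings.",
        "label": False,  # judge scored this reply as breaking character
    },
]

# Every record carries the three fields KTOTrainer reads.
assert all({"prompt", "completion", "label"} <= rec.keys() for rec in kto_records)
```

In a full run, a dataset of such records would be passed to `trl.KTOTrainer` together with a `trl.KTOConfig`; the exact hyperparameters used for this model are not stated in the card.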
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 24B parameters: load in bfloat16 and let accelerate place layers across devices
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/jihu-kto",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/jihu-kto")
messages = [
    {"role": "system", "content": "You are 'Jihu', KAIdol's AI idol."},
    {"role": "user", "content": "How are you feeling today?"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
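Note that `generate()` returns the prompt tokens followed by the new tokens, so decoding `output[0]` prints the prompt and the reply together. To print only the reply, slice the sequence past the prompt length before decoding. A minimal sketch of the slicing logic (the token values here are placeholder integers, not real vocabulary ids):

```python
# generate() echoes the prompt, then appends the newly generated tokens.
prompt_ids = [1, 17, 42, 99]             # placeholder: tokenized prompt
generated = prompt_ids + [7, 8, 9, 2]    # placeholder: full generate() output

# Slicing at the prompt length isolates the model's reply.
reply_ids = generated[len(prompt_ids):]
print(reply_ids)  # → [7, 8, 9, 2]
```

With real tensors the equivalent is `output[0][inputs["input_ids"].shape[1]:]`, which you would then pass to `tokenizer.decode(..., skip_special_tokens=True)`.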
## License
Apache 2.0