---
license: apache-2.0
language:
- ko
tags:
- kaidol
- ai-idol
- character-ai
- kto
- conversational
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
---
# KAIdol Cha Doha KTO
KTO character model for Cha Doha (์ฐจ๋ํ), a KAIdol character with a blunt ISTP persona.
## Model Description
An AI idol character model from the KAIdol project.
Character consistency was reinforced with KTO (Kahneman-Tversky Optimization).
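For context, KTO optimizes an asymmetric, prospect-theoretic objective over *unpaired* desirable/undesirable completions rather than preference pairs. Roughly, following the original KTO paper (the notation is the paper's, not anything specific to this model):

$$
r_\theta(x, y) = \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}, \qquad
v(x, y) =
\begin{cases}
\lambda_D \,\sigma\big(\beta\,(r_\theta(x, y) - z_0)\big) & y \text{ desirable} \\
\lambda_U \,\sigma\big(\beta\,(z_0 - r_\theta(x, y))\big) & y \text{ undesirable}
\end{cases}
$$

$$
\mathcal{L}_{\mathrm{KTO}} = \mathbb{E}_{x,y}\big[\lambda_y - v(x, y)\big]
$$

where $\pi_{\mathrm{ref}}$ is the frozen base model, $z_0$ is a KL-based reference point, $\sigma$ is the logistic function, and $\lambda_D, \lambda_U$ weight desirable vs. undesirable examples.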
### Character Profile
- **Name**: Cha Doha (์ฐจ๋ํ)
- **Personality**: Blunt (ISTP)
- **Traits**: Cool and composed, precise when it counts
- **Speech style**: Short, terse lines
## Training
- **Base Model**: Mistral-Small-3.1-24B-Instruct-2503
- **Method**: KTO (Kahneman-Tversky Optimization)
- **Framework**: TRL (Transformer Reinforcement Learning)
- **Data**: Evaluation data scored by an LLM-as-a-judge (RLAIF)
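Below is a minimal sketch of what this setup typically looks like with TRL's `KTOTrainer`. The dataset file, column layout, and hyperparameter values are illustrative assumptions, not the actual KAIdol training configuration.

```python
# Illustrative sketch only: file names and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

base = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# KTO consumes *unpaired* feedback: each row is one completion plus a boolean
# "label" (True = desirable / in-character, False = undesirable), e.g. produced
# by an LLM-as-a-judge pass over sampled dialogues.
dataset = load_dataset("json", data_files="kto_judged.jsonl", split="train")
# Expected row shape: {"prompt": "...", "completion": "...", "label": true}

args = KTOConfig(
    output_dir="doha-kto",
    beta=0.1,                # strength of the KL penalty toward the base model
    desirable_weight=1.0,    # lambda_D
    undesirable_weight=1.0,  # lambda_U
)
trainer = KTOTrainer(
    model=model,             # ref_model defaults to a frozen copy of `model`
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,  # `tokenizer=` on older TRL versions
)
trainer.train()
```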
## Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# bfloat16 + device_map="auto" keep the 24B model practical on GPU
model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/doha-kto",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("developer-lunark/doha-kto")

messages = [
    # "You are KAIdol's AI idol 'Cha Doha'."
    {"role": "system", "content": "๋น์ ์ KAIdol์ AI ์์ด๋ '์ฐจ๋ํ'์
๋๋ค."},
    # "How are you feeling today?"
    {"role": "user", "content": "์ค๋ ๊ธฐ๋ถ ์ด๋?"},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
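At 24B parameters, the model is heavy for a single consumer GPU. As an optional workaround (assuming the `bitsandbytes` package is installed; this is a suggestion, not part of the card's instructions above), it can be loaded in 4-bit:

```python
# Optional: 4-bit quantized loading to cut GPU memory, at some quality cost.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/doha-kto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    device_map="auto",
)
```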
## License
Apache 2.0