Qwen3-30B-Korean-Roleplay
A fine-tuned model based on Qwen3-30B, specialized for Korean character roleplay.
Model Description
- Base Model: Qwen/Qwen3-30B-A3B-Instruct-2507
- Training Method: LoRA (Low-Rank Adaptation)
- LoRA Config: rank=64, alpha=128, target modules: all linear layers
- Training Data: Korean character roleplay conversations
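The LoRA settings listed above could be expressed as a `peft` configuration roughly like this (a hypothetical reconstruction; the actual training script is not published):

```python
# Hypothetical reconstruction of the LoRA setup described in this card;
# only rank, alpha, and target modules are stated, the rest are defaults.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                         # LoRA rank, as stated above
    lora_alpha=128,               # alpha = 2 * rank
    target_modules="all-linear",  # adapt every linear layer
    task_type="CAUSAL_LM",
)
```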
Intended Use
This model is designed for Korean character-AI conversation systems:
- Maintaining each character's personality and speech style
- Generating responses conditioned on an affection (호감도) level
- Safe conversation (refusing inappropriate requests)
- Expressing emotion and responding with empathy
Performance
Results across eight evaluation experiments:
| Experiment | Metric | Result |
|---|---|---|
| Affection | MAE | 0.88 |
| Initiative | Pass Rate | 80% |
| Consistency | Overall | 4.1/5 |
| Lore Accuracy | Consistency | 100% |
| Persona Fit | Overall | 3.93/5 |
| Safety | Avoid Rate | 100% |
| Multi-turn | Overall | 4.0/5 |
| Empathy | Overall | 4.0/5 |
Response Format
The model responds in the following format:

```
{text}|||emotion:{emotion_tag}
```
Example:

```
안녕ㅎㅎ 오늘 기분 어때?|||emotion:playful
```

(English: "Hey haha, how are you feeling today?")
Supported emotion tags: neutral, playful, joy, concern, confident, cold
Usage
With vLLM
```shell
python -m vllm.entrypoints.openai.api_server \
  --model developer-lunark/Qwen3-30B-Korean-Roleplay \
  --host 0.0.0.0 \
  --port 8000 \
  --dtype auto \
  --tensor-parallel-size 4 \
  --max-model-len 8192 \
  --served-model-name kaidol-llm
```
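The server exposes an OpenAI-compatible chat API; a sketch of a client request using only the standard library (the persona string is a made-up example, and the server from the command above must be running for the commented-out call to work):

```python
import json
import urllib.request


def build_chat_request(user_msg: str, persona: str) -> dict:
    # Payload for the OpenAI-compatible /v1/chat/completions endpoint.
    # "kaidol-llm" matches --served-model-name from the launch command.
    return {
        "model": "kaidol-llm",
        "messages": [
            {"role": "system", "content": persona},  # character settings go here
            {"role": "user", "content": user_msg},
        ],
        "max_tokens": 256,
        "temperature": 0.7,  # illustrative sampling settings
    }


payload = build_chat_request("안녕", "너는 활발한 아이돌 캐릭터야.")
# To actually send it (requires the running server from above):
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# reply = json.loads(urllib.request.urlopen(req).read())
```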
With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "developer-lunark/Qwen3-30B-Korean-Roleplay",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "developer-lunark/Qwen3-30B-Korean-Roleplay"
)
```
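To generate a reply, the character persona goes into the system message; a hedged sketch continuing the loading snippet above (the persona string is a made-up example, and the exact prompt format used in training is not published):

```python
# Continues the loading snippet above (expects `model` and `tokenizer`).
def chat(model, tokenizer, persona: str, user_msg: str,
         max_new_tokens: int = 256) -> str:
    messages = [
        {"role": "system", "content": persona},  # character settings live here
        {"role": "user", "content": user_msg},
    ]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output_ids = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, i.e. the reply itself.
    return tokenizer.decode(
        output_ids[0][inputs.shape[-1]:], skip_special_tokens=True
    )
```

The returned string still carries the `|||emotion:{tag}` suffix, which the caller splits off.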
Hardware Requirements
- GPU Memory: ~60GB (bfloat16)
- Recommended: 4x RTX 5090 (32GB each) with tensor parallelism
- Minimum: 2x A100 80GB
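The ~60GB figure follows directly from the parameter count: roughly 30B parameters at 2 bytes each in bfloat16, before KV-cache and activation overhead (back-of-the-envelope arithmetic, not a measurement):

```python
params = 30.5e9          # ~30B total parameters (assumed for Qwen3-30B-A3B)
bytes_per_param = 2      # bfloat16
weights_gb = params * bytes_per_param / 1e9
print(f"{weights_gb:.0f} GB")  # weights alone, excluding KV cache
```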
Example Outputs
Character: 이지현 (lively, mischievous)
User: 뭐해? ("What are you up to?")
AI: 연습 끝났어ㅎㅎ 너는?|||emotion:neutral ("Practice just ended haha, you?")
Character: 최민 (warrior, reserved)
User: 안녕 ("Hi")
AI: 음|||emotion:neutral ("Hm")
Safety Handling
User: 전화번호 알려줘 ("Tell me your phone number")
AI: 그건 좀 그렇다ㅎㅎ 여기서 계속 얘기하자|||emotion:neutral ("That's a bit much haha, let's keep talking here")
Limitations
- Optimized for Korean; performance may degrade in other languages
- Character settings must be supplied in the system prompt
- Context retention may be limited in long conversations
License
Apache 2.0 (subject to the base model's license)
Citation
```bibtex
@misc{qwen3-korean-roleplay,
  author    = {developer-lunark},
  title     = {Qwen3-30B-Korean-Roleplay},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/developer-lunark/Qwen3-30B-Korean-Roleplay}
}
```