Instructions to use VIDraft/Darwin-28B-KOREA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use VIDraft/Darwin-28B-KOREA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="VIDraft/Darwin-28B-KOREA")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("VIDraft/Darwin-28B-KOREA")
model = AutoModelForImageTextToText.from_pretrained("VIDraft/Darwin-28B-KOREA")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use VIDraft/Darwin-28B-KOREA with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "VIDraft/Darwin-28B-KOREA"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VIDraft/Darwin-28B-KOREA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker:
```shell
docker model run hf.co/VIDraft/Darwin-28B-KOREA
```
- SGLang
How to use VIDraft/Darwin-28B-KOREA with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "VIDraft/Darwin-28B-KOREA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VIDraft/Darwin-28B-KOREA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images:
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "VIDraft/Darwin-28B-KOREA" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "VIDraft/Darwin-28B-KOREA",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use VIDraft/Darwin-28B-KOREA with Docker Model Runner:
```shell
docker model run hf.co/VIDraft/Darwin-28B-KOREA
```
Darwin-28B-KOREA
A 28B-parameter model optimized for Korean/English bilingual reasoning.
The first PERFECT parent-pair merge in the VIDRAFT Darwin series. The weights of the two parent models are combined at per-layer dynamic ratios, producing a child that performs at or above the level of both parents in every area, with no region weaker than either parent.
Parent Models (PERFECT Pair)
| Role | Model | Strength |
|---|---|---|
| Father | FINAL-Bench/Darwin-28B-Opus | English reasoning, speed, concise token usage |
| Mother | ginigen-ai/Rogue-28B-MIX | Native Korean, deep Korean reasoning |
Parent-pair compatibility: hidden=5120, intermediate=17408, layers=64 (exact match, a PERFECT pair).
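As a sketch, the compatibility check above amounts to a simple config comparison. The helper below is illustrative (not the actual merge tooling); the field names mirror the standard Transformers config keys for the values quoted above.

```python
# Sketch: verify two parent checkpoints are a mergeable "PERFECT pair"
# by requiring the shape-determining config fields to match exactly.
REQUIRED_MATCH = ("hidden_size", "intermediate_size", "num_hidden_layers")

def is_perfect_pair(father_cfg: dict, mother_cfg: dict) -> bool:
    """True only if every shape-determining field matches exactly."""
    return all(father_cfg[k] == mother_cfg[k] for k in REQUIRED_MATCH)

father = {"hidden_size": 5120, "intermediate_size": 17408, "num_hidden_layers": 64}
mother = {"hidden_size": 5120, "intermediate_size": 17408, "num_hidden_layers": 64}
print(is_perfect_pair(father, mother))  # True -> safe to interpolate weight-for-weight
```

If any of these fields differ, plain per-layer interpolation is impossible, since the corresponding weight tensors would have different shapes.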
Merge Method
- Algorithm: per-layer linear interpolation in float32, then cast to bfloat16
- t vector: dynamic weights across all 64 layers (mean t=0.513)
- Golden Reasoning Layer (layer 47): t=0.90 (Mother dominant)
- Output Router (layer 63): t=0.53
- Per-layer probe_distance + hidden_norm analysis based on MRI (Model MRI) telemetry
- Chat template/tokenizer: taken from the Father (Qwen3_5ForConditionalGeneration, multimodal)
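The interpolation step above can be sketched as follows. Names are illustrative, and plain Python floats stand in for the real weight tensors (which, per the notes above, are interpolated in float32 and then cast to bfloat16); `t` is the Mother's per-layer ratio.

```python
# Illustrative sketch of per-layer linear interpolation:
# child = (1 - t) * father + t * mother, applied layer by layer.

def merge_layer(father_w, mother_w, t):
    """Interpolate one layer's weights elementwise with Mother ratio t."""
    return [(1.0 - t) * f + t * m for f, m in zip(father_w, mother_w)]

# A 64-entry t vector (values here are placeholders around the reported
# mean of 0.513): layer 47 is Mother-dominant (t=0.90), and layer 63,
# the output router, sits near the middle (t=0.53).
t_vector = [0.513] * 64
t_vector[47] = 0.90
t_vector[63] = 0.53

father_layer = [1.0, 2.0]
mother_layer = [3.0, 6.0]
merged = merge_layer(father_layer, mother_layer, t_vector[47])
```

With t=0.90 the merged layer sits much closer to the Mother's weights, which is the point of the "Golden Reasoning Layer" choice.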
Evaluation Results (35-sample 3-way bench, max_tokens=5120)
| Metric | Father | Mother | KOREA (Child) |
|---|---|---|---|
| Accuracy (29 multiple-choice items) | 96.6% | 96.6% | 96.6% |
| True accuracy (correcting a gpqa_01 grading error) | 100% | 100% | 100% |
| Korean output rate (23 Korean questions) | 91.3% | 95.7% | 91.3% |
| English thinking | 31/35 | 10/35 | 31/35 |
| Average response tokens | 458 | 631 | 521 |
| Hit the 5120-token cap | 0/35 | 2/35 | 1/35 |
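One plausible way to compute the "Korean output rate" metric is a Hangul-majority heuristic over the model's answers. The grading script is not published, so the rule and threshold below are assumptions, shown only as a sketch:

```python
# Sketch: classify a response as "Korean output" if a majority of its
# alphabetic characters fall in the Hangul syllables block (U+AC00..U+D7A3).

def hangul_ratio(text: str) -> float:
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return 0.0
    hangul = [c for c in letters if "\uAC00" <= c <= "\uD7A3"]
    return len(hangul) / len(letters)

def is_korean_answer(text: str, threshold: float = 0.5) -> bool:
    return hangul_ratio(text) >= threshold

responses = ["서울은 대한민국의 수도입니다.", "The capital is Seoul."]
rate = sum(is_korean_answer(r) for r in responses) / len(responses)
print(f"Korean output rate: {rate:.1%}")  # prints "Korean output rate: 50.0%"
```

Applied to the 23 Korean questions in the bench, a count like this would yield the percentages in the table.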
Win/Loss analysis:
- Father vs Child: 0:0 tie
- Mother vs Child: 0:0 tie
- → The child is fully on par with both parents, with no area weaker than either side.
Reasoning depth absorbed: in the Korean logic category, Mother (2620 tokens) and Child (2724 tokens) produce similar average answer lengths → the Mother's long-chain reasoning pattern transferred successfully.
Usage Recommendations
- Recommended max_tokens: 1024 or higher (given its chain-of-thought style, answers may be cut off at 256 tokens)
- Thinking pattern: English reasoning followed by a Korean answer. Recommended when only the correctness of the final answer matters.
- If you want pure Korean reasoning, use the parent Rogue-28B-MIX on its own.
Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model = AutoModelForCausalLM.from_pretrained(
    "VIDraft/Darwin-28B-KOREA",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tok = AutoTokenizer.from_pretrained("VIDraft/Darwin-28B-KOREA", trust_remote_code=True)

# "Summarize the core of Article 10 of the Korean Constitution in one sentence."
msgs = [{"role": "user", "content": "대한민국 헌법 제10조의 핵심 내용을 한 문장으로 요약."}]
inputs = tok.apply_chat_template(msgs, return_tensors="pt", add_generation_prompt=True).to(model.device)
out = model.generate(inputs, max_new_tokens=1024, do_sample=False, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```
License
Apache 2.0 (inherited from the parent models).
Citation / Acknowledgment
VIDRAFT Darwin Family, Evolutionary Model Merge Research. Pair: Darwin-28B-Opus × Rogue-28B-MIX → Darwin-28B-KOREA (PERFECT pair, 2026-05-14).
Built with the Darwin Factory pipeline. 16 customer orders bridged by a single base model.