Rogue-27B-KR: Korean-Specialized High-Performance Reasoning Model


Qwen3.5 Hybrid Architecture | ~26B Params | Thinking Mode | 262K Context | BF16 | Apache 2.0
Darwin-27B-Opus (Father) + Korean-SFT Qwen 27B (Mother) Merge + Korean SFT


K-AI Leaderboard: 2nd Place

The K-AI Leaderboard is operated by the Korean Ministry of Science and ICT (MSIT) / NIA and evaluates Korean AI language models across multiple benchmarks.

Rogue-27B-KR achieved 2nd place on the K-AI Leaderboard with an average score of 0.549.

K-AI Leaderboard Ranking

Metric          Score
Average         0.549
KMMLU-Pro       0.658
CLIcK           0.794
HLE (Ko)        0.070
MuSR (Ko)       0.584
Com2-main (Ko)  0.646

K-AI Leaderboard Charts


Model Overview

Rogue-27B-KR is a Korean-specialized high-performance reasoning model:

  • Father: FINAL-Bench/Darwin-27B-Opus, an evolutionary merge model based on Qwen3.5-27B (inheriting Claude 4.6 Opus reasoning patterns)
  • Mother: a Qwen 27B variant with Korean-specialized SFT applied
  • Rogue: a merge of both parents, followed by additional Korean SFT

Rogue-27B-KR combines Darwin's strong reasoning with the mother model's Korean expressiveness, substantially improving Korean intelligence across culture, language, law, history, and more.

Key Features

  • K-AI Leaderboard 2nd place: top-tier Korean AI performance verified by government evaluation
  • Strong reasoning: chain-of-thought inherited from Darwin-27B-Opus
  • Enhanced Korean cultural intelligence: +6.07%p improvement on the CLIcK benchmark
  • 262K context: ultra-long Korean document processing
  • Thinking mode: step-by-step reasoning via <think> tags
  • BF16 format: memory-efficient (~48 GB), optimized for modern GPUs
  • Apache 2.0: free for commercial use

Benchmark Results

CLIcK (Cultural and Linguistic Intelligence in Korean)

200 questions, 0-shot evaluation:

Model               CLIcK (Overall)  Culture  Language
Qwen3.5-27B (base)  69.52%           71.84%   64.66%
Darwin-27B-Opus     70.19%           72.91%   64.47%
Rogue-27B-KR        75.59%           77.85%   70.86%

+6.07%p over Qwen3.5-27B, +5.40%p over Darwin-27B-Opus
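The quoted percentage-point gains follow directly from the overall scores in the table above:

```python
# Percentage-point gains of Rogue-27B-KR on CLIcK (overall),
# computed from the scores reported in the table above.
rogue = 75.59
qwen_base = 69.52
darwin = 70.19

gain_over_base = round(rogue - qwen_base, 2)
gain_over_darwin = round(rogue - darwin, 2)

print(f"+{gain_over_base}%p over Qwen3.5-27B")        # +6.07%p
print(f"+{gain_over_darwin}%p over Darwin-27B-Opus")  # +5.4%p
```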

Category Details

Category        Qwen3.5-27B  Darwin-27B-Opus  Rogue-27B-KR
Economy         93.22%       93.22%           94.92%
Geography       70.23%       70.23%           75.57%
History         47.00%       47.00%           53.50%
K-pop           92.68%       97.56%           92.68%
Law             59.50%       60.00%           69.50%
Politics        80.95%       82.14%           85.71%
Society         87.00%       89.00%           90.00%
Tradition       81.50%       82.50%           88.50%
Function Words  68.18%       67.42%           75.00%
Grammar         44.50%       44.50%           53.00%
Text            82.50%       82.50%           86.00%

Model Specifications

Property           Value
Architecture       Qwen3.5 (GatedDeltaNet hybrid attention, 64 layers)
Parameters         ~26B (text-only, no vision encoder)
Hidden Size        5120
Intermediate Size  16384
Layers             64
Attention Heads    24 (GQA, 4 KV heads)
Context Length     262,144 tokens (1M with YaRN)
Precision          BF16 (~48 GB)
Vocab Size         248,320
Thinking           Supported (<think> tags)
Languages          Korean, English, Japanese, Chinese + 201 languages
License            Apache 2.0

VRAM Requirements

Setup            VRAM    Notes
BF16 (native)    ~48 GB  Single H100/B200 or 2x A100
4-bit quantized  ~14 GB  Single RTX 4090
8-bit quantized  ~26 GB  Single A6000
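The figures above are dominated by the weights themselves. A rough back-of-the-envelope check (weights only; KV cache and runtime overhead add several GB on top, which is why the quantized rows in the table run a few GB higher):

```python
# Approximate weight memory for ~26B parameters at different precisions.
# Weights only; KV cache, activations, and framework overhead are excluded.
params = 26e9

def weight_gib(bits_per_param: float) -> float:
    return params * bits_per_param / 8 / 2**30

print(f"BF16 : {weight_gib(16):.1f} GiB")  # ~48 GiB
print(f"INT8 : {weight_gib(8):.1f} GiB")   # ~24 GiB
print(f"INT4 : {weight_gib(4):.1f} GiB")   # ~12 GiB
```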

Usage

Requirements: transformers >= 4.57.0 (Qwen3.5 support). The latest release is recommended: pip install -U transformers

Transformers

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("ginigen-ai/Rogue-27B-KR")
model = AutoModelForCausalLM.from_pretrained(
    "ginigen-ai/Rogue-27B-KR",
    torch_dtype=torch.bfloat16,  # native BF16 weights (~48 GB)
    device_map="auto",           # shard across available GPUs
)

messages = [{"role": "user", "content": "Explain traditional Korean wedding ceremonies."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=4096, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
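With thinking mode, the model emits its chain of thought inside <think>…</think> before the final answer. A minimal way to separate the two, assuming the output follows the usual Qwen-style <think>…</think> layout and the tags survive decoding (this helper is illustrative, not part of the model's API):

```python
# Split a generated string into (thinking, answer) parts.
# Assumes at most one <think>...</think> block, as emitted by Qwen-style models.
def split_thinking(text: str) -> tuple[str, str]:
    open_tag, close_tag = "<think>", "</think>"
    if close_tag not in text:
        return "", text.strip()  # no thinking block present
    thinking, _, answer = text.partition(close_tag)
    thinking = thinking.replace(open_tag, "", 1).strip()
    return thinking, answer.strip()

out = "<think>The user asks about hanbok.</think>Hanbok is traditional Korean attire."
reasoning, answer = split_thinking(out)
print(answer)  # Hanbok is traditional Korean attire.
```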

vLLM

vllm serve ginigen-ai/Rogue-27B-KR \
    --enforce-eager \
    --max-model-len 32768 \
    --dtype bfloat16

SGLang

python -m sglang.launch_server \
    --model-path ginigen-ai/Rogue-27B-KR \
    --tp 1 \
    --mem-fraction-static 0.90 \
    --context-length 32768 \
    --dtype bfloat16 \
    --reasoning-parser qwen3
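Both vllm serve and sglang.launch_server expose an OpenAI-compatible HTTP API (by default on ports 8000 and 30000 respectively), so any OpenAI-style client works. A stdlib-only sketch, assuming a local vLLM server on the default port (the prompt text is just an example):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request for the local server.
payload = {
    "model": "ginigen-ai/Rogue-27B-KR",
    "messages": [{"role": "user", "content": "Summarize the Joseon dynasty in one sentence."}],
    "max_tokens": 512,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     body = json.load(resp)
#     print(body["choices"][0]["message"]["content"])
```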

Training Info

Item            Details
Father model    FINAL-Bench/Darwin-27B-Opus (reasoning)
Mother model    Korean-SFT Qwen 27B variant (Korean expressiveness)
Merge           Weight merge (intermediate_size 16384 adopted)
Additional SFT  Korean-specialized supervised fine-tuning
Developer       GinigenAI
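The "weight merge" row refers to element-wise interpolation of the parents' parameters. The exact recipe and ratios are not published; the sketch below is a generic linear merge over toy parameter dicts, with the function name and the alpha=0.5 ratio purely illustrative assumptions:

```python
# Illustrative linear weight merge: child = alpha * father + (1 - alpha) * mother.
# Real merges operate on model state_dicts (tensors); plain lists stand in here.
def linear_merge(father, mother, alpha=0.5):
    merged = {}
    for name in father:
        merged[name] = [alpha * f + (1 - alpha) * m
                        for f, m in zip(father[name], mother[name])]
    return merged

father = {"layer.weight": [1.0, 2.0]}  # toy stand-ins for parent checkpoints
mother = {"layer.weight": [3.0, 4.0]}
print(linear_merge(father, mother))    # {'layer.weight': [2.0, 3.0]}
```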

Lineage

Qwen/Qwen3.5-27B (Apache 2.0)
+-- FINAL-Bench/Darwin-27B-Opus (evolutionary merge)  <- Father
|
+-- Korean-SFT Qwen 27B (Korean specialization)       <- Mother
        |
    [Merge: Father + Mother]
        |
    [Additional Korean SFT]
        |
    ginigen-ai/Rogue-27B-KR (this model, BF16)

Citation

@misc{ginigen_rogue_27b_kr_2026,
  title        = {Rogue-27B-KR: Korean-Specialized High-Performance Reasoning Model},
  author       = {GinigenAI},
  year         = {2026},
  publisher    = {Hugging Face},
  howpublished = {\url{https://huggingface.co/ginigen-ai/Rogue-27B-KR}}
}

Acknowledgements

  • VIDRAFT: Darwin-27B-Opus base model
  • Qwen Team: Qwen3.5 architecture
  • Korean Ministry of Science and ICT / NIA: K-AI Leaderboard

Built by GinigenAI for the Korean AI community
