How to use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Warecube/Warecube-KO-27B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
print(pipe(text=messages))

# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Warecube/Warecube-KO-27B")
model = AutoModelForImageTextToText.from_pretrained("Warecube/Warecube-KO-27B")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    },
]
inputs = processor.apply_chat_template(
	messages,
	add_generation_prompt=True,
	tokenize=True,
	return_dict=True,
	return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))

Warecube-KO-27B

ํ•œ๊ตญ์–ด reasoning ๋ชจ๋ธ โ€” Darwin ์ง„ํ™”์  ๋จธ์ง€ ๊ธฐ๋ฐ˜.


๐Ÿงฌ Darwin ์ง„ํ™” ์ปจ์…‰

๋ณธ ๋ชจ๋ธ์€ Darwin V7 ์ง„ํ™”์  ๋ชจ๋ธ ๋จธ์ง€(Evolutionary Model Merge) ํŒจ๋Ÿฌ๋‹ค์ž„์œผ๋กœ ์ œ์ž‘๋˜์—ˆ์Šต๋‹ˆ๋‹ค.

   ์ž์—ฐ ์ง„ํ™”                    Darwin ๋จธ์ง€
   โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€                    โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€
   ์œ ์ „์ž ๊ต์ฐจ (crossover)  โ†’   ๊ฐ€์ค‘์น˜ ๋ชจ๋“ˆ๋ณ„ ๋น„์œจ ๊ฒฐํ•ฉ
   ์ž์—ฐ ์„ ํƒ (selection)    โ†’   ์ ํ•ฉ๋„ ํ‰๊ฐ€ ํ›„ ์ตœ์  ํ›„์† ์„ ๋ณ„
   ์„ธ๋Œ€ ์ง„ํ™” (generations)  โ†’   ๋‹ค์„ธ๋Œ€ ๋จธ์ง€ยท์ •์ œ ๋ฐ˜๋ณต
   ์ ์ž ์ƒ์กด                โ†’   K-AI ๋„๋ฉ”์ธ ์šฐ์ˆ˜ ์ž์†๋งŒ ๋ณด์กด

๋ถ€๋ชจ์˜ ๋Šฅ๋ ฅ์ด ์ž์‹ ๋ชจ๋ธ๋กœ ์œ ์ „์ ์œผ๋กœ ๊ณ„์Šน๋˜๋ฉฐ, ์„ธ๋Œ€๋ฅผ ๊ฑฐ์ณ ํ•œ๊ตญ์–ดยท์ถ”๋ก ยท๋ฌธํ™” ์ง€๋Šฅ์ด ์ง„ํ™”ํ•ฉ๋‹ˆ๋‹ค.


๐Ÿ›๏ธ ๊ฐ€๋ฌธ ๊ณ„๋ณด

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚  ์ฆ์กฐ๋ถ€ (Great-Grandfather)               โ”‚
โ”‚  Qwen-3.5-27B                             โ”‚
โ”‚  - ๋ฉ€ํ‹ฐ๋ชจ๋‹ฌ 28B ๋ฒ ์ด์Šค                     โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                  โ”‚
                  โ–ผ Darwin V7 ์ง„ํ™” ๋จธ์ง€
โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚  ์กฐ๋ถ€ (Grandfather)                       โ”‚
โ”‚  FINAL-Bench/Darwin-27B-Opus              โ”‚
โ”‚  - Darwin V7 ์ง„ํ™”์˜ ์ •์                    โ”‚
โ”‚  - GPQA 88.4% reasoning                   โ”‚
โ”‚  - <think> ํŠธ๋ ˆ์ด์Šค ํŒจํ„ด                   โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
                  โ”‚
                  โ–ผ ํ•œ๊ตญ์–ด ํŠนํ™” ์ง„ํ™”
โ•”โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•—
โ•‘  ์•„๋น  (Father)                            โ•‘
โ•‘  Darwin family Korean ์ง๊ณ„                 โ•‘
โ•‘                                            โ•‘
โ•‘  - Darwin-27B-Opus์˜ ํ•œ๊ตญ์–ด ํŠนํ™” ํ›„์†      โ•‘
โ•‘  - reasoning DNA ๋ณด์กด                       โ•‘
โ•‘  - <think> ํŒจํ„ด ์œ ์ง€                        โ•‘
โ•šโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
                  โ”‚
                  ร—ร— ๋‹ค์œˆ ๊ต๋ฐฐ ร—ร—
                  โ”‚
โ•”โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•—
โ•‘  ์—„๋งˆ (Mother)                            โ•‘
โ•‘  NewenAI/QuettaLLMs-27B-Koreasoner-V3     โ•‘
โ•‘                                            โ•‘
โ•‘  - ํ•œ๊ตญ์–ด SOTA ๋ชจ๋ธ                          โ•‘
โ•‘  - K-AI Leaderboard 1์œ„ (avg 0.560)        โ•‘
โ•‘  - ํ•œ๊ตญ์–ด ๋„๋ฉ”์ธ SFT ์ •์ œ                   โ•‘
โ•‘  - Apache 2.0                              โ•‘
โ•šโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•
                  โ”‚
                  โ–ผ Darwin ์ง„ํ™”์  ๋จธ์ง€ + ํ•œ๊ตญ์–ด ์ •์ œ
โ•”โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•—
โ•‘  ์ž์‹ (Child) โ€” ๋ณธ ๋ชจ๋ธ                    โ•‘
โ•‘  Warecube/Warecube-KO-27B                 โ•‘
โ•‘                                            โ•‘
โ•‘  โœฆ ์•„๋น ์˜ reasoning DNA ๊ณ„์Šน                โ•‘
โ•‘  โœฆ ์—„๋งˆ์˜ ํ•œ๊ตญ์–ด ํ‘œํ˜„ยท์ง€์‹ ๊ณ„์Šน              โ•‘
โ•‘  โœฆ <think> ์ถ”๋ก  ํŠธ๋ ˆ์ด์Šค ๋ณด์กด              โ•‘
โ•‘  โœฆ K-AI ๋„๋ฉ”์ธ ์ ํ•ฉ๋„ ์ง„ํ™”                  โ•‘
โ•šโ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•โ•

๐ŸŽ“ ์ง„ํ™” ๋‹จ๊ณ„

Stage ๊ฐœ๋žต
1. ๊ต๋ฐฐ (Crossover) ์นœ๊ฐ€ยท์™ธ๊ฐ€ ๊ฐ€์ค‘์น˜๋ฅผ ๋ชจ๋“ˆ๋ณ„ ๋น„์œจ๋กœ ์ง„ํ™” ๋จธ์ง€
2. ์„ ํƒ (Selection) ํ•œ๊ตญ์–ด ๋„๋ฉ”์ธ ์ ํ•ฉ๋„ ํ‰๊ฐ€๋กœ ์šฐ์ˆ˜ ํ›„์† ์„ ๋ณ„
3. ์ •์ œ (Refinement) ํ•œ๊ตญ์–ด instruction ๋ฐ์ดํ„ฐ๋กœ ์ถ”๊ฐ€ ์ง„ํ™”
4. ์ ์‘ (Adaptation) K-AI Leaderboard Docker ํ˜ธํ™˜ ํ˜•์‹์œผ๋กœ ์ •๋น„

๐ŸŽฏ ์‚ฌ์šฉ๋ฒ•

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "Warecube/Warecube-KO-27B"
tokenizer = AutoTokenizer.from_pretrained(
    model_id, trust_remote_code=True
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "한국의 추석에 대해 설명해주세요."  # "Please explain Korea's Chuseok holiday."
messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
)
out = model.generate(
    inputs.to(model.device),
    max_new_tokens=512,
    do_sample=False,
)
# skip_special_tokens=False keeps the <think> reasoning trace visible
print(tokenizer.decode(out[0], skip_special_tokens=False))
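
If you only want the final answer, you can strip the reasoning trace after decoding. This is a minimal sketch assuming the trace is delimited by literal <think>...</think> tags, as described in the specifications below:

import re

text = tokenizer.decode(out[0], skip_special_tokens=True)
# Drop the <think> ... </think> reasoning trace, keeping only the answer.
answer = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
print(answer)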

๐Ÿ› ๏ธ ์‚ฌ์–‘

  • ํŒŒ๋ผ๋ฏธํ„ฐ: 27B (text)
  • ์–‘์žํ™”: bf16
  • ์ปจํ…์ŠคํŠธ: 8K (ํ™•์žฅ ๊ฐ€๋Šฅ)
  • ์–ธ์–ด: ํ•œ๊ตญ์–ด + ์˜์–ด
  • ์ถ”๋ก : <think> reasoning trace
  • License: Apache 2.0

๐Ÿ“Š ํ‰๊ฐ€

ํ•œ๊ตญ์–ด ๊ณต๊ฐœ 10 ๋ฐ์ดํ„ฐ์…‹, 100๋ฌธ์ œ ร— 1 seed.

Dataset               Score
CLIcK                   87%
KMMLU History           50%
KMMLU Law               29%
KMMLU Health            78%
HAERAE General          58%
HAERAE History          86%
HAERAE Linguistics      89%
KoBEST Hellaswag        89%
KoBEST COPA            100%
KoBEST BoolQ            97%
Macro Avg             76.3%
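
The macro average is the unweighted mean of the ten per-dataset scores, which is easy to sanity-check:

scores = [87, 50, 29, 78, 58, 86, 89, 89, 100, 97]
print(f"Macro Avg: {sum(scores) / len(scores):.1f}%")  # 76.3%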

๐Ÿค ์ถœ์ฒ˜

Downloads last month
124
Safetensors
Model size
26B params
Tensor type
F32
ยท
BF16
ยท
Inference Providers NEW
This model isn't deployed by any Inference Provider. ๐Ÿ™‹ Ask for provider support

Model tree for Warecube/Warecube-KO-27B