# TenOS-Ko-27B-v2

A Korean-specialized SFT model based on FINAL-Bench/Darwin-27B-Opus.
## Method

- Base Model: Darwin-27B-Opus (Qwen3.5-27B family)
- Korean SFT: supervised fine-tuning centered on Korea-specific knowledge, including Korean culture, history, law, economics, and society
- Thinking Mode: supports chain-of-thought reasoning via `<think>` tags
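Since responses may interleave a reasoning trace with the final answer, downstream code often wants to separate the two. Below is a minimal sketch (not part of the official model card) that splits a response on the `<think>` tag convention described above; the sample string and its output are hypothetical.

```python
# Split a model response into its <think> reasoning trace and the final answer.
# Minimal sketch; assumes the <think>...</think> tag convention described above.

def split_thinking(response: str) -> tuple[str, str]:
    """Return (reasoning, answer) from a response that may contain <think> tags."""
    open_tag, close_tag = "<think>", "</think>"
    start = response.find(open_tag)
    end = response.find(close_tag)
    if start == -1 or end == -1:
        return "", response.strip()  # no reasoning block present
    reasoning = response[start + len(open_tag):end].strip()
    answer = response[end + len(close_tag):].strip()
    return reasoning, answer

# Hypothetical response for demonstration
raw = "<think>The user asks about holidays.</think>Seollal and Chuseok are major holidays."
reasoning, answer = split_thinking(raw)
print(answer)  # -> Seollal and Chuseok are major holidays.
```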
## Model Specifications
| Property | Value |
|---|---|
| Architecture | Qwen3.5 Hybrid (64 layers) |
| Parameters | ~27B |
| Context Length | 262,144 tokens |
| Precision | BF16 |
| License | Apache 2.0 |
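For hardware planning, a back-of-the-envelope estimate of the weight memory follows directly from the table: roughly 27B parameters at BF16 (2 bytes each). The parameter count is approximate, so treat the result as a rough lower bound; activations and the KV cache (especially at the 262K context length) require additional memory.

```python
# Rough weight-memory estimate from the spec table: ~27B params x 2 bytes (BF16).
params = 27e9
bytes_per_param = 2  # BF16 is 16 bits per parameter
weight_gb = params * bytes_per_param / 1024**3
print(f"~{weight_gb:.0f} GiB for weights alone")  # roughly 50 GiB
```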
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in BF16 and shard it across available devices
model = AutoModelForCausalLM.from_pretrained(
    "honey90/TenOS-Ko-27B-v2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("honey90/TenOS-Ko-27B-v2")

# Build a chat prompt ("Please explain Korea's traditional holidays.")
messages = [{"role": "user", "content": "한국의 전통 명절에 대해 설명해주세요."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Greedy decoding; print only the newly generated tokens
outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
## Acknowledgements
- FINAL-Bench — Darwin-27B-Opus base model
- Qwen Team — Qwen3.5 architecture