
Model Card for F1-24B-Instruct

F1-24B-Instruct is the 24B instruction-tuned model in the Formosa-1 (F1) series. Built on F1-24B-Base, it was fine-tuned (SFT) on Traditional Chinese instruction-dialogue data, providing 24B-scale Traditional Chinese conversation with locally grounded, Taiwan-context responses.

⚠️ Key specs: This model has 24B parameters, is text-only (single modality), and has been instruction-tuned for direct conversational use.

Model Details

A 24B-scale model offers a good balance between capability and deployment cost, making it well suited to enterprise applications. F1-24B-Instruct is instruction-tuned on top of the Traditional Chinese CPT base F1-24B-Base, with the goal of giving a 24B-scale Traditional Chinese chat model stable understanding of, and responses grounded in, the Taiwanese linguistic context.

Key Features

  1. 24B-scale Traditional Chinese chat: balances capability and cost, suitable for medium-to-large enterprise Traditional Chinese deployments.
  2. Taiwan-context alignment: the training data focuses on Traditional Chinese and common Taiwan-oriented tasks, strengthening the original Mistral-Small-24B's Traditional Chinese fluency and local-context grounding.
  3. F1 family member: complements specialized variants such as F1-24B-Reasoner and F1-24B-Instruct-Cybersecurity.
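In practice an instruct model like this is usually served through Hugging Face `transformers`, where the authoritative chat template ships with the tokenizer (`tokenizer.apply_chat_template`). Purely as an illustration of what that formatting does, the sketch below hand-builds a Mistral-style instruct prompt; the `[INST] ... [/INST]` markers are an assumption inherited from the Mistral-Small-24B base and may not match the model's actual bundled template.

```python
# Illustrative sketch only: a Mistral-style instruct prompt builder.
# The real template is defined in the model's tokenizer config; always
# prefer tokenizer.apply_chat_template(messages, ...) in production.

def build_prompt(messages):
    """Format a list of {role, content} dicts into one prompt string."""
    parts = ["<s>"]
    for msg in messages:
        if msg["role"] == "user":
            # User turns are wrapped in [INST] ... [/INST] markers.
            parts.append(f"[INST] {msg['content']} [/INST]")
        elif msg["role"] == "assistant":
            # Assistant turns are closed with the end-of-sequence token.
            parts.append(f"{msg['content']}</s>")
    return "".join(parts)

messages = [{"role": "user", "content": "請用繁體中文介紹台北 101。"}]
print(build_prompt(messages))
# → <s>[INST] 請用繁體中文介紹台北 101。 [/INST]
```

When loading the actual model, the equivalent call is `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which guarantees the prompt matches what the model saw during SFT.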

Model Description

Model Sources

Citation

@misc{f1_24b_instruct,
  title        = {F1-24B-Instruct: A Traditional Chinese Instruction-Tuned Mistral-24B Model for Taiwan},
  author       = {Huang, Liang Hsun},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/lianghsun/F1-24B-Instruct}}
}

Acknowledgements

  • Special thanks to APMIC for providing compute support.

Model Card Authors

Huang Liang Hsun

Model Card Contact

Huang Liang Hsun
