
GalTransl-v4-4B is based on Sakura-4B-Qwen3-Base-v2.
It is well suited for real-time translation scenarios such as the Luna translator: compact and easy to use.

With 6 GB of VRAM, use GalTransl-v4-4B-2601.gguf (Q6_K quantization).
With 4 GB of VRAM, use GalTransl-v4-4B-2601-Q5_K_S.gguf.

Launching with Sakura_Launcher_GUI is recommended, with a context length of at least 2048.
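If you prefer the command line, an equivalent setup can be sketched with llama.cpp's `llama-server` (an assumption: this card only mentions Sakura_Launcher_GUI; the port below is arbitrary, and the filename is the Q6_K file named above):

```shell
# Minimal llama.cpp launch sketch (assumes llama.cpp is installed
# and GalTransl-v4-4B-2601.gguf has been downloaded to the current directory).
# -c 2048 sets the context length recommended above.
llama-server \
  -m GalTransl-v4-4B-2601.gguf \
  -c 2048 \
  --port 8080
```

A translator frontend such as LunaTranslator can then be pointed at the resulting local OpenAI-compatible endpoint.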

The prompt format is the same as GalTransl-7B-v3.7's.

2026.01.28: Released GalTransl-v4-4B-2601, which fixes unstable output formatting and improves translation quality.
2025.12: Initial release.
