Tags: GGUF, conversational

GalTransl-v4-4B-2512 is based on Sakura-4B-Qwen3-Base-v2.
It is well suited to real-time translation scenarios such as LunaTranslator; small and easy to use.

With 6 GB of VRAM, use GalTransl-v4-4B-2512.gguf (Q6_K quantization).
With 4 GB of VRAM, use GalTransl-v4-4B-2512-Q5_K_S.gguf.

Launching with Sakura_Launcher_GUI is recommended, with a context length of at least 2048.

The prompt format is the same as GalTransl-7B-v3.7's.
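As an alternative to Sakura_Launcher_GUI, the GGUF files above can also be served with llama.cpp's `llama-server`; a minimal sketch, assuming the Q6_K file sits in the current directory and that port 8080 is free:

```shell
# Serve the Q6_K quant with the 2048-token minimum context recommended above.
# -ngl sets how many layers to offload to the GPU; lower it if VRAM is tight.
llama-server \
  -m GalTransl-v4-4B-2512.gguf \
  -c 2048 \
  -ngl 99 \
  --port 8080
```

LunaTranslator can then be pointed at the resulting OpenAI-compatible endpoint on `http://127.0.0.1:8080`.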

Model size: 4B params · Architecture: qwen3
