hex-1-f32-GGUF

Hex1 is a 4B-parameter, open-source language model optimized for Indian languages. It is designed to address India's linguistic diversity by enabling systems that natively understand and respond in Hindi, Kannada, Telugu, Tamil, and Malayalam. Built on Qwen/Qwen3-4B-Base, Hex1 outperforms comparable models such as Gemma-2B, LLaMA-3.2-3B, and Sarvam-1 on key benchmarks like MMLU, making it one of the more capable options for Indic language tasks. The model supports commercial licensing, so businesses and developers can build applications without restrictive usage terms, and the authors plan future expansion to more Indian languages. With robust multilingual performance and open-source availability, Hex1 aims to serve the roughly 90% of India's population underserved by English-centric AI, unlocking the potential of generative AI across India's linguistic landscape.

Model Files

| File Name | Quant Type | File Size |
| --- | --- | --- |
| hex-1.BF16.gguf | BF16 | 8.05 GB |
| hex-1.F16.gguf | F16 | 8.05 GB |
| hex-1.F32.gguf | F32 | 16.1 GB |
| hex-1.Q2_K.gguf | Q2_K | 1.67 GB |
| hex-1.Q3_K_L.gguf | Q3_K_L | 2.24 GB |
| hex-1.Q3_K_M.gguf | Q3_K_M | 2.08 GB |
| hex-1.Q3_K_S.gguf | Q3_K_S | 1.89 GB |
| hex-1.Q4_K_M.gguf | Q4_K_M | 2.5 GB |
| hex-1.Q4_K_S.gguf | Q4_K_S | 2.38 GB |
| hex-1.Q5_K_M.gguf | Q5_K_M | 2.89 GB |
| hex-1.Q5_K_S.gguf | Q5_K_S | 2.82 GB |
| hex-1.Q6_K.gguf | Q6_K | 3.31 GB |
| hex-1.Q8_0.gguf | Q8_0 | 4.28 GB |

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Format: GGUF · Model size: 4B params · Architecture: qwen3

Model tree for prithivMLmods/hex-1-f32-GGUF

- Base model: Qwen/Qwen3-4B-Base
- Finetuned: budecosystem/hex-1
- Quantized (3): this model