Qwen3-4B-MegaScience-GGUF

Qwen3-4B-MegaScience is a large language model fine-tuned for advanced scientific reasoning. Built on top of Qwen3-4B-Base, it is trained on the MegaScience dataset, a meticulously curated collection of 1.25 million high-quality scientific questions and answers sourced from university-level textbooks and various open datasets, covering seven scientific disciplines. Evaluated across 15 benchmarks, the model demonstrates stronger reasoning ability and better training efficiency than existing open-source science models.

The model integrates seamlessly with the Hugging Face transformers library and runs efficiently in bfloat16 precision. The MegaScience dataset, evaluation pipeline, and reproducibility code are open-sourced, facilitating research and applications in scientific AI reasoning; full resources, the paper, and code are available via the MegaScience official website and GitHub repository.
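As a minimal sketch of the transformers integration mentioned above (the repo id `MegaScience/Qwen3-4B-MegaScience` for the original, non-GGUF checkpoint is an assumption; substitute the actual model id):

```python
# Sketch: load Qwen3-4B-MegaScience with Hugging Face transformers in bfloat16.
# The model id below is a placeholder assumption, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MegaScience/Qwen3-4B-MegaScience"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card states the model runs efficiently in bf16
    device_map="auto",
)

prompt = "Explain Rayleigh scattering and why the daytime sky appears blue."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

This follows the standard transformers text-generation pattern; chat-template usage (`tokenizer.apply_chat_template`) may be preferable if the checkpoint ships a chat template.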

Model Files

| File Name | Size | Quant Type |
|---|---|---|
| Qwen3-4B-MegaScience.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-MegaScience.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-MegaScience.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-MegaScience.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-MegaScience.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-MegaScience.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-MegaScience.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-MegaScience.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-MegaScience.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-MegaScience.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-MegaScience.Q8_0.gguf | 4.28 GB | Q8_0 |
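A GGUF file from the table above can be downloaded and run locally with llama.cpp; the sketch below uses the Q4_K_M quant as an example and assumes `huggingface-cli` and the llama.cpp `llama-cli` binary are installed:

```shell
# Download one quant from this repo (file name taken from the table above)
huggingface-cli download prithivMLmods/Qwen3-4B-MegaScience-GGUF \
  Qwen3-4B-MegaScience.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp: -m model path, -p prompt, -n max tokens to generate
llama-cli -m Qwen3-4B-MegaScience.Q4_K_M.gguf \
  -p "State the second law of thermodynamics and give one everyday example." \
  -n 256
```

Smaller quants (Q3_K_S, Q4_K_S) trade some quality for lower memory use; Q8_0 or the BF16/F16 files are closer to full precision at the cost of size.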

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

(image: quant-type quality comparison graph by ikawrakow)

Model size: 4B params
Architecture: qwen3
Format: GGUF
Available bit widths: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit, 32-bit


Model tree for prithivMLmods/Qwen3-4B-MegaScience-GGUF

Base model: Qwen/Qwen3-4B-Base (quantized into this model)