---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
base_model:
- MegaScience/Qwen3-4B-MegaScience
- MegaScience/Qwen3-1.7B-MegaScience
- MegaScience/Qwen2.5-3B-MegaScience
- MegaScience/Qwen2.5-1.5B-MegaScience
datasets:
- MegaScience/MegaScience
---

# **MegaScience-Qwen-GGUF**

> MegaScience-Qwen models are a series of large language models based on the Qwen3 and Qwen2.5 architectures, fine-tuned on the MegaScience dataset to advance scientific reasoning in AI. The dataset comprises over 1.25 million high-quality, university-level scientific question-answer pairs sourced from open textbooks and diverse scientific benchmarks, spanning seven scientific disciplines. The MegaScience-Qwen lineup ranges from Qwen2.5-1.5B up to Qwen3-30B, with models such as Qwen3-4B-MegaScience, Qwen3-8B-MegaScience, and Qwen3-14B-MegaScience each showing pronounced gains over their official instruction-tuned counterparts, especially as model scale increases. These models achieve state-of-the-art or leading performance on scientific reasoning, general knowledge, and mathematical benchmarks, producing responses that are not only more accurate but also more concise. The MegaScience project also provides a rigorous evaluation system, an open-source data curation pipeline, and all model checkpoints, supporting further research and application in scientific AI reasoning and education.

## MegaScience Qwen Models (GGUF Format)

| Model Name | GGUF Repository Link |
|--------------------------------|----------------------------------------------------------------------------------------|
| Qwen3-8B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen3-8B-MegaScience-GGUF) |
| Qwen3-4B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen3-4B-MegaScience) |
| Qwen3-1.7B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen3-1.7B-MegaScience) |
| Qwen2.5-3B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen2.5-3B-MegaScience) |
| Qwen2.5-1.5B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen2.5-1.5B-MegaScience) |
| Qwen2.5-7B-MegaScience-GGUF | [Hugging Face ↗](https://huggingface.co/prithivMLmods/MegaScience-Qwen-GGUF/tree/main/Qwen2.5-7B-MegaScience-GGUF) |

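To fetch a single quantized file from one of these repositories, the `hf_hub_download` helper from `huggingface_hub` can be used. The sketch below is a minimal example, not the only way to do it: the repo id and subfolder names are taken from the table above, and the filename pattern (`<model>.<quant>.gguf`) is assumed from the file listings below; adjust if the repo layout changes.

```python
# Sketch: build the expected GGUF filename for a model/quant pair and
# download it with huggingface_hub. The subfolder layout is assumed from
# the repository links in the table above.

REPO_ID = "prithivMLmods/MegaScience-Qwen-GGUF"


def gguf_filename(model: str, quant: str) -> str:
    """Return the file name used in this repo, e.g. 'Qwen3-4B-MegaScience.Q4_K_M.gguf'."""
    return f"{model}.{quant}.gguf"


if __name__ == "__main__":
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    path = hf_hub_download(
        repo_id=REPO_ID,
        filename=gguf_filename("Qwen3-4B-MegaScience", "Q4_K_M"),
        subfolder="Qwen3-4B-MegaScience",
    )
    print(path)  # local cache path of the downloaded file
```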
# **Model Files**

## Qwen3-8B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen3-8B-MegaScience.BF16.gguf | 16.4 GB | BF16 |
| Qwen3-8B-MegaScience.F16.gguf | 16.4 GB | F16 |
| Qwen3-8B-MegaScience.Q8_0.gguf | 8.71 GB | Q8_0 |

## Qwen3-4B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen3-4B-MegaScience.BF16.gguf | 8.05 GB | BF16 |
| Qwen3-4B-MegaScience.F16.gguf | 8.05 GB | F16 |
| Qwen3-4B-MegaScience.F32.gguf | 16.1 GB | F32 |
| Qwen3-4B-MegaScience.Q2_K.gguf | 1.67 GB | Q2_K |
| Qwen3-4B-MegaScience.Q3_K_L.gguf | 2.24 GB | Q3_K_L |
| Qwen3-4B-MegaScience.Q3_K_M.gguf | 2.08 GB | Q3_K_M |
| Qwen3-4B-MegaScience.Q3_K_S.gguf | 1.89 GB | Q3_K_S |
| Qwen3-4B-MegaScience.Q4_K_M.gguf | 2.5 GB | Q4_K_M |
| Qwen3-4B-MegaScience.Q4_K_S.gguf | 2.38 GB | Q4_K_S |
| Qwen3-4B-MegaScience.Q5_K_M.gguf | 2.89 GB | Q5_K_M |
| Qwen3-4B-MegaScience.Q5_K_S.gguf | 2.82 GB | Q5_K_S |
| Qwen3-4B-MegaScience.Q6_K.gguf | 3.31 GB | Q6_K |
| Qwen3-4B-MegaScience.Q8_0.gguf | 4.28 GB | Q8_0 |

## Qwen3-1.7B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen3-1.7B-MegaScience.BF16.gguf | 3.45 GB | BF16 |
| Qwen3-1.7B-MegaScience.F16.gguf | 3.45 GB | F16 |
| Qwen3-1.7B-MegaScience.F32.gguf | 6.89 GB | F32 |
| Qwen3-1.7B-MegaScience.Q2_K.gguf | 778 MB | Q2_K |
| Qwen3-1.7B-MegaScience.Q3_K_L.gguf | 1 GB | Q3_K_L |
| Qwen3-1.7B-MegaScience.Q3_K_M.gguf | 940 MB | Q3_K_M |
| Qwen3-1.7B-MegaScience.Q3_K_S.gguf | 867 MB | Q3_K_S |
| Qwen3-1.7B-MegaScience.Q4_K_M.gguf | 1.11 GB | Q4_K_M |
| Qwen3-1.7B-MegaScience.Q4_K_S.gguf | 1.06 GB | Q4_K_S |
| Qwen3-1.7B-MegaScience.Q5_K_M.gguf | 1.26 GB | Q5_K_M |
| Qwen3-1.7B-MegaScience.Q5_K_S.gguf | 1.23 GB | Q5_K_S |
| Qwen3-1.7B-MegaScience.Q6_K.gguf | 1.42 GB | Q6_K |
| Qwen3-1.7B-MegaScience.Q8_0.gguf | 1.83 GB | Q8_0 |

## Qwen2.5-3B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen2.5-3B-MegaScience.BF16.gguf | 6.18 GB | BF16 |
| Qwen2.5-3B-MegaScience.F16.gguf | 6.18 GB | F16 |
| Qwen2.5-3B-MegaScience.F32.gguf | 12.3 GB | F32 |
| Qwen2.5-3B-MegaScience.Q2_K.gguf | 1.27 GB | Q2_K |
| Qwen2.5-3B-MegaScience.Q3_K_L.gguf | 1.71 GB | Q3_K_L |
| Qwen2.5-3B-MegaScience.Q3_K_M.gguf | 1.59 GB | Q3_K_M |
| Qwen2.5-3B-MegaScience.Q3_K_S.gguf | 1.45 GB | Q3_K_S |
| Qwen2.5-3B-MegaScience.Q4_K_M.gguf | 1.93 GB | Q4_K_M |
| Qwen2.5-3B-MegaScience.Q4_K_S.gguf | 1.83 GB | Q4_K_S |
| Qwen2.5-3B-MegaScience.Q5_K_M.gguf | 2.22 GB | Q5_K_M |
| Qwen2.5-3B-MegaScience.Q5_K_S.gguf | 2.17 GB | Q5_K_S |
| Qwen2.5-3B-MegaScience.Q6_K.gguf | 2.54 GB | Q6_K |
| Qwen2.5-3B-MegaScience.Q8_0.gguf | 3.29 GB | Q8_0 |

## Qwen2.5-1.5B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen2.5-1.5B-MegaScience.BF16.gguf | 3.09 GB | BF16 |
| Qwen2.5-1.5B-MegaScience.F16.gguf | 3.09 GB | F16 |
| Qwen2.5-1.5B-MegaScience.F32.gguf | 6.18 GB | F32 |
| Qwen2.5-1.5B-MegaScience.Q2_K.gguf | 676 MB | Q2_K |
| Qwen2.5-1.5B-MegaScience.Q3_K_L.gguf | 880 MB | Q3_K_L |
| Qwen2.5-1.5B-MegaScience.Q3_K_M.gguf | 824 MB | Q3_K_M |
| Qwen2.5-1.5B-MegaScience.Q3_K_S.gguf | 761 MB | Q3_K_S |
| Qwen2.5-1.5B-MegaScience.Q4_K_M.gguf | 986 MB | Q4_K_M |
| Qwen2.5-1.5B-MegaScience.Q4_K_S.gguf | 940 MB | Q4_K_S |
| Qwen2.5-1.5B-MegaScience.Q5_K_M.gguf | 1.13 GB | Q5_K_M |
| Qwen2.5-1.5B-MegaScience.Q5_K_S.gguf | 1.1 GB | Q5_K_S |
| Qwen2.5-1.5B-MegaScience.Q6_K.gguf | 1.27 GB | Q6_K |
| Qwen2.5-1.5B-MegaScience.Q8_0.gguf | 1.65 GB | Q8_0 |

## Qwen2.5-7B-MegaScience

| File Name | Size | Quant Type |
|-----------|------|------------|
| Qwen2.5-7B-MegaScience.BF16.gguf | 15.2 GB | BF16 |
| Qwen2.5-7B-MegaScience.F16.gguf | 15.2 GB | F16 |
| Qwen2.5-7B-MegaScience.F32.gguf | 30.5 GB | F32 |

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)