---
datasets:
- prithivMLmods/Open-Omega-Explora-2.5M
base_model:
- prithivMLmods/Explora-0.6B
tags:
- text-generation-inference
- moe
- code
- science
- biology
- chemistry
- thinking
license: apache-2.0
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# **Explora-0.6B-GGUF**

> Explora-0.6B is a lightweight, efficient general-purpose reasoning model, fine-tuned from Qwen3-0.6B on the first 100,000 entries of the Open-Omega-Explora-2.5M dataset. It is tailored for science- and code-focused reasoning tasks, combining symbolic clarity with fluent instruction following, and is well suited to exploratory workflows in STEM domains. This repository provides the model in GGUF format at several quantization levels.

# Model Files

| File Name | Format | Size | Precision | Description |
|-----------|---------|------|-----------|-------------|
| Explora-0.6B.F32.gguf | GGUF | 2.39 GB | 32-bit Float | Full precision model, highest quality |
| Explora-0.6B.F16.gguf | GGUF | 1.2 GB | 16-bit Float | Half precision, good balance of size and quality |
| Explora-0.6B.BF16.gguf | GGUF | 1.2 GB | 16-bit BFloat | Brain floating point, optimized for inference |
| Explora-0.6B.Q8_0.gguf | GGUF | 639 MB | 8-bit Quantized | High quality quantized model |
| Explora-0.6B.Q6_K.gguf | GGUF | 495 MB | 6-bit Quantized | Very good quality with smaller size |
| Explora-0.6B.Q5_K_M.gguf | GGUF | 444 MB | 5-bit Quantized (Medium) | Good quality, balanced compression |
| Explora-0.6B.Q5_K_S.gguf | GGUF | 437 MB | 5-bit Quantized (Small) | Good quality, higher compression |
| Explora-0.6B.Q4_K_M.gguf | GGUF | 397 MB | 4-bit Quantized (Medium) | Decent quality with good compression |
| Explora-0.6B.Q4_K_S.gguf | GGUF | 383 MB | 4-bit Quantized (Small) | Decent quality, higher compression |
| Explora-0.6B.Q3_K_L.gguf | GGUF | 368 MB | 3-bit Quantized (Large) | Lower quality but very compact |
| Explora-0.6B.Q3_K_M.gguf | GGUF | 347 MB | 3-bit Quantized (Medium) | Lower quality, more compact |
| Explora-0.6B.Q3_K_S.gguf | GGUF | 323 MB | 3-bit Quantized (Small) | Lower quality, most compact |
| Explora-0.6B.Q2_K.gguf | GGUF | 296 MB | 2-bit Quantized | Minimal quality, maximum compression |
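
As a brief sketch of how one of these GGUF files might be run locally: the snippet below assumes the files live in this repository under the id `prithivMLmods/Explora-0.6B-GGUF` (an assumption based on the card title) and uses `huggingface_hub` together with `llama-cpp-python`, which can load GGUF models directly. Any quant from the table can be substituted for the `Q4_K_M` file used here.

```python
# Minimal sketch: download one quant and run a short chat completion with llama-cpp-python.
# Repo id and file name are assumptions based on this card; adjust them if they differ.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="prithivMLmods/Explora-0.6B-GGUF",   # assumed repo id
    filename="Explora-0.6B.Q4_K_M.gguf",         # any file from the table above
)

llm = Llama(model_path=model_path, n_ctx=4096)   # context length chosen as a reasonable default, not from the card

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain why enthalpy is a state function."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

llama-cpp-python will typically pick up the chat template embedded in the GGUF metadata for `create_chat_completion`; plain completion-style prompting via `llm("...")` also works.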

## Configuration Files

| File Name | Size | Description |
|-----------|------|-------------|
| config.json | 29 Bytes | Model configuration parameters |
| .gitattributes | 2.3 kB | Git LFS configuration for large files |
| README.md | 280 Bytes | Project documentation |

## Quants Usage 

The files above are sorted by size, not necessarily by quality; IQ-quants are often preferable to similarly sized non-IQ quants.

Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
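
To run a comparison like the graph above on your own text, one rough approach is to score the same passage with two quants and compare average negative log-likelihood (perplexity). The sketch below is an illustration only, assuming llama-cpp-python's `logits_all`, `echo`, and `logprobs` options behave as described; the file paths are placeholders for quants you have already downloaded.

```python
# Rough perplexity comparison between two quantization levels (illustrative sketch).
# File paths are placeholders; logits_all=True is needed to score prompt tokens.
import math
from llama_cpp import Llama

SAMPLE = "Oxidative phosphorylation couples electron transport to ATP synthesis in mitochondria."

def avg_nll(model_path: str, text: str) -> float:
    llm = Llama(model_path=model_path, n_ctx=512, logits_all=True, verbose=False)
    # echo=True with logprobs set returns log-probabilities for the prompt tokens themselves
    out = llm(text, max_tokens=1, echo=True, logprobs=1)
    lps = [lp for lp in out["choices"][0]["logprobs"]["token_logprobs"] if lp is not None]
    return -sum(lps) / len(lps)

for path in ["Explora-0.6B.Q8_0.gguf", "Explora-0.6B.Q4_K_M.gguf"]:  # placeholder paths
    nll = avg_nll(path, SAMPLE)
    print(f"{path}: avg NLL {nll:.3f}, perplexity {math.exp(nll):.2f}")
```

Lower perplexity on text representative of your workload generally indicates less quality loss from quantization.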