---
language:
- en
pipeline_tag: text-generation
library_name: transformers
---

# **Explora-0.6B-GGUF**

> Explora-0.6B is a lightweight, efficient general-purpose reasoning model fine-tuned from Qwen3-0.6B on the first 100,000 entries of the Open-Omega-Explora-2.5M dataset. It is tailored for science- and code-focused reasoning tasks, combining symbolic clarity with fluent instruction following, which makes it well suited to exploratory workflows in STEM domains.

## Model Files

| File Name | Format | Size | Precision | Description |
|-----------|--------|------|-----------|-------------|
| Explora-0.6B.F32.gguf | GGUF | 2.39 GB | 32-bit Float | Full-precision model, highest quality |
| Explora-0.6B.F16.gguf | GGUF | 1.2 GB | 16-bit Float | Half precision, good balance of size and quality |
| Explora-0.6B.BF16.gguf | GGUF | 1.2 GB | 16-bit BFloat | Brain floating point, optimized for inference |
| Explora-0.6B.Q8_0.gguf | GGUF | 639 MB | 8-bit Quantized | High-quality quantized model |
| Explora-0.6B.Q6_K.gguf | GGUF | 495 MB | 6-bit Quantized | Very good quality at a smaller size |
| Explora-0.6B.Q5_K_M.gguf | GGUF | 444 MB | 5-bit Quantized (Medium) | Good quality, balanced compression |
| Explora-0.6B.Q5_K_S.gguf | GGUF | 437 MB | 5-bit Quantized (Small) | Good quality, higher compression |
| Explora-0.6B.Q4_K_M.gguf | GGUF | 397 MB | 4-bit Quantized (Medium) | Decent quality with good compression |
| Explora-0.6B.Q4_K_S.gguf | GGUF | 383 MB | 4-bit Quantized (Small) | Decent quality, higher compression |
| Explora-0.6B.Q3_K_L.gguf | GGUF | 368 MB | 3-bit Quantized (Large) | Lower quality but very compact |
| Explora-0.6B.Q3_K_M.gguf | GGUF | 347 MB | 3-bit Quantized (Medium) | Lower quality, more compact |
| Explora-0.6B.Q3_K_S.gguf | GGUF | 323 MB | 3-bit Quantized (Small) | Lower quality, most compact |
| Explora-0.6B.Q2_K.gguf | GGUF | 296 MB | 2-bit Quantized | Minimal quality, maximum compression |
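As a sanity check on the sizes above, the effective bits per weight can be estimated from each file size. A minimal sketch, assuming roughly 0.6e9 parameters (inferred from the model name) and ignoring GGUF metadata overhead:

```python
# Rough bits-per-weight estimate from the file sizes in the table above.
# Assumes ~0.6e9 parameters (an assumption from the model name); the real
# parameter count and GGUF metadata overhead shift these numbers slightly.
PARAMS = 0.6e9

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Convert a file size in GB to an approximate bits-per-weight figure."""
    return size_gb * 1e9 * 8 / params

for name, size_gb in [("F32", 2.39), ("F16", 1.2), ("Q8_0", 0.639), ("Q4_K_M", 0.397)]:
    print(f"{name}: ~{bits_per_weight(size_gb):.1f} bits/weight")
```

The F16 file works out to about 16 bits/weight as expected, while Q4_K_M lands above 4 bits/weight because K-quants store per-block scales alongside the 4-bit weights.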

## Configuration Files

| File Name | Size | Description |
|-----------|------|-------------|
| config.json | 29 Bytes | Model configuration parameters |
| .gitattributes | 2.3 kB | Git LFS configuration for large files |
| README.md | 280 Bytes | Project documentation |

## Quants Usage

(sorted by size, not necessarily by quality; IQ-quants are often preferable to similar-sized non-IQ quants)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

