---
license: apache-2.0
tags:
  - llm
  - gguf
  - mistral
  - qwen3
  - mirror
library_name: llama.cpp
---

# LLM Mirror (A.I.M.I)

Mirror of A.I.M.I's default text-LLM GGUFs, re-hosted for stable URLs. Contents are unmodified from the upstream unsloth and Qwen quantizations.

Used by A.I.M.I's chat engine via llama.cpp. Qwen3-8B is the 16 GB tier default; Mistral Small 3.2 24B is the 24 GB+ tier upgrade.

## Files

| File | Upstream | Size | Tier |
|---|---|---|---|
| `Qwen3-8B-Q4_K_M.gguf` | [Qwen/Qwen3-8B-GGUF](https://huggingface.co/Qwen/Qwen3-8B-GGUF) | ~5.0 GB | 16 GB default |
| `Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf` | [unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF) | ~14.3 GB | 24 GB+ default |

Total: ~19 GB.
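The tiering described above (Qwen3-8B below 24 GB of VRAM, Mistral Small 3.2 24B at 24 GB and up) can be sketched as a tiny selection helper. This is an illustrative sketch only; `pick_model` is a hypothetical name, not part of A.I.M.I's actual engine code.

```python
# Hypothetical sketch of the tier selection described in this card;
# A.I.M.I's real chat engine may implement this differently.
def pick_model(vram_gb: float) -> str:
    """Return the GGUF filename for the given amount of VRAM (in GB)."""
    if vram_gb >= 24:
        # 24 GB+ tier upgrade
        return "Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf"
    # 16 GB tier default
    return "Qwen3-8B-Q4_K_M.gguf"

print(pick_model(16.0))  # Qwen3-8B-Q4_K_M.gguf
print(pick_model(24.0))  # Mistral-Small-3.2-24B-Instruct-2506-Q4_K_M.gguf
```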

## License

Both models are licensed under **Apache 2.0**:
- Mistral Small 3.2 24B Instruct: Apache 2.0 from Mistral AI. Unsloth's GGUF re-quantization inherits Apache 2.0.
- Qwen3-8B: Apache 2.0 from Alibaba Cloud / Qwen team. GGUF published by the Qwen team directly.

Redistributed unchanged.

## Attribution

- **Mistral Small 3.2**: Mistral AI (2025). Base Apache 2.0 release.
- **Qwen3-8B**: Alibaba Cloud / Qwen team (2025). Base Apache 2.0 release.
- **GGUF conversions**: unsloth (Mistral), Qwen team (Qwen3).