Commit 09f4300 (verified) by goniz · Parent: e5dc903

Upload README.md with huggingface_hub

Files changed (1): README.md (+47, -0)
---
tags:
- gguf
- llama.cpp
- quantization
base_model: 0xSero/MiniMax-M2.1-REAP-40
---

# MiniMax-M2.1-REAP-40-GGUF

This model was converted to GGUF format from [`0xSero/MiniMax-M2.1-REAP-40`](https://huggingface.co/0xSero/MiniMax-M2.1-REAP-40) using GGUF Forge.

## Quants

The following quants are available:

Q3_K_L, Q4_K_S, Q4_K_M, Q5_K_M, Q6_K, Q8_0

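A quant from this repo can be downloaded and run locally with llama.cpp. A minimal sketch, assuming the quant files follow the common `<model>-<quant>.gguf` naming and that this repo is published under the committer's namespace (both the repo path and the exact filename below are assumptions; check the repo's file list):

```shell
# Download one quant from the Hub (repo path and filename are assumed)
huggingface-cli download goniz/MiniMax-M2.1-REAP-40-GGUF \
  MiniMax-M2.1-REAP-40-Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI (llama-cli ships with llama.cpp builds)
llama-cli -m MiniMax-M2.1-REAP-40-Q4_K_M.gguf -p "Hello" -n 128
```

Smaller quants (Q3_K_L, Q4_K_S) trade quality for lower memory use; Q6_K and Q8_0 are closer to the FP16 original but need more RAM/VRAM.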
## Conversion Stats

| Metric | Value |
|--------|-------|
| Job ID | `a9834b56-d9ba-457b-b5db-7b960a984439` |
| GGUF Forge Version | v6.0 |
| Total Time | 9.5h |
| Avg Time per Quant | 43.7min |

### Step Breakdown

- Download: 35.4min
- FP16 Conversion: 2.5h
- Quantization: 6.4h

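The step times above account for the reported total; a quick sanity check of that arithmetic (values copied from the table, no new data):

```python
# Sum the reported step times and compare against the 9.5h total from the stats table
download_h = 35.4 / 60  # 35.4 minutes expressed in hours
fp16_h = 2.5
quant_h = 6.4

total_h = download_h + fp16_h + quant_h
print(round(total_h, 1))  # ≈ 9.5
```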
## 🚀 Convert Your Own Models

**Want to convert more models to GGUF?**

👉 **[gguforge.com](https://gguforge.com)** — Free hosted GGUF conversion service. Log in with Hugging Face and request conversions instantly!

## Links

- 🌐 **Free Hosted Service**: [gguforge.com](https://gguforge.com)
- 🛠️ Self-host GGUF Forge: [GitHub](https://github.com/Akicuo/automaticConversion)
- 📦 llama.cpp (quantization engine): [GitHub](https://github.com/ggerganov/llama.cpp)
- 💬 Community & Support: [Discord](https://discord.gg/4vafUgVX3a)

---
*Converted automatically by [GGUF Forge](https://gguforge.com) v6.0*