maincode-prabod committed (verified)
Commit 602967f · Parent: 1298850

Upload folder using huggingface_hub
.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-F16.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Maincoder-1B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Maincoder-1B-BF16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25b5610963c865908cf8b7848472fa250db60dcc6db89b2408ff23de50ac537b
+size 2058570752

Maincoder-1B-F16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a3e551e0586cd4cd09bdd7ae5a37a9605461ec15ee9436a93a7852f27980b40
+size 2058570752

Maincoder-1B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e431a504ed43a7867890c5677f4ba26f248fa7c29855b94aee5ef0561a7e8b32
+size 643722752

Maincoder-1B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8dc2a67c138a038c9c284d82e039f8570dadfaa8f0273f1fa30930a4d036ee47
+size 672108032

Maincoder-1B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:07759f41ec03757148d419bbad0e68369b1ef7364ccf481ccb0ea1004f22ff2c
+size 757435904

Maincoder-1B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a28c16426d6eb32bfe7afa965e6816984956ae4380d33efc3fd211a92f01e26
+size 848096768

Maincoder-1B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43cd43d942de46327ff82a77e63717ef9a34d97787a3c5dcd309a9da3978a668
+size 1096604672
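The ADDED `.gguf` entries above are Git LFS pointer files, not the binaries themselves: small text stubs with `version`, `oid`, and `size` fields that stand in for the tracked file. A minimal sketch of parsing that key/value format (the parser name is illustrative, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a dict of its fields.

    Each line is "<key> <value>"; "size" is the byte count of the
    real object, so it is converted to int for convenience.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    fields["size"] = int(fields["size"])
    return fields


# One of the pointer files shown above (Q4_0)
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e431a504ed43a7867890c5677f4ba26f248fa7c29855b94aee5ef0561a7e8b32
size 643722752
"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 643722752
```

When LFS is installed, `git lfs smudge` resolves these stubs to the actual model blobs on checkout.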
README.md ADDED
@@ -0,0 +1,67 @@
+---
+license: apache-2.0
+language:
+- en
+library_name: transformers
+tags:
+- code
+- python
+- maincoder
+- code-generation
+- gguf
+- quantized
+pipeline_tag: text-generation
+base_model: Maincode/Maincoder-1B
+---
+<img src="https://huggingface.co/datasets/Maincode/assets/resolve/e51154e034201be1a5dad0e9c8de31d8b9f17643/maincoder_logo.png" alt="Maincoder logo" width="1250">
+
+# Maincoder-1B-GGUF
+
+GGUF quantizations of [**Maincoder-1B**](https://huggingface.co/Maincode/Maincoder-1B), a code-focused language model optimized for code generation and completion tasks. These quantized builds are designed for efficient local inference with [llama.cpp](https://github.com/ggerganov/llama.cpp).
+
+Full details are in the original model card: https://huggingface.co/Maincode/Maincoder-1B
+
+## How to run Maincoder
+
+Example usage with llama.cpp:
+
+```bash
+llama-cli -hf Maincode/Maincoder-1B-GGUF
+```
+
+Or with a specific quantization:
+
+```bash
+llama-cli -hf Maincode/Maincoder-1B-GGUF:Q4_K_M
+```
+
+Code completion example:
+
+```bash
+llama-cli -hf Maincode/Maincoder-1B-GGUF -p 'def fibonacci(n: int) -> int:
+    """Return the n-th Fibonacci number."""
+' -n 256
+```
+
+## Available Quantizations
+
+| Filename | Size | Description |
+|----------|------|-------------|
+| Maincoder-1B-BF16.gguf | 1.9 GB | BFloat16, full precision, best quality |
+| Maincoder-1B-F16.gguf | 1.9 GB | Float16, full precision |
+| Maincoder-1B-Q8_0.gguf | 1.0 GB | 8-bit, highest-quality quantization |
+| Maincoder-1B-Q6_K.gguf | 809 MB | 6-bit, high quality |
+| Maincoder-1B-Q5_K_M.gguf | 722 MB | 5-bit, good quality/size balance |
+| Maincoder-1B-Q4_K_M.gguf | 641 MB | 4-bit, recommended for most users |
+| Maincoder-1B-Q4_0.gguf | 614 MB | 4-bit, smallest and fastest |
+
+ ## 📄 License
59
+
60
+ This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
61
+
62
+ ## 🔗 Links
63
+
64
+ - [Original Model](https://huggingface.co/Maincode/Maincoder-1B)
65
+ - [Maincode](https://maincode.com)
66
+ - [llama.cpp](https://github.com/ggerganov/llama.cpp)
67
+