---
base_model: NousResearch/NousCoder-14B
library_name: llama.cpp
tags:
- gguf
- quantized
- coding
- nouscoder
license: apache-2.0
---
# NousCoder-14B GGUF

GGUF quantizations of [NousResearch/NousCoder-14B](https://huggingface.co/NousResearch/NousCoder-14B) for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible inference engines.

## Credits

All credit goes to **[NousResearch](https://huggingface.co/NousResearch)** for training and releasing the original NousCoder-14B model. This repo only provides quantized GGUF versions for easier local inference.

## Available Quants

| Filename | Quant | Size | Description |
|----------|-------|------|-------------|
| `nouscoder-14b-q4_k_m.gguf` | Q4_K_M | 8.4 GB | Good balance of quality and size |

*More quantizations coming soon.*
## Usage

### llama.cpp
```bash
./llama-cli -m nouscoder-14b-q4_k_m.gguf -p "def fibonacci(n):"
```

### Ollama
```bash
ollama run hf.co/bigatuna/NousCoder-14B-GGUF:Q4_K_M
```
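The llama.cpp command above assumes the GGUF file is already on disk. One way to fetch it is with the `huggingface_hub` CLI (a sketch, assuming `pip install huggingface_hub` and that the filename in the table above is current):

```bash
# Download the Q4_K_M quant into the current directory
# (requires: pip install huggingface_hub)
huggingface-cli download bigatuna/NousCoder-14B-GGUF \
  nouscoder-14b-q4_k_m.gguf --local-dir .
```

LM Studio and other GUI frontends can instead download the file directly by searching for the repo name.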
## Original Model

- **Model**: [NousResearch/NousCoder-14B](https://huggingface.co/NousResearch/NousCoder-14B)
- **License**: Apache 2.0
- **Parameters**: 14B