---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-0.6B-Base
tags:
- open4bits
- qwen
- qwen3
---

# Open4bits / Qwen3 0.6B GGUF

This repository provides **GGUF-format quantized builds of the Qwen3 0.6B model**, published by Open4bits for efficient local inference using `llama.cpp`-compatible runtimes.

The underlying Qwen3 model architecture and weights are owned by the original model authors. This repository contains **only converted and quantized GGUF files** and does not include training code or datasets.

These builds are intended for fast, low-memory inference on CPUs and GPUs across a wide range of hardware.

---

## Model Overview

Qwen3 0.6B is a small-scale transformer language model designed for lightweight text generation tasks. The GGUF format enables efficient execution in environments such as `llama.cpp`, `llama-cpp-python`, and compatible frontends.

This repository includes multiple quantization variants to balance **quality, speed, and memory usage**.
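
The trade-off is easy to estimate: fewer bits per weight means a smaller file and lower memory use, at some cost in output quality. A back-of-the-envelope sketch (the bits-per-weight figures are approximate, and real GGUF files run larger because they carry metadata and keep some tensors at higher precision):

```python
# Back-of-the-envelope GGUF size estimate:
#   size_bytes ~= n_params * bits_per_weight / 8
# The bits-per-weight values below are approximate; actual files differ
# because some tensors (e.g. embeddings) stay at higher precision.
n_params = 0.6e9  # nominal parameter count of Qwen3 0.6B

for quant, bpw in [("f16", 16.0), ("Q8_0", 8.5), ("Q6_K", 6.6), ("Q4_K_M", 4.8)]:
    est_mb = n_params * bpw / 8 / 1e6
    print(f"{quant}: ~{est_mb:.0f} MB")
```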

---

## Model Details

- **Model family:** Qwen3
- **Model size:** 0.6B parameters
- **Format:** GGUF
- **Task:** Text Generation
- **Compatibility:** llama.cpp, llama-cpp-python, GGUF-compatible runtimes

---

## Available Files

The following quantized variants are provided; a download sketch follows the list.

### FP16
- `qwen3-0.6b-f16.gguf` (1.51 GB)

### Q8
- `qwen3-0.6b-Q8_0.gguf` (805 MB)

### Q6
- `qwen3-0.6b-Q6_K.gguf` (623 MB)

### Q5
- `qwen3-0.6b-Q5_0.gguf` (544 MB)
- `qwen3-0.6b-Q5_1.gguf` (581 MB)
- `qwen3-0.6b-Q5_K_M.gguf` (551 MB)
- `qwen3-0.6b-Q5_K_S.gguf` (544 MB)

### Q4
- `qwen3-0.6b-Q4_0.gguf` (469 MB)
- `qwen3-0.6b-Q4_K_M.gguf` (484 MB)
- `qwen3-0.6b-Q4_K_S.gguf` (471 MB)
- `qwen3-0.6b-IQ4_NL.gguf` (470 MB)
- `qwen3-0.6b-IQ4_XS.gguf` (452 MB)
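
To fetch a single variant instead of cloning the whole repository, `huggingface_hub` can download one file by name. A minimal sketch; the `repo_id` below is a placeholder, so substitute this repository's actual id:

```python
from huggingface_hub import hf_hub_download

# Download one quantized variant by filename.
# NOTE: repo_id is a placeholder -- replace it with this repository's id.
model_path = hf_hub_download(
    repo_id="open4bits/qwen3-0.6b-gguf",
    filename="qwen3-0.6b-Q4_K_M.gguf",
)
print(model_path)  # local cache path of the downloaded file
```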

---

## Intended Use

These GGUF builds are intended for:
- Local text generation
- CPU or low-VRAM GPU inference
- Embedded and edge deployments
- Research, experimentation, and prototyping

---

## Usage

Example usage with `llama-cpp-python`:

```python
from llama_cpp import Llama

# Load a quantized GGUF build; n_ctx sets the context window in tokens.
llm = Llama(
    model_path="qwen3-0.6b-Q4_K_M.gguf",
    n_ctx=2048,
)

# Plain completion call; returns an OpenAI-style response dict.
output = llm("Write a short explanation of quantization.")
print(output["choices"][0]["text"])
```
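
Generation can be tuned with the usual sampling parameters. A sketch of commonly used knobs (the values are illustrative, not recommendations):

```python
from llama_cpp import Llama

llm = Llama(model_path="qwen3-0.6b-Q4_K_M.gguf", n_ctx=2048)

output = llm(
    "Write a short explanation of quantization.",
    max_tokens=256,   # cap on generated tokens
    temperature=0.7,  # lower values give more deterministic output
    top_p=0.9,        # nucleus-sampling cutoff
    stop=["\n\n"],    # optional early-stop sequences
)
print(output["choices"][0]["text"])
```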

---

## Limitations

- Output quality is limited by the small model size
- Lower-bit quantizations may reduce accuracy
- Not instruction-tuned unless combined with external prompting strategies (see the sketch below)
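
Because these are base-model builds, completion-style or few-shot prompts tend to work better than bare instructions. A minimal sketch of the pattern (the prompt text is illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="qwen3-0.6b-Q4_K_M.gguf", n_ctx=2048)

# Few-shot prompt: demonstrate the format, then let the model continue.
prompt = (
    "Q: What is a tensor?\n"
    "A: A multi-dimensional array of numbers.\n"
    "Q: What is quantization?\n"
    "A:"
)
output = llm(prompt, max_tokens=64, stop=["Q:"])  # stop before the next question
print(output["choices"][0]["text"].strip())
```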

---

## License

This repository is distributed under the **Apache License 2.0**, consistent with the upstream model licensing.

The original Qwen3 model and associated intellectual property are owned by the original model authors.

---

## Support

If you find this model useful, please consider supporting the project. Your support helps us continue releasing and maintaining high-quality open models. You can show support by leaving a like (a heart) on this repository.