nhannt201 committed
Commit 425b216 · verified · 1 Parent(s): 82a7317

Upload 9 files
.gitattributes CHANGED
@@ -46,3 +46,11 @@ gguf/airy-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
  gguf/airy-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
  gguf/airy-tq1_0.gguf filter=lfs diff=lfs merge=lfs -text
  gguf/airy-tq2_0.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq1_m.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq1_s.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-q2_k.gguf filter=lfs diff=lfs merge=lfs -text
+ gguf/acnoryx-0.8b-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,177 +1,95 @@
- ---
- language:
- - vi
- - en
- license: apache-2.0
- tags:
- - gguf
- - qwen3
- - quantization
- - llama-cpp
- - acne
- - skincare
- base_model: Qwen/Qwen3.5-0.8B
- model_type: qwen3
- pipeline_tag: text-generation
- library_name: gguf
- ---
-
- # Acnoryx/Airy GGUFs
-
- GGUF pack for the fine-tuned Acnoryx acne-care assistant model, prepared for direct publishing under `Acnoryx/Airy`.
-
- - Hugging Face repo: `Acnoryx/Airy`
- - Product app: [Acnoryx - AI for Acne Care](https://play.google.com/store/apps/details?id=com.fivecanh.acnoryx)
-
- ## Important scope
-
- This repo is a **research snapshot**: it explores how far the model can be compressed while still keeping acceptable acne-domain quality.
-
- It is **not** the production model used inside the Acnoryx app; the app may use different quantization, prompt format, runtime settings, and evaluation criteria.
-
- ## What this repo contains
-
- This repo focuses on a small set of high-value research GGUFs for Acnoryx/Airy. The most important published candidates are listed in the evaluation table below.
-
- ## Model Quality Summary
-
- The tables below group the models by practical quality. Models with <50% thinking accuracy are separated into a “Low-quality / not recommended” section so the “Core + Airy” table stays focused on the usable candidates.
-
- ### Core (release floor)
-
- | Model | Size | Thinking Pass | Thinking % | Non-Think Pass | Non-Think % | Think t/s | Non-Think t/s |
- |---|---:|---:|---:|---:|---:|---:|---:|
- | `acnoryx-0.8b-f16` | 1446 MB | 99/100 | 99.0% | 100/100 | 100.0% | 44 | 44 |
- | `acnoryx-0.8b-q8_0` | 774 MB | 98/100 | 98.0% | 98/100 | 98.0% | 69 | 69 |
- | `acnoryx-0.8b-q4_k_m` | 505 MB | 98/100 | 98.0% | 100/100 | 100.0% | 75 | 80 |
- | `acnoryx-0.8b-iq4_nl` | 493 MB | 99/100 | 99.0% | 100/100 | 100.0% | 93 | 93 |
- | `acnoryx-0.8b-iq4_xs` | 482 MB | 99/100 | 99.0% | 98/100 | 98.0% | 92 | 92 |
- | `acnoryx-0.8b-q3_k_m` | 445 MB | 99/100 | 99.0% | 97/100 | 97.0% | 96 | 95 |
- | `acnoryx-0.8b-iq3_m` | 433 MB | 91/100 | 91.0% | 96/100 | 96.0% | 88 | 88 |
-
- ### Airy candidates (acceptable quality)
-
- | Model | Size | Thinking Pass | Thinking % | Non-Think Pass | Non-Think % | Think t/s | Non-Think t/s |
- |---|---:|---:|---:|---:|---:|---:|---:|
- | `airy-iq3_s.gguf` | 415.6 MB | 100/100 | 100% | 100/100 | 100% | ~20 | ~20 |
- | `airy-iq3_xs.gguf` | 408.4 MB | 100/100 | 100% | 98/100 | 98% | ~19 | ~19 |
- | `airy-q2_k.gguf` | 377.4 MB | 95/100 | 95% | 97/100 | 97% | ~23 | ~23 |
- | `airy-q2_k_s.gguf` | 370.2 MB | 98/100 | 98% | 93/100 | 93% | ~20 | ~20 |
- | `airy-iq2_s.gguf` | 320.2 MB | 78/100 | 78% | 81/100 | 81% | ~21 | ~21 |
- | `airy-iq2_xs.gguf` | 317.5 MB | 82/100 | 82% | 74/100 | 74% | ~23 | ~23 |
-
- ### Low-quality / not recommended (thinking < 50%)
-
- These models are included for transparency but are not recommended for deployment due to very low thinking accuracy.
-
- | Model | Size | Thinking Pass | Thinking % | Non-Think Pass | Non-Think % | Think t/s | Non-Think t/s |
- |---|---:|---:|---:|---:|---:|---:|---:|
- | `airy-iq2_xxs.gguf` | 303.1 MB | 49/100 | 49% | 50/100 | 50% | ~21 | ~21 |
- | `airy-iq2_m.gguf` | 334.3 MB | 44/100 | 44% | 48/100 | 48% | ~22 | ~22 |
- | `airy-tq1_0.gguf` | 311.4 MB | 11/100 | 11% | 10/100 | 10% | (high) | (high) |
- | `airy-iq1_m.gguf` | 285.5 MB | 14/100 | 14% | 14/100 | 14% | ~30 | ~30 |
- | `airy-iq1_s.gguf` | 275.0 MB | 7/100 | 7% | 6/100 | 6% | ~30 | ~30 |
-
- ## Key Takeaways
-
- | Finding | Notes |
- |---|---|
- | `airy-iq3_s.gguf` is the best overall file | Highest quality, matches or exceeds reference floor while being smaller. |
- | `airy-q2_k.gguf` is the best value | Largest size reduction while keeping high accuracy. |
- | `airy-q2_k_s.gguf` is the best aggressive option | Very high thinking accuracy with a modest non-thinking drop. |
- | `airy-iq2_s.gguf` and `airy-iq2_xs.gguf` are the lower usable edge | These are the smallest usable models before quality drops sharply. |
- | `airy-iq2_xxs.gguf` and below are not reliable | Performance and accuracy degrade too far for typical deployment. |
- | Some files lack final benchmark records | `airy-iq3_xxs.gguf` and `airy-tq2_0.gguf` were generated but not merged into final evaluation. |
-
- ## Recommendations
-
- | Recommendation | Model(s) | Why |
- |---|---|---|
- | Default (publish) | `airy-iq3_s.gguf` | Best quality for minimal size; safest drop-in. |
- | Best compression/value | `airy-q2_k.gguf` | Great size reduction with minimal quality loss. |
- | Aggressive small | `airy-q2_k_s.gguf` | Strong thinking mode; OK non-thinking. |
- | Smallest still usable | `airy-iq2_s.gguf` | Lowest model that retains practical utility. |
- | Archive / not recommended | `airy-iq2_xxs.gguf`, `airy-iq2_m.gguf`, `airy-tq1_0.gguf`, `airy-iq1_m.gguf`, `airy-iq1_s.gguf` | Included for transparency but not for deployment. |
-
- ## How To Run
-
- ### 1. Install Python packages
-
- ```bash
- pip install huggingface_hub llama-cpp-python
- ```
-
- `huggingface_hub` is used to download the GGUF file from Hugging Face.
-
- `llama-cpp-python` is used to actually load and run the GGUF model in Python.
-
- ### 2. Download a GGUF from Hugging Face in Python
-
- ```python
- from huggingface_hub import hf_hub_download
-
- model_path = hf_hub_download(
-     repo_id="Acnoryx/Airy",
-     filename="airy-iq3_s.gguf",
-     local_dir="gguf",
- )
-
- print(model_path)
- ```
-
- ### 3. Run inference in Python
-
- ```python
- from huggingface_hub import hf_hub_download
- from llama_cpp import Llama
-
- model_path = hf_hub_download(
-     repo_id="Acnoryx/Airy",
-     filename="airy-iq3_s.gguf",
-     local_dir="gguf",
- )
-
- llm = Llama(
-     model_path=model_path,
-     n_ctx=4096,
-     n_gpu_layers=-1,
-     verbose=False,
- )
-
- result = llm.create_chat_completion(
-     messages=[
-         {"role": "system", "content": "You are Acnoryx AI, a dermatology assistant focused on acne and skincare."},
-         {"role": "user", "content": "What are blackheads?"},
-     ],
-     temperature=0.2,
- )
-
- print(result["choices"][0]["message"]["content"])
- ```
-
- ### 4. Swap model files quickly
-
- Just change the `filename` value:
-
- - `airy-iq3_s.gguf`
- - `airy-q2_k.gguf`
- - `airy-iq2_s.gguf`
-
- ### 5. Quick pick guide
-
- - Choose `airy-iq3_s.gguf` for the strongest overall result.
- - Choose `airy-q2_k.gguf` for the best compression/value balance.
- - Choose `airy-iq2_s.gguf` only if you need a much smaller file and can accept a visible quality drop.
-
- ## Final Conclusion
-
- The current Airy lineup proves that the model can go below the old release-floor size while still keeping strong acne-domain quality.
-
- The best files in this repo are:
-
- - `airy-iq3_s.gguf`
- - `airy-iq3_xs.gguf`
- - `airy-q2_k.gguf`
- - `airy-q2_k_s.gguf`
-
- Among them, `airy-iq3_s.gguf` is the safest default publish choice, while `airy-q2_k.gguf` is the best efficiency result.
 
+ ---
+ license: apache-2.0
+ language:
+ - vi
+ - en
+ tags:
+ - acne
+ - skincare
+ - dermatology
+ - gguf
+ - qwen3
+ base_model: Qwen/Qwen3.5-0.8B
+ pipeline_tag: text-generation
+ ---
+
+ # Acnoryx/Airy — Research GGUF Bundle
+
+ Research and evaluation companion to the main release. It contains sub-4-bit quantizations
+ of the **0.8B** model for low-memory benchmarking.
+
+ ## Model Details
+
+ | | |
+ |-|-|
+ | **Base model** | Qwen/Qwen3.5-0.8B (752M params, FLA + Triton) |
+ | **Fine-tune** | SFT on 30,007 acne/skincare/dermatology samples |
+ | **Training** | 4 epochs, batch=2, grad_acc=8, lr=5e-5 |
+ | **Training loss** | 0.1217 (eval_loss: 0.0876) |
+ | **Languages** | Vietnamese, English |
+ | **Domain** | Acne analysis, skincare routines, scan interpretation |
+ | **Identity** | Acnoryx AI — in-app dermatology assistant |
+
+ ## Research Quantization Results
+
+ Tested with 100 domain-specific questions × 2 modes (thinking / non-thinking).
+ All quantizations in this bundle are **sub-4-bit**, ordered from highest to lowest bit depth.
+
+ | Quant | Size | Thinking | Non-Think | Avg | Status |
+ |-------|------|----------|-----------|-----|--------|
+ | **Q3_K_M** | 549 MB | **92%** | **94%** | **93.0%** | ✅ Excellent |
+ | **IQ3_M** | 537 MB | **76%** | **75%** | **75.5%** | ⚠️ Usable but degraded |
+ | **Q2_K** | 482 MB | 54% | 58% | 56.0% | ⚠️ Marginal |
+ | **IQ2_M** | 439 MB | 10% | 12% | 11.0% | ❌ Not usable |
+ | **IQ2_XS** | 397 MB | 0% | 0% | 0.0% | Skipped (early-stop) |
+ | **IQ2_XXS** | 383 MB | 0% | 0% | 0.0% | Skipped (early-stop) |
+ | **IQ1_M** | 365 MB | 0% | 0% | 0.0% | Skipped (early-stop) |
+ | **IQ1_S** | 355 MB | 0% | 0% | 0.0% | Skipped (early-stop) |
+
+ ### imatrix
+
+ The IQ2/IQ1 quants were generated with importance-matrix (imatrix) calibration from
+ a 44KB domain-specific corpus. The 0.8B model holds up at Q3_K_M (93%) and IQ3_M (75.5%),
+ but falls sharply at IQ2 and below.
+
55
+ ## Full Quantization Map (Release + Research)
56
+
57
+ Combined view across all quantizations for the 0.8B model, ordered by bit depth (high low):
58
+
59
+ | Quant | Size | Thinking | Non-Think | Avg | Bundle |
60
+ |-------|------|----------|-----------|-----|--------|
61
+ | F16 | 1932 MB | 94% | 93% | 93.5% | Release |
62
+ | Q8_0 | 1032 MB | 89% | 90% | 89.5% | Release |
63
+ | Q5_K_M | 718 MB | 90% | 86% | 88.0% | Release |
64
+ | Q4_K_M | 641 MB | 89% | 92% | 90.5% | Release |
65
+ | Q4_0 | 615 MB | 91% | 90% | 90.5% | Release |
66
+ | IQ4_NL | 630 MB | 90% | 90% | 90.0% | Release |
67
+ | IQ4_XS | 611 MB | 89% | 92% | 90.5% | Release |
68
+ | **Q3_K_M** | **549 MB** | **92%** | **94%** | **93.0%** | **Research** |
69
+ | **IQ3_M** | **537 MB** | **76%** | **75%** | **75.5%** | **Research** |
70
+ | **Q2_K** | **482 MB** | **54%** | **58%** | **56.0%** | **Research** |
71
+ | **IQ2_M** | **439 MB** | **10%** | **12%** | **11.0%** | **Research** |
72
+ | **IQ2_XS** | **397 MB** | **0%** | **0%** | **0.0%** | **Research** |
73
+ | **IQ2_XXS** | **383 MB** | **0%** | **0%** | **0.0%** | **Research** |
74
+ | **IQ1_M** | **365 MB** | **0%** | **0%** | **0.0%** | **Research** |
75
+ | **IQ1_S** | **355 MB** | **0%** | **0%** | **0.0%** | **Research** |
76
+
77
+ ### Key findings
78
+
79
+ - **Standout research quant: Q3_K_M (549 MB, 93%)** ties F16 quality at 28% the size
80
+ - **IQ3_M (537 MB, 75.5%)** still usable proof of 0.8B's superior quantization resilience
81
+ - **Q2_K (56%)** marginal but non-zero the 0.8B handles 2-bit far better than 0.6B (0%)
82
+ - IQ2 and below are not viable despite imatrix calibration
83
+ - **0.8B vs 0.6B:** At each quant level, 0.8B scores significantly higher
84
+
85
+ ## Usage
86
+
87
+ ```bash
88
+ # llama.cpp Q3_K_M is the research highlight
89
+ ./llama-cli -m acnoryx-0.8b-q3_k_m.gguf -cnv -p "Xin chào"
90
+ ```
91
+
92
+ ## Related
93
+
94
+ - **Release bundle:** Production quantizations (F16 IQ4_XS, ≥4-bit)
95
+ - **0.6B research:** [Acnoryx/Airy-Lite](https://huggingface.co/Acnoryx/Airy-Lite) — smaller model with compact footprint
gguf/acnoryx-0.8b-iq1_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bc41e7f81876d15a5f037c132695e5f3e8062fa8fd1c50cb9d98b00b0704ea3a
+ size 382832064

gguf/acnoryx-0.8b-iq1_s.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98d6475f5b4ed6f89274e05757ae901bbd08059fef006dfb233b2fe929ab8785
+ size 371795904

gguf/acnoryx-0.8b-iq2_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c0da92dea1fd8d75312a9583fab13c0048a49ee23e917e672ac67656391de67
+ size 459761600

gguf/acnoryx-0.8b-iq2_xs.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ef3524cdf46e85211ab39291b8d6d47000d468ee9c38e15bbd75f9225010c02
+ size 416333760

gguf/acnoryx-0.8b-iq2_xxs.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:449bc10952982f127a074496d5ad3ae430392bdab70feddebffc119e1bcadbf0
+ size 401225664

gguf/acnoryx-0.8b-iq3_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9739c3cb690a42ac38673db9c8a97d1fc96078980967dc8f089f42ac6355c843
+ size 563208896

gguf/acnoryx-0.8b-q2_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54e6af18be0acf65bfaed296bacc7d6b7cf6e9bada977a393c29e3305790110d
+ size 505755840

gguf/acnoryx-0.8b-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0d14483d825fea21cad63b294d582f98d294e813d13b99d21c1caffae9d5fd9
+ size 575476416
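
Each file added above is a Git LFS pointer: the actual GGUF bytes are addressed by the `oid sha256:` line. A downloaded file can be checked against its pointer with a streaming hash; a sketch, where the local path is illustrative and the expected oid is copied from the `acnoryx-0.8b-iq1_m.gguf` pointer:

```python
import hashlib
from pathlib import Path

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so multi-hundred-MB GGUFs never load fully into RAM."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

# Expected oid copied from the acnoryx-0.8b-iq1_m.gguf LFS pointer above.
EXPECTED_OID = "bc41e7f81876d15a5f037c132695e5f3e8062fa8fd1c50cb9d98b00b0704ea3a"

local = Path("gguf/acnoryx-0.8b-iq1_m.gguf")  # illustrative local path
if local.is_file():
    status = "OK" if sha256_of(str(local)) == EXPECTED_OID else "CORRUPT"
    print(f"{local.name}: {status}")
```

A mismatch usually means a truncated download; `git lfs pull` or re-downloading the file from the repo resolves it.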