Mungert committed on
Commit bb2dd19 · verified · 0 Parent(s):

Super-squash history to reclaim storage

.gitattributes ADDED
@@ -0,0 +1,70 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct.imatrix filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-bf16.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-f16.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-f16-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-bf16-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-f16-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-bf16-q6_k.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-f16-q4_k.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-bf16-q4_k.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
+ PLM-1.8B-Instruct-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
PLM-1.8B-Instruct-bf16-q4_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9dbf7af863475fa8f72ed1bfa0cfc6d3b1382a6ceef79b6b2b85976fc951647b
+ size 1549787104
PLM-1.8B-Instruct-bf16-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4cde03ab74f2b7c98a90f27e7c005e1e188409a0875825b71ebae69d6c51b15a
+ size 1870946272
PLM-1.8B-Instruct-bf16-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2a60ce1790c1b0ebe14051e4d72f1e26034f35dd82bef2d30181381ddd8083e
+ size 2237652704
PLM-1.8B-Instruct-bf16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3dc414308550eefba8cbfbd0fbf9ce639edee5f000458e5a36172e61f7ba5340
+ size 3657162464
PLM-1.8B-Instruct-f16-q4_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:971c91633fc19627e85fe63f22acecbdeb10d7c1e7c5b5ba2861b02513d67781
+ size 1549787104
PLM-1.8B-Instruct-f16-q6_k.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9c324147a5ca1f21ae42d7fe31eaa2b2c70370f5d280346bc7424816e7b1a31
+ size 1870946272
PLM-1.8B-Instruct-f16-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2cdfeea4f598482c01b7d7f85ff51a652383486f9b219ffbbb7b81d83364b621
+ size 2237652704
PLM-1.8B-Instruct-iq3_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e859ed537a0cf34ffe92a4aa7b3e893bb0d26e2ddfd319b6f34dffe28f5738a
+ size 897818592
PLM-1.8B-Instruct-iq3_s.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:61b6e7d76a6105f1946c625d326f708a579300982a4fde3790bd2e1fa754fc7d
+ size 871079904
PLM-1.8B-Instruct-iq3_xs.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:612328c70f6adc0bc6e852bda05d01d6172762fb34ae9e936138ef767b5e835a
+ size 842768352
PLM-1.8B-Instruct-iq3_xxs.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3cbf2e52bf1a9fc9cd2a1d83d1bb2a6a8df013f757426b3d4737fdc54152957
+ size 793812960
PLM-1.8B-Instruct-iq4_nl.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:532c44a703a802073eeb0ea9f4ffee1aeb3baf7196a2ddcde7f48b178eca44f5
+ size 1113503712
PLM-1.8B-Instruct-iq4_xs.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c72a73a1ff8b5a3f0bdd8c5e24691418ecb1f6ca7b328ddb47ecfad0e7f70592
+ size 1066186720
PLM-1.8B-Instruct-q3_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67eabf35c34e7611b6fc569a1b7c6f915bcbb9196de0db156bea463d93d14ad4
+ size 964403168
PLM-1.8B-Instruct-q3_k_s.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c886291ef11d39f5fddb88a300db3cc9a5c77f34f58642f90408460645bb2e42
+ size 871079904
PLM-1.8B-Instruct-q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbd367be9c00f43d30952bbaa98c2c5d0a989786ec54ce8eefc4b992a0d970f9
+ size 1033281504
PLM-1.8B-Instruct-q4_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7db70ceeccc054c07279bc872058d8ed8012635112412e5b781c3d8da7848ca0
+ size 1147363296
PLM-1.8B-Instruct-q4_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46c81ec5a33df4d6687d2a36e4f8d4cd9fc718201c35006203aa69b3380477f6
+ size 1182709728
PLM-1.8B-Instruct-q4_k_s.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92cf7c50de6bd87ef35a17b8184dfe6a18e842a1ed183912bc591c81f7272d27
+ size 1121892320
PLM-1.8B-Instruct-q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a95490672f1acfafd2b1a67e829fcd98dacf4427cbbccafe24a094d3d9dfdfd7
+ size 1261445088
PLM-1.8B-Instruct-q5_1.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c05d6bf7a88b322355efb853f74dd7863b933b4e7a02f64040ccd689f3f955c3
+ size 1375526880
PLM-1.8B-Instruct-q5_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c48a83666a37a6db27877b0dced4bbc76279ca99405eca52472a1c2633d69e6
+ size 1338423264
PLM-1.8B-Instruct-q5_k_s.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76d2a53065fd1193e75709821edfcf6ff29c193c007ce3a55556133d473222f7
+ size 1302771680
PLM-1.8B-Instruct-q6_k_m.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abb7667db57c041e0d933805a55f109641cfdb6ef3cf18fda8c5ac74d10ef255
+ size 1503868896
PLM-1.8B-Instruct-q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd1c0d1154509804f78109b61ed27846454a6eccc76b4d53d4f2e74ef5367d8a
+ size 1945935584
PLM-1.8B-Instruct.imatrix ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e3d4d609718014dc58601c591a401de169c66ce41c59617dda6a72a812bd608d
+ size 2169345
README.md ADDED
@@ -0,0 +1,300 @@
+ ---
+ base_model: PLM-Team/PLM-1.8B-Instruct
+ language:
+ - en
+ - zh
+ library_name: transformers
+ license: apache-2.0
+ quantized_by: PLM-Team
+ pipeline_tag: text-generation
+ ---
+
+ # <span style="color: #7FFF7F;">PLM-1.8B-Instruct GGUF Models</span>
+
+ ## **Choosing the Right Model Format**
+
+ Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
+
+ ### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
+ - A 16-bit floating-point format designed for **faster computation** while retaining good precision.
+ - Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
+ - Recommended if your hardware supports **BF16 acceleration** (check your device's specs; see the quick check below).
+ - Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.
+
+ 📌 **Use BF16 if:**
+ ✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
+ ✔ You want **higher precision** while saving memory.
+ ✔ You plan to **requantize** the model into another format.
+
+ 📌 **Avoid BF16 if:**
+ ❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
+ ❌ You need compatibility with older devices that lack BF16 optimization.
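+
+ A quick way to confirm BF16 support before picking this format (a minimal sketch, assuming an NVIDIA GPU with PyTorch installed; other stacks need a different check):
+
+ ```bash
+ # Prints True if the active CUDA device reports native BF16 support (requires PyTorch).
+ python -c "import torch; print(torch.cuda.is_bf16_supported())"
+ ```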
+
+ ---
+
+ ### **F16 (Float 16) – More widely supported than BF16**
+ - A 16-bit floating-point format with **high precision**, but a narrower range of values than BF16.
+ - Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
+ - Slightly lower numerical precision than BF16 but generally sufficient for inference.
+
+ 📌 **Use F16 if:**
+ ✔ Your hardware supports **FP16** but **not BF16**.
+ ✔ You need a **balance between speed, memory usage, and accuracy**.
+ ✔ You are running on a **GPU** or another device optimized for FP16 computations.
+
+ 📌 **Avoid F16 if:**
+ ❌ Your device lacks **native FP16 support** (it may run slower than expected).
+ ❌ You have tight memory limitations (quantized models are smaller).
+
+ ---
+
+ ### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
+ Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
+ - **Lower-bit models (Q4_K)** → **best for minimal memory usage**, but may have lower precision.
+ - **Higher-bit models (Q6_K, Q8_0)** → **better accuracy**, but require more memory.
+
+ 📌 **Use Quantized Models if:**
+ ✔ You are running inference on a **CPU** and need an optimized model.
+ ✔ Your device has **low VRAM** and cannot load full-precision models.
+ ✔ You want to reduce the **memory footprint** while keeping reasonable accuracy.
+
+ 📌 **Avoid Quantized Models if:**
+ ❌ You need **maximum accuracy** (full-precision models are better for this).
+ ❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
+
+ ---
+
+ ### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
+ These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
+
+ - **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
+   - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
+   - **Trade-off**: Lower accuracy compared to higher-bit quantizations.
+
+ - **IQ3_S**: Small block size for **maximum memory efficiency**.
+   - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
+
+ - **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
+   - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
+
+ - **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
+   - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
+
+ - **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
+   - **Use case**: Best for **ARM-based devices** or **low-memory environments**.
+
+ ---
+
+ ### **Summary Table: Model Format Selection**
+
+ | Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
+ |--------------|-----------|--------------|---------------------|---------------|
+ | **BF16** | Highest | High | BF16-supported GPUs/CPUs | High-speed inference with reduced memory |
+ | **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
+ | **Q4_K** | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
+ | **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
+ | **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
+ | **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, lower accuracy |
+ | **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |
+
+ ---
+
+ ## **Included Files & Details**
+
+ ### `PLM-1.8B-Instruct-bf16.gguf`
+ - Model weights preserved in **BF16**.
+ - Use this if you want to **requantize** the model into a different format (see the example below).
+ - Best if your device supports **BF16 acceleration**.
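+
+ If you do want to requantize from the BF16 file, here is a minimal sketch using llama.cpp's `llama-quantize` tool (assuming you have already built llama.cpp as described in the Usage section below; the Q4_K_M target and file names are just examples):
+
+ ```bash
+ # Requantize the BF16 GGUF down to Q4_K_M (paths and preset are illustrative).
+ ./build/bin/llama-quantize PLM-1.8B-Instruct-bf16.gguf PLM-1.8B-Instruct-q4_k_m.gguf Q4_K_M
+ ```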
+
+ ### `PLM-1.8B-Instruct-f16.gguf`
+ - Model weights stored in **F16**.
+ - Use if your device supports **FP16**, especially if BF16 is not available.
+
+ ### `PLM-1.8B-Instruct-bf16-q8_0.gguf`
+ - **Output & embeddings** remain in **BF16**.
+ - All other layers quantized to **Q8_0**.
+ - Use if your device supports **BF16** and you want a quantized version.
+
+ ### `PLM-1.8B-Instruct-f16-q8_0.gguf`
+ - **Output & embeddings** remain in **F16**.
+ - All other layers quantized to **Q8_0**.
+
+ ### `PLM-1.8B-Instruct-q4_k.gguf`
+ - **Output & embeddings** quantized to **Q8_0**.
+ - All other layers quantized to **Q4_K**.
+ - Good for **CPU inference** with limited memory.
+
+ ### `PLM-1.8B-Instruct-q4_k_s.gguf`
+ - Smallest **Q4_K** variant, using less memory at the cost of accuracy.
+ - Best for **very low-memory setups**.
+
+ ### `PLM-1.8B-Instruct-q6_k.gguf`
+ - **Output & embeddings** quantized to **Q8_0**.
+ - All other layers quantized to **Q6_K**.
+
+ ### `PLM-1.8B-Instruct-q8_0.gguf`
+ - Fully **Q8_0** quantized model for better accuracy.
+ - Requires **more memory** but offers higher precision.
+
+ ### `PLM-1.8B-Instruct-iq3_xs.gguf`
+ - **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
+ - Best for **ultra-low-memory devices**.
+
+ ### `PLM-1.8B-Instruct-iq3_m.gguf`
+ - **IQ3_M** quantization, offering a **medium block size** for better accuracy.
+ - Suitable for **low-memory devices**.
+
+ ### `PLM-1.8B-Instruct-q4_0.gguf`
+ - Pure **Q4_0** quantization, optimized for **ARM devices**.
+ - Best for **low-memory environments**.
+ - Prefer IQ4_NL for better accuracy.
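+
+ The repository also ships `PLM-1.8B-Instruct.imatrix`, an importance-matrix file of the kind llama.cpp uses when producing the IQ-series quants above. As a rough sketch of how such a file is generated (assuming a calibration text file named `calibration.txt`; this is not necessarily the exact command used for this repo):
+
+ ```bash
+ # Collect importance-matrix statistics from the BF16 model over a calibration corpus.
+ ./build/bin/llama-imatrix -m PLM-1.8B-Instruct-bf16.gguf -f calibration.txt -o PLM-1.8B-Instruct.imatrix
+ ```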
+
+ # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
+ ❤ **Please click "Like" if you find this useful!**
+ Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
+ 👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard)
+
+ 💬 **How to test**:
+ 1. Click the **chat icon** (bottom right on any page)
+ 2. Choose an **AI assistant type**:
+    - `TurboLLM` (GPT-4-mini)
+    - `FreeLLM` (Open-source)
+    - `TestLLM` (Experimental CPU-only)
+
+ ### **What I'm Testing**
+ I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
+ - **Function calling** against live network services
+ - **How small can a model go** while still handling:
+   - Automated **Nmap scans**
+   - **Quantum-readiness checks**
+   - **Metasploit integration**
+
+ 🟡 **TestLLM** – Current experimental model (llama.cpp on 6 CPU threads):
+ - ✅ **Zero-configuration setup**
+ - ⏳ 30s load time (slow inference but **no API costs**)
+ - 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
+
+ ### **Other Assistants**
+ 🟢 **TurboLLM** – Uses **gpt-4-mini** for:
+ - **Real-time network diagnostics**
+ - **Automated penetration testing** (Nmap/Metasploit)
+ - 🔑 Get more tokens by [downloading our Quantum Network Monitor Agent](https://readyforquantum.com/download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
+
+ 🔵 **HugLLM** – Open-source models (≈8B params):
+ - **2x more tokens** than TurboLLM
+ - **AI-powered log analysis**
+ - 🌐 Runs on the Hugging Face Inference API
+
+ ### 💡 **Example AI Commands to Test**:
+ 1. `"Give me info on my website's SSL certificate"`
+ 2. `"Check if my server is using quantum-safe encryption for communication"`
+ 3. `"Run a quick Nmap vulnerability test"`
+ 4. `"Create a cmd processor to .. (whatever you want)"` (note: you need to install a Quantum Network Monitor Agent to run the .NET code; this is a very flexible and powerful feature, so use it with caution!)
+
+ ### Final word
+ I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All of the code for creating the models and the work I have done with Quantum Network Monitor is [open source](https://github.com/Mungert69). Feel free to use whatever you find useful. Please support my work and consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva).
+ This will help me pay for the services and increase the token limits for everyone.
+
+ Thank you :)
+
+
+ <center>
+     <img src="https://www.cdeng.net/plm/plm_logo.png" alt="plm-logo" width="200"/>
+     <h2>🖲️ PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing</h2>
+     <a href='https://www.project-plm.com/'>👉 Project PLM Website</a>
+ </center>
+
+ <center>
+
+ ||||||||
+ |:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+ |<a href='https://arxiv.org/abs/2503.12167'><img src='https://img.shields.io/badge/Paper-ArXiv-C71585'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Base'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-Base-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-Instruct-red'></a>|<a href='https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging Face-gguf-red'></a>|<a href='https://huggingface.co/datasets/plm-team/scots'><img src='https://img.shields.io/badge/Data-plm%20mix-4169E1'></img></a>|<a><img src="https://img.shields.io/github/stars/plm-team/PLM"></a>|
+
+ </center>
+
+ ---
+
+ The PLM (Peripheral Language Model) series introduces a novel model architecture to peripheral computing, delivering powerful language capabilities within the constraints of resource-limited devices. Through a model and system co-design strategy, PLM optimizes performance while fitting edge system requirements: it employs **Multi-head Latent Attention** and **squared ReLU** activation to achieve sparsity, significantly reducing memory footprint and computational demands. Coupled with a meticulously crafted training regimen using curated datasets and a Warmup-Stable-Decay-Constant learning rate scheduler, PLM outperforms existing small language models while keeping the fewest activated parameters, making it well suited for deployment on diverse peripheral platforms such as mobile phones and Raspberry Pis.
+
+
+ **Here we present the static quants of https://huggingface.co/PLM-Team/PLM-1.8B-Instruct**
+
+ ## Provided Quants
+
+ | Link | Type | Size/GB | Notes |
+ |:-----|:-----|--------:|:------|
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-F16.gguf|F16| 3.66 GB| Recommended|
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q2_K.gguf|Q2_K| 827 MB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_L.gguf|Q3_K_L| 1.09 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_M.gguf|Q3_K_M| 1.01 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q3_K_S.gguf|Q3_K_S| 912 MB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_0.gguf|Q4_0| 1.11 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_1.gguf|Q4_1| 1.21 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_M.gguf|Q4_K_M| 1.18 GB| Recommended|
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q4_K_S.gguf|Q4_K_S| 1.12 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_0.gguf|Q5_0| 1.3 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_1.gguf|Q5_1| 1.4 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_M.gguf|Q5_K_M| 1.34 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q5_K_S.gguf|Q5_K_S| 1.3 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q6_K.gguf|Q6_K| 1.5 GB| |
+ |https://huggingface.co/PLM-Team/PLM-1.8B-Instruct-gguf/blob/main/PLM-1.8B-Instruct-Q8_0.gguf|Q8_0| 1.95 GB| Recommended|
+
+ ## Usage (llama.cpp)
+
+ [llama.cpp](https://github.com/ggml-org/llama.cpp) now supports our model. Here is how to use it:
+
+ ```bash
+ git clone https://github.com/Si1w/llama.cpp.git
+ cd llama.cpp
+ ```
+
+ If you want to convert the original model into `gguf` format yourself, you can run:
+
+ ```bash
+ pip install -r requirements.txt
+ python convert_hf_to_gguf.py [model] --outtype {f32,f16,bf16,q8_0,tq1_0,tq2_0,auto}
+ ```
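+
+ For example, a minimal end-to-end sketch (assuming `huggingface-cli` is installed and you want an F16 GGUF; the local paths and output filename are just examples):
+
+ ```bash
+ # Fetch the original HF checkpoint locally, then convert it to an F16 GGUF.
+ huggingface-cli download PLM-Team/PLM-1.8B-Instruct --local-dir PLM-1.8B-Instruct
+ python convert_hf_to_gguf.py ./PLM-1.8B-Instruct --outtype f16 --outfile PLM-1.8B-Instruct-f16.gguf
+ ```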
+
+ Then we can build for CPU or GPU (e.g., Orin). The build is based on `cmake`.
+
+ - For CPU
+
+ ```bash
+ cmake -B build
+ cmake --build build --config Release
+ ```
+
+ - For GPU
+
+ ```bash
+ cmake -B build -DGGML_CUDA=ON
+ cmake --build build --config Release
+ ```
+
+ Don't forget to download the GGUF files of PLM. We use the quantization methods in `llama.cpp` to generate the quantized PLM files.
+
+ ```bash
+ huggingface-cli download --resume-download PLM-Team/PLM-1.8B-Instruct-gguf --local-dir PLM-Team/PLM-1.8B-Instruct-gguf
+ ```
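+
+ If you only need a single quant rather than the whole repository, you can pass a specific filename (a sketch; pick whichever quant from the table above suits your hardware):
+
+ ```bash
+ # Download only the Q8_0 quant into the current directory.
+ huggingface-cli download PLM-Team/PLM-1.8B-Instruct-gguf PLM-1.8B-Instruct-Q8_0.gguf --local-dir .
+ ```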
+
+ After building `llama.cpp`, we can use the `llama-cli` binary to launch PLM.
+
+ ```bash
+ ./build/bin/llama-cli -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf -cnv -p "hello!" -n 128
+ ```
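+
+ To serve the model over an HTTP API instead of the interactive CLI, the `llama-server` binary from the same build can be used (a sketch; the port is arbitrary):
+
+ ```bash
+ # Expose the model at http://localhost:8080 with an OpenAI-compatible API.
+ ./build/bin/llama-server -m ./PLM-Team/PLM-1.8B-Instruct-gguf/PLM-1.8B-Instruct-Q8_0.gguf --port 8080
+ ```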
+
+ ## Citation
+
+ If you find Project PLM helpful for your research or applications, please cite as follows:
+
+ ```
+ @misc{deng2025plmefficientperipherallanguage,
+       title={PLM: Efficient Peripheral Language Models Hardware-Co-Designed for Ubiquitous Computing},
+       author={Cheng Deng and Luoyang Sun and Jiwen Jiang and Yongcheng Zeng and Xinjian Wu and Wenxin Zhao and Qingfa Xiao and Jiachuan Wang and Lei Chen and Lionel M. Ni and Haifeng Zhang and Jun Wang},
+       year={2025},
+       eprint={2503.12167},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2503.12167},
+ }
+ ```