Mungert committed (verified)
Commit f0a1b06 · 0 Parent(s)

Super-squash history to reclaim storage
.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-f16.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-f16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-bf16_q6_k.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-f16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-bf16_q4_k.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q2_k_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q3_k_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_k_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_k_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q6_k_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q2_k_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_k_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_k_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q6_k_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_0_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q4_1_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_0_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-q5_1_l.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq2_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-bf16.gguf filter=lfs diff=lfs merge=lfs -text
AFM-4.5B-bf16.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4fedacd1660c7d6927ef1bd64e1ea03e3149ef770788cc31d022cbadf079844c
size 9246580832

AFM-4.5B-bf16_q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:55b191c0953a3b08753b2ca0bff1a196738470633e7794a67cd30dcfb868dc7a
size 7388635232

AFM-4.5B-f16_q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:851cd1b20bfacce27e2c4eee2b50fb1ac006ed009d7c434104da678460890235
size 7388635232

AFM-4.5B-imatrix.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:834d755fcf23d5197795999946c5218b2ac5e7275c07f94175bc1bef7de638ec
size 4530656

AFM-4.5B-iq2_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0e5ee1d5ce0b3dec2e781dc3bd7c68f6316196182aed9f9cadeb18b56dab8a2a
size 1909853760

AFM-4.5B-iq2_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:497b72e33ebca9a283b766853852c263970b2107698b3fda0d4564f9a31edf10
size 1851854400

AFM-4.5B-iq2_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:07a7d4bb98d702beb072e45b7dc43eebe945da072d31c7910920cf59e914101b
size 1784618560

AFM-4.5B-iq2_xxs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ce7da3e207bcf10d8f5e411fb0e073eafc978077ce923b695b81bc964abc83c6
size 1664278080

AFM-4.5B-iq3_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:76cb15c6a198c0346fb28a76e3a10b6576175638b4f0b2715ca4c3c7ebd3e1ba
size 2384191040

AFM-4.5B-iq3_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:821266eebe18219165a1038c73b5309976b56ef988addd12cadc8258613cc136
size 2358181440

AFM-4.5B-iq3_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:63a133b8adc99eb6159ed6d6caca159d6bbd7e3c8018023e1a3b84c8e245d836
size 2133884480

AFM-4.5B-iq3_xxs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4509e5c3030f9bcacd5829b31440a2fbcab6402e3a9f5f534bc329464ec8bae1
size 2096283200

AFM-4.5B-iq4_nl.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:99c6dc6f1b83fd75b907ad0a854602f0fe7fcda70d3f981326a4531d3fb992b4
size 2697146048

AFM-4.5B-iq4_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:e7638477336e9617fad9fae19e4b7f700bc308185481657ee994e0697280c087
size 2564517184

AFM-4.5B-q2_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a549ff7eb72f72e6823f6a696a8b0493ea2ce91905a0601604568312e153d62c
size 2054302656

AFM-4.5B-q2_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5bf5e2031d0eaffe5a06b8a18a9fa5a56f801407c7aaeac674fe245bf3592ec5
size 1973689920

AFM-4.5B-q3_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:265816d756b4970dcc645ce41b2e0309b496472185e662e8643d499655f3a8f1
size 2524646336

AFM-4.5B-q3_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:7058b1251e7821b1f70946bae0b608ae9ac718bd396cf844da49c4dfdae5c9aa
size 2419744320

AFM-4.5B-q4_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4fefba5f99c51772f723efbf2d790391c284c17f22d3c9bc642c755d5ff73e5e
size 2606764480

AFM-4.5B-q4_1.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ca4479c20da0d53693765be28de4feca3466bfabf45fb8ed6afa0a28c365d31d
size 2895452160

AFM-4.5B-q4_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:12295e5c01b959d545b2dea0f4f12f3137a4f6044d2d01c80b6afd2fec4a4339
size 3007339456

AFM-4.5B-q4_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ee2a778c9996f020a5dfe7766477a95cb61aa5a3e7059aa42aa147cd1022d28f
size 2830781376

AFM-4.5B-q5_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:1b94cf405436c3b5f1e5aa1a45303303430c8f89aae90280e526b0474b7e9fee
size 3184139840

AFM-4.5B-q5_1.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:44c65dea0bf76a2ab6ffdb63f1285ce3b6dad20401195397ebbd4d22025b2219
size 3472827520

AFM-4.5B-q5_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9bb4ba1e5ce6f6447d32f8d3884b25551b33c766766d078560867a3f1aa697da
size 3394083776

AFM-4.5B-q5_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d7ee5d28bbf114edf49769b1105906cc4936de3a5dd88beda48899f79f9f15c7
size 3329674176

AFM-4.5B-q6_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:74276222c18dba70115b31368ce0dc2887d04c30fb917e0902aea5b83de0e901
size 3797601216

AFM-4.5B-q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:a1f2340131a52b9f2d70e42d1531d8e9ff6e96d0ae1d898e687d5bedb54fd53d
size 4916265632
README.md ADDED
---
license: other
license_name: aml
language:
- en
- es
- fr
- de
- it
- pt
- ru
- ar
- hi
- ko
- zh
library_name: transformers
extra_gated_prompt: Company name is optional, please put NA if you would prefer not to share it.
extra_gated_fields:
  Company: text
  I agree to use this model in accordance with the Arcee Model License (AML): checkbox
base_model:
- arcee-ai/AFM-4.5B-Base
---

# <span style="color: #7FFF7F;">AFM-4.5B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`8ad7b3e6`](https://github.com/ggerganov/llama.cpp/commit/8ad7b3e65b5834e5574c2f5640056c9047b5d93b).
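
If you want to reproduce a conversion against that same llama.cpp revision, a rough sketch is below. The local model path and the f16 output type are illustrative assumptions, not necessarily the exact commands used to build these files.

```bash
# Hedged sketch: convert the HF checkpoint to GGUF with llama.cpp at the commit above.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 8ad7b3e6
pip install -r requirements.txt
python convert_hf_to_gguf.py /path/to/AFM-4.5B --outfile AFM-4.5B-f16.gguf --outtype f16
```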

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
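
As a rough illustration of the idea, a layer-bump quantization call can look like the sketch below. The tensor patterns and quant types shown are assumptions for illustration only, not the exact recipe used for these files, and flag syntax may differ between llama.cpp versions.

```bash
# Hedged sketch: re-quantize an F16 GGUF while bumping selected tensors to higher precision.
# Tensor patterns and types are illustrative, not the recipe used for this repo's files.
./llama-quantize \
  --imatrix AFM-4.5B-imatrix.gguf \
  --tensor-type attn_v=q8_0 \
  --tensor-type ffn_down=q6_k \
  AFM-4.5B-f16.gguf AFM-4.5B-q4_k_m.gguf Q4_K_M
```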

### **I'd love your feedback—have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
  Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->

<div align="center">
<picture>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/Lj9YVLIKKdImV_jID0A1g.png" width="25%" alt="Arcee AFM 4.5B">
</picture>
</div>

# AFM-4.5B

AFM-4.5B is a 4.5 billion parameter instruction-tuned model developed by Arcee.ai, designed for enterprise-grade performance across diverse deployment environments, from cloud to edge. The base model was trained on a dataset of 8 trillion tokens, comprising 6.5 trillion tokens of general pretraining data followed by 1.5 trillion tokens of midtraining data with an enhanced focus on mathematical reasoning and code generation. Following pretraining, the model underwent supervised fine-tuning on high-quality instruction datasets. The instruction-tuned model was further refined through reinforcement learning, both on verifiable rewards and for human preference. We use a modified version of [TorchTitan](https://arxiv.org/abs/2410.06511) for pretraining, [Axolotl](https://axolotl.ai) for supervised fine-tuning, and a modified version of [Verifiers](https://github.com/willccbb/verifiers) for reinforcement learning.

The development of AFM-4.5B prioritized data quality as a fundamental requirement for achieving robust model performance. We collaborated with DatologyAI, a company specializing in large-scale data curation. DatologyAI's curation pipeline integrates a suite of proprietary algorithms: model-based quality filtering, embedding-based curation, target distribution matching, source mixing, and synthetic data. Their expertise enabled the creation of a curated dataset tailored to support strong real-world performance.

The model architecture follows a standard transformer decoder-only design based on Vaswani et al., incorporating several key modifications for enhanced performance and efficiency. Notable architectural features include grouped query attention for improved inference efficiency and ReLU^2 activation functions in place of SwiGLU, which enable sparsification while maintaining or exceeding performance benchmarks.

The model available in this repo is the instruct model, following supervised fine-tuning and reinforcement learning.

***

<div align="center">
<picture>
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/sSVjGNHfrJKmQ6w8I18ek.png" style="background-color:ghostwhite;padding:5px;" width="17%" alt="Powered by Datology">
</picture>
</div>

## Model Details

* **Model Architecture:** ArceeForCausalLM
* **Parameters:** 4.5B
* **Training Tokens:** 8 trillion (6.5T pretraining + 1.5T midtraining)
* **License:** [Arcee Model License (AML)](https://huggingface.co/arcee-ai/AFM-4.5B#license)
* **Recommended settings** (see the llama.cpp example below):
  * temperature: 0.5
  * top_k: 50
  * top_p: 0.95
  * repeat_penalty: 1.1
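
Since this repo ships GGUF files, here is a hedged example of applying those recommended settings with llama.cpp's `llama-cli`. The quant file name is just one of the files listed in this repo, and exact flags can vary between llama.cpp builds.

```bash
# Chat locally with a GGUF quant using the recommended sampling settings.
# Substitute any of the AFM-4.5B-*.gguf files from this repo.
./llama-cli \
  -m AFM-4.5B-q4_k_m.gguf \
  --temp 0.5 \
  --top-k 50 \
  --top-p 0.95 \
  --repeat-penalty 1.1 \
  -cnv \
  -p "You are a helpful assistant."
```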

***

## Benchmarks

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/BdsWFc4pxiHlK2E0j9AfG.png)
*Qwen3's and SmolLM's reasoning approaches cause their scores to vary wildly from suite to suite, but these are all scores on our internal harness with the same hyperparameters. Be sure to reference their reported scores as well. SmolLM has just released its [bench](https://github.com/huggingface/smollm).*

## How to use with `transformers`

You can use the model directly with the `transformers` library.

We recommend a lower temperature, around 0.5, for optimal performance.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/AFM-4.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.95
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## How to use with `vllm`

Pending a PR merge: https://github.com/vllm-project/vllm/pull/21725

## How to use with Together API

You can access this model directly via the [Together Playground](https://api.together.xyz/playground/arcee-ai/AFM-4.5B).

### Python (Official Together SDK)

```python
from together import Together

client = Together()
response = client.chat.completions.create(
    model="arcee-ai/AFM-4.5B",
    messages=[
        {
            "role": "user",
            "content": "What are some fun things to do in New York?"
        }
    ]
)
print(response.choices[0].message.content)
```

### cURL

```bash
curl -X POST "https://api.together.xyz/v1/chat/completions" \
  -H "Authorization: Bearer $TOGETHER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "arcee-ai/AFM-4.5B",
    "messages": [
      {
        "role": "user",
        "content": "What are some fun things to do in New York?"
      }
    ]
  }'
```

## Quantization support

Support for llama.cpp is available; GGUF-format quants are provided here:

https://huggingface.co/arcee-ai/AFM-4.5B-GGUF
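
For a concrete starting point, here is a hedged sketch of downloading one quant and serving it locally with llama.cpp's `llama-server`. The repo id and file name below are assumptions based on the links and file list above; adjust them to match wherever you actually downloaded the file from.

```bash
# Illustrative only: repo id and file name are assumptions based on this repo's file list.
huggingface-cli download arcee-ai/AFM-4.5B-GGUF AFM-4.5B-q4_k_m.gguf --local-dir .

# Serve the quant with llama.cpp's OpenAI-compatible server.
./llama-server -m AFM-4.5B-q4_k_m.gguf --port 8080 --ctx-size 4096
```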

## License

AFM-4.5B is released under the [Arcee Model License](https://huggingface.co/arcee-ai/AFM-4.5B/blob/main/LICENSE). If your company makes less than $1.75 million in annual revenue, you're free to use the model for commercial purposes, as long as you're not providing the weights to a company above that threshold. If your product or application using AFM-4.5B is sold to a larger company, that's fine—as long as they don't receive or run the weights directly.

We want as many developers, researchers, and builders as possible to benefit from AFM-4.5B. At the same time, this license ensures that we can continue to develop and support the model for the community.

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (the ones with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder).

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, since the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` – Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊