Mungert committed
Commit a111bc9 · verified · 0 parent(s)

Super-squash history to reclaim storage

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
II-Search-4B-f16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-bf16_q8_0.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q2_k_m.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q2_k_s.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q3_k_m.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q3_k_s.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q4_k_m.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q4_k_s.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q8_0.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q4_0.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q4_1.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q5_0.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-q5_1.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq2_xs.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq2_xxs.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq2_s.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq2_m.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq3_xs.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq3_xxs.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq3_s.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq3_m.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq4_xs.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-iq4_nl.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-imatrix.gguf filter=lfs diff=lfs merge=lfs -text
II-Search-4B-bf16.gguf filter=lfs diff=lfs merge=lfs -text

II-Search-4B-bf16.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ab41d146a03d98d6b15d8b32224140262a40c4997be851b2424b2f65baaa619e
size 8829197024

II-Search-4B-bf16_q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:baf046181cd0583dd2ac52f5861b19968a3d4e55ac311a4704467d1f7c0c1a58
size 6705830624

II-Search-4B-f16_q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8fd424ec838b82dddab70c869e3735eca087ae20f81d6be5705c9f0722027ad0
size 6705830624

II-Search-4B-imatrix.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:73a2f473076879334e6776d413b8df500b7c5e95b1777ddc0a8de541f1bb974b
size 3872672

II-Search-4B-iq2_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:bcc4c3c793d52dc57654023658315a9a84e875c6386f5356c9915ef0b3e108db
size 1823153184

II-Search-4B-iq2_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:8a713bb3f953d2ec45dff2e7bd5bec10ad44c84fa6d21aa7ae6210071fb6ffb3
size 1758927904

II-Search-4B-iq2_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f44c295ebc2c99eaeee6b955a1b3d7046dbf5ac0950533ec34599b746f4ad0bb
size 1713585184

II-Search-4B-iq2_xxs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b02cbb6634959f3c433ef40036a1f5a6b414b269da66b157b7a504aa8325d048
size 1609341984

II-Search-4B-iq3_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:57139b55de625dbb7eb2ab416d69f3c2adad2d20172e7a56ae39be8bf7ac19b5
size 2322102304

II-Search-4B-iq3_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:81a17341d98b76022600dd16dedb2c42103a429c3e637fe4099b6957e6a5e4d9
size 2322102304

II-Search-4B-iq3_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b84c3ab6588edf157fadb94bd2a6d556aef3f40207fd1ab2fdb4a40d1c0e379b
size 2143270944

II-Search-4B-iq3_xxs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:f62fa7b7750219c3dcf7797688b056136a4f6269bda0206d04b97a65dc6d8564
size 2092767264

II-Search-4B-iq4_nl.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ac9c022728d9b3454539b41898f87895dd6be3b2a48906b116ea840c4326b41b
size 2499853344

II-Search-4B-iq4_xs.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5b6c797afb0a56656aa67ec3d78b04c8393f65c67e18ea12c67b7e90ecf6f49e
size 2589816864

II-Search-4B-q2_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:9a3fb870dc15aae847cfb1037c44b3288de2fd9a9f771d41e197dd01e9d6e41e
size 1858952224

II-Search-4B-q2_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:4a191c1ab6c64dce7e353cda2af80acec344639137992b0da2cb1d7261b94537
size 1761713184

II-Search-4B-q3_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d9c4510faaa147491cd43f8a1414688ec6eedf9f44059cf9c4fd19434dadd736
size 2378627104

II-Search-4B-q3_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:95c5e2b98383b9ad804accb6fd5d2ea35215071ec9c15d3844e6709234705072
size 2275310624

II-Search-4B-q4_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:877e960355e7ed4a0ee6b632164401d33295b66a7d3e4ae0c649208854c961c9
size 2877013024

II-Search-4B-q4_1.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:0fac58dfe6e537b461ed3976142e511711bc4aa53170cfa819171cfd4c4e9596
size 2812378144

II-Search-4B-q4_k_m.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:94d8c99b74054abd248a59b44ca1a5fa7226c22d0c4550c070f836ede1941405
size 2985987104

II-Search-4B-q4_k_s.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:d2e00e74c97066df3153f6453b921ae858895c86fcde4040d98109b525786612
size 2726623264

II-Search-4B-q5_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:547ccedf6fc0dd0cd9978c66f457fdffc9dc9c192a315f4fb34a178e1aca2fda
size 3331177504

II-Search-4B-q5_1.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:537fd2814e393a22ca79a37477266390c37458345fb368f7611e86f5b18c7972
size 3558259744

II-Search-4B-q8_0.gguf ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:ebbab20f5fc33ccd34e729d243aaa317f759297014136158f57558143931cfbe
size 4693670624

README.md ADDED
---
base_model:
- Qwen/Qwen3-4B
pipeline_tag: text-generation
library_name: transformers
license: apache-2.0
---

# <span style="color: #7FFF7F;">II-Search-4B GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`cd6983d5`](https://github.com/ggerganov/llama.cpp/commit/cd6983d56d2cce94ecb86bb114ae8379a609073c).

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
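As a rough illustration of the idea (the actual selection logic lives in the linked `tensor_list_builder.py`; the name patterns and type choices below are my assumptions, not taken from that script), tensors whose names match certain patterns can be mapped to a higher-precision type than the base quantization:

```python
import re

# Hypothetical "bump" rules: tensors matching a pattern get a
# higher-precision type than the base quantization. The patterns and
# type names here are illustrative only.
BUMP_RULES = [
    (re.compile(r"\.attn_(q|k|v)\.weight$"), "q6_k"),  # attention projections
    (re.compile(r"\.ffn_down\.weight$"), "q6_k"),      # FFN down-projection
    (re.compile(r"token_embd\.weight$"), "q8_0"),      # embedding table
]

def tensor_overrides(tensor_names, base_type="q4_k"):
    """Return (name, type) pairs, bumping any tensor that matches a rule."""
    overrides = []
    for name in tensor_names:
        chosen = base_type
        for pattern, bumped in BUMP_RULES:
            if pattern.search(name):
                chosen = bumped
                break
        overrides.append((name, chosen))
    return overrides

names = ["blk.0.attn_q.weight", "blk.0.ffn_up.weight", "token_embd.weight"]
for name, qtype in tensor_overrides(names):
    print(f"--tensor-type {name}={qtype}")
```

The resulting pairs could then be handed to llama.cpp's quantize tool via its `--tensor-type` option, one flag per bumped tensor.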

### **I'd love your feedback—have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63466107f7bd6326925fc770/b6xfld0bUDDAQIFvMCapD.png)

# II-Search-4B

<aside>

A 4B parameter language model specialized in information seeking, multi-hop reasoning, and web-integrated search, achieving state-of-the-art performance among models of similar size.

</aside>

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63466107f7bd6326925fc770/rUpsG4-X9ZdO6JVEp6xVO.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63466107f7bd6326925fc770/83kNxbdWU8mk8lqLZ9Gnb.png)

## Model Description

II-Search-4B is a 4B parameter language model based on Qwen3-4B, fine-tuned specifically for information-seeking tasks and web-integrated reasoning. It excels at complex multi-hop information retrieval, fact verification, and comprehensive report generation.

### Key Features

- Enhanced tool usage for web search and webpage visits
- Multi-hop reasoning capabilities with sophisticated planning
- Verified information retrieval with cross-checking
- Strong performance on factual QA benchmarks
- Comprehensive report generation for research queries

## Training Methodology

Our training process consisted of four key phases:

### Phase 1: Tool Call Ability Stimulation

We used a distillation approach from larger models (Qwen3-235B) to generate reasoning paths with function calling on multi-hop datasets. This established the base capabilities for tool use.

### Phase 2: Reasoning Improvement

We addressed initial limitations by:

- Creating synthetic problems requiring more reasoning turns, inspired by the Random Walk algorithm
- Improving reasoning thought patterns for more efficient and cleaner reasoning paths

### Phase 3: Rejection Sampling & Report Generation

We applied:

- Filtering to keep only high-quality reasoning traces (correct answers with proper reasoning)
- STORM-inspired techniques to enhance comprehensive report generation

### Phase 4: Reinforcement Learning

We trained the model using reinforcement learning:

- Used dataset: [dgslibisey/MuSiQue](https://huggingface.co/datasets/dgslibisey/MuSiQue)
- Incorporated our in-house search database (containing Wiki data, Fineweb data, and ArXiv data)

## Performance

| **Benchmark** | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| OpenAI/SimpleQA | 76.8 | 80.1 | 81.8 | 91.8 |
| Google/Frames | 30.7 | 24.8 | 34.0 | 67.5 |
| Seal_0 | 6.31 | 2.7 | 1.8 | 22.5 |

### Tool Usage Comparison

**Simple QA (SerpDev)**

| | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| # Search | 1.0 | 0.9 | 2.1 | 2.2 |
| # Visit | 0.1 | 1.9 | 6.4 | 3.5 |
| # Total Tools | 1.1 | 2.8 | 8.5 | 5.7 |

All benchmark traces from the models can be found at: https://huggingface.co/datasets/Intelligent-Internet/II-Search-Benchmark-Details

## Intended Use

II-Search-4B is designed for:

- Information seeking and factual question answering
- Research assistance and comprehensive report generation
- Fact verification and evidence-based reasoning
- Educational and research applications requiring factual accuracy

## Usage

To deploy and interact with the II-Search-4B model effectively, follow these options:

1. Serve the model using vLLM or SGLang

Use the following command to serve the model with vLLM (adjust parameters as needed for your hardware setup):

```bash
vllm serve Intelligent-Internet/II-Search-4B --served-model-name II-Search-4B --tensor-parallel-size 8 --enable-reasoning --reasoning-parser deepseek_r1 --rope-scaling '{"rope_type":"yarn","factor":1.5,"original_max_position_embeddings":98304}' --max-model-len 131072
```

This configuration enables distributed tensor parallelism across 8 GPUs, reasoning capabilities, custom RoPE scaling for extended context, and a maximum context length of 131,072 tokens.
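Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal request sketch (the host and port are assumptions based on vLLM's defaults, and the question is just a placeholder):

```python
import json
import urllib.request

# Build a chat request for vLLM's OpenAI-compatible endpoint.
# "II-Search-4B" matches the --served-model-name used above.
payload = {
    "model": "II-Search-4B",
    "messages": [{"role": "user", "content": "Who wrote The Selfish Gene?"}],
    "temperature": 0.6,
    "top_p": 0.95,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # vLLM's default address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```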

2. Integrate web_search and web_visit tools

Equip the served model with web_search and web_visit tools to enable internet-aware functionality. Alternatively, use middleware such as MCP for tool integration—see this example repository: https://github.com/hoanganhpham1006/mcp-server-template.
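The model card does not publish the exact tool schemas, so here is a plausible OpenAI-style function-calling definition for the two tools; the parameter names and descriptions are my assumptions:

```python
# Hypothetical JSON-schema definitions for the two tools; the exact
# parameter names/descriptions are assumptions, not a published spec.
web_search_tool = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return a list of result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}

web_visit_tool = {
    "type": "function",
    "function": {
        "name": "web_visit",
        "description": "Fetch and return the text content of a web page.",
        "parameters": {
            "type": "object",
            "properties": {"url": {"type": "string"}},
            "required": ["url"],
        },
    },
}

# Pass this list in the "tools" field of a chat-completions request.
tools = [web_search_tool, web_visit_tool]
```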

## Host on macOS with MLX for local use

As an alternative for Apple Silicon users, host the quantized [II-Search-4B-MLX](https://huggingface.co/Intelligent-Internet/II-Search-4B-MLX) version on your Mac. Then interact with it via user-friendly interfaces such as LM Studio or Ollama Desktop.

## Recommended Generation Parameters

```python
generate_cfg = {
    'top_k': 20,
    'top_p': 0.95,
    'temperature': 0.6,
    'repetition_penalty': 1.1,
    'max_tokens': 2048
}
```
+
165
+ - For a query that you need to find a short and accurate answer. Add the following phrase: "\n\nPlease reason step-by-step and put the final answer within \\\\boxed{}."
166
+
167
+ ## Citation
168
+
169
+ ```
170
+ @misc{II-Search-4B,
171
+ author = {Intelligent Internet},
172
+ title = {II-Search-4B: Information Seeking and Web-Integrated Reasoning LLM},
173
+ year = {2025},
174
+ publisher = {Hugging Face},
175
+ journal = {Hugging Face Hub},
176
+ howpublished = {\url{https://huggingface.co/II-Vietnam/II-Search-4B}},
177
+ }
178
+
179
+ ```
180
+
181
+ <!--End Original Model Card-->
182
+
183
+ ---
184
+
185
+ # <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
186
+
187
+ Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
188
+
189
+ 👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
190
+
191
+
192
+ The full Open Source Code for the Quantum Network Monitor Service available at my github repos ( repos with NetworkMonitor in the name) : [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
193
+
194
+ 💬 **How to test**:
195
+ Choose an **AI assistant type**:
196
+ - `TurboLLM` (GPT-4.1-mini)
197
+ - `HugLLM` (Hugginface Open-source models)
198
+ - `TestLLM` (Experimental CPU-only)
199
+
200
+ ### **What I’m Testing**
201
+ I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
202
+ - **Function calling** against live network services
203
+ - **How small can a model go** while still handling:
204
+ - Automated **Nmap security scans**
205
+ - **Quantum-readiness checks**
206
+ - **Network Monitoring tasks**
207
+
208
+ 🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
209
+ - ✅ **Zero-configuration setup**
210
+ - ⏳ 30s load time (slow inference but **no API costs**) . No token limited as the cost is low.
211
+ - 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
212
+
213
+ ### **Other Assistants**
214
+ 🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
215
+ - **It performs very well but unfortunatly OpenAI charges per token. For this reason tokens usage is limited.
216
+ - **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
217
+ - **Real-time network diagnostics and monitoring**
218
+ - **Security Audits**
219
+ - **Penetration testing** (Nmap/Metasploit)
220
+
221
+ 🔵 **HugLLM** – Latest Open-source models:
222
+ - 🌐 Runs on Hugging Face Inference API. Performs pretty well using the lastest models hosted on Novita.
223
+
224
+ ### 💡 **Example commands you could test**:
225
+ 1. `"Give me info on my websites SSL certificate"`
226
+ 2. `"Check if my server is using quantum safe encyption for communication"`
227
+ 3. `"Run a comprehensive security audit on my server"`
228
+ 4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
229
+
230
+ ### Final Word
231
+
232
+ I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
233
+
234
+ If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
235
+
236
+ I'm also open to job opportunities or sponsorship.
237
+
238
+ Thank you! 😊