---
license: apache-2.0
language:
- en
- es
- fr
- de
- it
- pt
- ru
- ar
- hi
- ko
- zh
library_name: transformers
base_model:
- arcee-ai/Trinity-Nano-Base
---

# <span style="color: #7FFF7F;">Trinity-Nano-Preview GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`2e1c9cd81`](https://github.com/ggerganov/llama.cpp/commit/2e1c9cd814227c576da56379d79b15d7dfd199b2).

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
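
As a rough sketch, a layer bump can be expressed directly on the `llama-quantize` command line. The tensor patterns and target types below are illustrative placeholders, not the exact configuration used for these files:

```
# Quantize to Q4_K_M overall, but keep selected tensors at higher
# precision via --tensor-type overrides (names/types are examples only)
llama-quantize \
  --imatrix imatrix.dat \
  --tensor-type attn_v=q8_0 \
  --tensor-type ffn_down=q6_k \
  model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```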

### **I'd love your feedback: have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
    Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->

<div align="center">
  <picture>
    <img
      src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/i-v1KyAMOW_mgVGeic9WJ.png"
      alt="Arcee Trinity Mini"
      style="max-width: 100%; height: auto;"
    >
  </picture>
</div>

# Trinity Nano Preview

Trinity Nano Preview is a preview of Arcee AI's 6B MoE model with 1B active parameters. It is the small model in our new Trinity family, a series of open-weight models for enterprise and tinkerers alike.

This is a chat-tuned model with a delightful personality and charm we think users will love. Note that this model pushes the limits of sparsity in small language models, with only 800M non-embedding parameters active per token, and as such it **may be unstable** in certain use cases, especially in this preview.

This is an *experimental* release. It's fun to talk to, but it will not be hosted anywhere, so download it and try it out yourself!

***

Trinity Nano Preview is trained on 10T tokens gathered and curated through a key partnership with [Datology](https://www.datologyai.com/), building upon the excellent dataset we used for [AFM-4.5B](https://huggingface.co/arcee-ai/AFM-4.5B) with additional math and code.

Training was performed on a cluster of 512 H200 GPUs powered by [Prime Intellect](https://www.primeintellect.ai/) using HSDP parallelism.

More details, including key architecture decisions, can be found on our blog [here](https://www.arcee.ai/blog/the-trinity-manifesto).

***

## Model Details

* **Model Architecture:** AfmoeForCausalLM
* **Parameters:** 6B total, 1B active
* **Experts:** 128 total, 8 active, 1 shared
* **Context length:** 128k
* **Training Tokens:** 10T
* **License:** [Apache 2.0](https://huggingface.co/arcee-ai/Trinity-Mini#license)

***

<div align="center">
  <picture>
    <img src="https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/sSVjGNHfrJKmQ6w8I18ek.png" style="background-color:ghostwhite;padding:5px;" width="17%" alt="Powered by Datology">
  </picture>
</div>

### Running our model

- [Transformers](https://huggingface.co/arcee-ai/Trinity-Mini#transformers)
- [vLLM](https://huggingface.co/arcee-ai/Trinity-Mini#vllm)
- [llama.cpp](https://huggingface.co/arcee-ai/Trinity-Mini#llamacpp)
- [LM Studio](https://huggingface.co/arcee-ai/Trinity-Mini#lm-studio)

## Transformers

Use the `main` transformers branch:

```
git clone https://github.com/huggingface/transformers.git
cd transformers

# pip
pip install '.[torch]'

# uv
uv pip install '.[torch]'
```

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/Trinity-Nano-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {"role": "user", "content": "Who are you?"},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.5,
    top_k=50,
    top_p=0.95
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

If you are using a released version of transformers, simply pass `trust_remote_code=True`:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "arcee-ai/Trinity-Nano-Preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)
```

## vLLM

Supported in vLLM release 0.11.1:

```
# pip
pip install "vllm>=0.11.1"
```

Serving the model with suggested settings:

```
vllm serve arcee-ai/Trinity-Nano-Preview \
    --dtype bfloat16 \
    --enable-auto-tool-choice \
    --reasoning-parser deepseek_r1 \
    --tool-call-parser hermes
```
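
Once the server is up, you can sanity-check it through vLLM's OpenAI-compatible API. A minimal sketch, assuming the default address `http://localhost:8000`:

```
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "arcee-ai/Trinity-Nano-Preview",
        "messages": [{"role": "user", "content": "Who are you?"}],
        "max_tokens": 256
      }'
```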

## llama.cpp

Supported in llama.cpp release b7061.

Download the latest [llama.cpp release](https://github.com/ggml-org/llama.cpp/releases), then run:

```
llama-server -hf arcee-ai/Trinity-Nano-Preview-GGUF:q4_k_m
```
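
`llama-server` also exposes an OpenAI-compatible endpoint (port 8080 by default), so the same kind of quick test works here. A sketch, assuming default settings:

```
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Who are you?"}], "max_tokens": 256}'
```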

## LM Studio

Supported in the latest LM Studio runtime.

Update to the latest version, then verify your runtime:

1. Click "Power User" at the bottom left
2. Click the green "Developer" icon at the top left
3. Select "LM Runtimes" at the top
4. Refresh the list of runtimes and verify that the latest is installed

Then go to Model Search, search for `arcee-ai/Trinity-Nano-Preview-GGUF`, download your preferred size, and load it up in the chat.

## License

Trinity-Nano-Preview is released under the Apache-2.0 license.

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models there, if you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network Monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on a Hugging Face docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference, but **no API costs**); no token limits, since the cost is low
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- **It performs very well, but unfortunately OpenAI charges per token, so token usage is limited**
- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` – Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .NET code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊