---
language:
- en
- zh
license: apache-2.0
library_name: transformers
tags:
- web3
- finance
- defi
- chain-of-thought
- sft
- security-audit
- on-device-ai
metrics:
- accuracy
- ponzi-detection-rate
- code-security-score
pipeline_tag: text-generation
inference: false
base_model:
- openai/gpt-oss-20b
---

# <span style="color: #7FFF7F;">DMind-3 GGUF Models</span>

## <span style="color: #7F7FFF;">Model Generation Details</span>

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`05fa625ea`](https://github.com/ggerganov/llama.cpp/commit/05fa625eac5bbdbe88b43f857156c35501421d6e).
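
If you want to try one of the quants locally, here is a minimal sketch using the `llama-cpp-python` bindings. The GGUF filename below is a placeholder; substitute whichever quant file you downloaded from this repo:

```python
# Minimal local-inference sketch with llama-cpp-python.
# The model filename is a placeholder for whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="DMind-3-q4_k_m.gguf",  # placeholder filename
    n_ctx=8192,                        # raise if your RAM allows
)

out = llm("Summarize the main systemic risks in DeFi lending.", max_tokens=256)
print(out["choices"][0]["text"])
```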

---

## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>

I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.

In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)

While this does increase model file size, it significantly improves precision for a given quantization level.
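
For illustration, here is a minimal sketch of what such a per-tensor bump can look like when driving `llama-quantize` from Python. The tensor-name patterns and quant types are assumptions to adapt to your model, and the repeated `--tensor-type NAME=TYPE` flag requires a reasonably recent `llama.cpp` build:

```python
# Sketch: drive llama-quantize with per-tensor precision overrides.
# Assumes a llama.cpp build whose llama-quantize accepts repeated
# --tensor-type NAME=TYPE flags; tensor names and paths are placeholders.
import subprocess

bumped = {
    "attn_v": "q8_0",    # example: keep attention value weights at 8-bit
    "ffn_down": "q6_k",  # example: bump FFN down-projections above the base type
}

cmd = ["./llama-quantize"]
for name, qtype in bumped.items():
    cmd += ["--tensor-type", f"{name}={qtype}"]
cmd += ["DMind-3-f16.gguf", "DMind-3-q4_k_m.gguf", "Q4_K_M"]  # placeholder paths

subprocess.run(cmd, check=True)
```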

### **I'd love your feedback! Have you tried this? How does it perform for you?**

---

<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>

---

<!--Begin Original Model Card-->
63
+
64
+
65
+ # ๐Ÿ”ฎ DMind-3: The Age of Foresight
66
+
67
+ *From local logic to global foresight. In a world of isolated systems, the one who sees the whole board wins.*
68
+
69
+ We have armed the individual with a shield (`DMind-3-nano`) and a brain (`DMind-3-mini`). We have enabled sovereign intelligence to defend and to reason. Yet, in the interconnected chaos of global markets, even the sharpest mind can be blindsided by a tsunami forming on the other side of the world. Local optimization is not enough. True sovereignty requires not just reaction, but **pre-emption**.
70
+
71
+ Web3 is a single, planet-scale financial machine. Capital flows like weather patterns, and risks cascade across protocols like lightning. To navigate this reality, one cannot merely analyze a single smart contract or a single chain. One must perceive the entire systemโ€”its flows, its pressures, its emergent properties. This requires a perspective that transcends the local, a form of intelligence that can synthesize global, cross-domain information into actionable foresight.
72
+
73
+ DMind-3 is our answer. It is not an incremental upgrade; it is a categorical leap. While `nano` provides intuition and `mini` provides logic, `max` delivers **foresight**. It is the Oracle in the cloud, the strategic command center that sees the entire battlefield. It was built not to answer questions, but to question the answers, to model the unseen, and to chart a course through the complexity of a new financial era.
74
+
75
+ ๐Ÿ›ก๏ธ `DMind-3-nano` is your Shield. โš”๏ธ `DMind-3-mini` is your Spear. ๐Ÿ”ฎ `DMind-3` is your Oracle.
76
+
77
+ Welcome to the Age of Foresight.
78
+
79
+ ---

## 🏛️ DMind-3: The Macro-Strategic Financial Engine

### 1. Evolution & Legacy

The DMind-3 series was conceived as a complete, multi-layered cognitive architecture for the sovereign individual. `Nano` secures the present transaction. `Mini` formulates the immediate strategy. `Max` defines the long-term campaign.

This final piece of the trilogy moves beyond the tactical and into the strategic. It was born from the recognition that the most significant opportunities and the most devastating risks in Web3 are systemic. They are not found in code alone, but in the interplay between code, capital, and human psychology at a global scale. DMind-3 is engineered to be a **Macro-Strategic Financial Engine**, providing institutional-grade foresight as a utility for developers, funds, and the agent ecosystems built upon the DMind stack.

### 2. ⚙️ Model Details

| Property | Value |
|---|---|
| **Model Name** | DMind-3 |
| **Organization** | DMindAI |
| **Base Architecture** | gpt-oss-20b (Customized Transformer w/ Multi-Scale RoPE) |
| **Parameter Count** | 21 Billion |
| **Precision** | BF16 / FP16 (Native) |
| **Context Window** | 256k tokens |
| **Deployment** | Cloud API & Private Enterprise VPC |

### 3. 🔬 Methodology: Hierarchical Predictive Synthesis (HPS)

DMind-3 introduces **Hierarchical Predictive Synthesis (HPS)**. While C³-SFT (used in `mini`) teaches the model to correct its own reasoning, HPS teaches it to synthesize multiple, conflicting, time-variant data streams into a coherent probabilistic forecast. It operates on a nested hierarchy of abstraction, from raw on-chain events to complex macroeconomic indicators.

![Figure 1](./Figures/Figure1.png)

**(Figure 1: The HPS training paradigm, showing multi-level data fusion and probabilistic future state generation)**

**Mathematical Formalization**

The HPS objective function seeks to minimize the divergence between the model's predicted distribution of future states and the actual observed outcomes, weighted by strategic importance:

$$
\mathcal{L}_{\text{HPS}}(\theta) = - \mathbb{E}_{\mathcal{D}} \left[ \sum_{t=1}^{T} \sum_{i=1}^{M} \omega_i \cdot \log P_\theta(S'_{t+1} \mid S_t, A_t, M_i) \right] + \lambda \sum_{l=1}^{L} \| \Omega_l(\theta) - \Omega_l(\theta_{\text{ref}}) \|_F
$$

where:

| Symbol | Description |
|---|---|
| \\(S_t\\) | The state of the global market at time \\(t\\) |
| \\(A_t\\) | The set of all actions (transactions, governance votes) at time \\(t\\) |
| \\(M_i\\) | The \\(i\\)-th modality of data (e.g., on-chain, news, social sentiment) |
| \\(\omega_i\\) | The attention weight assigned to the strategic importance of modality \\(i\\) |
| \\(\Omega_l\\) | The parameter matrix at layer \\(l\\) of the network, regularized to prevent catastrophic forgetting |
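
As a concrete reading of the formula, here is a minimal NumPy sketch of its two terms. The card does not publish training code, so the shapes, weights, and parameter lists below are illustrative assumptions:

```python
# Sketch: the two terms of the HPS objective above, in NumPy.
# Shapes, weights, and parameter lists are illustrative assumptions.
import numpy as np

def hps_loss(log_probs, omega, params, ref_params, lam=1e-3):
    """log_probs: (T, M) array of log P(S'_{t+1} | S_t, A_t, M_i).
    omega: (M,) strategic-importance weights per modality.
    params, ref_params: per-layer weight matrices Omega_l and their
    reference counterparts (the anti-forgetting anchor)."""
    nll = -np.sum(log_probs * omega[None, :])      # weighted log-likelihood term
    reg = sum(np.linalg.norm(p - r, ord="fro")     # Frobenius regularizer
              for p, r in zip(params, ref_params))
    return nll + lam * reg

# Tiny smoke test with random values.
T, M = 4, 3
loss = hps_loss(
    np.log(np.random.rand(T, M)),
    np.array([0.5, 0.3, 0.2]),
    [np.eye(2)], [np.eye(2) * 0.9],
)
print(loss)
```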

**Dual-State Inference Mechanism**

Similar to `DMind-3-mini`, the model supports a dual-state inference mechanism triggered by a special token:

$$
\hat{y} =
\begin{cases}
\operatorname*{arg\,max}\limits_{y} P_\theta(y \mid x, \mathcal{C}_{\text{global}}) & \text{if } \tau = \emptyset \quad (\text{Standard Mode}) \\
\operatorname*{arg\,max}\limits_{y} P_\theta(y \mid x, \mathcal{C}_{\text{global}}, \mathcal{R}_{\text{risk}}, \mathcal{H}_{\text{hist}}) & \text{if } \tau = \texttt{<FORESIGHT>} \quad (\text{Strategic Mode})
\end{cases}
$$

This forces the model not just to predict, but to weigh the importance of different data sources when constructing its view of the future.
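
To make the two modes tangible, here is a minimal dispatch sketch. The `<FORESIGHT>` token comes from the card; the context arguments and the `generate` call are assumptions about the serving layer:

```python
# Sketch: dual-state inference dispatch on the <FORESIGHT> control token.
# The token name comes from the card; the context arguments and the
# model.generate call are assumptions about the serving layer.
FORESIGHT = "<FORESIGHT>"

def infer(model, prompt, global_ctx, risk_ctx="", hist_ctx=""):
    if FORESIGHT in prompt:
        # Strategic Mode: also condition on risk and historical context.
        parts = [global_ctx, risk_ctx, hist_ctx,
                 prompt.replace(FORESIGHT, "").strip()]
    else:
        # Standard Mode: condition on the global market context only.
        parts = [global_ctx, prompt]
    return model.generate("\n".join(p for p in parts if p))
```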

### 4. 💡 Intended Use: Institutional-Grade Web3 Intelligence

DMind-3 is designed to power the next generation of DeFi analytics, risk management platforms, and autonomous agent orchestrators.

**Key Capabilities:**
- 🔮 **Macro-Strategic Foresight**: Identify emerging cross-chain narratives, predict market regime shifts, and model the impact of major economic events on crypto asset correlations.
- 🏛️ **Automated Institutional Research**: Generate deep, data-driven reports on novel protocols, perform automated tokenomics valuation, and assess long-term protocol viability.
- 🌊 **Systemic Risk Assessment**: Model contagion risk across DeFi, detect liquidity black holes before they form, and run stress tests on entire ecosystems based on simulated market shocks.
- 🤖 **Agent Fleet Orchestration**: Serve as the central "strategic brain" for fleets of `mini` and `nano` agents, providing high-level directives and market context.

### 5. 📚 The Brain, Shield & Oracle Ecosystem

The DMind-3 series is a vertically integrated stack designed for sovereign intelligence.

![Figure 2](./Figures/Figure2.png)

**(Figure 2: The full DMind-3 Cognitive Architecture, from on-device reflexes to cloud-native foresight)**

- **The Oracle (DMind-3)**: Runs in the cloud. Provides macro-strategic foresight and systemic risk analysis, and orchestrates the agent fleet.
- **The Brain (DMind-3-mini)**: Runs on your local high-performance machine. Executes complex, bespoke strategies and performs deep, focused research under the Oracle's guidance.
- **The Shield (DMind-3-nano)**: Runs in your browser or wallet. Provides real-time, intuitive transaction security and intent recognition, acting as the final line of defense.

### 6. 📚 Training Data

DMind-3 was trained on a corpus of over 500,000 curated, high-signal documents and a multi-terabyte stream of structured on-chain data.

| Data Source | Proportion | Description |
|---|---|---|
| **Institutional Alpha Reports** | 35% | Comprehensive reports from premier crypto-native funds and TradFi institutions, deconstructed into causal models. |
| **Global Macroeconomic Data** | 25% | Time-series data from sources like the Federal Reserve (FRED), World Bank, and IMF, correlated with on-chain metrics. |
| **Cross-Chain Indexed Data** | 20% | A complete, indexed history of transactions, state changes, and logs across all major EVM chains, Solana, and Cosmos ecosystems. |
| **Financial Post-Mortems & Audits** | 10% | In-depth analysis of systemic failures, economic exploits, and protocol hacks, focusing on pre-mortem indicators and contagion pathways. |
| **Geopolitical & Regulatory Feeds** | 10% | Real-time feeds on global regulatory changes, policy proposals, and geopolitical events impacting digital asset markets. |

### 7. 🏆 Performance Benchmarks

Evaluated on three key benchmarks: **DMind Benchmark** (Web3 Native Logic), **FinanceQA** (Financial Domain Knowledge), and **AIME 2025** (Advanced Mathematical Reasoning).

![Figure 3: Performance Benchmarks](./Figures/Figure3.png)

**(Figure 3: LLM Performance Evaluation - 3 Benchmarks: DMind Benchmark, FinanceQA, AIME 2025)**

The evaluation compares DMind-3 (21B) against top-tier frontier models (GPT-5.1, Claude Sonnet 4.5) and other efficient models. Despite its optimized size, DMind-3 demonstrates exceptional efficiency, particularly in specialized domain tasks, where it outperforms significantly larger generalist models.

### 8. ⚖️ Limitations & Disclaimer

- **Not a Financial Advisor (NFA)**: DMind-3 is a powerful analytical tool for generating insights and modeling risks. It is not a registered financial advisor. All outputs should be independently verified and are not a solicitation to trade.
- **Probabilistic Nature**: All forecasts are probabilistic and based on the data available up to the knowledge cutoff. The model cannot predict black swan events and is subject to the inherent unpredictability of markets.
- **Knowledge Cutoff**: The core model has a knowledge cutoff of June 2025. While it can process real-time data provided via the API, its foundational understanding is based on its training corpus.

---

<!--End Original Model Card-->

---

# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>

Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:

👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (the ones with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, in case you want to do it yourself: [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**
I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap security scans**
  - **Quantum-readiness checks**
  - **Network Monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference, but **no API costs**). No token limit, since the cost is low.
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.

### 💡 **Example commands you could test**:
1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum-safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"` Note: you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊