# DMind-3-nano: Privacy-First On-Device Crypto Intent Recognition

> Inference stays on your device. Standardized function calling for wallets, DEXs, and agents. Built on `google/functiongemma-270m-it`.

## Model Description

DMind-3-nano is a small, edge-optimized language model fine-tuned for **crypto wallet and DEX intent recognition** using standardized function-calling protocols. It is designed to run **entirely on-device**, enabling privacy-preserving, low-latency intent parsing for Web3 wallets and local agents.

**Repo purpose:** host the open-source training/eval pipeline and release artifacts. Place your exported model under `./model` before pushing to Hugging Face.

## HF Card Metadata

```yaml
language:
- en
- zh
tags:
- standard-protocol
library_name: transformers
pipeline_tag: text-generation
```
## Highlights

- **Privacy-first:** all inference runs on-device; user intents never leave the wallet.
- **Standardized function calling:** emits structured `SEARCH_TOKEN` / `EXECUTE_SWAP` calls, e.g. `<start_function_call>call:SEARCH_TOKEN{symbol:"SOL",chain:"solana"}<end_function_call>`.
- **Small footprint:** built on `google/functiongemma-270m-it`; ~70MB after INT8 quantization.
- **Bilingual:** English and Chinese intent recognition.
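The `SEARCH_TOKEN` example above wraps a `call:NAME{…}` payload in `<start_function_call>…<end_function_call>` markers. A minimal sketch of extracting a structured intent from raw model output (the regex and helper name are illustrative assumptions, not the repo's API):

```python
import re

# Assumed output format, based on the SEARCH_TOKEN example above:
#   <start_function_call>call:NAME{key:"value",...}<end_function_call>
CALL_RE = re.compile(r"<start_function_call>call:(\w+)\{(.*?)\}<end_function_call>")
ARG_RE = re.compile(r'(\w+):"([^"]*)"')

def parse_intent(text: str):
    """Return (function_name, args) from model output, or None if no call found."""
    m = CALL_RE.search(text)
    if m is None:
        return None
    name, raw_args = m.group(1), m.group(2)
    args = dict(ARG_RE.findall(raw_args))
    return name, args

out = 'Model: <start_function_call>call:SEARCH_TOKEN{symbol:"SOL",chain:"solana"}<end_function_call>'
print(parse_intent(out))  # ('SEARCH_TOKEN', {'symbol': 'SOL', 'chain': 'solana'})
```

A wallet integration would validate the parsed arguments against the protocol schema before acting on them.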
## Performance Snapshot

- Function recognition: ~98.8% on validated set
- Argument extraction: ~97.4%
- Protocol adherence: SEARCH_TOKEN 98.5%, EXECUTE_SWAP 97.3%
- Multi-turn success: ~93.7%
Scope: these figures cover the tokens/chains listed in **Model Overview**; accuracy outside that set may be lower.
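For reference, recognition and extraction rates of this kind can be computed as exact-match scores over (expected, predicted) call pairs; a toy sketch (not the repo's eval script, helper names hypothetical):

```python
def score(pairs):
    """pairs: list of (expected, predicted), each a dict with 'name' and 'args'."""
    n = len(pairs)
    # Function recognition: the predicted function name matches.
    fn_hits = sum(e["name"] == p["name"] for e, p in pairs)
    # Argument extraction: name AND all arguments match exactly.
    arg_hits = sum(e["name"] == p["name"] and e["args"] == p["args"] for e, p in pairs)
    return {"function_recognition": fn_hits / n, "argument_extraction": arg_hits / n}

pairs = [
    ({"name": "SEARCH_TOKEN", "args": {"symbol": "SOL", "chain": "solana"}},
     {"name": "SEARCH_TOKEN", "args": {"symbol": "SOL", "chain": "solana"}}),
    ({"name": "EXECUTE_SWAP", "args": {"from": "SOL", "to": "USDC"}},
     {"name": "EXECUTE_SWAP", "args": {"from": "SOL", "to": "USDT"}}),  # wrong arg
]
print(score(pairs))  # {'function_recognition': 1.0, 'argument_extraction': 0.5}
```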
## Deployment Notes

- On-device: convert to ONNX/CoreML/TFLite for mobile and hardware wallets; apply INT8 quantization for a ~70MB footprint.
- CPU-only: expect sub-500ms latency on modern CPUs; GPUs are faster still.
- Keep the Chinese benchmark samples intact (they are intentional test cases).
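Once deployed, parsed calls still need routing to wallet logic. A minimal dispatch sketch (the registry and handler names are hypothetical, not part of the protocol spec):

```python
from typing import Callable, Dict

# Hypothetical registry mapping protocol function names to wallet handlers.
HANDLERS: Dict[str, Callable[..., dict]] = {}

def handler(name: str):
    """Decorator registering a wallet handler for a protocol function name."""
    def register(fn):
        HANDLERS[name] = fn
        return fn
    return register

@handler("SEARCH_TOKEN")
def search_token(symbol: str, chain: str) -> dict:
    # Placeholder: a real wallet would query its local token index here.
    return {"action": "search", "symbol": symbol, "chain": chain}

def dispatch(name: str, args: dict) -> dict:
    """Route a parsed function call to its registered handler."""
    if name not in HANDLERS:
        raise ValueError(f"unknown function call: {name}")
    return HANDLERS[name](**args)

print(dispatch("SEARCH_TOKEN", {"symbol": "SOL", "chain": "solana"}))
# {'action': 'search', 'symbol': 'SOL', 'chain': 'solana'}
```

Rejecting unknown function names (rather than guessing) keeps protocol adherence strict on-device.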
## License & Governance

- Code: MIT (`LICENSE`)
- Model card intent: Apache-2.0 (as in the metadata above)
- Protocol specs (SEARCH_TOKEN / EXECUTE_SWAP): public domain for maximal adoption
- Contributions are welcome via issues/PRs.
When publishing to Hugging Face, ensure `./model` contains your final weights and tokenizer, and replace `YOUR_ORG/DMind-3-nano`, badges, and links with your own namespace before release.