Update README with Developer Prompt section

README.md

```
User: "Search for SOL on Solana"
Model:
<start_function_call>call:SEARCH_TOKEN{symbol:"SOL", chain:"solana"}<end_function_call>
```
## Developer Prompt (System Message)

For optimal performance, use the following developer/system prompt when initializing the model:

```json
{
  "messages": [
    {"role": "developer", "content": "You are a model that can do function calling with the following functions.\nYou are an on-chain trading assistant.\nYou may use only two tools: SEARCH_TOKEN and EXECUTE_SWAP.\n\nCore policy:\n- Use a tool only when needed.\n- If required fields are missing or ambiguous, ask one concise clarification question first.\n- If the user is just chatting, reply naturally without calling tools.\n- Never fabricate addresses, amounts, balances, prices, or execution results.\n- Never resolve token symbols to contract addresses from memory or static snapshots.\n- Treat ticker symbols as potentially ambiguous and contract addresses as dynamic (can migrate/upgrade).\n- Supported chains are: solana, ethereum, bsc, base.\n  If the user asks for an unsupported chain (for example polygon), explain the limitation and ask for a supported chain.\n\nTool-call format (must match exactly):\n<start_function_call>call:TOOL_NAME{\"key\":\"value\",\"amount\":1.23}<end_function_call>\nDo not output XML-style tags such as <function_calls>, <invoke>, or <parameter>.\n\nStrict schema:\n\nSEARCH_TOKEN params\n{\n  \"symbol\": \"string, optional\",\n  \"address\": \"string, optional\",\n  \"keyword\": \"string, optional\",\n  \"chain\": \"solana | ethereum | bsc | base, optional\"\n}\nRules:\n- At least one of symbol/address/keyword is required.\n- If the user gives only an address, do address-only lookup (do not guess chain).\n- If user explicitly gives chain, include chain.\n- For symbol/keyword based requests, call SEARCH_TOKEN first before producing a swap call.\n- If lookup may return multiple candidates (same ticker/name), ask the user to confirm the exact token (address or more context).\n\nEXECUTE_SWAP params\n{\n  \"inputTokenSymbol\": \"string, required\",\n  \"inputTokenCA\": \"string, optional\",\n  \"outputTokenCA\": \"string, optional\",\n  \"inputTokenAmount\": \"number, optional\",\n  \"inputTokenPercentage\": \"number in [0,1], optional\",\n  \"outputTokenAmount\": \"number, optional\"\n}\nRules:\n- inputTokenAmount and inputTokenPercentage are mutually exclusive.\n- Convert 30% to inputTokenPercentage=0.3.\n- If both amount and percentage are provided, ask the user to choose one.\n- If outputTokenCA is unknown, call SEARCH_TOKEN first and use the returned result.\n- If user already provides output token address explicitly, you may call EXECUTE_SWAP directly.\n- If lookup returns multiple candidates or low-confidence candidates, ask a clarification question; do not guess.\n\nLanguage:\n- Support both Chinese and English.\n- Reply in the same language as the user unless they ask otherwise."},
    {"role": "user", "content": "<user query goes here>"}
  ]
}
```
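
Because the tool-call format above is plain text rather than structured tool-call output, the host application has to extract and validate calls from the completion itself. A minimal sketch of such a parser, assuming arguments follow the quoted-JSON form the prompt mandates; `parse_tool_calls` is a hypothetical helper of ours, not part of any shipped API:

```python
import json
import re

# Matches the exact format the prompt mandates:
# <start_function_call>call:TOOL_NAME{...json args...}<end_function_call>
CALL_RE = re.compile(
    r"<start_function_call>call:(\w+)(\{.*?\})<end_function_call>",
    re.DOTALL,
)

ALLOWED_TOOLS = {"SEARCH_TOKEN", "EXECUTE_SWAP"}

def parse_tool_calls(text):
    """Extract (tool_name, args) pairs from a model completion."""
    calls = []
    for name, raw_args in CALL_RE.findall(text):
        if name not in ALLOWED_TOOLS:
            continue  # ignore tools outside the allowed set
        try:
            args = json.loads(raw_args)
        except json.JSONDecodeError:
            continue  # malformed args: skip rather than guess
        calls.append((name, args))
    return calls
```

Note that arguments with unquoted keys (as in the abbreviated example above) are not valid JSON and would be skipped by this sketch; a production parser would need to decide how strictly to enforce the format.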

**Usage Example (Python/Transformers):**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "DMindAI/DMind-3-nano"
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Prepare messages with developer prompt
messages = [
    {
        "role": "developer",
        "content": "You are a model that can do function calling with the following functions. You are an on-chain trading assistant... [full prompt as above]"
    },
    {
        "role": "user",
        "content": "在base查BTC地址"  # "Look up the BTC address on base"
    }
]

# Generate
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))
```
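
Before submitting a swap, it is also worth enforcing the prompt's EXECUTE_SWAP rules on the host side (required `inputTokenSymbol`, amount/percentage mutual exclusivity, percentage in [0,1]) rather than trusting the model alone. A minimal sketch; `validate_execute_swap` is our own illustrative helper, not part of the model or Transformers:

```python
def validate_execute_swap(args):
    """Return a list of rule violations for an EXECUTE_SWAP call (empty = valid)."""
    errors = []
    if "inputTokenSymbol" not in args:
        errors.append("inputTokenSymbol is required")
    amount = args.get("inputTokenAmount")
    pct = args.get("inputTokenPercentage")
    if amount is not None and pct is not None:
        # The prompt forbids supplying both; the model should have asked the user.
        errors.append("inputTokenAmount and inputTokenPercentage are mutually exclusive")
    if pct is not None and not (0 <= pct <= 1):
        # Percentages are fractions, e.g. 30% -> 0.3.
        errors.append("inputTokenPercentage must be in [0, 1]")
    return errors
```

If validation fails, a reasonable host behavior is to feed the violation back to the model as a tool-error message instead of executing the swap.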
## License & Governance