Update README.md
tags:
- merge
---

# Neuro-Orchestrator-8B

<div align="center">

![Architecture]()
![Type]()
![Precision]()
![License]()

**The Autonomous Adaptive Research Engine**

</div>
## 🧠 Model Overview

**Neuro-Orchestrator-8B** is a state-of-the-art agentic merge built on the **Qwen architecture**. It is designed to solve the "always-on" reasoning problem by utilizing a **hybrid gating mechanism** derived from merging three distinct Qwen-based fine-tunes:

1. **Analyzes** the complexity of the user request first (HiPO Influence).
2. **Decides** whether to answer immediately or enter a deep reasoning loop.
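The analyze-then-decide flow above can be pictured as a toy dispatcher. The function name and threshold below are ours and purely illustrative; in the merged model this gating is learned behavior inside the weights, not explicit code:

```python
def orchestrate(request, complexity_score):
    """Toy sketch of the two-step gate: analyze complexity, then route."""
    # Step 1 (HiPO influence): assume a complexity estimate in [0, 1] is given.
    # Step 2: easy requests get a direct answer; hard ones enter the loop.
    if complexity_score < 0.5:  # illustrative threshold, not a real parameter
        return ("direct_answer", request)
    return ("deep_reasoning_loop", request)
```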

It was created using the **TIES-Merging** method on the following base models:

* [MiroThinker-v1.0-8B](https://huggingface.co/miromind-ai/MiroThinker-v1.0-8B)
* [Nemotron-Orchestrator-8B](https://huggingface.co/nvidia/Nemotron-Orchestrator-8B)
* [HiPO-8B](https://huggingface.co/Kwaipilot/HiPO-8B)

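TIES-Merging trims each fine-tune's task vector, elects a per-parameter sign, and averages only the agreeing deltas. A minimal NumPy sketch of the idea for a single weight tensor (the `density` value is assumed for illustration, not the recipe actually used for this model):

```python
import numpy as np

def ties_merge(base, finetuned, density=0.5):
    """Illustrative single-tensor TIES merge (assumed hyperparameters)."""
    # 1. Task vectors: each fine-tune's delta from the shared base.
    deltas = [ft - base for ft in finetuned]
    trimmed = []
    for d in deltas:
        # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
        k = max(1, int(density * d.size))
        threshold = np.sort(np.abs(d), axis=None)[-k]
        trimmed.append(np.where(np.abs(d) >= threshold, d, 0.0))
    stacked = np.stack(trimmed)
    # 3. Elect sign: the dominant sign per parameter across task vectors.
    sign = np.sign(stacked.sum(axis=0))
    # 4. Disjoint merge: average only the deltas that agree with that sign.
    agree = (np.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(axis=0) / np.maximum(agree.sum(axis=0), 1)
    return base + merged
```

Parameters where the fine-tunes pull in opposite directions (the sign conflict in the test below) are resolved rather than averaged toward zero, which is the point of the method.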
## 💻 Usage Code (Bfloat16)

This model is optimized to run in `bfloat16` precision. It uses the specific **ChatML** prompt template native to Qwen models.

*Note: This configuration requires a GPU with sufficient VRAM (approx. 16GB+).*

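The 16 GB figure is roughly the footprint of the weights alone: 8B parameters at 2 bytes each in `bfloat16`, before activations and KV cache:

```python
# Back-of-envelope VRAM estimate for the weights only.
params = 8e9            # "8B" parameter count
bytes_per_param = 2     # bfloat16 is 2 bytes per parameter
weights_gib = params * bytes_per_param / 1024**3
print(round(weights_gib, 1))  # ≈ 14.9 GiB, hence "approx. 16GB+" in practice
```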
### Installation

```bash
# (installation commands elided in this excerpt)
```

```python
# … model and tokenizer loading (elided in this excerpt)
print("Model loaded successfully!")

# --- INFERENCE FUNCTION ---
def run_neuro_agent(prompt):
    # Qwen/ChatML Format triggers the Orchestrator personality
    full_prompt = (
        f"<|im_start|>system\n"
        f"You are Neuro-Orchestrator. Analyze the request complexity, plan, and execute.<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
    # Continuation reconstructed with standard transformers calls.
    inputs = tokenizer(full_prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=1024)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:],
                            skip_special_tokens=True)
```
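Because `run_neuro_agent` depends on a loaded model and tokenizer, the ChatML assembly can be factored into a standalone helper and checked in isolation (the helper name is ours, not part of the model card):

```python
def build_chatml_prompt(user_prompt):
    """Standalone version of the prompt assembly used in run_neuro_agent."""
    system = "You are Neuro-Orchestrator. Analyze the request complexity, plan, and execute."
    # ChatML wraps every turn in <|im_start|>role ... <|im_end|> markers;
    # the trailing assistant header cues the model to begin its reply.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```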
## 📜 License

This model is a merge of Qwen-based models. Users should comply with the Apache 2.0 license and the specific terms of the constituent models:
* [MiroThinker-v1.0-8B](https://huggingface.co/miromind-ai/MiroThinker-v1.0-8B)
* [Nemotron-Orchestrator-8B](https://huggingface.co/nvidia/Nemotron-Orchestrator-8B)
* [HiPO-8B](https://huggingface.co/Kwaipilot/HiPO-8B)