vasanthfeb13 committed commit `09e8c1e` (0 parents)

Deploy Main Space

Files changed (showing the first 50 of a larger change set):
  1. README.md +607 -0
  2. app.py +108 -0
  3. configs/node-config.example.yml +16 -0
  4. configs/node-kali-vm.yml +24 -0
  5. configs/node-laptop.yml +12 -0
  6. configs/node-linux-vm.yml +12 -0
  7. configs/node-windows-vm.yml +10 -0
  8. configs/orchestrator-blaxel-gemini.yml +36 -0
  9. configs/orchestrator-blaxel-openai.yml +78 -0
  10. configs/orchestrator-config.example.yml +26 -0
  11. configs/orchestrator-local-demo.yml +15 -0
  12. configs/orchestrator-modal.yml +60 -0
  13. configs/orchestrator-three-node.yml +26 -0
  14. configs/orchestrator-vms.yml +35 -0
  15. configs/orchestrator.yml +61 -0
  16. configs/ui-config.example.yml +5 -0
  17. configs/ui-modal.yml +5 -0
  18. orchestrator-config.yml +59 -0
  19. requirements.txt +27 -0
  20. src/nacc_node/__init__.py +25 -0
  21. src/nacc_node/__pycache__/__init__.cpython-312.pyc +0 -0
  22. src/nacc_node/__pycache__/cli.cpython-312.pyc +0 -0
  23. src/nacc_node/__pycache__/config.cpython-312.pyc +0 -0
  24. src/nacc_node/__pycache__/filesystem.cpython-312.pyc +0 -0
  25. src/nacc_node/__pycache__/pairing.cpython-312.pyc +0 -0
  26. src/nacc_node/__pycache__/tools.cpython-312.pyc +0 -0
  27. src/nacc_node/cli.py +163 -0
  28. src/nacc_node/config.py +43 -0
  29. src/nacc_node/filesystem.py +131 -0
  30. src/nacc_node/pairing.py +37 -0
  31. src/nacc_node/tools.py +637 -0
  32. src/nacc_orchestrator/__init__.py +4 -0
  33. src/nacc_orchestrator/__pycache__/__init__.cpython-312.pyc +0 -0
  34. src/nacc_orchestrator/__pycache__/__init__.cpython-314.pyc +0 -0
  35. src/nacc_orchestrator/__pycache__/agents.cpython-312.pyc +0 -0
  36. src/nacc_orchestrator/__pycache__/agents.cpython-314.pyc +0 -0
  37. src/nacc_orchestrator/__pycache__/audit.cpython-312.pyc +0 -0
  38. src/nacc_orchestrator/__pycache__/backend_manager.cpython-312.pyc +0 -0
  39. src/nacc_orchestrator/__pycache__/blaxel_backend.cpython-312.pyc +0 -0
  40. src/nacc_orchestrator/__pycache__/cerebras_backend.cpython-312.pyc +0 -0
  41. src/nacc_orchestrator/__pycache__/cerebras_backend.cpython-314.pyc +0 -0
  42. src/nacc_orchestrator/__pycache__/cli.cpython-312.pyc +0 -0
  43. src/nacc_orchestrator/__pycache__/cli.cpython-314.pyc +0 -0
  44. src/nacc_orchestrator/__pycache__/config.cpython-312.pyc +0 -0
  45. src/nacc_orchestrator/__pycache__/config.cpython-314.pyc +0 -0
  46. src/nacc_orchestrator/__pycache__/gemini_backend.cpython-312.pyc +0 -0
  47. src/nacc_orchestrator/__pycache__/gemini_backend.cpython-314.pyc +0 -0
  48. src/nacc_orchestrator/__pycache__/modal_backend.cpython-312.pyc +0 -0
  49. src/nacc_orchestrator/__pycache__/nodes.cpython-312.pyc +0 -0
  50. src/nacc_orchestrator/__pycache__/openai_backend.cpython-312.pyc +0 -0
README.md ADDED
@@ -0,0 +1,607 @@
1
+ ---
2
+ title: NACC - Network Agentic Command Control
3
+ emoji: 🚀
4
+ colorFrom: blue
5
+ colorTo: purple
6
+ sdk: gradio
7
+ app_file: app.py
8
+ pinned: false
9
+ license: mit
10
+ short_description: Autonomous AI Agent Orchestrator for Multi-Node Control
11
+
12
+ tags:
13
+ - mcp-in-action-track-enterprise
14
+ - mcp-in-action-track-consumer
15
+ - mcp-in-action-track-creative
16
+ - building-mcp-track-enterprise
17
+ - building-mcp-track-consumer
18
+ - building-mcp-track-creative
19
+ ---
20
+
21
+ # 🚀 NACC Orchestrator: Network Agentic Command Control
22
+
23
+ > **Turn your entire network into a single, intelligent entity.**
24
+
25
+ NACC is an **Autonomous AI Agent Application** that orchestrates multiple computers as a unified intelligent system using the **Model Context Protocol (MCP)**. It showcases autonomous reasoning, planning, and execution across distributed nodes—controlling Mac, Linux, and VM environments through natural language. Whether managing enterprise infrastructure, home labs, or creative workflows, NACC demonstrates the true power of MCP-enabled AI agents.
26
+
27
+ ---
28
+
29
+ ### 🎥 [Watch the Video Demo](LINK_TO_YOUR_VIDEO_HERE)
30
+ ### 🐦 [See it on X / LinkedIn](LINK_TO_YOUR_SOCIAL_POST_HERE)
31
+
32
+ ---
33
+
34
+ ## 🏆 Hackathon Tracks
35
+
36
+ ### 🎯 Primary: Track 2 - MCP in Action
37
+
38
+ NACC is a **complete AI agent application** that demonstrates autonomous reasoning, planning, and execution:
39
+
40
+ * **🏢 Enterprise Applications** (`mcp-in-action-track-enterprise`): Multi-node orchestration for distributed infrastructure management, automated deployments, and coordinated system administration
41
+ * **👤 Consumer Applications** (`mcp-in-action-track-consumer`): User-friendly natural language interface for home labs, personal workflows, and everyday task automation
42
+ * **🎨 Creative Applications** (`mcp-in-action-track-creative`): Innovative hub-and-spoke architecture enabling unprecedented cross-machine AI coordination and distributed creative workflows
43
+
44
+ **What makes NACC an AI Agent Application:**
45
+ - **Autonomous Reasoning**: AI interprets complex multi-step requests and plans execution across nodes
46
+ - **Dynamic Planning**: Intelligently routes commands based on node capabilities and current context
47
+ - **Autonomous Execution**: Executes coordinated actions across multiple machines without manual intervention
48
+
49
+ ### 🔧 Secondary: Track 1 - Building MCP
50
+
51
+ NACC also functions as an **MCP Server** extending LLM capabilities:
52
+
53
+ * Exposes powerful tools for distributed file operations, shell execution, and cross-node synchronization
54
+ * Integrates with Claude Desktop and any MCP-compatible client
55
+ * Built entirely on MCP protocol for standardized AI-to-system communication
56
+
57
+ This project was built **from the ground up** using the Model Context Protocol, showcasing true innovation in distributed AI orchestration.
58
+
59
+ ---
60
+
61
+ ## 📑 Table of Contents
62
+
63
+ - [Hackathon Tracks](#-hackathon-tracks)
64
+ - [Architecture](#-architecture)
65
+ - [Key Features](#-key-features)
66
+ - [Why Blaxel](#-why-blaxel-powers-nacc)
67
+ - [How It Works](#-how-it-works)
68
+ - [Installation & Setup](#%EF%B8%8F-installation--setup)
69
+ - [Configuration](#%EF%B8%8F-configuration)
70
+ - [Usage Guide](#-usage-guide)
71
+ - [Example Workflows](#-example-workflows)
72
+ - [Tech Stack](#%EF%B8%8F-tech-stack)
73
+ - [Platform Compatibility](#-platform-compatibility)
74
+ - [Troubleshooting](#-troubleshooting)
75
+ - [FAQ](#-faq)
76
+ - [Team Members](#-team-members)
77
+
78
+ ---
79
+
80
+ ## 🧠 Architecture
81
+
82
+ NACC uses a **Hub-and-Spoke** architecture where a central Orchestrator manages multiple distributed Nodes.
83
+
84
+ ```mermaid
85
+ graph TD
86
+ User["👤 User"] -->|Natural Language| UI["💻 NACC Web UI / Claude Desktop"]
87
+ UI -->|JSON-RPC| Orch["🧠 Orchestrator - The Brain"]
88
+
89
+ subgraph Local Network
90
+ Orch -->|HTTP/MCP| Node1["🖥️ Node 1 - MacBook"]
91
+ Orch -->|HTTP/MCP| Node2["🐧 Node 2 - Kali VM"]
92
+ Orch -->|HTTP/MCP| Node3["☁️ Node 3 - Ubuntu Server"]
93
+ end
94
+
95
+ Node1 -->|Execute| FS1["📂 Filesystem / Shell"]
96
+ Node2 -->|Execute| FS2["📂 Filesystem / Shell"]
97
+ Node3 -->|Execute| FS3["📂 Filesystem / Shell"]
98
+ ```
99
+
100
+ ## 🌟 Key Features
101
+
102
+ * **🗣️ Natural Language Control**: "Switch to Kali and scan the network" → Executed instantly.
103
+ * **🔌 MCP Native**: Built from scratch on the Model Context Protocol for standardized communication.
104
+ * **📂 Dynamic File Operations**: Create, read, edit, and delete files across any connected node using natural language.
105
+ * **🔄 Context-Aware Navigation**: The system remembers which node you're on and your current directory.
106
+ * **🛡️ Secure & Private**: Designed for **local networks**. Your data stays within your LAN.
107
+ * **🎨 Professional UI**: Modern, dark-themed interface built with Gradio 6.0.
108
+ * **⚡ Powered by Blaxel**: Lightning-fast serverless AI inference with zero configuration.
109
+ * **🤖 Multi-Backend Support**: Works with OpenAI, Anthropic, Gemini, Mistral, Llama, or local LLMs via Ollama.
110
+
111
+ ---
112
+
113
+ ## ⚡ Why Blaxel Powers NACC
114
+
115
+ NACC uses **[Blaxel](https://blaxel.com)** as its default AI backend, and for good reason:
116
+
117
+ ### 🚀 Flawless Performance
118
+ * **Zero Configuration**: Works out of the box - no API keys needed for testing
119
+ * **Lightning Fast**: Sub-second response times for command execution
120
+ * **Always Available**: 99.9% uptime serverless infrastructure
121
+ * **Cost Effective**: Pay only for what you use, perfect for demos and development
122
+
123
+ ### 🎯 Perfect for NACC
124
+ Blaxel's serverless architecture aligns perfectly with NACC's distributed nature:
125
+ * No cold starts when orchestrating multiple nodes
126
+ * Handles concurrent requests across nodes seamlessly
127
+ * Automatically scales with your network growth
128
+
129
+ ### 💡 Acknowledgment
130
+ A huge thank you to the **Blaxel team** for building such a robust platform! Their commitment to making AI accessible and performant has been instrumental in bringing NACC to life. *We'd love to collaborate further - internship opportunities welcome!* 😊
131
+
132
+ While NACC supports multiple backends (OpenAI, Anthropic, Gemini, Ollama), Blaxel remains our recommended choice for its perfect balance of speed, reliability, and ease of use.
133
+
134
+ ---
135
+
136
+ ## 🔍 How It Works
137
+
138
+ NACC operates through a sophisticated request flow that bridges natural language and system commands:
139
+
140
+ ```mermaid
141
+ sequenceDiagram
142
+ participant User
143
+ participant UI as NACC UI
144
+ participant LLM as AI Backend
145
+ participant Orch as Orchestrator
146
+ participant Node as Target Node
147
+
148
+ User->>UI: "Create file test.txt on kali-vm"
149
+ UI->>Orch: Send query with context
150
+ Orch->>LLM: Parse intent + context
151
+ LLM->>Orch: {tool: "write_file", node: "kali-vm", path: "test.txt"}
152
+ Orch->>Node: HTTP POST /tools/write-file
153
+ Node->>Node: Execute on filesystem
154
+ Node->>Orch: {success: true, path: "/home/user/test.txt"}
155
+ Orch->>UI: Format response
156
+ UI->>User: "✅ File created on kali-vm"
157
+ ```
158
+
159
+ ### Core Components
160
+
161
+ 1. **UI Layer** (`nacc_ui/`): Gradio-based chat interface with context management
162
+ 2. **Orchestrator** (`nacc_orchestrator/`): Central brain that routes requests and manages node registry
163
+ 3. **Node Agents** (`nacc_node/`): Lightweight servers exposing MCP tools for file/shell operations
164
+ 4. **LLM Backend**: Interprets natural language and selects appropriate tools
165
+
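The Orchestrator-to-Node hop in the diagram above can be sketched as a small helper. This is a hypothetical illustration only: the `/tools/write-file` endpoint name and the payload shape are inferred from the sequence diagram, not from NACC's actual wire format.

```python
# Hypothetical sketch of the Orchestrator -> Node request from the
# sequence diagram. Endpoint naming and payload shape are assumptions.
from typing import Any


def build_tool_request(node_url: str, tool: str,
                       arguments: dict[str, Any]) -> tuple[str, dict[str, Any]]:
    """Map a tool call like write_file to a node HTTP endpoint and JSON body."""
    endpoint = tool.replace("_", "-")  # write_file -> write-file, per the diagram
    url = f"{node_url.rstrip('/')}/tools/{endpoint}"
    return url, {"tool": tool, "arguments": arguments}


url, body = build_tool_request("http://192.168.1.15:8001", "write_file",
                               {"path": "test.txt", "content": "hello"})
print(url)  # http://192.168.1.15:8001/tools/write-file
```

In the real system the Orchestrator would POST this body to the node with an async client such as `httpx` and relay the JSON result back to the UI.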
166
+ ---
167
+
168
+ ## 🛠️ Installation & Setup
169
+
170
+ ### Prerequisites
171
+ * **Python 3.10+** installed on all machines.
172
+ * **Network**: All machines (Orchestrator and Nodes) must be on the **same local network** (LAN).
173
+ * **Git**: For cloning the repository.
174
+
175
+ ### 🎮 Demo Guide
176
+
177
+ NACC can be demonstrated in two powerful ways. Choose the one that fits your environment:
178
+
179
+ ### ☁️ Option 1: Cloud Demo (Hugging Face Spaces)
180
+ **Perfect for:** Judges, quick testing, zero setup.
181
+
182
+ 1. **Open the Space**: Go to the NACC Space.
183
+ 2. **Auto-Connect**: You will see **`✅ Online: Local Node (Demo)`** immediately. This is a real node running *inside* the Space container.
184
+ 3. **Try Commands**:
185
+ * "List files in current directory"
186
+ * "Create a file named hello.txt with content 'Hello from HF Space!'"
187
+ 4. **Multi-Node Magic** (Optional):
188
+ * If you deployed the **VM Node Space**, connect to it:
189
+ * *"Connect to node at https://mcp-1st-birthday-virtual-machine.hf.space and name it Remote-VM"*
190
+ * Now you can switch between nodes: *"Switch to Remote-VM and list files"*
191
+
192
+ ### 🏠 Option 2: Real-World Demo (Local Network)
193
+ **Perfect for:** Full power, controlling physical devices, home labs.
194
+
195
+ 1. **Run Orchestrator**: `python -m src.nacc_orchestrator.cli serve` on your main computer.
196
+ 2. **Run Nodes**: `python -m src.nacc_node.cli serve` on your other laptops, Raspberry Pis, or VMs.
197
+ 3. **Connect**:
198
+ * *"Connect to node at 192.168.1.15 and name it MacBook-Pro"*
199
+ * *"Connect to node at 192.168.1.20 and name it Kali-Linux"*
200
+ 4. **Orchestrate**:
201
+ * *"Check CPU usage on all nodes"*
202
+ * *"Sync the 'project' folder from MacBook-Pro to Kali-Linux"*
203
+
204
+ ---
205
+
206
+ ## 🚀 Quick Start (Local, for Judges)
207
+
208
+ ```bash
209
+ # Clone the repository
210
+ git clone https://github.com/Vasanthadithya-mundrathi/NACC.git
211
+ cd NACC
212
+
213
+ # Setup environment (Mac/Linux)
214
+ python3 -m venv .venv
215
+ source .venv/bin/activate
216
+ pip install -r requirements.txt
217
+
218
+ # Start the Orchestrator
219
+ ./start_nacc.sh
220
+ ```
221
+
222
+ Open your browser to `http://localhost:7860` - You're ready!
223
+
224
+ ### Detailed Setup
225
+
226
+ #### 1. Setting up the Orchestrator (The Brain)
227
+ Run this on your main computer (e.g., MacBook, PC).
228
+
229
+ 1. **Clone the Repository**
230
+ ```bash
231
+ git clone https://github.com/Vasanthadithya-mundrathi/NACC.git
232
+ cd NACC
233
+ ```
234
+
235
+ 2. **Install Dependencies**
236
+ ```bash
237
+ python3 -m venv .venv
238
+ source .venv/bin/activate # On Windows: .venv\Scripts\activate
239
+ pip install -r requirements.txt
240
+ ```
241
+
242
+ 3. **Make Scripts Executable** (Mac/Linux only)
243
+ ```bash
244
+ chmod +x start_nacc.sh
245
+ ```
246
+
247
+ 4. **Start the Orchestrator**
248
+ ```bash
249
+ ./start_nacc.sh
250
+ ```
251
+
252
+ The UI will be available at `http://localhost:7860`.
253
+
254
+ #### 2. Setting up Nodes (The Agents)
255
+ Run this on any other machine you want to control.
256
+
257
+ > **💡 Recommendation**: We highly recommend installing NACC Nodes in a **Virtual Machine (VM)** (like Kali Linux or Ubuntu) to safely test the powerful command execution capabilities without affecting your main system.
258
+
259
+ 1. **Install NACC** (Same steps as above: clone & install requirements).
260
+
261
+ 2. **Start the Node**
262
+ Run the node agent, specifying a unique name and port:
263
+ ```bash
264
+ # On the Node machine (e.g., Kali VM)
265
+ python3 -m nacc_node.main --name "kali-vm" --host 0.0.0.0 --port 8001
266
+ ```
267
+
268
+ *Note: `0.0.0.0` allows the node to accept connections from any IP on your local network. Note the IP address of this machine (e.g., `192.168.1.15`).*
269
+
270
+ ---
271
+
272
+ ## ⚙️ Configuration
273
+
274
+ ### API Keys & Backend Selection
275
+
276
+ NACC supports multiple AI backends. You can configure them via:
277
+
278
+ 1. **Environment Variables** (Recommended for production):
279
+ ```bash
280
+ export OPENAI_API_KEY="sk-..."
281
+ export ANTHROPIC_API_KEY="sk-ant-..."
282
+ export GEMINI_API_KEY="..."
283
+ export BLAXEL_API_KEY="..."
284
+ ```
285
+
286
+ 2. **UI Settings**: Use the Settings tab in the NACC Web UI to enter keys and select your backend.
287
+
288
+ 3. **Config Files**: Edit `configs/orchestrator-config.yml` for advanced configuration.
289
+
290
+ ### Supported Backends
291
+
292
+ | Backend | Model | Notes |
293
+ |---------|-------|-------|
294
+ | **OpenAI** | GPT-4, GPT-4 Turbo | Best overall performance |
295
+ | **Anthropic** | Claude 3.5 Sonnet | Excellent for complex reasoning |
296
+ | **Google** | Gemini 1.5 Pro | Fast and cost-effective |
297
+ | **Blaxel** | (Serverless) | Pre-configured for quick start |
298
+ | **Ollama** | Llama 3, Mistral | Run locally (requires Docker) |
299
+
300
+ ### Custom Configuration
301
+
302
+ Edit `configs/orchestrator-config.yml` to:
303
+ * Add static nodes (nodes that auto-connect on startup)
304
+ * Configure default timeouts
305
+ * Set audit log paths
306
+ * Customize allowed commands per node
307
+
308
+ Example:
309
+ ```yaml
310
+ nodes:
311
+ - node_id: "macbook-local"
312
+ transport: "local"
313
+ root_dir: "/Users/yourname"
314
+ tags: ["mac", "local"]
315
+
316
+ agent_backend:
317
+ provider: "openai"
318
+ model: "gpt-4-turbo"
319
+ ```
320
+
321
+ ---
322
+
323
+ ## 🚀 Usage Guide
324
+
325
+ ### Connecting a Node
326
+ Once your Orchestrator and Node are running:
327
+
328
+ 1. Open the NACC UI (`http://localhost:7860`).
329
+ 2. In the Chat interface, type:
330
+ > "Connect to node at 192.168.1.15 on port 8001 and name it kali-vm"
331
+ 3. The system will register the node. You can verify by asking:
332
+ > "List all connected nodes"
333
+
334
+ ### Controlling Nodes
335
+ You can now switch contexts and control any node naturally.
336
+
337
+ **Navigation & Exploration:**
338
+ ```
339
+ "Switch to kali-vm"
340
+ "Navigate to Documents folder"
341
+ "List all files in this directory"
342
+ "Go back to parent directory"
343
+ ```
344
+
345
+ **File Operations:**
346
+ ```
347
+ "Create a file named 'notes.txt' with content 'Meeting at 5 PM'"
348
+ "Read the content of 'notes.txt'"
349
+ "Delete 'notes.txt'"
350
+ "Write 'Updated content' to existing.txt"
351
+ ```
352
+
353
+ **Cross-Node Actions:**
354
+ ```
355
+ "Switch to macbook-local"
356
+ "Share test.txt from kali-vm to macbook-local"
357
+ "Check system stats on all nodes"
358
+ ```
359
+
360
+ ### Using with MCP Clients (Claude Desktop)
361
+
362
+ NACC acts as an MCP Server, meaning you can connect it to Claude Desktop or any MCP-compatible client!
363
+
364
+ 1. **Configure Claude Desktop**:
365
+ Add this to your `claude_desktop_config.json` (usually in `~/AppData/Roaming/Claude/` on Windows or `~/Library/Application Support/Claude/` on Mac):
366
+ ```json
367
+ {
368
+ "mcpServers": {
369
+ "nacc": {
370
+ "command": "uv",
371
+ "args": [
372
+ "--directory",
373
+ "/absolute/path/to/NACC",
374
+ "run",
375
+ "nacc-orchestrator"
376
+ ]
377
+ }
378
+ }
379
+ }
380
+ ```
381
+
382
+ 2. **Alternative (Python Direct)**:
383
+ ```json
384
+ {
385
+ "mcpServers": {
386
+ "nacc": {
387
+ "command": "python3",
388
+ "args": [
389
+ "-m",
390
+ "nacc_orchestrator.mcp_server"
391
+ ],
392
+ "env": {
393
+ "PYTHONPATH": "/absolute/path/to/NACC"
394
+ }
395
+ }
396
+ }
397
+ }
398
+ ```
399
+
400
+ 3. **Restart Claude Desktop**. You can now ask Claude:
401
+ > "List files on my Kali VM"
402
+ > "Create a backup directory on my Ubuntu server"
403
+
404
+ ---
405
+
406
+ ## 💡 Example Workflows
407
+
408
+ ### Scenario 1: Basic File Operations (Local Node)
409
+
410
+ **Goal**: Organize your files using AI.
411
+
412
+ **Prompt**:
413
+ > "Create a directory called 'test_data', write a file named 'notes.txt' inside it with the text 'Hello World', and then read it back to me."
414
+
415
+ **What Happens**:
416
+ 1. Intent Parser identifies 3 steps (Create Dir, Write File, Read File).
417
+ 2. Orchestrator executes them sequentially on the local node.
418
+ 3. Result: You see the file content "Hello World" in the chat.
419
+
420
+ ### Scenario 2: Network Security Scan (Kali Node)
421
+
422
+ *Requires a connected Kali Linux node.*
423
+
424
+ **Goal**: Scan a target IP for vulnerabilities.
425
+
426
+ **Prompt**:
427
+ > "Switch to the Kali node. Run an nmap scan on 192.168.1.50 to find open ports. If you find port 80, try to curl the homepage."
428
+
429
+ **What Happens**:
430
+ 1. Router routes the request to the `kali-vm` node.
431
+ 2. Agent executes `nmap -F 192.168.1.50`.
432
+ 3. Logic: If output contains "80/tcp open", the Agent executes `curl http://192.168.1.50`.
433
+ 4. Result: The AI summarizes the open ports and the web page content.
434
+
435
+ ### Scenario 3: Multi-Node Workflow
436
+
437
+ *Requires multiple connected nodes.*
438
+
439
+ **Goal**: Distributed task execution.
440
+
441
+ **Prompt**:
442
+ > "Check the CPU usage on the MacBook node, and if it's below 50%, tell the Cloud node to start the backup process."
443
+
444
+ **What Happens**:
445
+ 1. Orchestrator queries MacBook node for system stats.
446
+ 2. Orchestrator analyzes the CPU usage.
447
+ 3. If condition is met, sends command `python backup_script.py` to Cloud node.
448
+ 4. Result: A coordinated action across two different physical machines.
449
+
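A hedged sketch of the decision in steps 1-3: trigger the backup on the Cloud node only when the MacBook's CPU usage is below the threshold. The node names, the stats key, and the backup command are illustrative assumptions taken from the prompt above.

```python
# Illustrative planner for Scenario 3: query stats, compare against a
# threshold, and emit (node, command) actions. Not NACC's internal API.
def plan_actions(macbook_stats: dict, threshold: float = 50.0) -> list[tuple[str, str]]:
    """Return the commands to dispatch, based on the MacBook's CPU load."""
    if macbook_stats.get("cpu_percent", 100.0) < threshold:
        return [("cloud-node", "python backup_script.py")]
    return []  # CPU too busy: do nothing


print(plan_actions({"cpu_percent": 32.5}))
```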
450
+ ---
451
+
452
+ ## 🛠️ Tech Stack
453
+
454
+ ### Core Technologies
455
+ * **Frontend**: Gradio 6.0 (Python)
456
+ * **Backend**: FastAPI, Uvicorn
457
+ * **Protocol**: JSON-RPC 2.0 (MCP Standard)
458
+ * **AI Models**: OpenAI, Anthropic, Gemini, Mistral, Llama (via Ollama)
459
+
460
+ ### Key Libraries
461
+ * `pydantic` - Data validation and settings management
462
+ * `httpx` - Async HTTP client for node communication
463
+ * `pyyaml` - Configuration file parsing
464
+ * `rich` - Terminal UI for debugging
465
+
466
+ ### Compatibility
467
+ * **Primary Support**: macOS, Linux (Debian/Ubuntu/Kali)
468
+ * **Windows Support**: Recommended via **WSL2**. Native Windows support is experimental.
469
+
470
+ ---
471
+
472
+ ## 🔧 Troubleshooting
473
+
474
+ ### Port Already in Use
475
+ **Error**: `Address already in use: 7860`
476
+
477
+ **Solution**: The system automatically finds the next open port. Check the terminal output for the correct URL (e.g., `http://127.0.0.1:7861`).
478
+
479
+ ### Node Not Connecting
480
+ **Error**: "Could not connect to node"
481
+
482
+ **Solutions**:
483
+ 1. Verify the node is running: `ps aux | grep nacc-node`
484
+ 2. Check firewall settings allow port 8001
485
+ 3. Confirm both machines are on the same local network
486
+ 4. Try connecting to the node's IP directly in a browser: `http://192.168.1.15:8001/healthz`
487
+
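Check 4 above can also be scripted. A minimal sketch, assuming the node exposes the documented `/healthz` endpoint; the helper functions themselves are illustrative, not part of NACC.

```python
# Build a node's health-check URL and probe it with a short timeout.
import urllib.request


def healthz_url(host: str, port: int = 8001) -> str:
    return f"http://{host}:{port}/healthz"


def node_reachable(host: str, port: int = 8001, timeout: float = 2.0) -> bool:
    """Return True if the node's /healthz endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(healthz_url(host, port), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```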
488
+ ### Permission Denied
489
+ **Error**: `Permission denied: './start_nacc.sh'`
490
+
491
+ **Solution** (Mac/Linux):
492
+ ```bash
493
+ chmod +x start_nacc.sh
494
+ ```
495
+
496
+ ### Missing Dependencies
497
+ **Error**: `ModuleNotFoundError: No module named 'gradio'`
498
+
499
+ **Solution**:
500
+ ```bash
501
+ source .venv/bin/activate # Activate virtual environment
502
+ pip install -r requirements.txt
503
+ ```
504
+
505
+ ### API Key Issues
506
+ **Error**: "Invalid API key"
507
+
508
+ **Solutions**:
509
+ 1. Verify your API key is correctly set in environment variables or UI settings
510
+ 2. Try using the Blaxel backend (pre-configured)
511
+ 3. Switch to local Ollama if you have it running
512
+
513
+ ---
514
+
515
+ ## 💻 Platform Compatibility
516
+
517
+ ### ✅ Fully Supported Platforms
518
+
519
+ NACC was **built and extensively tested** on the following platforms:
520
+
521
+ * **macOS** (Primary development platform)
522
+ * **Linux** - Debian/Ubuntu/Kali (Tested on Kali Linux VM)
523
+
524
+ These platforms provide the best experience with full feature support and stability.
525
+
526
+ ### ⚠️ Windows Compatibility Warning
527
+
528
+ **NACC is NOT recommended for Windows** and may not work properly:
529
+
530
+ * **Built on macOS**: The entire system was developed and optimized for Unix-like environments
531
+ * **Tested on Linux**: All testing and verification was done on macOS and Kali Linux
532
+ * **Shell Dependencies**: Many core features rely on bash/sh commands that don't translate well to Windows
533
+ * **Path Handling**: File system operations use Unix path conventions
534
+
535
+ **If you must use Windows**:
536
+ 1. Use **WSL2 (Windows Subsystem for Linux)** - This is the ONLY supported way
537
+ 2. Native Windows PowerShell/CMD is experimental and will likely fail
538
+ 3. Expect compatibility issues with file operations and shell commands
539
+
540
+ We strongly recommend using macOS or a Linux VM for the best experience.
541
+
542
+ ---
543
+
544
+ ## ❓ FAQ
545
+
546
+ **Q: Do I need an API Key to use NACC?**
547
+
548
+ A: NACC comes configured to use **Blaxel (Serverless)** by default for quick demos. For production, we recommend using your own OpenAI/Anthropic/Gemini key. You can also run **Ollama locally** (Docker required) for completely free, offline operation.
549
+
550
+ **Q: Is NACC secure?**
551
+
552
+ A: Yes, NACC is designed for **local network use only**. All communication between the Orchestrator and Nodes happens over your LAN. However, **do not expose Node ports to the public internet** without additional security layers (VPN/SSH Tunnel).
553
+
554
+ **Q: Can I use NACC in production?**
555
+
556
+ A: NACC is currently a hackathon project and proof-of-concept. While it's functional, we recommend additional hardening (authentication, encryption, rate limiting) before production use.
557
+
558
+ **Q: I'm on Windows. Will this work?**
559
+
560
+ A: Only via **WSL2 (Windows Subsystem for Linux)**, which gives you a full Linux environment. Native Windows (PowerShell/CMD) is experimental and will likely fail; see the Platform Compatibility section above.
561
+
562
+ **Q: Can I control Windows machines?**
563
+
564
+ A: Theoretically yes, but NACC is optimized for Unix-like systems (Mac/Linux). Windows support is experimental and may require custom tool configurations.
565
+
566
+ **Q: How many nodes can I connect?**
567
+
568
+ A: There's no hard limit, but we recommend starting with 2-5 nodes for optimal performance. The system is designed to scale horizontally.
569
+
570
+ **Q: Can I use this with my existing automation scripts?**
571
+
572
+ A: Absolutely! NACC Nodes can execute any script or command available on the target machine. Just use natural language to invoke them.
573
+
574
+ ---
575
+
576
+ ## ⚠️ Important Disclaimer
577
+
578
+ **Local Network Only**: NACC is designed for local network environments. Nodes are discovered and controlled via direct IP connections. Ensure all your devices are connected to the same Wi-Fi or LAN.
579
+
580
+ **Security Note**: Do not expose the Node ports (default 8001) or Orchestrator ports (default 7860) to the public internet without additional security layers (VPN, SSH Tunnel, reverse proxy with authentication).
581
+
582
+ **VM Recommendation**: For testing, we strongly recommend running Nodes in isolated Virtual Machines to prevent accidental system modifications.
583
+
584
+ ---
585
+
586
+ ## 👥 Team Members
587
+
588
+ * [Vasanthadithya-mundrathi](https://huggingface.co/Vasanthfeb13) - Creator & Lead Developer
589
+
590
+ ---
591
+
592
+ ## 🎯 Project Goals
593
+
594
+ NACC was created for the **Hugging Face MCP Birthday Hackathon 2025** to demonstrate:
595
+
596
+ 1. **MCP as a Universal Interface**: How the Model Context Protocol can unify disparate systems
597
+ 2. **Natural Language Operations**: Making system administration accessible through conversation
598
+ 3. **Distributed AI Orchestration**: Coordinating multiple agents across network boundaries
599
+ 4. **Practical AI Applications**: Real-world use cases beyond chatbots
600
+
601
+ ---
602
+
603
+ **Happy Hacking!** 🚀
604
+
605
+ *Created for the Hugging Face MCP Birthday Hackathon 2025*
606
+ *Built with ❤️ by Vasanthadithya Mundrathi*
607
+
app.py ADDED
@@ -0,0 +1,108 @@
1
+ import os
2
+ import sys
3
+ import threading
4
+ import time
5
+ import uvicorn
6
+ from fastapi import FastAPI
7
+ import os
8
+ import sys
9
+
10
+ # --- MONKEY PATCH START ---
11
+ # Fix for: ImportError: cannot import name 'HfFolder' from 'huggingface_hub'
12
+ try:
13
+ import huggingface_hub
14
+ if not hasattr(huggingface_hub, "HfFolder"):
15
+ print("[NACC] Patching huggingface_hub.HfFolder for Gradio compatibility...")
16
+ class HfFolder:
17
+ @classmethod
18
+ def save_token(cls, token):
19
+ pass
20
+ @classmethod
21
+ def get_token(cls):
22
+ return os.getenv("HF_TOKEN")
23
+ huggingface_hub.HfFolder = HfFolder
24
+ except ImportError:
25
+ pass
26
+ # --- MONKEY PATCH END ---
27
+
28
+ import gradio as gr
29
+ from pathlib import Path
30
+
31
+ # Add src to path
32
+ sys.path.append(os.path.join(os.path.dirname(__file__), "src"))
33
+
34
+ from nacc_orchestrator.server import build_service, create_app
35
+ from nacc_orchestrator.config import NodeDefinition
36
+ from nacc_ui.professional_ui_v2 import create_professional_ui_v2 as create_ui
37
+ from nacc_node.tools import NodeServer
38
+ from nacc_node.config import NodeConfig
39
+
40
+ # 1. Build the Orchestrator Service
41
+ service = build_service()
42
+
43
+ # 2. Register the Local Node (Auto-Discovery for HF Space)
44
+ local_node_def = NodeDefinition(
45
+ node_id="hf-space-local",
46
+ transport="http",
47
+ base_url="http://127.0.0.1:8001",
48
+ display_name="Local Node (Demo)",
49
+ tags=["local", "demo", "linux"],
50
+ priority=10
51
+ )
52
+ service.registry.add_node(local_node_def)
53
+ print(f"✅ Auto-registered local node: {local_node_def.node_id}")
54
+
55
+ # 3. Create the FastAPI App
56
+ orchestrator_app = create_app(service)
57
+
58
+ # 4. Create a combined FastAPI app
59
+ app = FastAPI()
60
+ app.mount("/api", orchestrator_app)
61
+
62
+ def start_local_node():
63
+ """Start a local node in the background for the HF Space demo."""
64
+ print("🚀 Starting local NACC Node for HF Space...")
65
+ try:
66
+ # Create a temporary config for the local node
67
+ config = NodeConfig(
68
+ node_id="hf-space-local",
69
+ root_dir=Path.cwd(),
70
+ display_name="Local Node (Demo)",
71
+ tags=["local", "demo"],
72
+ allowed_commands=["ls", "cat", "echo", "grep", "head", "tail", "pwd", "whoami"],
73
+ sync_targets={}
74
+ )
75
+
76
+ # Start node on port 8001
77
+ server = NodeServer(config, host="127.0.0.1", port=8001)
78
+ server.serve_forever()
79
+ except Exception as e:
80
+ print(f"❌ Failed to start local node: {e}")
81
+
82
+ def start_orchestrator_api():
83
+ """Start the orchestrator API in the background."""
84
+ print("🧠 Starting NACC Orchestrator API...")
85
+ uvicorn.run(orchestrator_app, host="127.0.0.1", port=8000)
86
+
87
+ # Start background services
88
+ if __name__ == "__main__":
89
+ # 1. Start the Orchestrator API in a thread
90
+ api_thread = threading.Thread(target=start_orchestrator_api, daemon=True)
91
+ api_thread.start()
92
+
93
+ # 2. Start a Local Node in a thread (so the Space works out-of-the-box)
94
+ node_thread = threading.Thread(target=start_local_node, daemon=True)
95
+ node_thread.start()
96
+
97
+ # 3. Give services a moment to spin up
98
+ time.sleep(3)
99
+
100
+ # 4. Launch the UI
101
+ print("💻 Launching NACC UI...")
102
+ # Pass the service instance to the UI if possible, or let it connect via API
103
+ # The UI connects to http://localhost:8000/api by default or via config
104
+ # We need to make sure the UI knows where the API is.
105
+ # professional_ui_v2.py likely reads config or defaults to localhost:8000
106
+
107
+ demo = create_ui()
108
+ demo.launch(server_name="0.0.0.0", server_port=7860)
configs/node-config.example.yml ADDED
@@ -0,0 +1,16 @@
+ # Example configuration for a single NACC node.
+ # Copy this file and adjust the values for your environment.
+
+ node_id: dev-node
+ root_dir: ./demo-root
+ description: Example development node
+ tags:
+   - dev
+   - laptop
+ allowed_commands:
+   - python
+   - ls
+   - cat
+   - echo
+ sync_targets:
+   backup: ./backups/dev-node
configs/node-kali-vm.yml ADDED
@@ -0,0 +1,24 @@
+ node_id: kali-vm
+ root_dir: /home/vasanth/nacc-shared
+ description: Kali Linux pentesting node in UTM
+ tags: [kali, linux, pentesting, security, network-tools]
+ allowed_commands:
+   - python3
+   - ls
+   - cat
+   - echo
+   - grep
+   - find
+   - nmap
+   - netcat
+   - nc
+   - ping
+   - curl
+   - wget
+   - uname
+   - whoami
+   - pwd
+   - hostname
+ sync_targets:
+   backup: /home/vasanth/nacc-backup
+   shared: /home/vasanth/nacc-sync
configs/node-laptop.yml ADDED
@@ -0,0 +1,12 @@
+ node_id: laptop-dev
+ root_dir: ./demo-root/laptop
+ description: Primary laptop sandbox
+ tags: [laptop, mac]
+ allowed_commands:
+   - python
+   - pytest
+   - ls
+   - cat
+   - echo
+ sync_targets:
+   archive: ./backups/laptop
configs/node-linux-vm.yml ADDED
@@ -0,0 +1,12 @@
+ node_id: linux-vm
+ root_dir: /home/ubuntu/nacc-share
+ description: Ubuntu VM
+ tags: [linux, vm]
+ allowed_commands:
+   - python3
+   - bash
+   - pytest
+   - tail
+   - echo
+ sync_targets:
+   share: /home/ubuntu/nacc-share/snapshots
configs/node-windows-vm.yml ADDED
@@ -0,0 +1,10 @@
+ node_id: windows-vm
+ root_dir: C:/NACC/share
+ description: Windows VM (PowerShell)
+ tags: [windows, vm]
+ allowed_commands:
+   - powershell
+   - cmd
+   - python
+ sync_targets:
+   share: C:/NACC/share/backups
configs/orchestrator-blaxel-gemini.yml ADDED
@@ -0,0 +1,36 @@
+ # NACC Orchestrator configuration for testing the Blaxel Gemini backend
+ nodes:
+   - node_id: kali-vm
+     transport: http
+     base_url: http://192.168.64.2:8765
+     description: Kali Linux pentesting VM in UTM
+     tags: [kali, linux, pentesting, security, vm]
+     priority: 10
+
+   - node_id: macbook-local
+     transport: local
+     display_name: MacBook Pro (Host)
+     root_dir: /Users/vasanthadithya
+     description: Local macOS host machine
+     tags: [mac, laptop, host, local, macos]
+     priority: 5
+     allowed_commands: [ls, cat, echo, pwd, mkdir, touch, rm, mv, cp, grep, find, python3, node, npm, git, curl, wget, which, whoami, uname, hostname, date, sh, bash, brew, pip, pip3, head, tail, wc, sort, uniq, sed, awk, chmod, chown, ps, kill, df, du, tar, gzip, gunzip, zip, unzip, open, pbcopy, pbpaste]
+
+ agent_backend:
+   kind: blaxel-gemini  # Testing Blaxel with Gemini 2.0 Flash
+   # container_id: ${BLAXEL_API_KEY}
+   timeout: 30.0
+   environment:
+     workspace: vasanthfeb13
+     model: gemini-2.0-flash
+
+ audit:
+   enabled: true
+   log_path: logs/audit-blaxel-gemini.log
+   max_size_mb: 100
+   keep_backups: 5
+
+ security:
+   require_auth: false
+   allowed_ips: []
+   rate_limit_per_minute: 100
configs/orchestrator-blaxel-openai.yml ADDED
@@ -0,0 +1,78 @@
+ agent_backend:
+   # container_id: ${BLAXEL_API_KEY}  # Set in environment variables
+   environment:
+     model: sandbox-openai
+     workspace: vasanthfeb13
+   kind: blaxel-openai
+   timeout: 30.0
+ audit:
+   enabled: true
+   keep_backups: 5
+   log_path: logs/audit-blaxel-openai.log
+   max_size_mb: 100
+ nodes:
+   - allowed_commands:
+       - ls
+       - cat
+       - echo
+       - pwd
+       - mkdir
+       - touch
+       - rm
+       - mv
+       - cp
+       - grep
+       - find
+       - python3
+       - node
+       - npm
+       - git
+       - curl
+       - wget
+       - which
+       - whoami
+       - uname
+       - hostname
+       - date
+       - sh
+       - bash
+       - brew
+       - pip
+       - pip3
+       - head
+       - tail
+       - wc
+       - sort
+       - uniq
+       - sed
+       - awk
+       - chmod
+       - chown
+       - ps
+       - kill
+       - df
+       - du
+       - tar
+       - gzip
+       - gunzip
+       - zip
+       - unzip
+       - open
+       - pbcopy
+       - pbpaste
+     description: Local macOS host machine
+     display_name: MacBook Pro (Host)
+     node_id: macbook-local
+     priority: 5
+     root_dir: /Users/vasanthadithya
+     tags:
+       - mac
+       - laptop
+       - host
+       - local
+       - macos
+     transport: local
+ security:
+   allowed_ips: []
+   rate_limit_per_minute: 100
+   require_auth: false
configs/orchestrator-config.example.yml ADDED
@@ -0,0 +1,26 @@
+ # Example orchestrator configuration
+ orchestrator_id: demo-orchestrator
+ nodes:
+   - node_id: local-dev
+     display_name: Local Dev Node
+     transport: local
+     root_dir: ./demo-root
+     tags: [dev, laptop]
+     priority: 10
+     allowed_commands: [python, ls, cat, echo]
+     sync_targets:
+       backup: ./backups/local-dev
+   - node_id: remote-http
+     display_name: Remote HTTP Node
+     transport: http
+     base_url: http://127.0.0.1:8765
+     tags: [remote, linux]
+     priority: 20
+ agent_backend:
+   kind: docker-mistral
+   container_id: ccdfa597c64
+   command: ["python", "/opt/mistral/agent.py"]
+ audit:
+   path: ./logs/audit.log
+   max_entries: 10000
+ refresh_interval: 10
configs/orchestrator-local-demo.yml ADDED
@@ -0,0 +1,15 @@
+ orchestrator_id: local-demo
+ nodes:
+   - node_id: local-dev
+     display_name: Local Dev
+     transport: local
+     root_dir: .
+     tags: [dev, laptop]
+     allowed_commands:
+       - echo
+       - python
+       - ls
+ agent_backend:
+   kind: local-heuristic
+ audit:
+   path: ./logs/audit-local-demo.log
configs/orchestrator-modal.yml ADDED
@@ -0,0 +1,60 @@
+ # NACC Orchestrator Configuration - Modal Backend (IBM Granite MoE)
+ # Modal is the hackathon sponsor providing serverless GPU infrastructure
+ # Uses IBM Granite-3.0-3B-A800M-Instruct (3.3B total, 800M active MoE)
+
+ orchestrator_id: nacc-orchestrator-modal
+
+ # Backend: Modal serverless GPU with IBM Granite MoE
+ # For development: run `modal serve src/nacc_orchestrator/modal_backend.py` first, then leave container_id blank
+ # For production: deploy with `modal deploy src/nacc_orchestrator/modal_backend.py` and set container_id to the web endpoint URL
+ agent_backend:
+   kind: modal
+   container_id: null  # Leave blank for development mode (modal serve), or set to the web endpoint URL for production
+   timeout: 120.0
+   environment:
+     model: ibm-granite/granite-3.0-3b-a800m-instruct
+     description: "Efficient MoE model with 32 experts, 800M active parameters"
+
+ # Multi-node cluster configuration
+ nodes:
+   # Local Mac node
+   - node_id: mac
+     display_name: "MacBook (localhost)"
+     transport: local
+     root_dir: /tmp/nacc-local
+     tags:
+       - development
+       - local
+       - macos
+     priority: 50
+     weight: 1.5
+
+   # Remote Kali Linux VM
+   - node_id: kali-vm
+     display_name: "Kali Linux VM"
+     transport: http
+     base_url: http://192.168.64.2:8765
+     auth_token: "nacc-secure-token-2024"
+     tags:
+       - security
+       - linux
+       - remote
+       - pentesting
+     priority: 100
+     weight: 1.0
+     allowed_commands:
+       - nmap
+       - netcat
+       - curl
+       - wget
+       - ssh
+       - python3
+       - bash
+
+ # Audit logging
+ audit:
+   path: logs/audit-modal.log
+   max_entries: 100000
+
+ # Node health check interval
+ refresh_interval: 10.0
configs/orchestrator-three-node.yml ADDED
@@ -0,0 +1,26 @@
+ orchestrator_id: demo-three-node
+ nodes:
+   - node_id: laptop-dev
+     display_name: Laptop Sandbox
+     transport: local
+     root_dir: ./demo-root/laptop
+     tags: [mac, laptop]
+     priority: 5
+   - node_id: linux-vm
+     display_name: Linux VM
+     transport: http
+     base_url: http://127.0.0.1:9876
+     tags: [linux, vm]
+     priority: 10
+     auth_token: ${LINUX_VM_TOKEN:-}
+   - node_id: windows-vm
+     display_name: Windows VM
+     transport: http
+     base_url: http://127.0.0.1:9877
+     tags: [windows, vm]
+     priority: 15
+ agent_backend:
+   kind: docker-mistral
+   container_id: ccdfa597c64
+ audit:
+   path: ./logs/audit.log
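The `auth_token: ${LINUX_VM_TOKEN:-}` value above uses shell-style substitution, which plain `yaml.safe_load` does not expand; the config loader has to do it itself. A minimal sketch of that expansion (the regex, function name, and default-handling here are assumptions for illustration, not code from this repo):

```python
import os
import re

# Matches ${NAME} and ${NAME:-default} in the shell style used by the config.
_VAR = re.compile(r"\$\{(?P<name>[A-Za-z_][A-Za-z0-9_]*)(?::-(?P<default>[^}]*))?\}")


def expand_env(value: str, env=None) -> str:
    """Expand ${NAME} and ${NAME:-default} references in a string."""
    env = os.environ if env is None else env

    def repl(match: re.Match) -> str:
        # Fall back to the inline default (or empty string) when NAME is unset.
        return env.get(match.group("name"), match.group("default") or "")

    return _VAR.sub(repl, value)
```

With `LINUX_VM_TOKEN` unset, `expand_env("${LINUX_VM_TOKEN:-}")` yields an empty token, matching the config's intent of "no auth by default".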
configs/orchestrator-vms.yml ADDED
@@ -0,0 +1,35 @@
+ # NACC Orchestrator configuration for Kali VM + Mac (local laptop coming soon)
+ nodes:
+   - node_id: kali-vm
+     transport: http
+     base_url: http://192.168.64.2:8765
+     description: Kali Linux pentesting VM in UTM
+     tags: [kali, linux, pentesting, security, vm]
+     priority: 10
+
+   - node_id: macbook-local
+     transport: local
+     display_name: MacBook Pro (Host)
+     root_dir: /Users/vasanthadithya
+     description: Local macOS host machine
+     tags: [mac, laptop, host, local, macos]
+     priority: 5
+     allowed_commands: [ls, cat, echo, pwd, mkdir, touch, rm, mv, cp, grep, find, python3, node, npm, git, curl, wget, which, whoami, uname, hostname, date, sh, bash, brew, pip, pip3, head, tail, wc, sort, uniq, sed, awk, chmod, chown, ps, kill, df, du, tar, gzip, gunzip, zip, unzip, open, pbcopy, pbpaste]
+
+ agent_backend:
+   kind: docker-mistral  # Using Docker Desktop AI models
+   container_id: mistral-nemo  # Model name (not a container ID)
+   timeout: 60.0  # Increased timeout for model warmup
+   max_tokens: 200  # Keep responses concise
+   environment: {}
+
+ audit:
+   enabled: true
+   log_path: logs/audit-vms.log
+   max_size_mb: 100
+   keep_backups: 5
+
+ security:
+   require_auth: false
+   allowed_ips: []
+   rate_limit_per_minute: 100
configs/orchestrator.yml ADDED
@@ -0,0 +1,61 @@
+ agent_backend:
+   container_id: null
+   environment:
+     description: Modal serverless GPU with IBM Granite MoE (3.3B/800M active)
+     modal_app_id: ap-8PaCvfA0O7brdDuCMSuNRV
+     model: ibm-granite/granite-3.0-3b-a800m-instruct
+   kind: modal
+   timeout: 120.0
+ audit:
+   max_entries: 50000
+   path: logs/audit.log
+ nodes:
+   - display_name: MacBook Pro (Local)
+     node_id: macbook-local
+     priority: 50
+     root_dir: /Users/vasanthadithya/nacc-workspace
+     tags:
+       - development
+       - local
+       - macos
+     transport: local
+     weight: 1.5
+   - allowed_commands:
+       - ls
+       - cat
+       - pwd
+       - echo
+       - mkdir
+       - touch
+       - nmap
+       - netcat
+       - curl
+       - wget
+       - ssh
+       - python3
+       - bash
+       - nc
+       - ping
+     auth_token: nacc-secure-token-2024
+     base_url: http://192.168.64.2:8765
+     display_name: Kali Linux VM
+     node_id: kali-vm
+     priority: 100
+     tags:
+       - security
+       - linux
+       - remote
+       - pentesting
+     transport: http
+     weight: 1.0
+   - base_url: http://127.0.0.1:8765/
+     display_name: Node-697444
+     id: ff8ee336-a8f4-476a-8507-e8ab0df92d49
+     tags:
+       - dynamic
+       - unknown
+     transport: http
+ orchestrator_id: nacc-orchestrator
+ refresh_interval: 10.0
configs/ui-config.example.yml ADDED
@@ -0,0 +1,5 @@
+ # Example UI configuration
+ orchestrator_url: http://127.0.0.1:8888
+ host: 0.0.0.0
+ port: 7860
+ refresh_interval: 5
configs/ui-modal.yml ADDED
@@ -0,0 +1,5 @@
+ # UI configuration for Modal backend
+ orchestrator_url: http://127.0.0.1:8888
+ host: 0.0.0.0
+ port: 7860
+ refresh_interval: 5
orchestrator-config.yml ADDED
@@ -0,0 +1,59 @@
+ agent_backend:
+   container_id: null
+   environment:
+     description: Modal serverless GPU with IBM Granite MoE (3.3B/800M active)
+     modal_app_id: ap-8PaCvfA0O7brdDuCMSuNRV
+     model: ibm-granite/granite-3.0-3b-a800m-instruct
+   kind: modal
+   timeout: 120.0
+ audit:
+   max_entries: 50000
+   path: logs/audit.log
+ nodes:
+   - display_name: MacBook Pro (Local)
+     node_id: macbook-local
+     priority: 50
+     root_dir: /Users/vasanthadithya/nacc-workspace
+     tags:
+       - development
+       - local
+       - macos
+     transport: local
+     weight: 1.5
+   - allowed_commands:
+       - ls
+       - cat
+       - pwd
+       - echo
+       - mkdir
+       - touch
+       - nmap
+       - netcat
+       - curl
+       - wget
+       - ssh
+       - python3
+       - bash
+       - nc
+       - ping
+     auth_token: nacc-secure-token-2024
+     base_url: http://192.168.64.2:8765
+     display_name: Kali Linux VM
+     node_id: kali-vm
+     priority: 100
+     tags:
+       - security
+       - linux
+       - remote
+       - pentesting
+     transport: http
+     weight: 1.0
+   - base_url: http://127.0.0.1:8765/
+     display_name: Node-697444
+     id: ff8ee336-a8f4-476a-8507-e8ab0df92d49
+     tags:
+       - dynamic
+       - unknown
+     transport: http
+ orchestrator_id: nacc-orchestrator
+ refresh_interval: 10.0
requirements.txt ADDED
@@ -0,0 +1,27 @@
+ # NACC Core Dependencies
+ fastapi>=0.115.0
+ uvicorn>=0.30.0
+ gradio==4.44.1  # Stable v4
+ pydantic>=2.8.2
+ pyyaml>=6.0.2
+ requests>=2.32.0
+ psutil>=6.0.0
+ paramiko>=3.5.0
+ pyjwt>=2.9.0
+ python-multipart>=0.0.9
+ huggingface_hub==0.24.6  # Explicitly pin to a version with HfFolder
+
+ # AI Backend Dependencies
+ # Blaxel (default) - no extra pip packages needed, uses HTTP
+
+ # Optional backends (install if using)
+ # modal>=0.63.0  # For Modal backend
+ # google-generativeai>=0.3.0  # For Gemini backend
+ # openai>=1.0.0  # For OpenAI backend
+
+ # Development Dependencies
+ pytest>=8.0.0
+ pytest-cov>=4.0.0
+ black>=24.0.0
+ ruff>=0.6.0
+ python-dotenv>=1.0.0
src/nacc_node/__init__.py ADDED
@@ -0,0 +1,25 @@
+ """NACC node MCP server package."""
+
+ from .filesystem import FileMetadata, list_files, list_files_json
+ from .tools import (
+     NodeServer,
+     list_files_tool,
+     read_file_tool,
+     write_file_tool,
+     execute_command_tool,
+     sync_files_tool,
+     get_node_info_tool,
+ )
+
+ __all__ = [
+     "FileMetadata",
+     "list_files",
+     "list_files_json",
+     "NodeServer",
+     "list_files_tool",
+     "read_file_tool",
+     "write_file_tool",
+     "execute_command_tool",
+     "sync_files_tool",
+     "get_node_info_tool",
+ ]
src/nacc_node/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (625 Bytes). View file
 
src/nacc_node/__pycache__/cli.cpython-312.pyc ADDED
Binary file (7.15 kB). View file
 
src/nacc_node/__pycache__/config.cpython-312.pyc ADDED
Binary file (3.03 kB). View file
 
src/nacc_node/__pycache__/filesystem.cpython-312.pyc ADDED
Binary file (6.07 kB). View file
 
src/nacc_node/__pycache__/pairing.cpython-312.pyc ADDED
Binary file (2.1 kB). View file
 
src/nacc_node/__pycache__/tools.cpython-312.pyc ADDED
Binary file (21.6 kB). View file
 
src/nacc_node/cli.py ADDED
@@ -0,0 +1,163 @@
+ """Command-line entry point for the NACC node server.
+
+ Provides three subcommands: ``serve`` (start the node server), ``init``
+ (generate a node config and pairing code), and ``list-files`` (inspect
+ files on this node).
+ """
+
+ from __future__ import annotations
+
+ import argparse
+ import json
+ from pathlib import Path
+ import sys
+
+ from .filesystem import list_files
+ from .config import load_node_config
+ from .tools import NodeServer
+
+
+ def _build_parser() -> argparse.ArgumentParser:
+     parser = argparse.ArgumentParser(description="NACC node MCP server")
+     subparsers = parser.add_subparsers(dest="command")
+
+     serve = subparsers.add_parser("serve", help="Start the MCP node server")
+     serve.add_argument("--host", default="0.0.0.0", help="Host/IP to bind (default 0.0.0.0)")
+     serve.add_argument("--port", type=int, default=8765, help="Port to listen on")
+     serve.add_argument(
+         "--config",
+         type=str,
+         default="node-config.yml",
+         help="Path to node configuration file",
+     )
+     serve.add_argument(
+         "--dry-run",
+         action="store_true",
+         help="Validate configuration and exit without starting the server",
+     )
+
+     subparsers.add_parser("init", help="Initialize a new node and generate a pairing code")
+
+     list_parser = subparsers.add_parser("list-files", help="List files on this node")
+     list_parser.add_argument("--path", default=".", help="Path to inspect")
+     list_parser.add_argument(
+         "--recursive",
+         action="store_true",
+         help="Recurse into subdirectories",
+     )
+     list_parser.add_argument(
+         "--filter",
+         dest="pattern",
+         default=None,
+         help="Glob filter applied to relative paths",
+     )
+     list_parser.add_argument(
+         "--with-hash",
+         action="store_true",
+         help="Compute sha256 for files (slower)",
+     )
+     list_parser.add_argument(
+         "--config",
+         type=str,
+         default=None,
+         help="Optional node-config file to scope the path to the node root",
+     )
+
+     return parser
+
+
+ def main(argv: list[str] | None = None) -> None:
+     parser = _build_parser()
+     parsed_args = list(argv) if argv is not None else sys.argv[1:]
+     if not parsed_args or parsed_args[0].startswith("-"):
+         parsed_args = ["serve", *parsed_args]
+     args = parser.parse_args(parsed_args)
+
+     if args.command == "list-files":
+         root_override: Path | None = None
+         target_path = Path(args.path)
+         if args.config:
+             config = load_node_config(args.config)
+             root_override = config.root_dir
+             if not target_path.is_absolute():
+                 target_path = (root_override / target_path).resolve()
+         files = list_files(
+             target_path,
+             recursive=args.recursive,
+             pattern=args.pattern,
+             include_hash=args.with_hash,
+             root=str(root_override) if root_override else None,
+         )
+         payload = {"files": [file.to_dict() for file in files], "count": len(files)}
+         print(json.dumps(payload, indent=2))
+         return
+
+     if args.command == "init":
+         import uuid
+         from .pairing import create_session
+
+         # Generate defaults
+         node_id = str(uuid.uuid4())
+         root_dir = Path.cwd().resolve()
+
+         # Create config structure
+         config_data = {
+             "node_id": node_id,
+             "root_dir": str(root_dir),
+             "display_name": f"Node-{node_id[:8]}",
+             "tags": ["generic"],
+             "allowed_commands": ["python", "ls", "cat", "echo", "grep"],
+             "sync_targets": {},
+         }
+
+         # Generate pairing code
+         session = create_session(node_id)
+
+         # Save config
+         output_path = Path("node-config.yml")
+         if output_path.exists():
+             print(f"Config file {output_path} already exists. Skipping creation.")
+         else:
+             import yaml
+             with open(output_path, "w") as f:
+                 yaml.dump(config_data, f)
+             print(f"Created {output_path}")
+
+         print("\n" + "=" * 40)
+         print(f"🚀 Node Initialized: {config_data['display_name']}")
+         print(f"🆔 Node ID: {node_id}")
+         print(f"🔑 PAIRING CODE: {session.code}")
+         print("=" * 40)
+         print("\nRun this on your orchestrator to pair:")
+         print(f"  nacc-orchestrator register-node {session.code} --ip <THIS_NODE_IP>")
+         print("\n(Note: the code is display-only in this version. Use the ID/IP for manual registration if needed.)")
+         return
+
+     if args.command in (None, "serve"):
+         config_path = getattr(args, "config", "node-config.yml")
+         config = load_node_config(config_path)
+         if getattr(args, "dry_run", False):
+             summary = {
+                 "node_id": config.node_id,
+                 "root_dir": str(config.root_dir),
+                 "tags": config.tags,
+                 "allowed_commands": config.allowed_commands,
+                 "sync_targets": {key: str(value) for key, value in config.sync_targets.items()},
+             }
+             print(json.dumps(summary, indent=2))
+             return
+
+         host = getattr(args, "host", "0.0.0.0")
+         port = getattr(args, "port", 8765)
+         server = NodeServer(config, host=host, port=port)
+         try:
+             server.serve_forever()
+         except KeyboardInterrupt:  # pragma: no cover - manual interrupt
+             print("\n[nacc-node] shutdown requested")
+             server.shutdown()
+         return
+
+     parser.error(f"Unknown command: {args.command}")
+
+
+ if __name__ == "__main__":  # pragma: no cover
+     main()
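`main()` lets the `serve` subcommand be implied: when argv is empty or starts with an option flag, it prepends `serve` before parsing. That defaulting rule, isolated as a sketch (the helper name is illustrative, not from the repo):

```python
def normalize_argv(argv: list[str]) -> list[str]:
    # Mirrors the CLI's defaulting rule in main(): an empty argv, or a
    # leading option flag, means the user wants the "serve" subcommand.
    if not argv or argv[0].startswith("-"):
        return ["serve", *argv]
    return argv
```

So `nacc-node --port 9000` and `nacc-node serve --port 9000` parse identically, while explicit subcommands like `list-files` pass through untouched.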
src/nacc_node/config.py ADDED
@@ -0,0 +1,43 @@
+ """Configuration helpers for the NACC node server."""
+
+ from __future__ import annotations
+
+ from pathlib import Path
+ from typing import Any
+
+ import yaml
+ from pydantic import BaseModel, Field, field_validator
+
+
+ class NodeConfig(BaseModel):
+     node_id: str = Field(..., description="Unique identifier for the node")
+     root_dir: Path = Field(..., description="Root directory exposed by the node")
+     tags: list[str] = Field(default_factory=list)
+     description: str | None = None
+     allowed_commands: list[str] = Field(
+         default_factory=lambda: ["python", "ls", "cat", "echo"],
+         description="Simple whitelist of commands allowed in ExecuteCommand",
+     )
+     sync_targets: dict[str, Path] = Field(
+         default_factory=dict,
+         description="Mapping of sync target names to directories",
+     )
+
+     @field_validator("root_dir", mode="before")
+     @classmethod
+     def _expand_root(cls, value: Any) -> Path:
+         return Path(value).expanduser().resolve()
+
+     @field_validator("sync_targets", mode="before")
+     @classmethod
+     def _expand_targets(cls, value: dict[str, Any]) -> dict[str, Path]:
+         if not value:
+             return {}
+         return {key: Path(path).expanduser().resolve() for key, path in value.items()}
+
+
+ def load_node_config(path: str | Path) -> NodeConfig:
+     config_path = Path(path).expanduser().resolve()
+     if not config_path.exists():
+         raise FileNotFoundError(f"Node config not found: {config_path}")
+     with config_path.open("r", encoding="utf-8") as handle:
+         data = yaml.safe_load(handle) or {}
+     return NodeConfig(**data)
src/nacc_node/filesystem.py ADDED
@@ -0,0 +1,131 @@
+ """Filesystem tooling for the NACC node server.
+
+ Provides the "ListFiles" capability that other components (orchestrator, UI,
+ CLI) can call directly. This will later be wrapped in MCP transport but
+ remains a simple Python module for local development and testing.
+ """
+
+ from __future__ import annotations
+
+ from dataclasses import dataclass, asdict
+ from fnmatch import fnmatch
+ from pathlib import Path
+ from typing import Iterable
+ import hashlib
+ import json
+ import os
+ import time
+
+
+ @dataclass(slots=True)
+ class FileMetadata:
+     """Serializable metadata for a single filesystem entry."""
+
+     path: str
+     relative_path: str
+     is_dir: bool
+     size: int | None
+     modified: float
+     hash: str | None = None
+
+     def to_dict(self) -> dict[str, object]:
+         return asdict(self)
+
+
+ class FileSystemError(RuntimeError):
+     """Raised when a filesystem operation fails unexpectedly."""
+
+
+ def hash_file(path: Path, chunk_size: int = 1 << 16) -> str:
+     digest = hashlib.sha256()
+     with path.open("rb") as handle:
+         for chunk in iter(lambda: handle.read(chunk_size), b""):
+             digest.update(chunk)
+     return digest.hexdigest()
+
+
+ def iter_entries(root: Path, recursive: bool) -> Iterable[Path]:
+     if root.is_file():
+         yield root
+         return
+
+     yield root
+     if recursive:
+         yield from root.rglob("*")
+     else:
+         for child in root.iterdir():
+             yield child
+
+
+ def list_files(
+     path: str | os.PathLike[str],
+     *,
+     recursive: bool = False,
+     pattern: str | None = None,
+     include_hash: bool = False,
+     root: str | None = None,
+ ) -> list[FileMetadata]:
+     """Return metadata for files/directories under ``path``.
+
+     Parameters
+     ----------
+     path: str | PathLike
+         Root path to inspect.
+     recursive: bool
+         Whether to recurse into subdirectories (default False).
+     pattern: str | None
+         Optional glob-style pattern matched against the relative path.
+     include_hash: bool
+         If True, compute sha256 for regular files.
+     root: str | None
+         Custom root to compute ``relative_path``; defaults to ``path`` when it
+         points to a directory, or the parent directory when pointing to a file.
+     """
+
+     target = Path(path).expanduser().resolve()
+     if not target.exists():
+         raise FileNotFoundError(f"Path does not exist: {target}")
+
+     root_path = Path(root).expanduser().resolve() if root else (target if target.is_dir() else target.parent)
+     results: list[FileMetadata] = []
+
+     for entry in iter_entries(target, recursive):
+         rel = os.path.relpath(entry, root_path)
+         if pattern and not (fnmatch(rel, pattern) or fnmatch(entry.name, pattern)):
+             continue
+
+         stat_info = entry.stat()
+         is_dir = entry.is_dir()
+         size = None if is_dir else stat_info.st_size
+         modified = stat_info.st_mtime
+
+         file_hash: str | None = None
+         if include_hash and not is_dir:
+             try:
+                 file_hash = hash_file(entry)
+             except OSError as exc:  # pragma: no cover (hard to trigger reliably)
+                 raise FileSystemError(f"Failed to hash {entry}: {exc}") from exc
+
+         results.append(
+             FileMetadata(
+                 path=str(entry),
+                 relative_path=rel,
+                 is_dir=is_dir,
+                 size=size,
+                 modified=modified,
+                 hash=file_hash,
+             )
+         )
+
+     results.sort(key=lambda meta: meta.relative_path)
+     return results
+
+
+ def list_files_json(**kwargs) -> str:
+     files = list_files(**kwargs)
+     payload = {
+         "generated_at": time.time(),
+         "count": len(files),
+         "files": [f.to_dict() for f in files],
+     }
+     return json.dumps(payload, indent=2)
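`hash_file` streams the file through SHA-256 in 64 KiB chunks instead of reading it whole, so memory use stays flat regardless of file size. The same pattern, self-contained on a temporary file:

```python
import hashlib
import tempfile
from pathlib import Path


def sha256_streaming(path: Path, chunk_size: int = 1 << 16) -> str:
    """Hash a file without loading it fully into memory (same approach as hash_file)."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        # iter() with a sentinel keeps reading chunks until read() returns b"".
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "sample.bin"
    data = b"nacc" * 50_000  # ~200 KB, spans multiple chunks
    sample.write_bytes(data)
    streamed = sha256_streaming(sample)
    assert streamed == hashlib.sha256(data).hexdigest()
```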
src/nacc_node/pairing.py ADDED
@@ -0,0 +1,37 @@
+ """
+ Node Pairing Logic
+ Generates and verifies 6-digit pairing codes for secure node initialization.
+ """
+
+ import secrets
+ import string
+ import time
+ from dataclasses import dataclass
+
+
+ @dataclass
+ class PairingSession:
+     code: str
+     node_id: str
+     created_at: float
+     expires_at: float
+
+     @property
+     def is_expired(self) -> bool:
+         return time.time() > self.expires_at
+
+
+ def generate_pairing_code(length: int = 6) -> str:
+     """Generate a secure random numeric code."""
+     alphabet = string.digits
+     return "".join(secrets.choice(alphabet) for _ in range(length))
+
+
+ def create_session(node_id: str, ttl_seconds: int = 300) -> PairingSession:
+     """Create a new pairing session with a unique code."""
+     code = generate_pairing_code()
+     now = time.time()
+     return PairingSession(
+         code=code,
+         node_id=node_id,
+         created_at=now,
+         expires_at=now + ttl_seconds,
+     )
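The pairing helpers above are small enough to exercise end to end. This sketch re-creates the same session flow standalone (the logic is reproduced here, rather than imported, so it runs without the package installed):

```python
import secrets
import string
import time
from dataclasses import dataclass


@dataclass
class PairingSession:
    code: str
    node_id: str
    created_at: float
    expires_at: float

    @property
    def is_expired(self) -> bool:
        return time.time() > self.expires_at


def create_session(node_id: str, ttl_seconds: int = 300) -> PairingSession:
    # secrets.choice gives a cryptographically random 6-digit code.
    code = "".join(secrets.choice(string.digits) for _ in range(6))
    now = time.time()
    return PairingSession(code, node_id, now, now + ttl_seconds)


session = create_session("demo-node")
assert len(session.code) == 6 and session.code.isdigit()
assert not session.is_expired
# Force expiry to watch the property flip.
session.expires_at = time.time() - 1
assert session.is_expired
```

A fresh session stays valid for its TTL (five minutes by default) and the `is_expired` property is what a registration endpoint would check before accepting the code.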
src/nacc_node/tools.py ADDED
@@ -0,0 +1,637 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """Node MCP tool implementations and lightweight HTTP server."""
2
+
3
+ from __future__ import annotations
4
+
5
+ import json
6
+ import logging
7
+ import os
8
+ import platform
9
+ import shlex
10
+ import shutil
+ import subprocess
+ import time
+ from dataclasses import dataclass
+ from http import HTTPStatus
+ from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
+ from pathlib import Path
+ from typing import Any, Callable, Dict
+
+ import psutil
+ from pydantic import BaseModel, Field, ValidationError
+
+ from .config import NodeConfig
+ from .filesystem import hash_file, list_files
+
+ logger = logging.getLogger(__name__)
+
+ ToolFunc = Callable[[NodeConfig, dict[str, Any]], dict[str, Any]]
+
+
+ def _resolve_within_root(root: Path, requested: str) -> Path:
+     candidate = Path(requested)
+     candidate = candidate.expanduser()
+     if not candidate.is_absolute():
+         candidate = (root / candidate).resolve()
+     else:
+         candidate = candidate.resolve()
+
+     try:
+         candidate.relative_to(root)
+     except ValueError as exc:  # pragma: no cover - defensive guard
+         raise PermissionError(f"Requested path escapes root: {candidate}") from exc
+     return candidate
+
+
+ class ListFilesRequest(BaseModel):
+     path: str = "."
+     recursive: bool = False
+     pattern: str | None = None
+     include_hash: bool = False
+     limit: int | None = Field(default=None, ge=1, le=20_000)
+
+
+ class ReadFileRequest(BaseModel):
+     path: str
+     encoding: str | None = "utf-8"
+     max_bytes: int | None = Field(default=None, ge=1, le=50_000_000)
+
+
+ class WriteFileRequest(BaseModel):
+     path: str
+     content: str
+     encoding: str = "utf-8"
+     overwrite: bool = False
+     create_dirs: bool = True
+     backup: bool = True
+
+
+ class ExecuteCommandRequest(BaseModel):
+     command: list[str] | str
+     timeout: float = Field(default=60.0, gt=0, le=600)
+     env: dict[str, str] = Field(default_factory=dict)
+     cwd: str | None = None
+
+
+ class SyncFilesRequest(BaseModel):
+     source_path: str
+     targets: list[str] = Field(..., min_length=1)
+     strategy: str = Field(default="mirror", pattern=r"^(mirror|append)$")
+
+
+ class GetNodeInfoRequest(BaseModel):
+     include_processes: bool = False
+
+
+ @dataclass(slots=True)
+ class SyncReport:
+     target: str
+     dest_path: str
+     files_synced: int
+     bytes_copied: int
+     duration: float
+
+     def to_dict(self) -> dict[str, Any]:
+         return {
+             "target": self.target,
+             "dest_path": self.dest_path,
+             "files_synced": self.files_synced,
+             "bytes_copied": self.bytes_copied,
+             "duration": self.duration,
+         }
+
+
+ def list_files_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     request = ListFilesRequest.model_validate(payload)
+     target = _resolve_within_root(config.root_dir, request.path)
+     files = list_files(
+         target,
+         recursive=request.recursive,
+         pattern=request.pattern,
+         include_hash=request.include_hash,
+         root=config.root_dir,
+     )
+     if request.limit is not None:
+         files = files[: request.limit]
+     return {"files": [file.to_dict() for file in files], "count": len(files)}
+
+
+ def read_file_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     request = ReadFileRequest.model_validate(payload)
+     target = _resolve_within_root(config.root_dir, request.path)
+     if not target.exists():
+         raise FileNotFoundError(str(target))
+     if target.is_dir():
+         raise IsADirectoryError(str(target))
+
+     data = target.read_bytes()
+     if request.max_bytes and len(data) > request.max_bytes:
+         raise ValueError("File exceeds max_bytes limit")
+
+     content: str | None = None
+     if request.encoding:
+         content = data.decode(request.encoding)
+     return {
+         "path": str(target.relative_to(config.root_dir)),
+         "size": len(data),
+         "hash": hash_file(target),
+         "content": content,
+     }
+
+
+ def write_file_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     request = WriteFileRequest.model_validate(payload)
+     target = _resolve_within_root(config.root_dir, request.path)
+     if request.create_dirs:
+         target.parent.mkdir(parents=True, exist_ok=True)
+
+     backup_path: Path | None = None
+     if target.exists():
+         if not request.overwrite:
+             raise FileExistsError(str(target))
+         if request.backup:
+             timestamp = int(time.time())
+             backup_path = target.parent / f"{target.name}.bak.{timestamp}"
+             shutil.copy2(target, backup_path)
+
+     data = request.content.encode(request.encoding)
+     target.write_bytes(data)
+     file_hash = hash_file(target)
+     return {
+         "success": True,
+         "path": str(target.relative_to(config.root_dir)),
+         "bytes_written": len(data),
+         "hash": file_hash,
+         "backup_path": str(backup_path) if backup_path else None,
+         "message": f"File written successfully: {target.relative_to(config.root_dir)}",
+     }
+
+
+ def execute_command_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     request = ExecuteCommandRequest.model_validate(payload)
+     if isinstance(request.command, str):
+         command = shlex.split(request.command)
+     else:
+         command = request.command
+
+     if not command:
+         raise ValueError("Command cannot be empty")
+
+     base_cmd = Path(command[0]).name
+     if base_cmd not in config.allowed_commands:
+         raise PermissionError(f"Command '{base_cmd}' not on allow list")
+
+     cwd = _resolve_within_root(config.root_dir, request.cwd) if request.cwd else config.root_dir
+
+     env = os.environ.copy()
+     env.update({key: value for key, value in request.env.items() if isinstance(value, str)})
+
+     started = time.time()
+     proc = subprocess.run(
+         command,
+         cwd=str(cwd),
+         env=env,
+         capture_output=True,
+         text=True,
+         timeout=request.timeout,
+     )
+     duration = time.time() - started
+     return {
+         "command": command,
+         "stdout": proc.stdout,
+         "stderr": proc.stderr,
+         "exit_code": proc.returncode,
+         "duration": duration,
+         "cwd": str(cwd.relative_to(config.root_dir)),
+     }
+
+
+ def sync_files_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     request = SyncFilesRequest.model_validate(payload)
+     source = _resolve_within_root(config.root_dir, request.source_path)
+     if not source.exists():
+         raise FileNotFoundError(str(source))
+
+     reports: list[SyncReport] = []
+     source_rel = source.relative_to(config.root_dir)
+
+     for target_name in request.targets:
+         if target_name not in config.sync_targets:
+             raise ValueError(f"Unknown sync target: {target_name}")
+         target_dir = config.sync_targets[target_name]
+         dest_root = target_dir / source_rel
+         dest_root.parent.mkdir(parents=True, exist_ok=True)
+
+         started = time.time()
+         files_synced, bytes_copied = _copy_path(source, dest_root, strategy=request.strategy)
+         duration = time.time() - started
+         reports.append(
+             SyncReport(
+                 target=target_name,
+                 dest_path=str(dest_root),
+                 files_synced=files_synced,
+                 bytes_copied=bytes_copied,
+                 duration=duration,
+             )
+         )
+
+     return {
+         "source": str(source_rel),
+         "targets": [report.to_dict() for report in reports],
+     }
+
+
+ def _copy_path(source: Path, dest: Path, *, strategy: str) -> tuple[int, int]:
+     files_synced = 0
+     bytes_copied = 0
+
+     if source.is_file():
+         dest.parent.mkdir(parents=True, exist_ok=True)
+         shutil.copy2(source, dest)
+         files_synced = 1
+         bytes_copied = source.stat().st_size
+         return files_synced, bytes_copied
+
+     if strategy == "mirror" and dest.exists():
+         shutil.rmtree(dest)
+
+     for src_file in source.rglob("*"):
+         if not src_file.is_file():
+             continue
+         rel = src_file.relative_to(source)
+         dest_file = dest / rel
+         dest_file.parent.mkdir(parents=True, exist_ok=True)
+         shutil.copy2(src_file, dest_file)
+         files_synced += 1
+         bytes_copied += src_file.stat().st_size
+
+     return files_synced, bytes_copied
+
+
+ def get_node_info_tool(config: NodeConfig, payload: dict[str, Any]) -> dict[str, Any]:
+     _ = GetNodeInfoRequest.model_validate(payload or {})
+     cpu = psutil.cpu_percent(interval=0.05)
+     mem = psutil.virtual_memory()
+     disk = psutil.disk_usage(str(config.root_dir))
+     boot_time = psutil.boot_time()
+
+     return {
+         "node_id": config.node_id,
+         "tags": config.tags,
+         "description": config.description,
+         "root_dir": str(config.root_dir),
+         "allowed_commands": config.allowed_commands,
+         "sync_targets": {key: str(value) for key, value in config.sync_targets.items()},
+         "metrics": {
+             "cpu_percent": cpu,
+             "memory_percent": mem.percent,
+             "memory_total": mem.total,
+             "disk_percent": disk.percent,
+             "disk_total": disk.total,
+             "uptime_seconds": time.time() - boot_time,
+         },
+         "platform": {
+             "system": platform.system(),
+             "release": platform.release(),
+             "version": platform.version(),
+             "machine": platform.machine(),
+             "python_version": platform.python_version(),
+         },
+         "timestamp": time.time(),
+     }
+
+
+ class NodeServer:
+     """Minimal HTTP server that exposes node tools as JSON endpoints."""
+
+     def __init__(self, config: NodeConfig, *, host: str = "0.0.0.0", port: int = 8765):
+         self.config = config
+         self.host = host
+         self.port = port
+         self._httpd: ThreadingHTTPServer | None = None
+
+     def serve_forever(self) -> None:
+         handler = self._build_handler()
+         self._httpd = ThreadingHTTPServer((self.host, self.port), handler)
+         logger.info("[nacc-node] serving http://%s:%s", self.host, self.port)
+         try:
+             self._httpd.serve_forever()
+         finally:
+             self._httpd.server_close()
+             logger.info("[nacc-node] server stopped")
+
+     def shutdown(self) -> None:
+         if self._httpd:
+             self._httpd.shutdown()
+
+     def _build_handler(self) -> type[BaseHTTPRequestHandler]:
+         config = self.config
+         tools: Dict[str, ToolFunc] = {
+             "list-files": list_files_tool,
+             "read-file": read_file_tool,
+             "write-file": write_file_tool,
+             "execute-command": execute_command_tool,
+             "sync-files": sync_files_tool,
+             "get-node-info": get_node_info_tool,
+         }
+         max_body = 512 * 1024
+
+         class NodeRequestHandler(BaseHTTPRequestHandler):
+             server_version = "NACCNode/0.3"
+
+             def log_message(self, format: str, *args: Any) -> None:  # pragma: no cover - HTTP logging
+                 logger.info("%s - %s", self.address_string(), format % args)
+
+             def _read_json_body(self) -> dict[str, Any]:
+                 content_length = int(self.headers.get("Content-Length", 0))
+                 if content_length > max_body:
+                     raise ValueError("Payload too large")
+                 if content_length <= 0:
+                     return {}
+                 body = self.rfile.read(content_length)
+                 if not body:
+                     return {}
+                 return json.loads(body.decode("utf-8"))
+
+             def _send_json(self, status: HTTPStatus, payload: dict[str, Any]) -> None:
+                 data = json.dumps(payload).encode("utf-8")
+                 self.send_response(status)
+                 self.send_header("Content-Type", "application/json")
+                 self.send_header("Content-Length", str(len(data)))
+                 self.end_headers()
+                 self.wfile.write(data)
+
+             def do_GET(self) -> None:  # noqa: N802 - required name
+                 logger.info("GET request to: %s", self.path)
+
+                 # Handle root path (ignore query params)
+                 path_clean = self.path.split("?")[0]
+
+                 if path_clean == "/healthz":
+                     self._send_json(HTTPStatus.OK, {
+                         "status": "ok",
+                         "service": "nacc-node",
+                         "node_id": config.node_id,
+                     })
+                     return
+                 if path_clean == "/node":
+                     payload = get_node_info_tool(config, {})
+                     self._send_json(HTTPStatus.OK, payload)
+                     return
+
+                 if path_clean in ("/", "/index.html", "/dashboard"):
+                     # Serve the VM dashboard
+                     html = """
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>NACC VM Node</title>
+     <style>
+         body { font-family: 'Courier New', Courier, monospace; background: #0d1117; color: #c9d1d9; margin: 0; padding: 20px; }
+         .container { max-width: 1000px; margin: 0 auto; }
+         h1 { border-bottom: 1px solid #30363d; padding-bottom: 10px; color: #58a6ff; font-family: -apple-system, sans-serif; }
+         .card { background: #161b22; border: 1px solid #30363d; border-radius: 6px; padding: 20px; margin-bottom: 20px; }
+         .card h2 { margin-top: 0; font-size: 1.2em; color: #79c0ff; font-family: -apple-system, sans-serif; }
+
+         /* Terminal styles */
+         .terminal { background: #010409; padding: 15px; border-radius: 6px; border: 1px solid #30363d; height: 400px; overflow-y: auto; display: flex; flex-direction: column; }
+         .output { flex-grow: 1; white-space: pre-wrap; word-break: break-all; }
+         .input-line { display: flex; align-items: center; margin-top: 10px; border-top: 1px solid #21262d; padding-top: 10px; }
+         .prompt { color: #3fb950; margin-right: 10px; font-weight: bold; }
+         input { background: transparent; border: none; color: #c9d1d9; flex-grow: 1; font-family: inherit; font-size: 1em; outline: none; }
+
+         .status-ok { color: #3fb950; font-weight: bold; }
+         table { width: 100%; border-collapse: collapse; font-family: -apple-system, sans-serif; }
+         th, td { text-align: left; padding: 8px; border-bottom: 1px solid #21262d; }
+         th { color: #8b949e; }
+         tr:hover { background: #21262d; }
+     </style>
+ </head>
+ <body>
+     <div class="container">
+         <h1>🖥️ NACC Virtual Machine Node</h1>
+
+         <div class="card">
+             <h2>Status: <span class="status-ok">RUNNING</span></h2>
+             <div id="node-info" style="font-family: -apple-system, sans-serif;">Loading system info...</div>
+         </div>
+
+         <div class="card">
+             <h2>💻 Secure Terminal</h2>
+             <div class="terminal" id="terminal" onclick="document.getElementById('cmd-input').focus()">
+                 <div class="output" id="output">
+ Welcome to NACC VM Secure Terminal.
+ Allowed commands: ls, cat, pwd, echo, grep, find, head, tail, tree, whoami, id
+ Type 'help' for info.
+                 </div>
+                 <div class="input-line">
+                     <span class="prompt" id="prompt">user@vm:~$</span>
+                     <input type="text" id="cmd-input" autocomplete="off" spellcheck="false">
+                 </div>
+             </div>
+         </div>
+
+         <div class="card">
+             <h2>📂 File Browser</h2>
+             <button onclick="listFiles()" style="background: #238636; color: white; border: none; padding: 6px 12px; border-radius: 6px; cursor: pointer;">Refresh Current Dir</button>
+             <div id="file-list" style="margin-top: 10px;"></div>
+         </div>
+     </div>
+
+     <script>
+         let currentDir = ".";
+         const input = document.getElementById('cmd-input');
+         const output = document.getElementById('output');
+         const prompt = document.getElementById('prompt');
+
+         input.addEventListener('keydown', async (e) => {
+             if (e.key === 'Enter') {
+                 const cmd = input.value.trim();
+                 input.value = '';
+                 if (!cmd) return;
+
+                 appendToOutput(prompt.innerText + ' ' + cmd);
+                 await processCommand(cmd);
+                 // Keep focus on the input and scroll the terminal to the bottom
+                 input.focus();
+                 document.getElementById('terminal').scrollTop = document.getElementById('terminal').scrollHeight;
+             }
+         });
+
+         function appendToOutput(text) {
+             const div = document.createElement('div');
+             div.innerText = text;
+             output.appendChild(div);
+         }
+
+         async function processCommand(cmd) {
+             const args = cmd.split(' ');
+             const baseCmd = args[0];
+
+             if (baseCmd === 'clear') {
+                 output.innerHTML = '';
+                 return;
+             }
+
+             if (baseCmd === 'help') {
+                 appendToOutput("Available commands: ls, cat, pwd, echo, grep, find, head, tail, tree, whoami, id\\nNavigation: cd <path>");
+                 return;
+             }
+
+             if (baseCmd === 'cd') {
+                 const target = args[1] || '.';
+                 // Optimistic update of the displayed path; the server verifies on the next ls
+                 if (target === '..') {
+                     // Very basic parent handling
+                     const parts = currentDir.split('/');
+                     parts.pop();
+                     currentDir = parts.join('/') || '.';
+                 } else if (target.startsWith('/')) {
+                     currentDir = target;
+                 } else {
+                     currentDir = (currentDir === '.' ? '' : currentDir + '/') + target;
+                 }
+                 updatePrompt();
+                 // Verify the path by running ls
+                 try {
+                     await execute('ls', currentDir);
+                 } catch (e) {
+                     appendToOutput("Error: Directory not found (or access denied)");
+                     // Leave the displayed path as typed so the user can correct it
+                 }
+                 listFiles(); // Update the file browser too
+                 return;
+             }
+
+             // Execute on the server
+             await execute(cmd, currentDir);
+         }
+
+         async function execute(command, cwd) {
+             try {
+                 const res = await fetch('/tools/execute-command', {
+                     method: 'POST',
+                     headers: {'Content-Type': 'application/json'},
+                     body: JSON.stringify({command: command, cwd: cwd})
+                 });
+                 const data = await res.json();
+
+                 if (data.error) {
+                     appendToOutput("Error: " + data.error);
+                 } else {
+                     if (data.stdout) appendToOutput(data.stdout);
+                     if (data.stderr) appendToOutput("Stderr: " + data.stderr);
+                     if (data.exit_code !== 0) appendToOutput("[Exit: " + data.exit_code + "]");
+
+                     // The server returns the resolved cwd; optionally sync with it
+                     if (data.cwd) {
+                         // currentDir = data.cwd;
+                         // updatePrompt();
+                     }
+                 }
+             } catch (e) {
+                 appendToOutput("Network Error: " + e.message);
+             }
+         }
+
+         function updatePrompt() {
+             prompt.innerText = `user@vm:${currentDir}$`;
+         }
+
+         async function fetchNodeInfo() {
+             try {
+                 const res = await fetch('/node');
+                 const data = await res.json();
+                 document.getElementById('node-info').innerHTML = `
+                     <p><strong>Node ID:</strong> ${data.node_id}</p>
+                     <p><strong>OS:</strong> ${data.platform.system} ${data.platform.release}</p>
+                     <p><strong>Root:</strong> ${data.root_dir}</p>
+                 `;
+             } catch (e) {}
+         }
+
+         async function listFiles() {
+             try {
+                 const res = await fetch('/tools/list-files', {
+                     method: 'POST',
+                     headers: {'Content-Type': 'application/json'},
+                     body: JSON.stringify({path: currentDir, recursive: false})
+                 });
+                 const data = await res.json();
+                 if (data.files) {
+                     let html = '<table><tr><th>Name</th><th>Type</th><th>Size</th></tr>';
+                     data.files.forEach(f => {
+                         html += `<tr><td>${f.name}</td><td>${f.is_dir ? 'DIR' : 'FILE'}</td><td>${f.size || '-'}</td></tr>`;
+                     });
+                     html += '</table>';
+                     document.getElementById('file-list').innerHTML = html;
+                 } else {
+                     document.getElementById('file-list').innerText = "Error listing files: " + JSON.stringify(data);
+                 }
+             } catch (e) {
+                 document.getElementById('file-list').innerText = 'Error: ' + e.message;
+             }
+         }
+
+         // Init
+         fetchNodeInfo();
+         listFiles();
+     </script>
+ </body>
+ </html>
+ """
+                     data = html.encode("utf-8")
+                     self.send_response(HTTPStatus.OK)
+                     self.send_header("Content-Type", "text/html; charset=utf-8")
+                     self.send_header("Content-Length", str(len(data)))
+                     self.end_headers()
+                     self.wfile.write(data)
+                     return
+
+                 self._send_json(HTTPStatus.NOT_FOUND, {"error": "Not Found"})
+
+             def do_POST(self) -> None:  # noqa: N802 - required name
+                 if not self.path.startswith("/tools/"):
+                     self._send_json(HTTPStatus.NOT_FOUND, {"error": "Unknown endpoint"})
+                     return
+                 tool_name = self.path.split("?")[0].split("/", 2)[-1]
+                 tool = tools.get(tool_name)
+                 if not tool:
+                     self._send_json(HTTPStatus.NOT_FOUND, {"error": f"Tool '{tool_name}' not available"})
+                     return
+
+                 try:
+                     payload = self._read_json_body()
+                     result = tool(config, payload)
+                 except json.JSONDecodeError as exc:
+                     self._send_json(HTTPStatus.BAD_REQUEST, {"error": "Invalid JSON", "details": str(exc)})
+                     return
+                 except ValidationError as exc:
+                     self._send_json(HTTPStatus.BAD_REQUEST, {"error": "Validation failed", "details": exc.errors()})
+                     return
+                 except PermissionError as exc:
+                     self._send_json(HTTPStatus.FORBIDDEN, {"error": str(exc)})
+                     return
+                 except FileNotFoundError as exc:
+                     self._send_json(HTTPStatus.NOT_FOUND, {"error": str(exc)})
+                     return
+                 except Exception as exc:  # pragma: no cover - defensive guard
+                     logger.exception("Tool '%s' crashed", tool_name)
+                     self._send_json(HTTPStatus.INTERNAL_SERVER_ERROR, {"error": str(exc)})
+                     return
+
+                 self._send_json(HTTPStatus.OK, result)
+
+         return NodeRequestHandler
+
+
+ __all__ = [
+     "NodeServer",
+     "list_files_tool",
+     "read_file_tool",
+     "write_file_tool",
+     "execute_command_tool",
+     "sync_files_tool",
+     "get_node_info_tool",
+ ]
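The handlers above speak plain JSON over HTTP, so any client that can POST to `/tools/<name>` with a JSON body can drive a node. A minimal stdlib client sketch (the host and port are assumptions; `NodeServer` above defaults to port 8765, and `build_tool_request` is an illustrative helper, not part of this package):

```python
import json
from urllib import request as urlrequest


def build_tool_request(base_url: str, tool: str, payload: dict) -> urlrequest.Request:
    """Build a POST request for /tools/<tool>, matching the handler's routing and JSON body parsing."""
    data = json.dumps(payload).encode("utf-8")
    return urlrequest.Request(
        f"{base_url}/tools/{tool}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example: ask the node to list files. Send it with urlrequest.urlopen(req)
# against a running node to get the {"files": [...], "count": N} response.
req = build_tool_request("http://127.0.0.1:8765", "list-files", {"path": ".", "recursive": False})
print(req.full_url)  # http://127.0.0.1:8765/tools/list-files
```
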
src/nacc_orchestrator/__init__.py ADDED
@@ -0,0 +1,4 @@
+ """NACC orchestrator package.
+
+ Responsible for node registry, MCP calls to nodes, and agent coordination.
+ """
src/nacc_orchestrator/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (314 Bytes).
src/nacc_orchestrator/__pycache__/__init__.cpython-314.pyc ADDED
Binary file (316 Bytes).
src/nacc_orchestrator/__pycache__/agents.cpython-312.pyc ADDED
Binary file (16.8 kB).
src/nacc_orchestrator/__pycache__/agents.cpython-314.pyc ADDED
Binary file (20.3 kB).
src/nacc_orchestrator/__pycache__/audit.cpython-312.pyc ADDED
Binary file (2.66 kB).
src/nacc_orchestrator/__pycache__/backend_manager.cpython-312.pyc ADDED
Binary file (9.52 kB).
src/nacc_orchestrator/__pycache__/blaxel_backend.cpython-312.pyc ADDED
Binary file (9.93 kB).
src/nacc_orchestrator/__pycache__/cerebras_backend.cpython-312.pyc ADDED
Binary file (4.77 kB).
src/nacc_orchestrator/__pycache__/cerebras_backend.cpython-314.pyc ADDED
Binary file (5.18 kB).
src/nacc_orchestrator/__pycache__/cli.cpython-312.pyc ADDED
Binary file (8.5 kB).
src/nacc_orchestrator/__pycache__/cli.cpython-314.pyc ADDED
Binary file (6.46 kB).
src/nacc_orchestrator/__pycache__/config.cpython-312.pyc ADDED
Binary file (6.93 kB).
src/nacc_orchestrator/__pycache__/config.cpython-314.pyc ADDED
Binary file (8.11 kB).
src/nacc_orchestrator/__pycache__/gemini_backend.cpython-312.pyc ADDED
Binary file (3.51 kB).
src/nacc_orchestrator/__pycache__/gemini_backend.cpython-314.pyc ADDED
Binary file (4.03 kB).
src/nacc_orchestrator/__pycache__/modal_backend.cpython-312.pyc ADDED
Binary file (10.6 kB).
src/nacc_orchestrator/__pycache__/nodes.cpython-312.pyc ADDED
Binary file (15.5 kB).
src/nacc_orchestrator/__pycache__/openai_backend.cpython-312.pyc ADDED
Binary file (3.63 kB).