GitHub Copilot committed on
Commit 4bd91a4 · 1 Parent(s): 26a1bbb

Optimize: N8N MoA Blueprint + Adaptive Grid Slider

N8N_ARCHITECTURE.md ADDED
@@ -0,0 +1,77 @@
# N8N Mixture of Agents (MoA) Architecture
The "Google Antigravity" Neural Router

## 1. Core Philosophy
Treat n8n as a **Neural Router**, decoupling "Thinking" (logic/architecture) from "Inference" (execution/code). Routing each task to the most efficient model cuts latency and avoids refusals.

## 2. Infrastructure: The "OpenAI-Compatible" Bridge
Standardize all providers on the OpenAI API protocol.

### Local (Code & Privacy)
- **Tool**: Ollama / LM Studio
- **Endpoint**: `http://localhost:11434/v1`
- **Model**: `dolphin-llama3` (uncensored, fast, obedient)

### High-Speed Inference (Math & Logic)
- **Tool**: DeepSeek API (DeepInfra / Groq expose the same protocol under their own base URLs)
- **Endpoint**: `https://api.deepseek.com/v1`
- **Model**: `deepseek-v3` (math verification, topology)

### Synthesis (Architecture)
- **Tool**: Google Vertex / Gemini
- **Role**: Systems Architect (high-level synthesis)
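Because all three providers speak the same protocol, one request builder covers them all; a minimal sketch (the function and field names are illustrative, not from the repo):

```javascript
// Build a Chat Completions request for any OpenAI-compatible provider.
// The identical shape works for Ollama, DeepSeek, Groq, and Gemini's
// OpenAI-compatibility endpoint -- only baseUrl/apiKey/model change.
function buildChatRequest({ baseUrl, apiKey, model }, prompt) {
  return {
    url: `${baseUrl}/chat/completions`,
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: {
      model,
      messages: [
        { role: "system", content: "You are a LOGOS systems engineer." },
        { role: "user", content: prompt },
      ],
      temperature: 0.2,
    },
  };
}

// Swapping providers is a data change, not a code change:
const local = { baseUrl: "http://localhost:11434/v1", apiKey: "ollama", model: "dolphin-llama3" };
const req = buildChatRequest(local, "Write a CSV adapter.");
```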

## 3. The N8N Topology: "Router & Jury"

### Phase A: The Dispatcher (Llama-3-8B-Groq)
Classifies the incoming request type:
- **Systems Architecture** -> Route to Gemini
- **Python Implementation** -> Route to Dolphin (Local)
- **Mathematical Proof** -> Route to DeepSeek (API)
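The dispatcher's reply still has to be normalized into a route; one way to do it in a Code node (the labels and keyword fallback are assumptions, the real dispatcher prompt lives in the workflow):

```javascript
// Map the dispatcher's free-text classification to a concrete route.
const ROUTES = {
  architecture: "gemini",   // Systems Architecture -> Gemini
  coding: "dolphin",        // Python Implementation -> Dolphin (local)
  math: "deepseek",         // Mathematical Proof -> DeepSeek (API)
};

function routeFor(label) {
  const key = label.trim().toLowerCase();
  if (ROUTES[key]) return ROUTES[key];
  // Fallback: small models sometimes answer in a sentence,
  // so scan the reply for a known keyword.
  for (const k of Object.keys(ROUTES)) {
    if (key.includes(k)) return ROUTES[k];
  }
  return "gemini"; // default to the generalist
}
```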

### Phase B: Parallel Execution
Use a **Merge Node (Wait Mode)** so the three paths execute simultaneously and rejoin before synthesis:
1. **Path 1 (Math)**: DeepSeek analyzes Prime Potentiality/Manifold logic.
2. **Path 2 (Code)**: Dolphin writes adapters/scripts locally.
3. **Path 3 (Sys)**: Gemini drafts Strategy/README.

### Phase C: Consensus (The Annealing)
A final LLM node synthesizes the outputs:
> "Synthesize perspectives. If Dolphin's code conflicts with DeepSeek's math, prioritize DeepSeek's constraints."
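Once the Merge node joins the branches, the consensus node's prompt can be assembled from the merged items; a sketch (the `{ source, output }` item shape is an assumption, not the workflow's actual schema):

```javascript
// Assemble the Phase C synthesis prompt from the merged branch outputs.
function buildConsensusPrompt(items) {
  const sections = items
    .map((i) => `### ${i.source}\n${i.output}`)
    .join("\n\n");
  return [
    "Synthesize the following perspectives.",
    "If Dolphin's code conflicts with DeepSeek's math, prioritize DeepSeek's constraints.",
    "",
    sections,
  ].join("\n");
}
```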

## 4. Implementation Config

### HTTP Request Node (Generic)
- **Method**: POST
- **URL**: `{{ $json.baseUrl }}/chat/completions`
- **Headers**: `Authorization: Bearer {{ $json.apiKey }}`
- **Body**:
```json
{
  "model": "{{ $json.modelName }}",
  "messages": [
    { "role": "system", "content": "You are a LOGOS systems engineer." },
    { "role": "user", "content": "{{ $json.prompt }}" }
  ],
  "temperature": 0.2
}
```
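n8n resolves the `{{ $json.… }}` expressions against the incoming item before sending the request; the substitution can be mimicked for testing outside n8n (a toy resolver that only handles flat `$json.<key>` lookups, not n8n's real expression engine):

```javascript
// Toy stand-in for n8n's {{ $json.key }} expression substitution.
function resolveExpressions(template, json) {
  return template.replace(/\{\{\s*\$json\.(\w+)\s*\}\}/g, (_, key) =>
    json[key] !== undefined ? String(json[key]) : ""
  );
}

const url = resolveExpressions("{{ $json.baseUrl }}/chat/completions", {
  baseUrl: "http://localhost:11434/v1",
});
```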

### Model Selector (Code Node)
```javascript
// Route the task to a provider config consumed by the HTTP Request node.
const taskType = items[0].json.taskType;

if (taskType === "coding") {
  return { json: {
    baseUrl: "http://host.docker.internal:11434/v1",
    modelName: "dolphin-llama3",
    apiKey: "ollama"
  }};
} else if (taskType === "math") {
  return { json: {
    baseUrl: "https://api.deepseek.com/v1",
    modelName: "deepseek-coder",
    apiKey: "YOUR_DEEPSEEK_KEY"
  }};
}
// Default: architecture/synthesis tasks fall through to Gemini
// (base URL is Gemini's OpenAI-compatibility endpoint; adjust to your setup).
return { json: {
  baseUrl: "https://generativelanguage.googleapis.com/v1beta/openai",
  modelName: "gemini-1.5-pro",
  apiKey: "YOUR_GEMINI_KEY"
}};
```

This architecture breaks the bottleneck by using Dolphin for grunt work (local/free) and specialized models for high-IQ tasks.
app.py CHANGED
@@ -226,7 +226,7 @@ def process_dsp(image, grid_size=8, workers=16):
     cv2.imwrite(temp_path, cv2.cvtColor(np_img, cv2.COLOR_RGB2BGR))

     # Init Bridge
-    bridge = DSPBridge(num_workers=int(workers), viewport_size=(1024, 768))
+    bridge = DSPBridge(grid_size=int(grid_size), num_workers=int(workers), viewport_size=(1024, 768))

     try:
         # Transmit (Encoding -> Decoding)
logos/__pycache__/dsp_bridge.cpython-314.pyc CHANGED
Binary files a/logos/__pycache__/dsp_bridge.cpython-314.pyc and b/logos/__pycache__/dsp_bridge.cpython-314.pyc differ
 
logos/__pycache__/fractal_engine.cpython-314.pyc CHANGED
Binary files a/logos/__pycache__/fractal_engine.cpython-314.pyc and b/logos/__pycache__/fractal_engine.cpython-314.pyc differ
 
logos/__pycache__/network.cpython-314.pyc CHANGED
Binary files a/logos/__pycache__/network.cpython-314.pyc and b/logos/__pycache__/network.cpython-314.pyc differ
 
logos/dsp_bridge.py CHANGED
@@ -122,23 +122,16 @@ class DSPBridge:
     CHUNK_SIZE = 512       # Each chunk is 512x512
     WAVES_PER_CHUNK = 8    # 8x8 = 64 waves per chunk

-    def __init__(self, num_workers: int = 64,
+    def __init__(self, grid_size: Optional[int] = None, num_workers: int = 64,
                  viewport_size: Tuple[int, int] = (1280, 720)):
         """
-        Initialize DSP Bridge with Automatic Wave Architecture
-
-        Grid size is AUTO-CALCULATED based on image dimensions:
-        - Image divided into 512x512 chunks
-        - Each chunk has 8x8 = 64 waves
-        - Total grid = chunks × 8
-
-        Args:
-            num_workers: Parallel workers (default: 64)
-            viewport_size: Display viewport (width, height)
+        Initialize DSP Bridge.
+        AES-256 Adaptive Grid Support.
         """
         self.num_workers = num_workers
         self.viewport_size = viewport_size
-        self.grid_size = 8  # Will be recalculated per image
+        self.forced_grid_size = grid_size
+        self.grid_size = grid_size if grid_size else 8

         # Use Shared Network Instance (Optimization)
         self.network = SHARED_NETWORK
@@ -482,8 +475,12 @@ class DSPBridge:

         h, w = self.source_image.shape[:2]

-        # AUTO-CALCULATE grid based on image size
-        self.grid_size = self._calculate_grid(w, h)
+        # AUTO-CALCULATE grid based on image size (unless forced)
+        if self.forced_grid_size:
+            self.grid_size = self.forced_grid_size
+        else:
+            self.grid_size = self._calculate_grid(w, h)
+
         total_waves = self.grid_size * self.grid_size

         chunks_x = math.ceil(w / self.CHUNK_SIZE)