Update README.md

README.md (changed)

@@ -6,132 +6,8 @@ metrics:
- accuracy
- code_eval
---

The Strongest Reasoning AI Architecture Ever.

A multi-stage AI reasoning pipeline that orchestrates multiple models in parallel for superior synthesis and deep analysis capabilities.
## Features

- **Multi-Model Orchestration** - Coordinates specialized AI models working in parallel
- **6-Stage Pipeline** - Task profiling, decomposition, parallel branches, verification, adaptive recurrence, and synthesis
- **Adaptive Reasoning** - Automatically adjusts depth based on task complexity
- **Uncertainty Quantification** - Attacks its own hypotheses and measures confidence
- **Streaming Responses** - Real-time output from the final synthesizer
- **Tier System** - Low (free), High, and Max tiers with different chief models
- **Conversational Fast-Path** - Skips heavy reasoning for casual chat
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/ardr.git
cd ardr

# Install dependencies
npm install openai
```
## Setup

### Option 1: Environment Variable (Recommended)

```bash
export OPENROUTER_API_KEY="your-api-key-here"
npx tsx ARDR.ts
```

### Option 2: Direct Edit

Open ARDR.ts and replace:

```ts
const OPENROUTER_API_KEY = "YOUR_OPENROUTER_API_KEY";
```

with your actual API key from OpenRouter.
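The two setup options can also be combined so that the environment variable takes priority when present. A minimal sketch, assuming a hypothetical `resolveApiKey` helper that is not part of ARDR's actual code:

```typescript
// Hypothetical helper: prefer the environment variable, fall back to the
// value edited into ARDR.ts, and reject the unedited placeholder.
const PLACEHOLDER = "YOUR_OPENROUTER_API_KEY";

function resolveApiKey(
  envValue: string | undefined,
  fileValue: string,
): string {
  const key = envValue ?? fileValue;
  if (!key || key === PLACEHOLDER) {
    throw new Error("No API key: set OPENROUTER_API_KEY or edit ARDR.ts");
  }
  return key;
}
```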
## Usage

```bash
# Interactive mode (default: high tier)
npx tsx ARDR.ts

# Specify tier
npx tsx ARDR.ts --tier max

# Enable debug output
npx tsx ARDR.ts --tier high --debug
```
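Flag handling like this is a good fit for Node's built-in `parseArgs` from `node:util` (Node 18+). The flag names below come from the commands above, but the function itself is an illustration, not ARDR's actual implementation:

```typescript
import { parseArgs } from "node:util";

type Tier = "low" | "high" | "max";

// Illustrative CLI parsing for the --tier and --debug flags shown above.
function parseCli(argv: string[]): { tier: Tier; debug: boolean } {
  const { values } = parseArgs({
    args: argv,
    options: {
      tier: { type: "string", default: "high" }, // default: high tier
      debug: { type: "boolean", default: false },
    },
  });
  const tier = values.tier as string;
  if (tier !== "low" && tier !== "high" && tier !== "max") {
    throw new Error(`Unknown tier: ${tier}`);
  }
  return { tier, debug: values.debug as boolean };
}
```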
## In-Session Commands

| Command | Description |
| --- | --- |
| `tier low\|high\|max` | Switch reasoning tier |
| `debug` | Toggle debug output |
| `exit` or `quit` | End session |
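Dispatching these commands is a small pure function. A hypothetical sketch; the `Session` shape and `handleCommand` name are assumptions, not ARDR's actual code:

```typescript
type SessionTier = "low" | "high" | "max";

interface Session {
  tier: SessionTier;
  debug: boolean;
  done: boolean;
}

// Illustrative dispatcher for the in-session commands table above.
// Any unrecognized input would be treated as a query for the pipeline.
function handleCommand(session: Session, line: string): Session {
  const [cmd, arg] = line.trim().split(/\s+/);
  if (cmd === "tier" && (arg === "low" || arg === "high" || arg === "max")) {
    return { ...session, tier: arg };
  }
  if (cmd === "debug") return { ...session, debug: !session.debug };
  if (cmd === "exit" || cmd === "quit") return { ...session, done: true };
  return session;
}
```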
# PIPELINE ARCHITECTURE

The pipeline starts with the User Query entering the system.

**STAGE 0**: The Task Profiler receives the query first. It classifies the task type (code, math, writing, reasoning, etc.), estimates complexity and hallucination risk, and allocates the reasoning budget by selecting which branches to activate.

The system then checks: is this conversational? If yes, it takes the Fast Path and responds directly without heavy reasoning.
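In ARDR this decision is made by the profiler model. Purely for illustration, a cheap heuristic version of the same check could look like this; the function and its rules are assumptions, not ARDR's logic:

```typescript
// Illustrative stand-in for the profiler's conversational check:
// greetings and very short, question-free input skip the heavy pipeline.
const SMALL_TALK = /^(hi|hello|hey|thanks|thank you|good (morning|night))\b/i;

function isConversational(query: string): boolean {
  const q = query.trim();
  return SMALL_TALK.test(q) || (q.split(/\s+/).length <= 4 && !q.includes("?"));
}
```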
If no, it proceeds to **STAGE A**: Structured Decomposition. This stage runs three parallel processes: the Symbolic Abstractor extracts entities and relationships, the Invariant Reducer identifies what must remain true, and the Formalizer creates contracts and algorithm sketches.
Next comes **STAGE B**: Dendritic Branches. Multiple specialized branches run in parallel: Logic, World, Code, Pattern, Adversarial, and others as needed. They all share a scratchpad to coordinate their findings.
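The fan-out can be sketched with `Promise.all` and a plain `Map` standing in for the shared scratchpad. The branches below are mocks; only the shape mirrors Stage B:

```typescript
type Scratchpad = Map<string, string>;
type Branch = (query: string, pad: Scratchpad) => Promise<string>;

// Mock branches: each records its finding on the shared scratchpad
// and returns its hypothesis.
const branches: Record<string, Branch> = {
  logic: async (q, pad) => { pad.set("logic", `deduction for: ${q}`); return "A"; },
  code: async (q, pad) => { pad.set("code", `algorithm for: ${q}`); return "A"; },
  adversarial: async (q, pad) => { pad.set("adversarial", "no counterexample"); return "A"; },
};

async function runBranches(query: string) {
  const pad: Scratchpad = new Map();
  const names = Object.keys(branches);
  const results = await Promise.all(names.map((n) => branches[n](query, pad)));
  return { pad, hypotheses: Object.fromEntries(names.map((n, i) => [n, results[i]])) };
}
```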
The outputs flow into **STAGE C**: Verification Layer. This stage attacks the hypotheses by generating counterexamples, scoring consistency across branches, and quantifying the overall uncertainty.
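One simple way to turn cross-branch agreement into an uncertainty number is majority agreement. ARDR's actual scoring is model-driven; the formula below is only an illustration:

```typescript
// Uncertainty = 1 - (share of branches agreeing with the most common answer).
function uncertaintyScore(hypotheses: string[]): number {
  if (hypotheses.length === 0) return 1;
  const counts = new Map<string, number>();
  for (const h of hypotheses) counts.set(h, (counts.get(h) ?? 0) + 1);
  const top = Math.max(...counts.values());
  return 1 - top / hypotheses.length;
}
```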
The system then asks: is uncertainty high? Are there weak branches? If yes, it enters **STAGE D**: Adaptive Recurrence. This stage generates targeted improvement instructions, re-runs the weak branches with that feedback, and loops until the uncertainty threshold is met or max depth is reached.
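The recurrence loop itself is compact. A sketch, assuming an illustrative `rerun` callback plus threshold and depth defaults that are not taken from ARDR:

```typescript
// Re-run weak branches until confident enough or out of depth budget.
async function adaptiveRecurrence(
  initialUncertainty: number,
  rerun: (feedback: string, depth: number) => Promise<number>, // returns new uncertainty
  threshold = 0.2,
  maxDepth = 3,
): Promise<{ uncertainty: number; depth: number }> {
  let uncertainty = initialUncertainty;
  let depth = 0;
  while (uncertainty > threshold && depth < maxDepth) {
    depth += 1;
    // In ARDR the feedback would be targeted instructions from the verifier.
    uncertainty = await rerun(`reduce uncertainty ${uncertainty.toFixed(2)}`, depth);
  }
  return { uncertainty, depth };
}
```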
If no (or after recurrence completes), everything flows to the **FINAL STAGE**: Grand Synthesizer. This compiles an evidence ledger from all branches, feeds it to the chief model, and streams the final response to the user.
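Streaming the final answer follows the standard async-iterator pattern, sketched here with a mocked token stream in place of the chief model:

```typescript
// Mocked chief-model stream: yields response chunks as they "arrive".
async function* mockSynthesizerStream(evidence: string[]): AsyncGenerator<string> {
  yield "Based on the evidence ledger (";
  yield String(evidence.length);
  yield " items), the answer is: 42.";
}

async function streamFinalResponse(
  evidence: string[],
  onChunk: (c: string) => void,
): Promise<string> {
  let full = "";
  for await (const chunk of mockSynthesizerStream(evidence)) {
    onChunk(chunk); // e.g. process.stdout.write(chunk) in a real CLI
    full += chunk;
  }
  return full;
}
```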
# MODEL CONFIGURATION

**Branch Models**

- Profiler uses Llama 3.3 70B for task classification.
- Logic uses Gemini 2.0 Flash for deductive reasoning.
- Pattern uses Gemini 2.0 Flash for pattern recognition.
- World uses GPT-4.1 Mini for factual verification.
- Code uses Claude Sonnet 4 for algorithm design.
- Adversarial uses Gemini 2.0 Flash for generating counterexamples.

**Chief Models by Tier**

- Low Tier uses Llama 3.3 70B. Free; best for quick queries and testing.
- High Tier uses Deepseek V3.2. Moderate cost; best for complex reasoning.
- Max Tier uses Claude Opus 4. Premium cost; best for mission-critical tasks.
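In code, this tier table might reduce to a config map. The OpenRouter model slugs below are assumptions for illustration only; check OpenRouter's model catalog for the real identifiers:

```typescript
type ChiefTier = "low" | "high" | "max";

// Assumed OpenRouter slugs -- verify against the live model catalog.
const CHIEF_MODELS: Record<ChiefTier, { model: string; cost: string; bestFor: string }> = {
  low:  { model: "meta-llama/llama-3.3-70b-instruct", cost: "free",     bestFor: "quick queries and testing" },
  high: { model: "deepseek/deepseek-chat",            cost: "moderate", bestFor: "complex reasoning" },
  max:  { model: "anthropic/claude-opus-4",           cost: "premium",  bestFor: "mission-critical tasks" },
};

function chiefModel(tier: ChiefTier): string {
  return CHIEF_MODELS[tier].model;
}
```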
**FILE STRUCTURE**

- `ARDR.ts` is the main entry point with CLI.
- `ARDR_types.ts` contains TypeScript type definitions.
- `ARDR_models.ts` has model and color configuration.
- `ARDR_utils.ts` contains the OpenAI client and utilities.
- `ARDR_stages.ts` has all pipeline stage implementations.
- `index.ts` provides module exports.
**HOW IT WORKS**

1. Task Profiler analyzes your query to determine task type, complexity, and which specialized branches are needed.
2. Structured Decomposition creates three views of the problem: Symbolic (entities, relations, logical constraints), Invariants (what must remain true), and Formal (contracts, algorithms, data structures).
3. Dendritic Branches run in parallel. Logic applies deductive reasoning. Pattern finds analogies. World verifies facts. Code designs algorithms. Adversarial attacks weak points.
4. Verification Layer actively tries to break the hypotheses by generating counterexamples, scoring consistency, and quantifying uncertainty.
5. Adaptive Recurrence re-runs weak branches if uncertainty is high, with targeted improvement instructions.
6. Grand Synthesis compiles all verified evidence and uses the chief model to produce a coherent, multi-proofed response.
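Taken together, the six steps reduce to a short control loop. Everything below is mocked scaffolding that mirrors the control flow only, not ARDR's implementation:

```typescript
// Mocked six-stage control flow: profile -> decompose -> branches ->
// verify -> (recur while uncertain) -> synthesize.
async function pipeline(query: string): Promise<string> {
  const profile = { complexity: query.length > 20 ? "high" : "low" }; // 1. profile
  const views = { symbolic: query, invariants: [], formal: query };   // 2. decompose
  const hypotheses = await Promise.all(                               // 3. branches
    ["logic", "pattern", "world"].map(async (b) => `${b}:ok`),
  );
  let uncertainty = profile.complexity === "high" ? 0.6 : 0.1;        // 4. verify
  let depth = 0;
  while (uncertainty > 0.2 && depth < 3) {                            // 5. recur
    depth += 1;
    uncertainty /= 2;
  }
  return `answer(${hypotheses.length} branches, depth ${depth})`;     // 6. synthesize
}
```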
**REQUIREMENTS**

Node.js 18 or higher, an OpenRouter API key, and the `openai` npm package.
**LICENSE**

MIT License.

ARDR is designed for tasks requiring deep, verified reasoning. For simple queries, it automatically uses the fast-path to save tokens and time.
ARDR (Adaptive Recurrent Dendritic Reasoning) is a multi-stage AI reasoning architecture that orchestrates multiple specialized models working in parallel. Instead of relying on a single model, ARDR coordinates experts in logic, pattern recognition, code, world knowledge, and adversarial thinking—all sharing a scratchpad and building on each other's insights to produce deeply verified responses.
The system works through six stages. First, a Task Profiler analyzes complexity and allocates resources. Then Structured Decomposition breaks the problem into symbolic, invariant, and formal views. Dendritic Branches run in parallel, each attacking the problem from a different angle. A Verification Layer actively tries to break its own hypotheses by generating counterexamples and measuring uncertainty. If confidence is too low, Adaptive Recurrence re-runs weak branches with targeted instructions. Finally, a Grand Synthesizer compiles all verified evidence into a coherent response.
ARDR offers three tiers: Low (Llama 3.3 70B, free), High (Deepseek V3.2), and Max (Claude Opus 4). It automatically detects conversational queries and skips heavy reasoning to save time and tokens. The result is an AI system that thinks deeper, catches its own mistakes, and produces multi-proofed answers for complex tasks.