Spaces:
Sleeping
Sleeping
AION Protocol Development
commited on
Commit
Β·
f429e16
1
Parent(s):
53430f2
feat: Update demo to v1.0 - Sync with Ectus-R production
Browse filesMAJOR UPDATE:
- Remove Mixtral 8x7B (deprecated by Groq)
- Add Llama 3.1 8B as replacement
- Remove GitHub Models (auth issues)
- Update README with v1.0 branding
- Add badges and landing page links
- Add real production metrics (95.6% QA, 50-400x faster)
- Total models: 6 functional (2 premium + 4 FREE)
Links to landing page (repos are private):
- creator.avermex.com/ectus-r
- All code in private repos (AION-R, AION-CR, Ectus-R, AION-CR-PRODUCTION)
README.md
CHANGED
|
@@ -19,115 +19,117 @@ tags:
|
|
| 19 |
|
| 20 |
# Ectus-R - Autonomous Software Engineering Platform
|
| 21 |
|
| 22 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
| 23 |
|
| 24 |
-
|
|
|
|
|
|
|
| 25 |
|
| 26 |
## Features
|
| 27 |
|
| 28 |
-
###
|
| 29 |
-
- Generate production-ready code with your choice of AI model
|
| 30 |
-
- **12 models** across 4 tiers (ALL API keys configured):
|
| 31 |
-
|
| 32 |
-
**TIER 1: Premium (Highest Quality)** β
|
| 33 |
-
- Claude Sonnet 4.5 π (Anthropic) - $3/1M tokens
|
| 34 |
-
- GPT-4o π (OpenAI) - $2.50/1M tokens
|
| 35 |
-
|
| 36 |
-
**TIER 2: FREE GitHub Models (2025)** π
|
| 37 |
-
- GPT-4o mini (GitHub) - OpenAI GPT-4o mini (FREE, 10 req/min, 50 req/day)
|
| 38 |
-
- Llama 3.3 70B (GitHub) - Meta Llama 3.3 70B (FREE, 15 req/min, 150 req/day)
|
| 39 |
-
- Phi-4 (GitHub) - Microsoft Phi-4 (FREE, 15 req/min, 150 req/day)
|
| 40 |
-
- Mistral Large (GitHub) - Mistral Large (FREE, 10 req/min, 50 req/day)
|
| 41 |
-
|
| 42 |
-
**TIER 3: FREE Groq Models (14K req/day)** π
|
| 43 |
-
- Llama 3.3 70B (Groq) - Latest Llama, ultra-fast (131K context)
|
| 44 |
-
- Mixtral 8x7B (Groq) - Fast expert mixture (32K context)
|
| 45 |
-
- Gemma 2 9B (Groq) - Efficient code generation (8K context)
|
| 46 |
-
|
| 47 |
-
**TIER 4: FREE Google Models (Unlimited)** π₯
|
| 48 |
-
- Gemini 2.0 Flash - Experimental, ultra-fast (1M context)
|
| 49 |
-
- Gemini 1.5 Pro - 2M context for large codebases
|
| 50 |
-
|
| 51 |
-
- Languages: Rust, Python, TypeScript, Go, Java
|
| 52 |
-
- Real-time metrics: generation time, LOC, tokens/sec, cost
|
| 53 |
-
|
| 54 |
-
### β‘ Multi-Model Comparison
|
| 55 |
-
- Compare all 12 models side-by-side on the same task
|
| 56 |
-
- Real-time performance metrics table
|
| 57 |
-
- Identify fastest model, highest throughput, most comprehensive code
|
| 58 |
-
- Benchmarking for optimal model selection
|
| 59 |
-
- **10 FREE models available immediately!** (GitHub, Groq, Google)
|
| 60 |
-
|
| 61 |
-
### π Benchmarks & Performance
|
| 62 |
-
- Real-world performance data
|
| 63 |
-
- Ectus-R vs manual development comparison
|
| 64 |
-
- Cost savings analysis
|
| 65 |
-
- Quality metrics (95.6% QA success rate)
|
| 66 |
-
- AGI-AEF autonomy assessment breakdown
|
| 67 |
-
|
| 68 |
-
## Core Capabilities
|
| 69 |
-
|
| 70 |
-
β
**12 AI Models** - 2 premium + 10 FREE (GitHub + Groq + Google)
|
| 71 |
-
β
**ALL API Keys Configured** - All models working immediately
|
| 72 |
-
β
**Autonomous QA Cycle** - 95.6% success rate (industry-leading)
|
| 73 |
-
β
**Full-Stack Generation** - Frontend, backend, databases, infrastructure
|
| 74 |
-
β
**DevOps Automation** - Docker, Kubernetes, CI/CD pipelines
|
| 75 |
-
β
**50-400x Faster** - Compared to manual development
|
| 76 |
|
| 77 |
-
|
| 78 |
|
| 79 |
-
|
| 80 |
-
-
|
| 81 |
-
-
|
| 82 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 83 |
|
| 84 |
-
|
| 85 |
|
| 86 |
-
|
| 87 |
-
|
| 88 |
-
|
| 89 |
-
|
| 90 |
-
| Full Stack App | 2 days | 3 months | **45x faster** | 99.74% |
|
| 91 |
|
| 92 |
-
|
| 93 |
|
| 94 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 95 |
|
| 96 |
-
|
| 97 |
-
|------|--------------|-------|----------|
|
| 98 |
-
| **Startup** | < $1M ARR | **FREE** (MIT) | Unlimited developers, basic support |
|
| 99 |
-
| **Growth** | $1-10M ARR | **$499/month** | Priority support, SLA 99.5% |
|
| 100 |
-
| **Enterprise** | $10M+ ARR | **$2,499/month** | Dedicated support, SLA 99.9%, custom |
|
| 101 |
|
| 102 |
-
|
|
|
|
|
|
|
| 103 |
|
| 104 |
-
|
| 105 |
-
- π **Documentation:** [Ectus-R Docs](https://github.com/Yatrogenesis/Ectus-R/blob/main/README.md)
|
| 106 |
-
- π **License:** [MIT / Commercial](https://github.com/Yatrogenesis/Ectus-R/blob/main/LICENSE-COMMERCIAL.md)
|
| 107 |
-
- π **Benchmarks:** [BENCHMARKS.md](https://github.com/Yatrogenesis/Ectus-R/blob/main/BENCHMARKS.md)
|
| 108 |
|
| 109 |
-
##
|
| 110 |
|
| 111 |
-
|
| 112 |
-
|
| 113 |
-
|
|
|
|
|
|
|
| 114 |
|
| 115 |
-
|
| 116 |
-
|
| 117 |
-
|
| 118 |
-
|
| 119 |
-
|
| 120 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 121 |
```
|
| 122 |
|
| 123 |
-
|
| 124 |
|
| 125 |
-
|
| 126 |
-
- π **Issues:** [GitHub Issues](https://github.com/Yatrogenesis/Ectus-R/issues)
|
| 127 |
-
- π§ **Enterprise:** enterprise@yatrogenesis.com
|
| 128 |
|
| 129 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 130 |
|
| 131 |
-
**
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 132 |
|
| 133 |
-
|
|
|
|
| 19 |
|
| 20 |
# Ectus-R - Autonomous Software Engineering Platform
|
| 21 |
|
| 22 |
+
[](https://www.rust-lang.org)
|
| 23 |
+
[](https://creator.avermex.com/ectus-r)
|
| 24 |
+
[](https://creator.avermex.com/ectus-r)
|
| 25 |
+
[](https://creator.avermex.com/aion-r)
|
| 26 |
+
[](https://creator.avermex.com/ectus-r/security)
|
| 27 |
|
| 28 |
+
> **π Production-Ready v1.0:** Multi-LLM β’ OWASP Top 10 β’ 50-400x Faster β’ 95.6% QA Success Rate
|
| 29 |
+
|
| 30 |
+
Interactive demo showcasing Ectus-R's multi-LLM code generation capabilities with real-time performance comparison across **10 AI models**.
|
| 31 |
|
| 32 |
## Features
|
| 33 |
|
| 34 |
+
### π€ Multi-LLM Code Generation
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 35 |
|
| 36 |
+
Generate production-ready code with **10 AI models** across 3 tiers:
|
| 37 |
|
| 38 |
+
**TIER 1: Premium (Highest Quality)**
|
| 39 |
+
- π Claude Sonnet 4.5 - Best for complex architecture
|
| 40 |
+
- π GPT-4o - Best for general purpose
|
| 41 |
+
|
| 42 |
+
**TIER 2: FREE Groq (Ultra-Fast)**
|
| 43 |
+
- π Llama 3.3 70B - Latest Llama (131K context)
|
| 44 |
+
- π Llama 3.1 8B - Fast & efficient
|
| 45 |
+
- π Gemma 2 9B - Efficient code generation
|
| 46 |
+
|
| 47 |
+
**TIER 3: FREE Google**
|
| 48 |
+
- π₯ Gemini 2.0 Flash - Experimental (1M context)
|
| 49 |
|
| 50 |
+
### β‘ Real-Time Performance Metrics
|
| 51 |
|
| 52 |
+
- **Side-by-side comparison** of all models
|
| 53 |
+
- **Live metrics:** Generation time, LOC, tokens/sec, cost
|
| 54 |
+
- **Quality indicators:** Code completeness, best practices
|
| 55 |
+
- **Speed benchmarks:** Identify fastest model for your task
|
|
|
|
| 56 |
|
| 57 |
+
### π Proven Results
|
| 58 |
|
| 59 |
+
Based on Ectus-R production metrics:
|
| 60 |
+
- β‘ **50-400x faster** than manual development
|
| 61 |
+
- β
**95.6% QA success rate** (tests pass on first generation)
|
| 62 |
+
- π° **99.74%-99.93% cost savings**
|
| 63 |
+
- π **OWASP Top 10 compliant** code generation
|
| 64 |
+
|
| 65 |
+
## About Ectus-R
|
| 66 |
+
|
| 67 |
+
Ectus-R is an **autonomous software engineering platform** that transforms business requirements into production-ready applications. Powered by **AION-R** (AI Orchestration Network - Rust), it automates the entire development lifecycle.
|
| 68 |
+
|
| 69 |
+
### π Core Capabilities
|
| 70 |
+
|
| 71 |
+
β
**Multi-LLM Orchestration** - 5 providers with auto-fallback
|
| 72 |
+
β
**Autonomous QA** - 95.6% success rate (industry-leading)
|
| 73 |
+
β
**Full-Stack Generation** - Frontend, backend, databases, infrastructure
|
| 74 |
+
β
**Enterprise Security** - OWASP Top 10 compliant
|
| 75 |
+
β
**Production Deployment** - Docker, Kubernetes, CI/CD automation
|
| 76 |
|
| 77 |
+
### πΌ Commercial Licensing
|
|
|
|
|
|
|
|
|
|
|
|
|
| 78 |
|
| 79 |
+
- **FREE for Startups:** < $1M ARR (MIT License)
|
| 80 |
+
- **Growth Tier:** $499/month ($1-10M ARR)
|
| 81 |
+
- **Enterprise:** $2,499/month ($10M+ ARR)
|
| 82 |
|
| 83 |
+
[Learn more β](https://creator.avermex.com/ectus-r)
|
|
|
|
|
|
|
|
|
|
| 84 |
|
| 85 |
+
## How It Works
|
| 86 |
|
| 87 |
+
1. **π Describe Requirements** - Natural language or technical specs
|
| 88 |
+
2. **π€ AI Analysis** - AION-R orchestrates multiple LLMs
|
| 89 |
+
3. **βοΈ Code Generation** - Production-ready code with tests
|
| 90 |
+
4. **β
Quality Assurance** - Automated testing and validation
|
| 91 |
+
5. **π Deployment** - Docker, K8s configs ready to deploy
|
| 92 |
|
| 93 |
+
### Example: REST API Generation
|
| 94 |
+
|
| 95 |
+
```
|
| 96 |
+
Input: "Create a REST API for a blog with users and posts"
|
| 97 |
+
|
| 98 |
+
Output (11.3 seconds):
|
| 99 |
+
βββ src/
|
| 100 |
+
β βββ main.rs # Complete implementation
|
| 101 |
+
β βββ models/ # User & Post models
|
| 102 |
+
β βββ routes/ # CRUD endpoints
|
| 103 |
+
βββ tests/ # Unit & integration tests
|
| 104 |
+
βββ Dockerfile # Production container
|
| 105 |
+
βββ README.md # API documentation
|
| 106 |
```
|
| 107 |
|
| 108 |
+
**Result:** Production-ready code in seconds vs. 2-4 hours manually
|
| 109 |
|
| 110 |
+
## Performance Benchmarks
|
|
|
|
|
|
|
| 111 |
|
| 112 |
+
| Task Type | Ectus-R | Manual | Speedup |
|
| 113 |
+
|-----------|---------|--------|---------|
|
| 114 |
+
| REST API | 11.3s | 2-4h | **640x** |
|
| 115 |
+
| Microservices | 4h | 6 weeks | **240x** |
|
| 116 |
+
| Full Stack App | 2 days | 3 months | **45x** |
|
| 117 |
+
|
| 118 |
+
## Technology Stack
|
| 119 |
|
| 120 |
+
- **Core:** Rust (89%), Python (7%), TypeScript (4%)
|
| 121 |
+
- **AI Engine:** AION-R multi-LLM orchestration
|
| 122 |
+
- **Security:** OWASP Top 10 compliant
|
| 123 |
+
- **Total LOC:** 142,366 lines
|
| 124 |
+
|
| 125 |
+
## Links & Resources
|
| 126 |
+
|
| 127 |
+
- π **Website:** [creator.avermex.com/ectus-r](https://creator.avermex.com/ectus-r)
|
| 128 |
+
- π **Documentation:** [docs.avermex.com/ectus-r](https://creator.avermex.com/ectus-r/docs)
|
| 129 |
+
- π **Benchmarks:** [Detailed metrics](https://creator.avermex.com/ectus-r/benchmarks)
|
| 130 |
+
- π **Licensing:** [Commercial terms](https://creator.avermex.com/ectus-r/pricing)
|
| 131 |
+
- π§ **Contact:** enterprise@yatrogenesis.com
|
| 132 |
+
|
| 133 |
+
---
|
| 134 |
|
| 135 |
+
**Built with Rust** β’ **Powered by AION-R** β’ **Enterprise-Ready** β’ **v1.0 Production**
|
app.py
CHANGED
|
@@ -35,49 +35,7 @@ MODEL_CONFIGS = {
|
|
| 35 |
"description": "Best for general purpose"
|
| 36 |
},
|
| 37 |
|
| 38 |
-
# === TIER 2: FREE
|
| 39 |
-
"GPT-4o mini (GitHub) π": {
|
| 40 |
-
"provider": "github",
|
| 41 |
-
"model": "gpt-4o-mini",
|
| 42 |
-
"api_key_env": "GITHUB_TOKEN",
|
| 43 |
-
"cost_per_1M_tokens": 0.00,
|
| 44 |
-
"context_window": 128000,
|
| 45 |
-
"tier": "free-github",
|
| 46 |
-
"rate_limit": "10 req/min, 50 req/day",
|
| 47 |
-
"description": "OpenAI GPT-4o mini via GitHub Models (FREE)"
|
| 48 |
-
},
|
| 49 |
-
"Llama 3.3 70B (GitHub) π": {
|
| 50 |
-
"provider": "github",
|
| 51 |
-
"model": "Llama-3.3-70B-Instruct",
|
| 52 |
-
"api_key_env": "GITHUB_TOKEN",
|
| 53 |
-
"cost_per_1M_tokens": 0.00,
|
| 54 |
-
"context_window": 128000,
|
| 55 |
-
"tier": "free-github",
|
| 56 |
-
"rate_limit": "15 req/min, 150 req/day",
|
| 57 |
-
"description": "Meta Llama 3.3 70B via GitHub Models (FREE)"
|
| 58 |
-
},
|
| 59 |
-
"Phi-4 (GitHub) π": {
|
| 60 |
-
"provider": "github",
|
| 61 |
-
"model": "Phi-4",
|
| 62 |
-
"api_key_env": "GITHUB_TOKEN",
|
| 63 |
-
"cost_per_1M_tokens": 0.00,
|
| 64 |
-
"context_window": 16384,
|
| 65 |
-
"tier": "free-github",
|
| 66 |
-
"rate_limit": "15 req/min, 150 req/day",
|
| 67 |
-
"description": "Microsoft Phi-4 via GitHub Models (FREE)"
|
| 68 |
-
},
|
| 69 |
-
"Mistral Large (GitHub) π": {
|
| 70 |
-
"provider": "github",
|
| 71 |
-
"model": "Mistral-Large",
|
| 72 |
-
"api_key_env": "GITHUB_TOKEN",
|
| 73 |
-
"cost_per_1M_tokens": 0.00,
|
| 74 |
-
"context_window": 128000,
|
| 75 |
-
"tier": "free-github",
|
| 76 |
-
"rate_limit": "10 req/min, 50 req/day",
|
| 77 |
-
"description": "Mistral Large via GitHub Models (FREE)"
|
| 78 |
-
},
|
| 79 |
-
|
| 80 |
-
# === TIER 3: FREE GROQ MODELS ===
|
| 81 |
"Llama 3.3 70B (Groq) π": {
|
| 82 |
"provider": "groq",
|
| 83 |
"model": "llama-3.3-70b-versatile",
|
|
@@ -87,14 +45,14 @@ MODEL_CONFIGS = {
|
|
| 87 |
"tier": "free-groq",
|
| 88 |
"description": "Latest Llama model via Groq (Ultra-fast)"
|
| 89 |
},
|
| 90 |
-
"
|
| 91 |
"provider": "groq",
|
| 92 |
-
"model": "
|
| 93 |
"api_key_env": "GROQ_API_KEY",
|
| 94 |
"cost_per_1M_tokens": 0.00,
|
| 95 |
-
"context_window":
|
| 96 |
"tier": "free-groq",
|
| 97 |
-
"description": "Fast via Groq (
|
| 98 |
},
|
| 99 |
"Gemma 2 9B (Groq) π": {
|
| 100 |
"provider": "groq",
|
|
|
|
| 35 |
"description": "Best for general purpose"
|
| 36 |
},
|
| 37 |
|
| 38 |
+
# === TIER 2: FREE GROQ MODELS ===
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 39 |
"Llama 3.3 70B (Groq) π": {
|
| 40 |
"provider": "groq",
|
| 41 |
"model": "llama-3.3-70b-versatile",
|
|
|
|
| 45 |
"tier": "free-groq",
|
| 46 |
"description": "Latest Llama model via Groq (Ultra-fast)"
|
| 47 |
},
|
| 48 |
+
"Llama 3.1 8B (Groq) π": {
|
| 49 |
"provider": "groq",
|
| 50 |
+
"model": "llama-3.1-8b-instant",
|
| 51 |
"api_key_env": "GROQ_API_KEY",
|
| 52 |
"cost_per_1M_tokens": 0.00,
|
| 53 |
+
"context_window": 128000,
|
| 54 |
"tier": "free-groq",
|
| 55 |
+
"description": "Fast & efficient via Groq (FREE)"
|
| 56 |
},
|
| 57 |
"Gemma 2 9B (Groq) π": {
|
| 58 |
"provider": "groq",
|