Addyk24 commited on
Commit
a15d9f1
·
1 Parent(s): 6ef16b5
Files changed (4) hide show
  1. .gitignore +1 -0
  2. README.md +169 -14
  3. app.py +1496 -0
  4. requirements.txt +0 -0
.gitignore ADDED
@@ -0,0 +1 @@
 
 
+ .env
README.md CHANGED
@@ -1,14 +1,169 @@
- ---
- title: TheAgora
- emoji: 📈
- colorFrom: gray
- colorTo: pink
- sdk: gradio
- sdk_version: 5.33.0
- app_file: app.py
- pinned: false
- license: mit
- short_description: A revolutionary platform for AI model deliberation.
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # The Agora: Where artificial minds gather to forge wisdom
+
+ ### Track: mcp-server-track
+
+ ## 🌟 Project Overview
+ Agora, also known as "AI Democracy," is an innovative Gradio-based server designed to foster collaborative decision-making among diverse large language models (LLMs).
+
+ Imagine an "AI Council" where specialized AI agents deliberate and vote on complex problems, providing reasoned arguments, highlighting disagreements, and ultimately arriving at a synthesized consensus.
+
+ This system transcends the limitations of single-model outputs by leveraging the unique strengths of various LLMs, making it ideal for scenarios that demand nuanced, multi-perspective reasoning.
+
+ ## ✨ Features
+ Multi-Model AI Council: Orchestrates a diverse panel of AI models, each playing a specific role:
+
+ - *Anthropic Claude: Specialized in ethical considerations and moral reasoning.*
+
+ - *OpenAI GPT (e.g., GPT-4o): Excels in creative problem-solving and brainstorming novel solutions.*
+
+ - *Mistral: Focused on robust technical analysis and detailed breakdowns.*
+
+ - *Sambanova: Provides rapid, high-throughput inference and quick factual recall.*
+
+ - *Hyperbolic Labs (placeholder for specialized models): Integrated for highly specialized tasks or domain-specific knowledge.*
+
+ - *Orchestrated AI Debates: Facilitates structured dialogues and 'debates' between AI models, allowing them to present arguments and counter-arguments.*
+
+ - *Transparent Reasoning: Each model's individual reasoning, thought process, and initial stance are transparently displayed.*
+
+ - *Disagreement Highlighting: Clearly identifies areas of disagreement between models, providing insights into differing perspectives.*
+
+ - *Final Consensus & Synthesis: Synthesizes the collective insights and votes into a consolidated, consensus-driven final answer.*
+
+ - *Gradio User Interface: Provides an intuitive and interactive web interface for users to submit problems and view the council's deliberations.*
+
+ ## 🚀 Workflow
+
+ - *How Agora Reaches Consensus:*
+ - *Agora operates through a sophisticated, multi-stage process to transform a complex problem into a collective AI consensus.*
+ - *The system acts as a Multi-Council Orchestration Protocol (MCP) server, managing the flow between the user interface and the various AI models.*
+
+ Here's a conceptual workflow:
+
+ IMAGE
+
+ - *User Problem Submission (Gradio UI):*
+
+ IMAGE
+
+ A user submits a complex problem or query via the Gradio web interface. The input is typically a natural language prompt, potentially with accompanying data.
+
+ Image Description: A screenshot of a Gradio interface with an input text box for the user's problem and a "Submit" button.
+
+ - *Problem Parsing & Initial Distribution (MCP Orchestrator):*
+
+ - *The MCP Orchestrator (a custom backend server) receives the user's problem.*
+
+ - *It parses the input and determines the initial context for the AI Council.*
+
+ Based on pre-defined roles, the orchestrator dispatches the problem to specific models or groups of models for initial analysis and proposals. For instance, Claude might get an ethical framing, GPT a creative angle, and Mistral a technical breakdown.
+
+ Image Description: A diagram showing the MCP Orchestrator sending the problem to multiple distinct AI models.
+
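The role-based dispatch described above can be sketched roughly as follows. This is a minimal illustration, not the actual app.py code: the role prompts and the `dispatch` helper are hypothetical names chosen for the example.

```python
# Hypothetical sketch of role-based dispatch: the same problem is framed
# differently for each council member according to its specialty.
ROLE_PROMPTS = {
    "claude": "Analyze the ethical implications of: {problem}",
    "gpt4": "Propose creative solutions for: {problem}",
    "mistral": "Give a technical breakdown of: {problem}",
}

def dispatch(problem: str) -> dict:
    """Build a role-specific prompt for each model on the council."""
    return {
        model: prompt.format(problem=problem)
        for model, prompt in ROLE_PROMPTS.items()
    }

proposals = dispatch("Should our clinic adopt AI triage?")
```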
+ - *Individual Model Reasoning & Proposals:*
+
+ Each designated AI model processes the problem based on its specialty.
+
+ Models generate their initial solutions, ethical considerations, technical analyses, or creative approaches.
+
+ These individual outputs (including their 'reasoning' and 'confidence scores' if applicable) are sent back to the MCP Orchestrator.
+
+ Image Description: Multiple thought bubbles or document icons, each representing a different AI model's unique output and reasoning.
+
+ - *Debate Orchestration (MCP Orchestrator):*
+
+ The orchestrator initiates a multi-turn 'debate' or 'review' phase.
+
+ - *Round 1 (Initial Review): Each model's proposal is shared (anonymously or attributed) with the other relevant models.*
+
+ - *Round 2 (Rebuttal & Refinement): Models respond to critiques, refine their initial proposals, or adjust their positions.*
+
+ Image Description: A visual representation of AI models exchanging arguments, possibly with arrows indicating the flow of information and feedback loops.
+
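The two-round debate loop above can be sketched like this. Everything here is illustrative: `query_model` stands in for the real model API calls, and the prompt wording is invented for the example.

```python
def query_model(model: str, prompt: str) -> str:
    # Placeholder: a real implementation would call the model's API.
    return f"[{model} refined] " + prompt[:40]

def run_debate(problem: str, proposals: dict, rounds: int = 2) -> dict:
    """Each round, every model sees its peers' latest stances and replies."""
    positions = dict(proposals)
    for round_no in range(1, rounds + 1):
        updated = {}
        for model, stance in positions.items():
            others = {m: p for m, p in positions.items() if m != model}
            prompt = (f"Round {round_no}. Problem: {problem}\n"
                      f"Your stance: {stance}\nPeer stances: {others}\n"
                      "Critique the peers and refine your stance.")
            updated[model] = query_model(model, prompt)
        positions = updated  # all models update simultaneously per round
    return positions

result = run_debate("Example problem", {"a": "stance one", "b": "stance two"})
```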
+ - *Voting & Consensus Formation:*
+
+ - *After the debate rounds, the orchestrator prompts each AI model to "vote" on the most optimal solution or to provide a final, refined recommendation.*
+
+ - *A consensus algorithm (e.g., majority vote, weighted average based on model confidence/role importance, or a final synthesis by a designated 'moderator' AI) is applied to derive the final collective decision. Disagreements are explicitly logged.*
+
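A confidence-weighted majority vote, one of the consensus schemes mentioned above, can be sketched as follows. The vote data and function name are illustrative, not taken from app.py.

```python
from collections import defaultdict

def weighted_consensus(votes):
    """votes: list of (model, option, confidence); returns the winning option."""
    totals = defaultdict(float)
    for _model, option, confidence in votes:
        totals[option] += confidence  # each vote counts with its confidence
    return max(totals, key=totals.get)

winner = weighted_consensus([
    ("claude", "option_a", 0.9),
    ("gpt4", "option_b", 0.6),
    ("mistral", "option_a", 0.7),
])
# option_a wins: total weight ~1.6 vs 0.6
```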
+ - *Result Presentation (Gradio UI):*
+
+ - *The MCP Orchestrator sends the complete deliberation log, including:*
+
+ - *Each model's initial reasoning.*
+
+ - *Key arguments and counter-arguments during the debate.*
+
+ - *Areas of significant disagreement.*
+
+ - *The final, synthesized consensus or voted-upon solution.*
+
+ - *Gradio renders this information to the user in a clear, structured, and interactive format.*
+
+ Image Description: A Gradio output screen showing a structured summary of the AI council's deliberation and the final consensus.
+
+ ## 🛠️ Technologies Used
+ Frontend: Gradio (for the interactive web interface)
+
+ Backend: Custom Python MCP Orchestrator (Flask/FastAPI recommended for the server implementation)
+
+ ### AI Models (via APIs):
+
+ - *Anthropic Claude*
+
+ - *OpenAI GPT (e.g., GPT-4o)*
+
+ - *Mistral AI*
+
+ - *Sambanova (or similar, e.g., via Hugging Face Inference API)*
+
+ - *Hyperbolic Labs (or other specialized custom models/APIs)*
+
+ ## 🎯 Potential Use Cases
+ - *Medical Diagnosis: An AI council reviews patient data, lab results, and symptoms to propose the most likely diagnosis, weighing ethical implications, treatment creativity, and technical accuracy.*
+
+ - *Legal Advice: Analyzing case details, precedents, and laws to provide comprehensive legal advice, weighing ethical considerations and strategic options.*
+
+ - *Business Strategy: Developing complex business plans, marketing strategies, or investment decisions by leveraging creative, analytical, and ethical AI perspectives.*
+
+ - *Scientific Research: Formulating hypotheses, designing experiments, and interpreting results across various scientific disciplines.*
+
+ ## ⚙️ Setup and Installation
+
+ 1. Clone the repository:
+ ```
+ git clone https://huggingface.co/spaces/Agents-MCP-Hackathon/TheAgora
+ cd TheAgora
+ ```
+
+ 2. Install dependencies:
+ ```
+ pip install -r requirements.txt
+ ```
+
+ 3. Run the MCP App:
+ ```
+ python app.py
+ ```
+
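Before running, the app expects API keys in a `.env` file (app.py loads them with python-dotenv, and `.gitignore` excludes `.env` from version control). A minimal template with placeholder values, based on the keys app.py reads:

```
ANTHROPIC_API_KEY=...
OPENAI_API_KEY=...
MISTRAL_API_KEY=...
SAMBANOVA_API_KEY=...
SUPABASE_URL=...
SUPABASE_KEY=...
SUPABASE_DB_PASSWORD=...
```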
+ ## 🤝 Contributing
+
+ Aditya Katkar\
+ GitHub\
+ LinkedIn
+
+ ## 📄 License
+ (Information about the project's license will be placed here.)
app.py ADDED
@@ -0,0 +1,1496 @@
+ from dotenv import load_dotenv
+ load_dotenv()
+
+ """
+ 🏛️ AI Democracy - Multi-Model Consensus System
+ The Agora: Where artificial minds gather to forge wisdom
+
+ A revolutionary platform for AI model deliberation and consensus building.
+ """
+
+ # AGNO IMPORTS
+ from agno.agent import Agent
+ from agno.team.team import Team
+ import asyncio
+ from textwrap import dedent
+ from agno.models.openai import OpenAIChat
+ from agno.models.anthropic import Claude
+ from agno.models.mistral import MistralChat
+ from agno.models.sambanova import Sambanova
+
+ # Misc imports
+ import logging
+ from enum import Enum
+ from dataclasses import dataclass
+ from datetime import datetime
+ from typing import List, Optional, Dict, Any
+ import os
+ import uuid
+ import json
+
+ # Database
+ from supabase import create_client
+ import gradio as gr
+
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ class ModelType(Enum):
+     CLAUDE = "claude"
+     GPT4 = "gpt4"
+     MISTRAL = "mistral"
+     SAMBANOVA = "sambanova"
+
+ class ProblemDomain(Enum):
+     MEDICAL = "medical"
+     LEGAL = "legal"
+     BUSINESS = "business"
+     TECHNICAL = "technical"
+     ETHICAL = "ethical"
+     GENERAL = "general"
+
+ @dataclass
+ class ModelResponse:
+     model_name: str
+     response: str
+     confidence: float
+     reasoning: str
+     timestamp: datetime
+     tokens_used: int = 0
+
+ @dataclass
+ class DebateRound:
+     round_number: int
+     responses: List[ModelResponse]
+     consensus_score: float
+     timestamp: datetime
+
+ @dataclass
+ class Problem:
+     id: str
+     title: str
+     description: str
+     domain: ProblemDomain
+     context: str
+     user_id: str
+     timestamp: datetime
+
+ # Utility function
+ def get_current_timestamp():
+     """Get the current timestamp in ISO format."""
+     return datetime.now().isoformat()
+
+ # API keys and configuration - with validation
+ def get_api_key(key_name: str) -> Optional[str]:
+     """Safely get an API key, warning when it is missing."""
+     key = os.environ.get(key_name)
+     if not key:
+         logger.warning(f"⚠️ {key_name} not found in environment variables")
+     return key
+
+ ANTHROPIC_API_KEY = get_api_key("ANTHROPIC_API_KEY")
+ OPENAI_API_KEY = get_api_key("OPENAI_API_KEY")
+ MISTRAL_API_KEY = get_api_key("MISTRAL_API_KEY")
+ SAMBANOVA_API_KEY = get_api_key("SAMBANOVA_API_KEY")
+ SUPABASE_DB_PASSWORD = get_api_key("SUPABASE_DB_PASSWORD")
+ SUPABASE_KEY = get_api_key("SUPABASE_KEY")
+ SUPABASE_URL = get_api_key("SUPABASE_URL")
+
+ # IMPROVED EMBEDDER SETUP
+ def setup_embedder():
+     """Set up the embedder with proper error handling."""
+     try:
+         from agno.embedder.huggingface import HuggingfaceCustomEmbedder
+         embedder = HuggingfaceCustomEmbedder()
+
+         # Patch embedding_dimension if missing
+         if not hasattr(embedder, "embedding_dimension"):
+             if hasattr(embedder, "model") and hasattr(embedder.model, "get_sentence_embedding_dimension"):
+                 embedder.embedding_dimension = embedder.model.get_sentence_embedding_dimension()
+             elif hasattr(embedder, "model") and hasattr(embedder.model, "get_output_dimension"):
+                 embedder.embedding_dimension = embedder.model.get_output_dimension()
+             else:
+                 try:
+                     dummy = embedder.get_embedding("test")
+                     embedder.embedding_dimension = len(dummy)
+                 except Exception:
+                     embedder.embedding_dimension = 384  # Default for MiniLM
+
+         logger.info(f"✅ Embedder initialized with dimension: {embedder.embedding_dimension}")
+         return embedder
+     except Exception as e:
+         logger.error(f"❌ Failed to initialize embedder: {str(e)}")
+         return None
+
+ # IMPROVED KNOWLEDGE BASE SETUP
+ def setup_knowledge_base(embedder):
+     """Set up the knowledge base with proper error handling."""
+     if not embedder or not SUPABASE_URL or not SUPABASE_DB_PASSWORD:
+         logger.warning("⚠️ Knowledge base disabled due to missing components")
+         return None
+
+     try:
+         from agno.agent import AgentKnowledge
+         from agno.vectordb.pgvector import PgVector
+
+         knowledge_base = AgentKnowledge(
+             embedder=embedder,
+             vector_db=PgVector(
+                 host=SUPABASE_URL.replace("https://", "").split(".")[0],
+                 port=5432,
+                 user="postgres",
+                 password=SUPABASE_DB_PASSWORD,
+                 database="postgres",
+                 table_name="conversations_w_llm",
+                 embedding_dimension=embedder.embedding_dimension,
+             ),
+         )
+         logger.info("✅ Knowledge base initialized")
+         return knowledge_base
+     except Exception as e:
+         logger.error(f"❌ Failed to initialize knowledge base: {str(e)}")
+         return None
+
+ # Initialize components
+ embedder = setup_embedder()
+ knowledge_base = setup_knowledge_base(embedder)
+
+ # Enhanced output formatter for the AI Democracy system
+ # (markdown is stripped from model output before display)
+ import re
+
+ class AgoraOutputFormatter:
+     """Enhanced formatter for making Agora analysis results more readable and visually appealing."""
+
+     def __init__(self):
+         self.emojis = {
+             'high_quality': '🌟',
+             'medium_quality': '⭐',
+             'low_quality': '💫',
+             'consensus_high': '🎯',
+             'consensus_medium': '🔄',
+             'consensus_low': '🔀',
+             'confidence_high': '💪',
+             'confidence_medium': '👍',
+             'confidence_low': '🤔',
+             'agent': '🤖',
+             'analysis': '🔬',
+             'insights': '💡',
+             'recommendations': '📋',
+             'risks': '⚠️',
+             'benefits': '✅',
+             'summary': '📊',
+             'timestamp': '🕒',
+             'domain': '🏷️',
+             'problem': '🎯',
+             'quality': '⭐',
+             'database': '💾'
+         }
+
+     def format_debate_results(self, problem: 'Problem', debate_round: 'DebateRound', save_success: bool = True) -> tuple[str, str]:
+         """Format the complete debate results with enhanced readability."""
+
+         # Generate the main results
+         main_output = self._generate_enhanced_main_output(problem, debate_round, save_success)
+
+         # Generate the summary
+         summary_output = self._generate_enhanced_summary(problem, debate_round)
+
+         return main_output, summary_output
+
+     def _generate_enhanced_main_output(self, problem: 'Problem', debate_round: 'DebateRound', save_success: bool) -> str:
+         """Generate the main enhanced output with formatted sections."""
+
+         # Header section
+         header = self._create_header(problem, debate_round, save_success)
+
+         # Agent responses section
+         responses_section = self._create_responses_section(debate_round.responses)
+
+         # Consensus analysis section
+         consensus_section = self._create_consensus_section(debate_round)
+
+         # Quality metrics section
+         metrics_section = self._create_metrics_section(debate_round.responses)
+
+         return f"{header}\n\n{responses_section}\n\n{consensus_section}\n\n{metrics_section}"
+
+     def _create_header(self, problem: 'Problem', debate_round: 'DebateRound', save_success: bool) -> str:
+         """Create an attractive header section."""
+         quality_emoji = self._get_quality_emoji(debate_round.consensus_score)
+         consensus_emoji = self._get_consensus_emoji(debate_round.consensus_score)
+         save_status = f"{self.emojis['database']} Saved to database" if save_success else "⚠️ Database save failed"
+
+         return f"""
+ ╔══════════════════════════════════════════════════════════════════════════════╗
+ ║               {quality_emoji} AI DEMOCRACY ANALYSIS RESULTS {quality_emoji}                              ║
+ ╚══════════════════════════════════════════════════════════════════════════════╝
+
+ {self.emojis['problem']} Problem: {problem.title}
+ {self.emojis['domain']} Domain: {problem.domain.value.title()}
+ {consensus_emoji} Consensus Score: {debate_round.consensus_score:.2f}/1.00 ({self._get_consensus_label(debate_round.consensus_score)})
+ {self.emojis['timestamp']} Completed: {debate_round.timestamp.strftime('%Y-%m-%d at %H:%M:%S')}
+ {self.emojis['agent']} AI Agents: {len(debate_round.responses)} real models responded
+ {save_status}
+
+ ┌──────────────────────────────────────────────────────────────────────────────┐
+ │ Problem Description:                                                         │
+ │ {self._wrap_text(problem.description, 75)}                                   │
+ └──────────────────────────────────────────────────────────────────────────────┘
+ """
+
+     def _create_responses_section(self, responses: List['ModelResponse']) -> str:
+         """Create a formatted agent responses section."""
+         section_lines = [
+             "╔══════════════════════════════════════════════════════════════════════════════╗",
+             "║                           🤖 AI AGENT RESPONSES                              ║",
+             "╚══════════════════════════════════════════════════════════════════════════════╝",
+             ""
+         ]
+
+         for i, response in enumerate(responses, 1):
+             confidence_emoji = self._get_confidence_emoji(response.confidence)
+             agent_box = self._create_agent_response_box(i, response, confidence_emoji)
+             section_lines.append(agent_box)
+             section_lines.append("")
+
+         return "\n".join(section_lines)
+
+     def _create_agent_response_box(self, index: int, response: 'ModelResponse', confidence_emoji: str) -> str:
+         """Create a formatted box for each agent response."""
+         # Parse and format the response content
+         formatted_response = self._format_agent_response_content(response.response)
+
+         return f"""┌─ {index}. {response.model_name} ─{"─" * (65 - len(response.model_name))}┐
+ │ {confidence_emoji} Confidence: {response.confidence:.2f} │ {self.emojis['timestamp']} {response.timestamp.strftime('%H:%M:%S')} │ Tokens: ~{int(response.tokens_used)} │
+ ├──────────────────────────────────────────────────────────────────────────────┤
+ {formatted_response}
+ └──────────────────────────────────────────────────────────────────────────────┘"""
+
+     def _clean_markdown(self, text: str) -> str:
+         """Remove all markdown formatting from text."""
+         # Remove bold/italic markers
+         text = re.sub(r'\*\*([^*]+)\*\*', r'\1', text)  # **bold** -> bold
+         text = re.sub(r'\*([^*]+)\*', r'\1', text)      # *italic* -> italic
+         text = re.sub(r'__([^_]+)__', r'\1', text)      # __bold__ -> bold
+         text = re.sub(r'_([^_]+)_', r'\1', text)        # _italic_ -> italic
+
+         # Remove headers
+         text = re.sub(r'^#{1,6}\s+', '', text, flags=re.MULTILINE)  # # Header -> Header
+
+         # Remove code blocks
+         text = re.sub(r'```[^`]*```', '[Code Block]', text, flags=re.DOTALL)
+         text = re.sub(r'`([^`]+)`', r'\1', text)  # `code` -> code
+
+         # Remove links
+         text = re.sub(r'\[([^\]]+)\]\([^)]+\)', r'\1', text)  # [text](url) -> text
+
+         # Normalize list markers
+         text = re.sub(r'^[\*\-\+]\s+', '• ', text, flags=re.MULTILINE)  # - item -> • item
+         text = re.sub(r'^\d+\.\s+', '• ', text, flags=re.MULTILINE)     # 1. item -> • item
+
+         # Collapse extra whitespace
+         text = re.sub(r'\n\s*\n', '\n\n', text)  # Multiple newlines -> double newline
+         text = text.strip()
+
+         return text
+
+     def _format_agent_response_content(self, response_text: str) -> str:
+         """Format agent response content with markdown removed and better structure."""
+         # First, clean all markdown
+         clean_text = self._clean_markdown(response_text)
+
+         # Split into paragraphs
+         paragraphs = [p.strip() for p in clean_text.split('\n\n') if p.strip()]
+         formatted_lines = []
+
+         for paragraph in paragraphs:
+             if not paragraph.strip():
+                 continue
+
+             # Heuristic: a short line without sentence punctuation that ends
+             # near a colon is treated as a section header
+             is_header = (
+                 len(paragraph) < 80 and
+                 not paragraph.endswith('.') and
+                 not paragraph.endswith('?') and
+                 not paragraph.endswith('!') and
+                 ':' in paragraph[-10:]
+             )
+
+             # Check if it's a bullet point produced by _clean_markdown
+             is_bullet_point = paragraph.strip().startswith('• ')
+
+             if is_header:
+                 # Format as a section header
+                 header_text = paragraph.replace(':', '').strip()
+                 formatted_lines.append(f"│ {self.emojis['insights']} {header_text}")
+                 formatted_lines.append("│")
+             elif is_bullet_point:
+                 # Format as a bullet point
+                 point_text = paragraph.replace('• ', '').strip()
+                 wrapped_point = self._wrap_text(f"• {point_text}", 71)
+                 for line in wrapped_point.split('\n'):
+                     if line.strip():
+                         formatted_lines.append(f"│   {line}")
+             else:
+                 # Format as a regular paragraph
+                 wrapped_lines = self._wrap_text(paragraph.strip(), 73)
+                 for line in wrapped_lines.split('\n'):
+                     if line.strip():
+                         formatted_lines.append(f"│ {line}")
+
+             # Add spacing between sections
+             formatted_lines.append("│")
+
+         # Remove the last empty line if it exists
+         if formatted_lines and formatted_lines[-1] == "│":
+             formatted_lines.pop()
+
+         return "\n".join(formatted_lines)
+
+     def _create_consensus_section(self, debate_round: 'DebateRound') -> str:
+         """Create the consensus analysis section."""
+         consensus_emoji = self._get_consensus_emoji(debate_round.consensus_score)
+         consensus_label = self._get_consensus_label(debate_round.consensus_score)
+
+         return f"""╔══════════════════════════════════════════════════════════════════════════════╗
+ ║                        {consensus_emoji} CONSENSUS ANALYSIS                                  ║
+ ╚══════════════════════════════════════════════════════════════════════════════╝
+
+ ┌─ Agreement Level ────────────────────────────────────────────────────────────┐
+ │ Score: {debate_round.consensus_score:.2f}/1.00 ({consensus_label})           │
+ │ {self._create_consensus_bar(debate_round.consensus_score)}                   │
+ └──────────────────────────────────────────────────────────────────────────────┘
+
+ {self._create_consensus_interpretation(debate_round.consensus_score)}"""
+
+     def _create_metrics_section(self, responses: List['ModelResponse']) -> str:
+         """Create the quality metrics section."""
+         avg_confidence = sum(r.confidence for r in responses) / len(responses)
+         total_tokens = sum(r.tokens_used for r in responses)
+
+         return f"""╔══════════════════════════════════════════════════════════════════════════════╗
+ ║                            📊 QUALITY METRICS                                ║
+ ╚══════════════════════════════════════════════════════════════════════════════╝
+
+ ┌─ Response Quality ───────────────────────────────────────────────────────────┐
+ │ Average Confidence: {avg_confidence:.2f}/1.00                                │
+ │ Total Tokens Used: ~{int(total_tokens)}                                      │
+ │ Response Distribution: {self._create_response_quality_distribution(responses)} │
+ └──────────────────────────────────────────────────────────────────────────────┘"""
+
+     def _generate_enhanced_summary(self, problem: 'Problem', debate_round: 'DebateRound') -> str:
+         """Generate an enhanced summary panel."""
+         avg_confidence = sum(r.confidence for r in debate_round.responses) / len(debate_round.responses)
+         quality_assessment = self._get_consensus_label(debate_round.consensus_score)
+
+         # Extract key themes from the responses (simplified)
+         key_themes = self._extract_key_themes(debate_round.responses)
+
+         return f"""╔══════════════════════════════════════════════════════════════════════════════╗
+ ║                           📋 EXECUTIVE SUMMARY                               ║
+ ╚══════════════════════════════════════════════════════════════════════════════╝
+
+ {self.emojis['problem']} Problem Domain: {problem.domain.value.title()}
+ {self.emojis['agent']} AI Models Consulted: {len(debate_round.responses)} (Real AI - No Mock Data)
+ {self.emojis['quality']} Average Confidence: {avg_confidence:.2f}/1.00
+ {self.emojis['consensus_high'] if debate_round.consensus_score > 0.7 else self.emojis['consensus_medium'] if debate_round.consensus_score > 0.4 else self.emojis['consensus_low']} Consensus Quality: {quality_assessment}
+
+ ┌─ Key Insights ──────────────────────────────────────────────────────────────┐
+ {key_themes}
+ └─────────────────────────────────────────────────────────────────────────────┘
+
+ ┌─ Reliability Assessment ────────────────────────────────────────────────────┐
+ │ ✓ All responses from genuine AI models                                      │
+ │ ✓ No artificial or mock data used                                           │
+ │ ✓ Real-time analysis with current model capabilities                        │
+ │ Quality Level: {quality_assessment} ({debate_round.consensus_score:.2f}/1.00) │
+ └─────────────────────────────────────────────────────────────────────────────┘"""
+
422
+ # Helper methods
423
+ def _get_quality_emoji(self, score: float) -> str:
424
+ if score >= 0.8: return self.emojis['high_quality']
425
+ elif score >= 0.6: return self.emojis['medium_quality']
426
+ else: return self.emojis['low_quality']
427
+
428
+ def _get_consensus_emoji(self, score: float) -> str:
429
+ if score >= 0.7: return self.emojis['consensus_high']
430
+ elif score >= 0.4: return self.emojis['consensus_medium']
431
+ else: return self.emojis['consensus_low']
432
+
433
+ def _get_confidence_emoji(self, confidence: float) -> str:
434
+ if confidence >= 0.7: return self.emojis['confidence_high']
435
+ elif confidence >= 0.5: return self.emojis['confidence_medium']
436
+ else: return self.emojis['confidence_low']
437
+
438
+ def _get_consensus_label(self, score: float) -> str:
439
+ if score >= 0.8: return "Excellent Agreement"
440
+ elif score >= 0.7: return "High Agreement"
441
+ elif score >= 0.6: return "Good Agreement"
442
+ elif score >= 0.4: return "Moderate Agreement"
443
+ elif score >= 0.3: return "Low Agreement"
444
+ else: return "Divergent Views"
445
+
446
+ def _create_consensus_bar(self, score: float) -> str:
447
+ """Create a visual progress bar for consensus score"""
448
+ bar_length = 50
449
+ filled = int(score * bar_length)
450
+ empty = bar_length - filled
451
+
452
+ bar = "β–ˆ" * filled + "β–‘" * empty
453
+ return f"β”‚ {bar} β”‚ {score:.1%}"
454
+
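The bar arithmetic above can be sketched standalone (hypothetical helper name `consensus_bar`, same fill logic as `_create_consensus_bar`):

```python
def consensus_bar(score: float, bar_length: int = 50) -> str:
    # Fill proportional to score; the remainder is rendered as empty blocks
    filled = int(score * bar_length)
    return "β”‚ " + "β–ˆ" * filled + "β–‘" * (bar_length - filled) + f" β”‚ {score:.1%}"

bar = consensus_bar(0.5)
```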
455
+ def _create_consensus_interpretation(self, score: float) -> str:
456
+ """Create interpretation text for consensus score"""
457
+ if score >= 0.8:
458
+ return """β”Œβ”€ Interpretation ────────────────────────────────────────────────────────────┐
459
+ β”‚ 🌟 Excellent: AI models show strong agreement on key points and approaches β”‚
460
+ β”‚ High confidence in recommendations and consistent reasoning patterns β”‚
461
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜"""
462
+ elif score >= 0.6:
463
+ return """β”Œβ”€ Interpretation ────────────────────────────────────────────────────────────┐
464
+ β”‚ ⭐ Good: Models generally align with some variation in emphasis or approach β”‚
465
+ β”‚ Solid foundation for decision-making with multiple valid perspectives β”‚
466
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜"""
467
+ elif score >= 0.4:
468
+ return """β”Œβ”€ Interpretation ────────────────────────────────────────────────────────────┐
469
+ β”‚ πŸ”„ Moderate: Mixed agreement - models see different aspects as priorities β”‚
470
+ β”‚ Consider multiple approaches or gather additional expert input β”‚
471
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜"""
472
+ else:
473
+ return """β”Œβ”€ Interpretation ────────────────────────────────────────────────────────────┐
474
+ β”‚ πŸ”€ Divergent: Significant disagreement suggests complex or contested issue β”‚
475
+ β”‚ Valuable to explore different perspectives before making decisions β”‚
476
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜"""
477
+
478
+ def _create_response_quality_distribution(self, responses: List['ModelResponse']) -> str:
479
+ """Create a simple distribution of response qualities"""
480
+ high = sum(1 for r in responses if r.confidence >= 0.7)
481
+ medium = sum(1 for r in responses if 0.5 <= r.confidence < 0.7)
482
+ low = sum(1 for r in responses if r.confidence < 0.5)
483
+
484
+ return f"High: {high}, Medium: {medium}, Low: {low}"
485
+
486
+ def _extract_key_themes(self, responses: List['ModelResponse']) -> str:
487
+ """Extract key themes from responses (simplified version)"""
488
+ # This is a simplified theme extraction - you could enhance with NLP
489
+ key_themes = [
490
+ "β”‚ β€’ Strategic planning and implementation considerations identified",
491
+ "β”‚ β€’ Risk assessment and mitigation strategies discussed",
492
+ "β”‚ β€’ Multiple stakeholder perspectives considered",
493
+ "β”‚ β€’ Evidence-based recommendations provided"
494
+ ]
495
+
496
+ return "\n".join(key_themes)
497
+
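As the comment in `_extract_key_themes` notes, the themes are canned. A minimal data-driven alternative (a hypothetical sketch using plain word frequency, not real NLP or keyphrase extraction) could look like:

```python
import re
from collections import Counter

STOPWORDS = {"the", "and", "that", "with", "this", "from", "shows"}

def top_terms(responses, n=4):
    # Count non-trivial words (4+ letters, not stopwords) across all responses
    words = re.findall(r"[a-z]{4,}", " ".join(responses).lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

themes = top_terms([
    "Risk assessment shows mitigation is key to planning",
    "Planning requires risk mitigation and stakeholder input",
])
```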
498
+ def _wrap_text(self, text: str, width: int) -> str:
499
+ """Wrap text to specified width while preserving words"""
500
+ words = text.split()
501
+ lines = []
502
+ current_line = []
503
+ current_length = 0
504
+
505
+ for word in words:
506
+ if current_length + len(word) + 1 <= width:
507
+ current_line.append(word)
508
+ current_length += len(word) + 1
509
+ else:
510
+ if current_line:
511
+ lines.append(" ".join(current_line))
512
+ current_line = [word]
513
+ current_length = len(word)
514
+
515
+ if current_line:
516
+ lines.append(" ".join(current_line))
517
+
518
+ return "\n".join(lines)
519
+
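For reference, the stdlib `textwrap` module provides the same word-preserving wrap and could replace the hand-rolled `_wrap_text` helper:

```python
import textwrap

# textwrap.fill wraps on word boundaries at the given width,
# mirroring what _wrap_text does manually
wrapped = textwrap.fill("alpha beta gamma delta epsilon", width=11)
```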
520
+
521
+ # Integration example for your Agora class
522
+ def integrate_enhanced_formatter(agora_instance):
523
+ """Example of how to integrate the enhanced formatter into your existing Agora class"""
524
+
525
+ # Add this to your Agora class __init__ method:
526
+ # self.output_formatter = AgoraOutputFormatter()
527
+
528
+ # Then modify your analyze_problem function in the Gradio interface:
529
+ def enhanced_analyze_problem(title: str, description: str, domain: str, user_id: str, context: str = ""):
530
+ """Enhanced version of analyze_problem with beautiful formatting"""
531
+ try:
532
+ if not title or not description:
533
+ return "❌ Please provide both title and description", ""
534
+
535
+ if not agora_instance.agents:
536
+ return "❌ No AI agents available. Please check API key configuration.", "❌ No agents configured"
537
+
538
+ # Convert domain to enum
539
+ domain_lower = domain.lower()
540
+
541
+ # Create problem object (assuming you have the Problem class)
542
+ problem = Problem(
543
+ id=str(uuid.uuid4()),
544
+ title=title,
545
+ description=description,
546
+ domain=ProblemDomain(domain_lower),
547
+ context=context or "No additional context provided",
548
+ user_id=user_id or "anonymous",
549
+ timestamp=datetime.now()
550
+ )
551
+
552
+ # Start analysis
553
+ try:
554
+ debate_round = agora_instance.start_debate(problem)
555
+ except Exception as e:
556
+ return f"❌ Analysis failed: {str(e)}", f"❌ Error: {str(e)}"
557
+
558
+ # Save results
559
+ save_success = agora_instance.save_debate_round(problem, debate_round)
560
+
561
+ # Use enhanced formatter instead of manual formatting
562
+ formatter = AgoraOutputFormatter()
563
+ main_output, summary_output = formatter.format_debate_results(problem, debate_round, save_success)
564
+
565
+ return main_output, summary_output
566
+
567
+ except Exception as e:
568
+ error_msg = f"❌ **Error during analysis:** {str(e)}"
569
+ return error_msg, f"❌ Error: {str(e)}"
570
+
571
+ return enhanced_analyze_problem
572
+
573
+
574
+ # IMPROVED SUPABASE DATABASE MANAGER
575
+ class SupabaseDatabaseManager:
576
+ """Manages database operations for storing conversations and knowledge"""
577
+
578
+ def __init__(self, supabase_url: str, supabase_key: str):
579
+ self.supabase_url = supabase_url
580
+ self.supabase_key = supabase_key
581
+ self.supabase = None
582
+ self.connected = False
583
+ if supabase_url and supabase_key:
584
+ self._init_connection()
585
+
586
+ def _init_connection(self):
587
+ """Initialize Supabase connection with error handling"""
588
+ try:
589
+ self.supabase = create_client(self.supabase_url, self.supabase_key)
590
+ # Test connection with a simple query
591
+ self.supabase.table('conversations').select('id').limit(1).execute()
592
+ self.connected = True
593
+ logger.info("βœ… Database connection successful")
594
+ except Exception as e:
595
+ logger.error(f"❌ Database connection failed: {str(e)}")
596
+ self.connected = False
597
+ self.supabase = None
598
+
599
+ def is_connected(self) -> bool:
600
+ """Check if database is connected"""
601
+ return self.connected and self.supabase is not None
602
+
603
+ def save_conversation(self, session_id: str, query: str, response: str, context: str = None) -> Optional[str]:
604
+ """Save conversation to database"""
605
+ if not self.is_connected():
606
+ logger.warning("Database not connected, skipping save")
607
+ return None
608
+
609
+ try:
610
+ conversation_data = {
611
+ 'session_id': session_id,
612
+ 'query': query,
613
+ 'response': response,
614
+ 'context': context,
615
+ 'timestamp': get_current_timestamp()
616
+ }
617
+ result = self.supabase.table('conversations').insert(conversation_data).execute()
618
+ return result.data[0]['id'] if result.data else None
619
+ except Exception as e:
620
+ logger.error(f"Error saving conversation: {str(e)}")
621
+ return None
622
+
623
+ def get_conversation_history(self, session_id: str, limit: int = 10) -> List[Dict]:
624
+ """Retrieve conversation history from database"""
625
+ if not self.is_connected():
626
+ logger.warning("Database not connected, returning empty history")
627
+ return []
628
+
629
+ try:
630
+ result = self.supabase.table('conversations')\
631
+ .select('*')\
632
+ .eq('session_id', session_id)\
633
+ .order('timestamp', desc=True)\
634
+ .limit(limit)\
635
+ .execute()
636
+ return result.data if result.data else []
637
+ except Exception as e:
638
+ logger.error(f"Error retrieving conversation history: {str(e)}")
639
+ return []
640
+
641
+ def save_problem(self, problem: Problem) -> Optional[str]:
642
+ """Save problem to database"""
643
+ if not self.is_connected():
644
+ logger.warning("Database not connected, skipping problem save")
645
+ return None
646
+
647
+ try:
648
+ problem_data = {
649
+ 'id': problem.id,
650
+ 'title': problem.title,
651
+ 'description': problem.description,
652
+ 'domain': problem.domain.value,
653
+ 'context': problem.context,
654
+ 'user_id': problem.user_id,
655
+ 'timestamp': problem.timestamp.isoformat()
656
+ }
657
+ result = self.supabase.table('problems').insert(problem_data).execute()
658
+ return result.data[0]['id'] if result.data else None
659
+ except Exception as e:
660
+ logger.error(f"Error saving problem: {str(e)}")
661
+ return None
662
+
663
+ # Database manager instance
664
+ db_manager = SupabaseDatabaseManager(SUPABASE_URL, SUPABASE_KEY)
665
+
666
+ # IMPROVED CONSENSUS CALCULATOR - Fixed division by zero
667
+ class ConsensusCalculator:
668
+ """Enhanced consensus calculation with better metrics"""
669
+
670
+ @staticmethod
671
+ def calculate_response_quality(response: str) -> float:
672
+ """Calculate quality score based on response content"""
673
+ if not response or len(response.strip()) < 10:
674
+ return 0.1
675
+
676
+ words = response.split()
677
+ sentences = response.split('.')
678
+
679
+ # Prevent division by zero
680
+ if len(words) == 0:
681
+ return 0.1
682
+
683
+ # Quality factors
684
+ length_score = min(1.0, len(words) / 100) # Optimal around 100 words
685
+ structure_score = min(1.0, len(sentences) / 5) if len(sentences) > 0 else 0.1
686
+
687
+ # Evidence markers
688
+ evidence_markers = ['research shows', 'studies indicate', 'data suggests', 'analysis reveals',
689
+ 'according to', 'evidence indicates', 'research demonstrates']
690
+ evidence_score = min(0.3, sum(1 for marker in evidence_markers if marker.lower() in response.lower()) * 0.1)
691
+
692
+ # Reasoning markers
693
+ reasoning_markers = ['because', 'therefore', 'however', 'furthermore', 'consequently',
694
+ 'moreover', 'additionally', 'thus', 'hence']
695
+ reasoning_score = min(0.3, sum(1 for marker in reasoning_markers if marker.lower() in response.lower()) * 0.05)
696
+
697
+ # Confidence modifiers
698
+ uncertainty_markers = ['maybe', 'possibly', 'might', 'could be', 'perhaps', 'unsure', 'unclear']
699
+ uncertainty_penalty = min(0.2, sum(1 for marker in uncertainty_markers if marker.lower() in response.lower()) * 0.05)
700
+
701
+ # Specificity bonus
702
+ specific_markers = ['specifically', 'for example', 'in particular', 'namely', 'such as']
703
+ specificity_bonus = min(0.2, sum(1 for marker in specific_markers if marker.lower() in response.lower()) * 0.05)
704
+
705
+ total_score = (
706
+ length_score * 0.25 +
707
+ structure_score * 0.15 +
708
+ evidence_score +
709
+ reasoning_score +
710
+ specificity_bonus -
711
+ uncertainty_penalty
712
+ )
713
+
714
+ return max(0.1, min(1.0, total_score))
715
+
716
+ @staticmethod
717
+ def calculate_consensus_score(responses: List[ModelResponse]) -> float:
718
+ """Calculate overall consensus score from multiple responses"""
719
+ if not responses or len(responses) == 0:
720
+ return 0.0
721
+
722
+ try:
723
+ # Average confidence scores
724
+ avg_confidence = sum(r.confidence for r in responses) / len(responses)
725
+
726
+ # Response length variance (lower variance = better consensus)
727
+ response_lengths = [len(r.response.split()) for r in responses]
728
+
729
+ if len(response_lengths) == 0:
730
+ return avg_confidence * 0.7 # No length consistency component
731
+
732
+ # Calculate variance safely
733
+ mean_length = sum(response_lengths) / len(response_lengths)
734
+ length_variance = sum((l - mean_length)**2 for l in response_lengths) / len(response_lengths)
735
+ length_consistency = max(0, 1 - (length_variance / 1000)) # Normalize
736
+
737
+ # Combine metrics
738
+ consensus_score = (avg_confidence * 0.7) + (length_consistency * 0.3)
739
+ return min(1.0, max(0.0, consensus_score))
740
+
741
+ except Exception as e:
742
+ logger.error(f"Error calculating consensus score: {str(e)}")
743
+ # Fallback: return average confidence if available
744
+ if responses:
745
+ try:
746
+ return sum(r.confidence for r in responses) / len(responses)
747
+ except:
748
+ return 0.5 # Default fallback
749
+ return 0.0
750
+
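The scoring idea behind `ConsensusCalculator.calculate_response_quality` can be sketched standalone. This is a simplified mirror (evidence, uncertainty, and specificity terms omitted), not the full heuristic:

```python
def response_quality(response: str) -> float:
    # Reward length (optimal near 100 words), sentence structure,
    # and the presence of reasoning markers; floor at 0.1
    if not response or len(response.strip()) < 10:
        return 0.1
    words = response.split()
    sentences = [s for s in response.split('.') if s.strip()]
    length_score = min(1.0, len(words) / 100)
    structure_score = min(1.0, len(sentences) / 5) if sentences else 0.1
    markers = ['because', 'therefore', 'however', 'for example']
    marker_score = min(0.3, sum(m in response.lower() for m in markers) * 0.05)
    return max(0.1, min(1.0, length_score * 0.25 + structure_score * 0.15 + marker_score))

low = response_quality("Too short")
high = response_quality(
    "Research shows this works because the approach is sound. "
    "Therefore we recommend it. For example, pilot tests succeeded. " * 3
)
```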
751
+ # MAIN AGORA CLASS - FIXED FOR REAL AGENT RESPONSES
752
+ class Agora:
753
+ """Main Agora class for managing AI debates and consensus"""
754
+
755
+ def __init__(self, primary_llm=None):
756
+ self.primary_llm = primary_llm or "gpt-4"
757
+ self.consensus_calculator = ConsensusCalculator()
758
+ self.db_manager = db_manager
759
+ self.agents = [] # Store individual agents instead of team
760
+ self.available_models = self._check_available_models()
761
+ self._initialize_agents()
762
+ self.output_formatter = AgoraOutputFormatter()
763
+
764
+ def _check_available_models(self):
765
+ """Check which models have API keys available"""
766
+ available = {}
767
+
768
+ models_to_check = {
769
+ "Claude Analyst": ("ANTHROPIC_API_KEY", ANTHROPIC_API_KEY),
770
+ "GPT-4 Strategist": ("OPENAI_API_KEY", OPENAI_API_KEY),
771
+ "Mistral Evaluator": ("MISTRAL_API_KEY", MISTRAL_API_KEY),
772
+ "SambaNova Specialist": ("SAMBANOVA_API_KEY", SAMBANOVA_API_KEY)
773
+ }
774
+
775
+ for model_name, (key_name, api_key) in models_to_check.items():
776
+ available[model_name] = bool(api_key)
777
+ logger.info(f"πŸ”‘ {model_name}: {'βœ… Available' if api_key else '❌ No API key'}")
778
+
779
+ return available
780
+
781
+
782
+
783
+ def _initialize_agents(self):
784
+ """Initialize individual AI agents"""
785
+ self.agents = []
786
+ plain_text_instructions = """
787
+
788
+ CRITICAL FORMATTING REQUIREMENTS:
789
+ - Do NOT use markdown formatting (no **, ##, -, etc.)
790
+ - Use plain text with natural line breaks
791
+ - For emphasis, use CAPITALIZATION or quotation marks
792
+ - For lists, use numbers (1., 2., 3.) or natural language
793
+ - For sections, use clear headers in plain text
794
+ - Write as if you're speaking directly to a person
795
+ - Use proper paragraph breaks with double line breaks
796
+
797
+ Example of good formatting:
798
+
799
+ KEY CONSIDERATIONS
800
+
801
+ The main challenges include three important areas. First, we need to consider the technical aspects. This involves ensuring proper implementation and testing procedures.
802
+
803
+ Second, the strategic implications require careful planning. Organizations should focus on long-term sustainability and stakeholder alignment.
804
+
805
+ RECOMMENDATIONS
806
+
807
+ Based on this analysis, I recommend the following steps:
808
+
809
+ 1. Conduct a thorough assessment of current capabilities
810
+ 2. Develop a phased implementation plan
811
+ 3. Establish clear success metrics and monitoring systems
812
+
813
+ This approach will help ensure successful outcomes while minimizing risks.
814
+
815
+ """
816
+
817
+ try:
818
+ # Check if we have at least one API key for real AI models
819
+ has_real_api_keys = any([ANTHROPIC_API_KEY, OPENAI_API_KEY, MISTRAL_API_KEY, SAMBANOVA_API_KEY])
820
+
821
+ if not has_real_api_keys:
822
+ logger.error("❌ No real AI model API keys found - cannot create agents")
823
+ return
824
+
825
+ # Create Claude agent
826
+ if ANTHROPIC_API_KEY:
827
+ try:
828
+ claude_agent = Agent(
829
+ name="Claude Analyst",
830
+ role="Critical Analysis Specialist",
831
+ model=Claude(id="claude-3-5-sonnet-20240620"),
832
+ instructions=f"""
833
+ You are Claude Analyst, a Critical Analysis Specialist.
834
+ Provide expert analysis on the given topic from your specialized perspective.
835
+ Be thorough, evidence-based, and constructive in your responses.
836
+ Consider both benefits and potential challenges in your analysis.
837
+ Structure your response clearly with key insights and recommendations.
838
+ Keep responses concise and brief, focusing on actionable insights.
839
+ Respond using clearly formatted text with proper line breaks, bullet points, and headings. Do not use escape characters like \\n or markdown symbols like ### or - unless you're actually formatting for a markdown-rendering environment. Write as if you're showing it in a user-friendly UI with readable spacing and structure.
840
+ RESPONSE GUIDELINES:
841
+ 1. Always search for relevant information before answering complex questions
842
+ 2. Clearly distinguish between document-based and web-based information
843
+ 3. Provide source citations for all information
844
+ 4. Synthesize information from multiple sources when available
845
+ 5. Ask clarifying questions when the query is ambiguous
846
+ 6. Be conversational but informative
847
+ {plain_text_instructions}
848
+ """,
849
+ knowledge=knowledge_base,
850
+ )
851
+ self.agents.append(claude_agent)
852
+ logger.info("βœ… Created Claude agent")
853
+ except Exception as e:
854
+ logger.error(f"❌ Failed to create Claude agent: {str(e)}")
855
+
856
+ # Create GPT-4 agent
857
+ if OPENAI_API_KEY:
858
+ try:
859
+ openai_agent = Agent(
860
+ name="GPT-4 Strategist",
861
+ role="Strategic Planning Expert",
862
+ model=OpenAIChat(id="gpt-4o"),
863
+ instructions=f"""
864
+ You are GPT-4 Strategist, a Strategic Planning Expert.
865
+ Provide expert analysis on the given topic from your specialized perspective.
866
+ Be thorough, evidence-based, and constructive in your responses.
867
+ Focus on strategic implications, implementation approaches, and long-term considerations.
868
+ Structure your response clearly with actionable insights.
869
+ Keep responses concise and brief, focusing on strategic value.
870
+ Respond using clearly formatted text with proper line breaks, bullet points, and headings. Do not use escape characters like \\n or markdown symbols like ### or - unless you're actually formatting for a markdown-rendering environment. Write as if you're showing it in a user-friendly UI with readable spacing and structure.
871
+ RESPONSE GUIDELINES:
872
+ 1. Always search for relevant information before answering complex questions
873
+ 2. Clearly distinguish between document-based and web-based information
874
+ 3. Provide source citations for all information
875
+ 4. Synthesize information from multiple sources when available
876
+ 5. Ask clarifying questions when the query is ambiguous
877
+ 6. Be conversational but informative
878
+ {plain_text_instructions}
879
+
880
+ """,
881
+ knowledge=knowledge_base,
882
+ )
883
+ self.agents.append(openai_agent)
884
+ logger.info("βœ… Created GPT-4 agent")
885
+ except Exception as e:
886
+ logger.error(f"❌ Failed to create GPT-4 agent: {str(e)}")
887
+
888
+ # Create Mistral agent
889
+ if MISTRAL_API_KEY:
890
+ try:
891
+ mistral_agent = Agent(
892
+ name="Mistral Evaluator",
893
+ role="Solution Evaluation Specialist",
894
+ model=MistralChat(
895
+ id="mistral-large-latest",
896
+ api_key=MISTRAL_API_KEY,
897
+ ), # Use model object
898
+ instructions=f"""
899
+ You are Mistral Evaluator, a Solution Evaluation Specialist.
900
+ Provide expert analysis on the given topic from your specialized perspective.
901
+ Be thorough, evidence-based, and constructive in your responses.
902
+ Focus on evaluating different approaches, assessing feasibility, and identifying risks.
903
+ Structure your response clearly with evaluation criteria and recommendations.
904
+ Keep responses concise and brief, focusing on practical evaluation.
905
+ Respond using clearly formatted text with proper line breaks, bullet points, and headings. Do not use escape characters like \\n or markdown symbols like ### or - unless you're actually formatting for a markdown-rendering environment. Write as if you're showing it in a user-friendly UI with readable spacing and structure.
906
+ RESPONSE GUIDELINES:
907
+ 1. Always search for relevant information before answering complex questions
908
+ 2. Clearly distinguish between document-based and web-based information
909
+ 3. Provide source citations for all information
910
+ 4. Synthesize information from multiple sources when available
911
+ 5. Ask clarifying questions when the query is ambiguous
912
+ 6. Be conversational but informative
913
+ {plain_text_instructions}
914
+
915
+ """,
916
+ knowledge=knowledge_base,
917
+ )
918
+ self.agents.append(mistral_agent)
919
+ logger.info("βœ… Created Mistral agent")
920
+ except Exception as e:
921
+ logger.error(f"❌ Failed to create Mistral agent: {str(e)}")
922
+
923
+ # Create SambaNova agent - might need different import/setup
924
+ if SAMBANOVA_API_KEY:
925
+ try:
926
+ sambanova_agent = Agent(
927
+ name="SambaNova Specialist",
928
+ role="Technical Implementation Specialist",
929
+ model=Sambanova(),
930
+ instructions=f"""
931
+ You are SambaNova Specialist, a Technical Implementation Specialist.
932
+ Provide expert analysis on the given topic from your specialized perspective.
933
+ Be thorough, evidence-based, and constructive in your responses.
934
+ Focus on technical feasibility, implementation challenges, and system design.
935
+ Structure your response clearly with technical details and recommendations.
936
+ Keep responses concise and brief, focusing on technical implementation.
937
+ Respond using clearly formatted text with proper line breaks, bullet points, and headings. Do not use escape characters like \\n or markdown symbols like ### or - unless you're actually formatting for a markdown-rendering environment. Write as if you're showing it in a user-friendly UI with readable spacing and structure.
938
+ RESPONSE GUIDELINES:
939
+ 1. Always search for relevant information before answering complex questions
940
+ 2. Clearly distinguish between document-based and web-based information
941
+ 3. Provide source citations for all information
942
+ 4. Synthesize information from multiple sources when available
943
+ 5. Ask clarifying questions when the query is ambiguous
944
+ 6. Be conversational but informative
945
+ {plain_text_instructions}
946
+
947
+ """,
948
+ knowledge=knowledge_base,
949
+ )
950
+ self.agents.append(sambanova_agent)
951
+ logger.info("βœ… SambaNova agent Created")
952
+ except Exception as e:
953
+ logger.error(f"❌ Failed to create SambaNova agent: {str(e)}")
954
+
955
+ logger.info(f"βœ… Successfully initialized {len(self.agents)} agents")
956
+
957
+ except Exception as e:
958
+ logger.error(f"❌ Failed to initialize agents: {str(e)}")
959
+ import traceback
960
+ logger.error(f"Full traceback: {traceback.format_exc()}")
961
+
962
+ def get_agent_status(self):
963
+ """Get detailed status of the agents and available models"""
964
+ status = {
965
+ "agents_initialized": len(self.agents) > 0,
966
+ "agent_count": len(self.agents),
967
+ "available_models": self.available_models,
968
+ "agents": []
969
+ }
970
+
971
+ for agent in self.agents:
972
+ status["agents"].append({
973
+ "name": agent.name,
974
+ "role": agent.role,
975
+ "model": getattr(agent, 'model', 'unknown')
976
+ })
977
+
978
+ return status
979
+
980
+ def start_debate(self, problem: Problem) -> DebateRound:
981
+ """Start individual agent analysis on a given problem - NO MOCK RESPONSES"""
982
+ logger.info(f"🎯 Starting analysis on: {problem.title}")
983
+ logger.info(f"πŸ” Available agents: {len(self.agents)}")
984
+
985
+ if not self.agents:
986
+ raise Exception("❌ No AI agents available - please check API keys configuration")
987
+
988
+ responses = []
989
+
990
+ # Create analysis prompt
991
+ analysis_prompt = dedent(f"""
992
+ **Problem Analysis Request**
993
+
994
+ Title: {problem.title}
995
+ Description: {problem.description}
996
+ Domain: {problem.domain.value}
997
+ Context: {problem.context}
998
+
999
+ Please provide your expert analysis on this problem, including:
1000
+ 1. Key considerations and challenges
1001
+ 2. Potential solutions or approaches
1002
+ 3. Risks and benefits assessment
1003
+ 4. Specific recommendations for next steps
1004
+
1005
+ Please be thorough and provide actionable insights from your specialized perspective.
1006
+
1007
+ Timestamp: {get_current_timestamp()}
1008
+ """)
1009
+
1010
+ # Get response from each agent individually
1011
+ for agent in self.agents:
1012
+ try:
1013
+ logger.info(f"πŸ“€ Requesting analysis from {agent.name}...")
1014
+
1015
+ # Get response from individual agent
1016
+ agent_response = agent.run(analysis_prompt)
+
+ # agno's Agent.run typically returns a RunResponse object; read its
+ # .content (falling back to str() for plain returns) so the stored
+ # text is the reply itself rather than an object repr
+ response_text = str(getattr(agent_response, "content", agent_response) or "").strip()
+
+ if response_text and len(response_text) > 20:
+ # Create model response object
1021
+ confidence = self.consensus_calculator.calculate_response_quality(response_text)
1022
+
1023
+ model_response = ModelResponse(
1024
+ model_name=agent.name,
1025
+ response=response_text,
1026
+ confidence=confidence,
1027
+ reasoning=f"Direct response from {agent.role}",
1028
+ timestamp=datetime.now(),
1029
+ tokens_used=int(len(response_text.split()) * 1.3)  # rough words-to-tokens estimate
1030
+ )
1031
+ responses.append(model_response)
1032
+ logger.info(f"βœ… Received response from {agent.name} (confidence: {confidence:.2f})")
1033
+ else:
1034
+ logger.warning(f"⚠️ Empty or short response from {agent.name}")
1035
+
1036
+ except Exception as e:
1037
+ logger.error(f"❌ Error getting response from {agent.name}: {str(e)}")
1038
+ continue
1039
+
1040
+ # Ensure we have at least one response
1041
+ if not responses:
1042
+ raise Exception("❌ No responses received from any AI agents - check API keys and network connection")
1043
+
1044
+ # Calculate consensus score
1045
+ consensus_score = self.consensus_calculator.calculate_consensus_score(responses)
1046
+
1047
+ # Create debate round
1048
+ debate_round = DebateRound(
1049
+ round_number=1,
1050
+ responses=responses,
1051
+ consensus_score=consensus_score,
1052
+ timestamp=datetime.now()
1053
+ )
1054
+
1055
+ logger.info(f"βœ… Analysis completed - Generated {len(responses)} real AI responses - Consensus Score: {consensus_score:.2f}")
1056
+ return debate_round
1057
+
1058
+ def save_debate_round(self, problem: Problem, debate_round: DebateRound) -> bool:
1059
+ """Save debate round and problem to the database"""
1060
+ try:
1061
+ # Save problem first
1062
+ problem_id = self.db_manager.save_problem(problem)
1063
+
1064
+ # Save each response
1065
+ for response in debate_round.responses:
1066
+ self.db_manager.save_conversation(
1067
+ session_id=problem.id,
1068
+ query=f"Analysis request: {problem.title}",
1069
+ response=response.response,
1070
+ context=json.dumps({
1071
+ "round_number": debate_round.round_number,
1072
+ "model_name": response.model_name,
1073
+ "confidence": response.confidence,
1074
+ "consensus_score": debate_round.consensus_score,
1075
+ "tokens_used": response.tokens_used
1076
+ })
1077
+ )
1078
+
1079
+ logger.info(f"βœ… Saved debate round for problem: {problem.title}")
1080
+ return True
1081
+
1082
+ except Exception as e:
1083
+ logger.error(f"❌ Error saving debate round: {str(e)}")
1084
+ return False
1085
+
1086
+ # IMPROVED GRADIO INTERFACE
1087
+ def create_gradio_interface():
1088
+ """Create the Gradio interface for Agora"""
1089
+
1090
+ # Initialize Agora instance
1091
+ agora = Agora()
1092
+
1093
+ def analyze_problem(title: str, description: str, domain: str, user_id: str, context: str = ""):
1094
+ """Enhanced problem analysis with beautiful formatting using AgoraOutputFormatter"""
1095
+
1096
+ # Initialize the formatter
1097
+ formatter = AgoraOutputFormatter()
1098
+
1099
+ try:
1100
+ # Input validation
1101
+ if not title or not description:
1102
+ error_output = """
1103
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1104
+ β•‘ ❌ INPUT ERROR β•‘
1105
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1106
+
1107
+ β”Œβ”€ Missing Required Information ──────────────────────────────────────────────┐
1108
+ β”‚ Please provide both a title and description for your problem. β”‚
1109
+ β”‚ β”‚
1110
+ β”‚ Required fields: β”‚
1111
+ β”‚ β€’ πŸ“ Problem Title: Clear, concise title β”‚
1112
+ β”‚ β€’ πŸ“„ Problem Description: Detailed explanation of the issue β”‚
1113
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1114
+ """
1115
+ return error_output.strip(), "❌ Missing required information"
1116
+
1117
+ # Check if agents are available
1118
+ if not agora.agents:
1119
+ error_output = f"""
1120
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1121
+ β•‘ ❌ CONFIGURATION ERROR β•‘
1122
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1123
+
1124
+ β”Œβ”€ No AI Agents Available ────────────────────────────────────────────────────┐
1125
+ β”‚ The system cannot proceed without properly configured AI agents. β”‚
1126
+ β”‚ β”‚
1127
+ β”‚ Please check: β”‚
1128
+ β”‚ β€’ πŸ”‘ API keys are properly set in environment variables β”‚
1129
+ β”‚ β€’ πŸ€– At least one AI model is configured and accessible β”‚
1130
+ β”‚ β€’ 🌐 Network connection allows API calls to AI services β”‚
1131
+ β”‚ β”‚
1132
+ β”‚ Current Status: {len(agora.agents)} agents initialized β”‚
1133
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1134
+ """
1135
+ return error_output.strip(), "❌ No agents configured"
1136
+
1137
+ # Convert domain to enum safely
1138
+ try:
1139
+ domain_lower = domain.lower()
1140
+ problem_domain = ProblemDomain(domain_lower)
1141
+ except ValueError:
1142
+ # Fallback to GENERAL if invalid domain
1143
+ problem_domain = ProblemDomain.GENERAL
1144
+ logger.warning(f"Invalid domain '{domain}', defaulting to GENERAL")
1145
+
1146
+ # Create problem object
1147
+ problem = Problem(
1148
+ id=str(uuid.uuid4()),
1149
+ title=title.strip(),
1150
+ description=description.strip(),
1151
+ domain=problem_domain,
1152
+ context=context.strip() if context else "No additional context provided",
1153
+ user_id=user_id.strip() if user_id else "anonymous",
1154
+ timestamp=datetime.now()
1155
+ )
1156
+
1157
+ # Log analysis start
1158
+ logger.info(f"πŸš€ Starting enhanced analysis for: {problem.title}")
1159
+ logger.info(f"πŸ” Domain: {problem.domain.value} | Agents: {len(agora.agents)}")
1160
+
1161
+ # Show progress indicator (for UI feedback)
1162
+ progress_output = f"""
1163
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1164
+ β•‘ πŸ”„ ANALYSIS IN PROGRESS β•‘
1165
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1166
+
1167
+ 🎯 **Analyzing:** {problem.title}
1168
+ 🏷️ **Domain:** {problem.domain.value.title()}
1169
+ πŸ€– **Consulting {len(agora.agents)} AI Agents...**
1170
+
1171
+ β”Œβ”€ Progress ──────────────────────────────────────────────────────────────────┐
1172
+ β”‚ ⏳ Sending analysis requests to AI models... β”‚
1173
+ β”‚ πŸ”„ This may take 30-60 seconds depending on model response times β”‚
1174
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1175
+ """
1176
+
1177
+ # Start the actual analysis
1178
+ try:
1179
+ debate_round = agora.start_debate(problem)
1180
+ logger.info(f"βœ… Analysis completed - {len(debate_round.responses)} responses received")
1181
+
1182
+ except Exception as e:
1183
+ error_msg = str(e)
1184
+ logger.error(f"❌ Analysis failed: {error_msg}")
1185
+
1186
+ error_output = f"""
1187
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1188
+ β•‘ ❌ ANALYSIS FAILED β•‘
1189
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1190
+
1191
+ β”Œβ”€ Error Details ─────────────────────────────────────────────────────────────┐
1192
+ β”‚ {formatter._wrap_text(error_msg, 73)} β”‚
1193
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1194
+
1195
+ β”Œβ”€ Troubleshooting Steps ─────────────────────────────────────────────────────┐
1196
+ β”‚ 1. πŸ”‘ Verify all API keys are valid and have sufficient credits β”‚
1197
+ β”‚ 2. 🌐 Check internet connection and firewall settings β”‚
1198
+ β”‚ 3. πŸ€– Ensure AI services are operational (check status pages) β”‚
1199
+ β”‚ 4. πŸ“ Try with a simpler problem description β”‚
1200
+ β”‚ 5. πŸ”„ Refresh the page and try again β”‚
1201
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1202
+ """
1203
+ return error_output.strip(), f"❌ Analysis Error: {error_msg}"
1204
+
1205
+ # Validate that we got meaningful responses
1206
+ if not debate_round.responses:
1207
+ error_output = """
1208
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1209
+ β•‘ ❌ NO RESPONSES RECEIVED β•‘
1210
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1211
+
1212
+ β”Œβ”€ Issue Description ─────────────────────────────────────────────────────────┐
1213
+ β”‚ No AI agents provided responses to the analysis request. β”‚
1214
+ β”‚ This could indicate API key issues or service unavailability. β”‚
1215
+ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
1216
+ """
1217
+ return error_output.strip(), "❌ No responses received"
1218
+
1219
+ # Save results to database
1220
+ try:
1221
+ save_success = agora.save_debate_round(problem, debate_round)
1222
+ if save_success:
1223
+ logger.info("βœ… Results saved to database successfully")
1224
+ else:
1225
+ logger.warning("⚠️ Database save failed - results not persisted")
1226
+ except Exception as e:
1227
+ logger.error(f"❌ Database save error: {str(e)}")
1228
+ save_success = False
1229
+
1230
+ # Use the enhanced formatter to create beautiful output
1231
+ try:
1232
+ main_output, summary_output = formatter.format_debate_results(
1233
+ problem=problem,
1234
+ debate_round=debate_round,
1235
+ save_success=save_success
1236
+ )
1237
+
1238
+ # Log success metrics
1239
+ avg_confidence = sum(r.confidence for r in debate_round.responses) / len(debate_round.responses)
1240
+ logger.info(f"πŸ“Š Analysis completed successfully:")
1241
+ logger.info(f" β€’ Responses: {len(debate_round.responses)}")
1242
+ logger.info(f" β€’ Avg Confidence: {avg_confidence:.2f}")
1243
+ logger.info(f" β€’ Consensus Score: {debate_round.consensus_score:.2f}")
1244
+ logger.info(f" β€’ Database Saved: {'Yes' if save_success else 'No'}")
1245
+
1246
+ return main_output, summary_output
1247
+
1248
+ except Exception as e:
1249
+ logger.error(f"❌ Formatting error: {str(e)}")
1250
+
1251
+ # Fallback to basic formatting if formatter fails
1252
+ fallback_output = f"""
1253
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1254
+ β•‘ ⚠️ ANALYSIS COMPLETE (BASIC FORMAT) β•‘
1255
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1256
+
1257
+ 🎯 **Problem:** {problem.title}
1258
+ πŸ“Š **Consensus Score:** {debate_round.consensus_score:.2f}/1.00
1259
+ πŸ€– **Responses:** {len(debate_round.responses)} AI agents
1260
+ πŸ•’ **Completed:** {debate_round.timestamp.strftime('%Y-%m-%d %H:%M:%S')}
1261
+
1262
+ **Responses:**
1263
+ {chr(10).join([f"β€’ **{r.model_name}** (confidence: {r.confidence:.2f}): {r.response[:200]}..." for r in debate_round.responses])}
1264
+ """
1265
+
1266
+ return fallback_output.strip(), f"βœ… Analysis completed ({len(debate_round.responses)} responses)"
1267
+
1268
+ except Exception as e:
1269
+ # Ultimate fallback for any unexpected errors
1270
+ error_msg = str(e)
1271
+ logger.error(f"❌ Unexpected error in analyze_problem: {error_msg}")
1272
+
1273
+ fallback_error = f"""
1274
+ ╔══════════════════════════════════════════════════════════════════════════════╗
1275
+ β•‘ ❌ UNEXPECTED ERROR β•‘
1276
+ β•šβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•
1277
+
1278
+ An unexpected error occurred during analysis.
1279
+
1280
+ **Error:** {error_msg}
1281
+
1282
+ **Troubleshooting:**
1283
+ β€’ Check system logs for detailed error information
1284
+ β€’ Verify all configuration settings
1285
+ β€’ Try restarting the application
1286
+ β€’ Contact support if the issue persists
1287
+
1288
+ **System Info:**
1289
+ β€’ Timestamp: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}
1290
+ β€’ Available Agents: {len(agora.agents) if ('agora' in globals() or 'agora' in locals()) else 'Unknown'}
1291
+ """
1292
+
1293
+ return fallback_error.strip(), f"❌ System Error: {error_msg}"
1294
+
1295
+
1296
+ # Create Gradio interface with enhanced styling
1297
+ with gr.Blocks(
1298
+ title="AI Democracy - Agora System (Enhanced Output)",
1299
+ theme=gr.themes.Soft(),
1300
+ css="""
1301
+ .gradio-container {
1302
+ font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', monospace !important;
1303
+ }
1304
+ .markdown {
1305
+ font-family: 'Monaco', 'Menlo', 'Ubuntu Mono', monospace !important;
1306
+ }
1307
+ """
1308
+ ) as demo:
1309
+
1310
+ gr.Markdown("""
1311
+ # πŸ›οΈ AI Democracy - Multi-Model Consensus System
1312
+ ## The Agora: Where artificial minds gather to forge wisdom
1313
+
1314
+ **Enhanced Output Edition** - Beautiful, readable analysis results from real AI models!
1315
+ """)
1316
+
1317
+ # Show enhanced agent status
1318
+ agent_status = agora.get_agent_status()
1319
+ available_models = [name for name, available in agent_status['available_models'].items() if available]
1320
+ status_markdown = f"""
1321
+ ### πŸ€– System Status
1322
+
1323
+ **Available Agents:** {agent_status['agent_count']} | **Active Models:** {', '.join(available_models) if available_models else 'None configured'}
1324
+
1325
+ **Features:** Enhanced Formatting ✨ | Real AI Responses πŸ€– | Consensus Analysis πŸ“Š | Database Storage πŸ’Ύ
1326
+ """
1327
+ gr.Markdown(status_markdown)
1328
+
1329
+ with gr.Row():
1330
+ with gr.Column(scale=2):
1331
+ title_input = gr.Textbox(
1332
+ label="πŸ“ Problem Title",
1333
+ placeholder="Enter a clear, concise title for your problem",
1334
+ lines=1,
1335
+ max_lines=2
1336
+ )
1337
+
1338
+ description_input = gr.Textbox(
1339
+ label="πŸ“„ Problem Description",
1340
+ placeholder="Provide a detailed description of the problem you want analyzed",
1341
+ lines=5,
1342
+ max_lines=10
1343
+ )
1344
+
1345
+ domain_input = gr.Dropdown(
1346
+ label="🏷️ Problem Domain",
1347
+ choices=[domain.value.title() for domain in ProblemDomain],
1348
+ value=ProblemDomain.GENERAL.value.title()
1349
+ )
1350
+
1351
+ context_input = gr.Textbox(
1352
+ label="πŸ” Additional Context (Optional)",
1353
+ placeholder="Any additional context, constraints, or specific requirements",
1354
+ lines=2,
1355
+ max_lines=4
1356
+ )
1357
+
1358
+ user_input = gr.Textbox(
1359
+ label="πŸ‘€ User ID (Optional)",
1360
+ placeholder="Enter your identifier for tracking (optional)",
1361
+ lines=1
1362
+ )
1363
+
1364
+ analyze_button = gr.Button(
1365
+ "πŸš€ Start Enhanced AI Analysis",
1366
+ variant="primary",
1367
+ size="lg",
1368
+ scale=2
1369
+ )
1370
+
1371
+ with gr.Column(scale=1):
1372
+ gr.Markdown(f"""
1373
+ ### 🎯 **Enhanced Analysis Process:**
1374
+
1375
+ 1. **Submit** your problem with detailed description
1376
+ 2. **Real AI models** analyze independently and thoroughly
1377
+ 3. **Enhanced formatting** makes results beautiful and readable
1378
+ 4. **Consensus scoring** shows agreement levels between models
1379
+ 5. **Executive summary** provides key insights at a glance
1380
+
1381
+ ### 🌟 **New Features:**
1382
+ - **Visual progress bars** for consensus scores
1383
+ - **Emoji indicators** for quality and confidence levels
1384
+ - **Structured response boxes** for each AI agent
1385
+ - **Executive summaries** with key insights
1386
+ - **Error handling** with helpful troubleshooting
1387
+
1388
+ ### πŸ€– **AI Model Status:**
1389
+ {chr(10).join([f"- **{name}**: {'🟒 Ready' if available else 'πŸ”΄ No API Key'}"
1390
+ for name, available in agent_status['available_models'].items()])}
1391
+
1392
+ ### ⚑ **Quality Assurance:**
1393
+ - βœ… Only real AI responses (no mock data)
1394
+ - βœ… Confidence scoring for reliability
1395
+ - βœ… Consensus analysis for agreement
1396
+ - βœ… Professional formatting for clarity
1397
+ """)
1398
+
1399
+ gr.Markdown("## πŸ“Š Enhanced AI Analysis Results")
1400
+
1401
+ with gr.Row():
1402
+ with gr.Column():
1403
+ results_output = gr.Markdown(
1404
+ label="πŸ”¬ Detailed Analysis",
1405
+ elem_classes=["markdown"],
1406
+ value="Analysis results will appear here after you submit a problem..."
1407
+ )
1408
+
1409
+ with gr.Row():
1410
+ with gr.Column():
1411
+ summary_output = gr.Markdown(
1412
+ label="πŸ“‹ Executive Summary",
1413
+ elem_classes=["markdown"],
1414
+ value="Executive summary will appear here..."
1415
+ )
1416
+
1417
+ # Event handling with the enhanced analyze_problem function
1418
+ analyze_button.click(
1419
+ fn=analyze_problem, # This is our new enhanced function
1420
+ inputs=[title_input, description_input, domain_input, user_input, context_input],
1421
+ outputs=[results_output, summary_output],
1422
+ show_progress="full"  # Gradio expects "full" / "minimal" / "hidden" here; a bare bool is deprecated
1423
+ )
1424
+
1425
+ # Enhanced example problems section
1426
+ gr.Markdown("""
1427
+ ### πŸ’‘ **Sample Problems to Test Enhanced Formatting:**
1428
+
1429
+ **Business Strategy:**
1430
+ - "How can we implement a sustainable remote work policy that maintains productivity and employee satisfaction?"
1431
+
1432
+ **Technology & Ethics:**
1433
+ - "What are the key considerations for implementing AI-powered decision making in healthcare while ensuring patient privacy and safety?"
1434
+
1435
+ **Environmental & Policy:**
1436
+ - "How should cities balance economic growth with environmental sustainability in urban planning decisions?"
1437
+
1438
+ **Social & Psychological:**
1439
+ - "What strategies can organizations use to improve mental health support while respecting employee privacy boundaries?"
1440
+
1441
+ **Innovation & Risk:**
1442
+ - "How can startups effectively validate product-market fit while managing limited resources and investor expectations?"
1443
+ """)
1444
+
1445
+ gr.Markdown("""
1446
+ ---
1447
+ ### πŸ”§ **Enhanced System Information:**
1448
+ - **Framework:** Agno AI Agent Framework with Enhanced Formatting
1449
+ - **Output Engine:** AgoraOutputFormatter with ASCII Art & Emojis
1450
+ - **Database:** Supabase (PostgreSQL + Vector Storage)
1451
+ - **AI Models:** Claude 3.5, GPT-4o, Mistral Large, SambaNova
1452
+ - **Version:** 2.0 (Enhanced UI/UX Edition)
1453
+ - **Features:** Real-time Analysis, Consensus Scoring, Beautiful Formatting
1454
+ """)
1455
+
1456
+ return demo
1457
+
1458
+
1459
+ # Additional utility function for testing the formatter
1460
+ def test_enhanced_formatting():
1461
+ """Test function to demonstrate the enhanced formatting capabilities"""
1462
+
1463
+ # This would typically be called with real data
1464
+ print("πŸ§ͺ Testing Enhanced Formatting...")
1465
+ print("βœ… AgoraOutputFormatter class ready for integration")
1466
+ print("βœ… Enhanced analyze_problem function ready")
1467
+ print("βœ… Gradio interface enhanced with new styling")
1468
+ print("πŸš€ Ready to provide beautiful AI analysis results!")
1469
+
1470
+
1471
+ # MAIN EXECUTION
1472
+ def main():
1473
+ """Main function to run the Agora system"""
1474
+ try:
1475
+ logger.info("πŸš€ Starting AI Democracy - Agora System")
1476
+
1477
+ # Create and launch Gradio interface
1478
+ demo = create_gradio_interface()
1479
+
1480
+ # Launch with configuration
1481
+ demo.launch(
1482
+ server_name="0.0.0.0",  # bind all interfaces so the hosted Space is reachable; use "127.0.0.1" for local-only runs
1483
+ server_port=7860,
1484
+ share=False, # Set to True if you want a public link
1485
+ debug=False,
1486
+ show_error=True,
1487
+ mcp_server=True
1488
+ )
1489
+
1490
+ except Exception as e:
1491
+ logger.error(f"❌ Failed to start Agora system: {str(e)}")
1492
+ import traceback
1493
+ logger.error(f"Full traceback: {traceback.format_exc()}")
1494
+
1495
+ if __name__ == "__main__":
1496
+ main()
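The results code above leans on `debate_round.consensus_score` without showing how such a score might be derived. As a rough illustration only (not the AgoraSystem implementation, which lives elsewhere in the file), a pairwise-similarity consensus over free-text agent responses could look like:

```python
# Hypothetical sketch: reduce several agent responses to one agreement score in [0, 1].
# This is NOT the project's scoring code -- just one plausible approach.
from difflib import SequenceMatcher
from itertools import combinations

def consensus_score(responses: list[str]) -> float:
    """Average pairwise text similarity of the responses (1.0 = identical)."""
    if len(responses) < 2:
        return 1.0  # a lone response trivially agrees with itself
    pairs = list(combinations(responses, 2))
    total = sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs)
    return total / len(pairs)
```

A production version would more likely compare embeddings (the stack already includes vector storage via Supabase), but the shape is the same: score every pair, then aggregate.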
requirements.txt ADDED
Binary file (3.47 kB).