Strategic analysis is time-consuming and quality varies widely. Analysts spend hours gathering data and drafting reports, with no systematic quality checks until peer review, which is often too late in the process.
## The Solution
This demo implements an agentic AI pattern where specialized agents collaborate autonomously: one gathers data, another drafts analysis, a third evaluates quality, and a fourth revises until standards are met. The self-correcting loop eliminates the "first draft = final draft" problem common in LLM applications.
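The gather → draft → evaluate → revise cycle described above can be sketched in a few lines. This is an illustrative outline only, not the repo's actual API: the agent functions (`gather_data`, `draft_analysis`, `evaluate`, `revise`) and the 7/10 threshold are stand-ins for what the real agents (LLM calls) would do.

```python
QUALITY_THRESHOLD = 7  # accept drafts scoring >= 7/10
MAX_REVISIONS = 3      # hard cap to prevent infinite loops

def gather_data(company: str) -> str:
    # Placeholder for the data-gathering agent (search, filings, etc.)
    return f"facts about {company}"

def draft_analysis(data: str) -> str:
    # Placeholder for the analyst agent that writes the first draft
    return f"SWOT draft based on: {data}"

def evaluate(draft: str) -> int:
    # Placeholder for the critic agent; a real one would return an LLM rubric score
    return 8 if "revised" in draft else 6

def revise(draft: str, score: int) -> str:
    # Placeholder for the reviser agent, which incorporates critic feedback
    return f"revised ({score}/10 feedback) {draft}"

def run_loop(company: str) -> dict:
    """Draft, then revise until the critic's score clears the threshold."""
    draft = draft_analysis(gather_data(company))
    revisions = 0
    score = evaluate(draft)
    while score < QUALITY_THRESHOLD and revisions < MAX_REVISIONS:
        draft = revise(draft, score)
        revisions += 1
        score = evaluate(draft)
    return {"draft": draft, "score": score, "revisions": revisions}
```

The loop structure is what eliminates the "first draft = final draft" problem: the first draft is only accepted if it already clears the quality bar.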
## Why This Matters
Most enterprise AI deployments fail not from bad models, but from a lack of quality gates. This architecture demonstrates how to build reliability into AI workflows, a pattern applicable to any domain requiring consistent output quality.
```bash
# Clone the repository
git clone https://github.com/vn6295337/Instant-SWOT-Agent.git
cd Instant-SWOT-Agent

# Create and activate virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
# Edit .env with your API keys

# Run the application (FastAPI + React UI)
python -m src.main api

# Or use make
make api
```
## Hugging Face Spaces Deployment

1. Create a new Space (Docker SDK)
2. Add this repository as the source
3. Set up Secrets (at least one LLM provider required):
   - `GROQ_API_KEY` (primary, recommended)
   - `GEMINI_API_KEY` (fallback)
   - `OPENROUTER_API_KEY` (fallback)
   - `TAVILY_API_KEY` (for live search data)

The system automatically falls back through providers if one fails.
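A provider fallback chain like the one described is straightforward to implement. The sketch below is illustrative, not the repo's actual implementation: the provider functions and `ProviderError` are hypothetical stand-ins (here, the primary is simulated as failing so the chain falls through to the next provider).

```python
class ProviderError(Exception):
    """Raised when a single LLM provider fails (rate limit, outage, etc.)."""

def call_groq(prompt: str) -> str:
    raise ProviderError("rate limited")  # simulate a failing primary

def call_gemini(prompt: str) -> str:
    return f"gemini answer to: {prompt}"

def call_openrouter(prompt: str) -> str:
    return f"openrouter answer to: {prompt}"

# Priority order: primary first, then fallbacks
PROVIDERS = [
    ("groq", call_groq),
    ("gemini", call_gemini),
    ("openrouter", call_openrouter),
]

def complete(prompt: str) -> tuple[str, str]:
    """Try each configured provider in order; return the first success."""
    errors = []
    for name, fn in PROVIDERS:
        try:
            return name, fn(prompt)
        except ProviderError as exc:
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

Keeping the chain as an ordered list makes the priority explicit and lets a deployment enable only the providers whose keys are set.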
## Requirements

- Python 3.11+
- At least one LLM API key (Groq, Gemini, or OpenRouter)
- Tavily API key (optional, for live search data)
## Usage Examples

### Web UI

```bash
# Start the FastAPI server with React frontend
python -m src.main api
```

Open http://localhost:7860, enter a company name (e.g., "Tesla", "NVIDIA", "Microsoft"), and click "Generate SWOT".
### CLI Usage

```bash
# Analyze a company from the command line
python -m src.main analyze --company "Apple" --strategy "Differentiation"
```
### Programmatic Usage

```python
from src.workflow.runner import run_self_correcting_workflow

# Generate SWOT analysis with a specific strategy
result = run_self_correcting_workflow(
    company_name="Apple",
    strategy_focus="Differentiation",
)

print(f"Score: {result['score']}/10")
print(f"Revisions: {result['revision_count']}")
print(f"SWOT Analysis:\n{result['draft_report']}")
```
## Testing

```bash
# Run tests
make test

# Or directly
python3 tests/test_self_correcting_loop.py
```
## Technical Characteristics

- **Analysis Time**: Typically under 10 seconds (depends on API latency)
- **Quality Loop**: Iterates until score ≥ 7/10 or a maximum of 3 revisions
- **Data Sources**: 6 MCP servers aggregating SEC EDGAR, FRED, Yahoo Finance, Tavily, and Finnhub APIs
- **Frontend**: React + TypeScript + Vite + Tailwind CSS
- **Backend**: FastAPI with async workflow execution
## Design Decisions

| Decision | Choice | Rationale |
|---|---|---|
| Orchestration | LangGraph | Native support for cyclic workflows; cleaner than raw LangChain for multi-agent patterns |
| LLM Provider | Groq (Llama 3.1 8B) | Sub-second inference enables tight feedback loops; cost-effective for demos |
| Quality Threshold | 7/10 | Balances quality vs. latency; lower values cause excessive loops, higher values rarely achievable |
| Max Revisions | 3 | Empirically, quality plateaus after 2-3 iterations; prevents infinite loops |
| Same Model for Critic | Intentional tradeoff | Production would use a stronger model for evaluation; kept simple for demo cost management |
| Web Search | Tavily API | Purpose-built for LLM applications; returns clean, structured content |
## Known Limitations

- **Self-evaluation bias**: The critic uses the same model family as the analyst. A production system would use a more capable evaluator model or human-in-the-loop review for high-stakes decisions.
- **Mock data visibility**: When the Tavily API is unavailable, the UI clearly indicates that cached data is being used.
## Contributing
Contributions are welcome! Please follow these steps: