Commit
Β·
59de368
1
Parent(s):
5134f75
Add Senior Research Analyst feature with R&D pipeline focus - New ResearchAnalystAgent for extracting high-value insights - 4 specialized research prompts for experiments, prototypes, and product decisions - Enhanced UI with dedicated research analysis tab - Streaming support and export functionality - Non-breaking integration preserving all existing workflows
Browse files- RESEARCH_ANALYST_FEATURE.md +143 -0
- agents.py +222 -114
- app.py +305 -159
- test_research_feature.py +116 -0
- utils/prompts.py +46 -70
RESEARCH_ANALYST_FEATURE.md
ADDED
|
@@ -0,0 +1,143 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Senior Research Analyst Feature
|
| 2 |
+
|
| 3 |
+
## Overview
|
| 4 |
+
|
| 5 |
+
A new **Senior Research Analyst** feature has been added to the PDF Analysis Orchestrator that focuses on extracting high-value, novel ideas and converting them into concrete R&D pipeline outcomes. This feature operates as a specialized agent that acts as a senior research analyst with deep expertise in product and engineering R&D pipelines.
|
| 6 |
+
|
| 7 |
+
## Key Capabilities
|
| 8 |
+
|
| 9 |
+
### π― Core Functionality
|
| 10 |
+
- **Extract High-Value Insights**: Identifies novel ideas, breakthrough concepts, and innovative approaches with significant product/engineering impact
|
| 11 |
+
- **Assess Commercial Viability**: Evaluates potential for practical application, market readiness, and competitive advantage
|
| 12 |
+
- **Generate R&D Pipeline Outcomes**: Converts insights into concrete, actionable items for:
|
| 13 |
+
- **Experiments**: Specific hypotheses to test, methodologies to validate
|
| 14 |
+
- **Prototypes**: Technical implementations to build and demonstrate
|
| 15 |
+
- **Product Decisions**: Strategic choices for development priorities and resource allocation
|
| 16 |
+
- **Prioritize by Impact**: Focuses on ideas with highest potential for transformative change and measurable business value
|
| 17 |
+
|
| 18 |
+
### π¬ Research Analysis Process
|
| 19 |
+
|
| 20 |
+
1. **Document Analysis**: Processes PDFs with research-focused chunking strategy for large documents
|
| 21 |
+
2. **Insight Extraction**: Identifies novel technical concepts, innovation opportunities, and breakthrough potential
|
| 22 |
+
3. **Synthesis**: Combines insights from multiple document sections into comprehensive R&D pipeline strategy
|
| 23 |
+
4. **Outcome Generation**: Produces structured analysis with clear next steps for engineering and product teams
|
| 24 |
+
|
| 25 |
+
## Implementation Details
|
| 26 |
+
|
| 27 |
+
### New Components
|
| 28 |
+
|
| 29 |
+
#### 1. ResearchAnalystAgent (`agents.py`)
|
| 30 |
+
- **Class**: `ResearchAnalystAgent(BaseAgent)`
|
| 31 |
+
- **Purpose**: Specialized agent for R&D pipeline analysis
|
| 32 |
+
- **Features**:
|
| 33 |
+
- Research-focused document processing
|
| 34 |
+
- Advanced synthesis of insights across document sections
|
| 35 |
+
- Structured output for experiments, prototypes, and product decisions
|
| 36 |
+
- Streaming support for real-time analysis feedback
|
| 37 |
+
|
| 38 |
+
#### 2. Research Prompts (`utils/prompts.py`)
|
| 39 |
+
Four new specialized prompts for research analysis:
|
| 40 |
+
|
| 41 |
+
1. **R&D Pipeline Analysis** (`research_pipeline`)
|
| 42 |
+
- Identifies novel ideas with high product/engineering impact
|
| 43 |
+
- Converts insights into concrete R&D pipeline outcomes
|
| 44 |
+
|
| 45 |
+
2. **Innovation Opportunity Assessment** (`innovation_assessment`)
|
| 46 |
+
- Assesses commercial viability and innovation potential
|
| 47 |
+
- Generates recommendations for experimental validation
|
| 48 |
+
|
| 49 |
+
3. **Experimental Design Framework** (`experimental_design`)
|
| 50 |
+
- Designs specific experiments and validation methodologies
|
| 51 |
+
- Includes success metrics and implementation timelines
|
| 52 |
+
|
| 53 |
+
4. **Prototype Development Roadmap** (`prototype_roadmap`)
|
| 54 |
+
- Creates technical implementation roadmaps
|
| 55 |
+
- Includes specifications, development phases, and success criteria
|
| 56 |
+
|
| 57 |
+
#### 3. UI Integration (`app.py`)
|
| 58 |
+
- **New Tab**: "π¬ Senior Research Analyst"
|
| 59 |
+
- **Features**:
|
| 60 |
+
- Dedicated interface for research analysis
|
| 61 |
+
- Research-specific prompt selection
|
| 62 |
+
- Enhanced output display (20-30 lines)
|
| 63 |
+
- Export functionality for research results
|
| 64 |
+
- Research insights summary panel
|
| 65 |
+
|
| 66 |
+
### Technical Features
|
| 67 |
+
|
| 68 |
+
#### Streaming Support
|
| 69 |
+
- Real-time feedback during analysis
|
| 70 |
+
- Progress indicators for large document processing
|
| 71 |
+
- Research-focused status messages
|
| 72 |
+
|
| 73 |
+
#### Large Document Handling
|
| 74 |
+
- Research-optimized chunking strategy
|
| 75 |
+
- Section-by-section analysis for comprehensive coverage
|
| 76 |
+
- Advanced synthesis of insights across sections
|
| 77 |
+
|
| 78 |
+
#### Export Capabilities
|
| 79 |
+
- Full export support (TXT, JSON, PDF)
|
| 80 |
+
- Research-specific formatting
|
| 81 |
+
- Structured output preservation
|
| 82 |
+
|
| 83 |
+
## Usage
|
| 84 |
+
|
| 85 |
+
### Basic Usage
|
| 86 |
+
1. Navigate to the "π¬ Senior Research Analyst" tab
|
| 87 |
+
2. Upload a research document (PDF)
|
| 88 |
+
3. Select a research-specific prompt or provide custom instructions
|
| 89 |
+
4. Click "π¬ Research Analysis" to start processing
|
| 90 |
+
5. Review the structured R&D pipeline outcomes
|
| 91 |
+
6. Export results if needed
|
| 92 |
+
|
| 93 |
+
### Example Prompts
|
| 94 |
+
- "Identify breakthrough concepts with high product/engineering impact and design specific experiments to validate them"
|
| 95 |
+
- "Assess the commercial viability of technical innovations and create prototype development roadmaps"
|
| 96 |
+
- "Extract novel methodologies and convert them into concrete R&D pipeline outcomes"
|
| 97 |
+
|
| 98 |
+
## Integration
|
| 99 |
+
|
| 100 |
+
### Non-Breaking Changes
|
| 101 |
+
- **Existing workflows remain unchanged**: All original functionality preserved
|
| 102 |
+
- **New agent addition**: ResearchAnalystAgent added to agent roster
|
| 103 |
+
- **Extended orchestrator**: MasterOrchestrator supports "research" target
|
| 104 |
+
- **UI enhancement**: New tab without affecting existing tabs
|
| 105 |
+
|
| 106 |
+
### Backward Compatibility
|
| 107 |
+
- All existing analysis functions work as before
|
| 108 |
+
- Original agent performance unaffected
|
| 109 |
+
- Existing prompts and exports remain functional
|
| 110 |
+
- No changes to core configuration or dependencies
|
| 111 |
+
|
| 112 |
+
## Benefits
|
| 113 |
+
|
| 114 |
+
### For Research Teams
|
| 115 |
+
- **Structured R&D Pipeline**: Clear path from insights to implementation
|
| 116 |
+
- **Actionable Outcomes**: Specific experiments, prototypes, and decisions
|
| 117 |
+
- **Impact Prioritization**: Focus on high-value innovations
|
| 118 |
+
- **Commercial Assessment**: Market readiness evaluation
|
| 119 |
+
|
| 120 |
+
### For Product/Engineering Teams
|
| 121 |
+
- **Concrete Next Steps**: Immediate actionable items
|
| 122 |
+
- **Technical Specifications**: Detailed implementation guidance
|
| 123 |
+
- **Risk Assessment**: Potential challenges and mitigation strategies
|
| 124 |
+
- **Resource Planning**: Clear development phases and requirements
|
| 125 |
+
|
| 126 |
+
## Future Enhancements
|
| 127 |
+
|
| 128 |
+
Potential areas for future development:
|
| 129 |
+
- Integration with project management tools
|
| 130 |
+
- Automated experiment tracking
|
| 131 |
+
- Prototype milestone monitoring
|
| 132 |
+
- Product decision impact measurement
|
| 133 |
+
- Research portfolio optimization
|
| 134 |
+
|
| 135 |
+
## Testing
|
| 136 |
+
|
| 137 |
+
The implementation includes comprehensive testing to ensure:
|
| 138 |
+
- All new components can be imported and initialized
|
| 139 |
+
- Research prompts are properly configured
|
| 140 |
+
- Orchestrator integration works correctly
|
| 141 |
+
- No impact on existing functionality
|
| 142 |
+
|
| 143 |
+
Run `python test_research_feature.py` to verify the implementation.
|
agents.py
CHANGED
|
@@ -5,8 +5,7 @@ import logging
|
|
| 5 |
from typing import Optional, Dict, Any, List, AsyncGenerator
|
| 6 |
import time
|
| 7 |
|
| 8 |
-
from utils import call_openai_chat, load_pdf_text_cached, load_pdf_text_chunked, get_document_metadata,
|
| 9 |
-
from utils.visual_output import VisualOutputGenerator
|
| 10 |
from config import Config
|
| 11 |
|
| 12 |
logger = logging.getLogger(__name__)
|
|
@@ -34,91 +33,26 @@ class BaseAgent:
|
|
| 34 |
# Core Analysis Agent
|
| 35 |
# --------------------
|
| 36 |
class AnalysisAgent(BaseAgent):
|
| 37 |
-
def __init__(self, name: str, model: str, tasks_completed: int = 0):
|
| 38 |
-
super().__init__(name, model, tasks_completed)
|
| 39 |
-
self.visual_generator = VisualOutputGenerator()
|
| 40 |
-
|
| 41 |
async def handle(self, user_id: str, prompt: str, file_path: Optional[str] = None, context: Optional[Dict[str, Any]] = None):
|
| 42 |
start_time = time.time()
|
| 43 |
|
| 44 |
-
# Check cache first - exact prompt match
|
| 45 |
-
if file_path:
|
| 46 |
-
cached_result = get_cached_analysis(file_path, prompt)
|
| 47 |
-
if cached_result:
|
| 48 |
-
logger.info(f"Returning cached analysis for {file_path} with exact prompt match")
|
| 49 |
-
return cached_result
|
| 50 |
-
|
| 51 |
if file_path:
|
| 52 |
# Get document metadata
|
| 53 |
metadata = get_document_metadata(file_path)
|
| 54 |
|
| 55 |
-
#
|
| 56 |
-
|
| 57 |
-
if cached_content:
|
| 58 |
-
logger.info(f"Using cached document content for {file_path}")
|
| 59 |
-
text = cached_content
|
| 60 |
-
else:
|
| 61 |
-
# Load and cache text
|
| 62 |
-
text = load_pdf_text_cached(file_path)
|
| 63 |
-
cache_document_content(file_path, text)
|
| 64 |
-
logger.info(f"Cached document content for {file_path}")
|
| 65 |
|
| 66 |
# Check if document needs chunking
|
| 67 |
if len(text) > Config.CHUNK_SIZE:
|
| 68 |
-
|
| 69 |
else:
|
| 70 |
content = f"User prompt: {prompt}\n\nDocument text:\n{text}"
|
| 71 |
-
result = await self._process_content(prompt, content, metadata, text)
|
| 72 |
else:
|
| 73 |
content = f"User prompt: {prompt}"
|
| 74 |
metadata = {}
|
| 75 |
-
result = await self._process_content(prompt, content, metadata, "")
|
| 76 |
-
|
| 77 |
-
# Cache the analysis result
|
| 78 |
-
if file_path:
|
| 79 |
-
cache_analysis(file_path, prompt, result)
|
| 80 |
-
|
| 81 |
-
return result
|
| 82 |
-
|
| 83 |
-
async def _process_content(self, prompt: str, content: str, metadata: Dict[str, Any], text: str) -> Dict[str, Any]:
|
| 84 |
-
"""Process content with visual formatting"""
|
| 85 |
-
start_time = time.time()
|
| 86 |
|
| 87 |
-
|
| 88 |
-
max_tokens = Config.OPENAI_MAX_TOKENS
|
| 89 |
-
|
| 90 |
-
system = """You are AnalysisAgent: an expert analyst who produces deeply insightful, actionable, and contextually relevant analysis.
|
| 91 |
-
|
| 92 |
-
ANALYSIS APPROACH:
|
| 93 |
-
- Provide sophisticated, nuanced insights that go beyond surface-level observations
|
| 94 |
-
- Identify underlying patterns, implications, and strategic opportunities
|
| 95 |
-
- Connect concepts to real-world applications and business value
|
| 96 |
-
- Offer specific, actionable recommendations with clear implementation paths
|
| 97 |
-
- Consider multiple perspectives and potential challenges
|
| 98 |
-
- Provide evidence-based conclusions with supporting rationale
|
| 99 |
-
|
| 100 |
-
CONTENT STRUCTURE:
|
| 101 |
-
- Start with a compelling executive summary that captures the essence
|
| 102 |
-
- Organize insights by strategic importance and implementation priority
|
| 103 |
-
- Include specific examples, case studies, and concrete applications
|
| 104 |
-
- Highlight unique opportunities and competitive advantages
|
| 105 |
-
- Address potential risks, challenges, and mitigation strategies
|
| 106 |
-
- Provide clear next steps with timelines and success metrics
|
| 107 |
-
|
| 108 |
-
QUALITY STANDARDS:
|
| 109 |
-
- Be precise and specific rather than generic
|
| 110 |
-
- Include quantifiable insights where possible (ROI, market size, timelines)
|
| 111 |
-
- Reference industry best practices and benchmarks
|
| 112 |
-
- Consider scalability, feasibility, and resource requirements
|
| 113 |
-
- Provide context for why recommendations matter
|
| 114 |
-
- Connect analysis to broader market trends and opportunities
|
| 115 |
-
|
| 116 |
-
FORMATTING:
|
| 117 |
-
- Use clear headings with strategic focus
|
| 118 |
-
- Include bullet points for easy scanning
|
| 119 |
-
- Highlight key insights with **bold** text
|
| 120 |
-
- Use emojis sparingly for visual appeal (π― π‘ π β‘ β
)
|
| 121 |
-
- Structure information by priority and actionability"""
|
| 122 |
|
| 123 |
try:
|
| 124 |
response = await call_openai_chat(
|
|
@@ -126,28 +60,23 @@ FORMATTING:
|
|
| 126 |
messages=[{"role": "system", "content": system},
|
| 127 |
{"role": "user", "content": content}],
|
| 128 |
temperature=Config.OPENAI_TEMPERATURE,
|
| 129 |
-
max_tokens=
|
| 130 |
)
|
| 131 |
except Exception as e:
|
| 132 |
logger.exception("AnalysisAgent failed")
|
| 133 |
response = f"Error during analysis: {str(e)}"
|
| 134 |
|
| 135 |
-
# Enhance with visual formatting
|
| 136 |
-
visual_response = self.visual_generator.format_analysis_with_visuals(response, metadata)
|
| 137 |
-
|
| 138 |
self.tasks_completed += 1
|
| 139 |
|
| 140 |
# Add processing metadata
|
| 141 |
processing_time = time.time() - start_time
|
| 142 |
result = {
|
| 143 |
-
"analysis":
|
| 144 |
"metadata": {
|
| 145 |
"processing_time": round(processing_time, 2),
|
| 146 |
"document_metadata": metadata,
|
| 147 |
"agent": self.name,
|
| 148 |
-
"tasks_completed": self.tasks_completed
|
| 149 |
-
"tokens_used": max_tokens,
|
| 150 |
-
"cached": False
|
| 151 |
}
|
| 152 |
}
|
| 153 |
|
|
@@ -155,36 +84,11 @@ FORMATTING:
|
|
| 155 |
|
| 156 |
async def _handle_large_document(self, prompt: str, text: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
|
| 157 |
"""Handle large documents by processing in chunks"""
|
| 158 |
-
# Use standard chunking
|
| 159 |
from utils import chunk_text
|
| 160 |
chunks = chunk_text(text, Config.CHUNK_SIZE)
|
| 161 |
-
metadata['chunk_size'] = Config.CHUNK_SIZE
|
| 162 |
-
metadata['chunk_overlap'] = 1000
|
| 163 |
-
metadata['total_chunks'] = len(chunks)
|
| 164 |
chunk_results = []
|
| 165 |
|
| 166 |
-
system = "
|
| 167 |
-
|
| 168 |
-
ANALYSIS APPROACH:
|
| 169 |
-
- Provide deep, nuanced insights that identify underlying patterns and strategic opportunities
|
| 170 |
-
- Connect concepts to real-world applications and business value
|
| 171 |
-
- Offer specific, actionable recommendations with clear implementation paths
|
| 172 |
-
- Consider multiple perspectives and potential challenges
|
| 173 |
-
- Provide evidence-based conclusions with supporting rationale
|
| 174 |
-
|
| 175 |
-
CHUNK ANALYSIS FOCUS:
|
| 176 |
-
- Extract the most important insights from this specific section
|
| 177 |
-
- Identify key concepts, data points, and implications
|
| 178 |
-
- Note any unique opportunities or competitive advantages mentioned
|
| 179 |
-
- Highlight specific examples, case studies, or applications
|
| 180 |
-
- Consider how this content relates to broader strategic themes
|
| 181 |
-
|
| 182 |
-
OUTPUT FORMAT:
|
| 183 |
-
- Start with key insights from this chunk
|
| 184 |
-
- Include specific examples and concrete applications
|
| 185 |
-
- Note strategic implications and opportunities
|
| 186 |
-
- Highlight any unique value propositions or competitive advantages
|
| 187 |
-
- Keep analysis focused and actionable"""
|
| 188 |
|
| 189 |
for i, chunk in enumerate(chunks):
|
| 190 |
content = f"User prompt: {prompt}\n\nDocument chunk {i+1}/{len(chunks)}:\n{chunk}"
|
|
@@ -207,7 +111,6 @@ OUTPUT FORMAT:
|
|
| 207 |
|
| 208 |
# Create final summary using hierarchical approach to avoid token limits
|
| 209 |
try:
|
| 210 |
-
from utils import create_hierarchical_summary
|
| 211 |
final_summary = await create_hierarchical_summary(
|
| 212 |
chunk_results=chunk_results,
|
| 213 |
prompt=prompt,
|
|
@@ -307,6 +210,205 @@ class ConversationAgent(BaseAgent):
|
|
| 307 |
return {"conversation": response}
|
| 308 |
|
| 309 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 310 |
# --------------------
|
| 311 |
# Master Orchestrator - Focused on Analysis
|
| 312 |
# --------------------
|
|
@@ -336,6 +438,11 @@ class MasterOrchestrator:
|
|
| 336 |
if "collab" in self.agents:
|
| 337 |
asyncio.create_task(self.agents["collab"].handle(user_id, payload, file_path))
|
| 338 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 339 |
return results
|
| 340 |
|
| 341 |
async def handle_user_prompt_streaming(self, user_id: str, prompt: str, file_path: Optional[str] = None, targets: Optional[List[str]] = None) -> AsyncGenerator[str, None]:
|
|
@@ -346,6 +453,9 @@ class MasterOrchestrator:
|
|
| 346 |
if "analysis" in targets and "analysis" in self.agents:
|
| 347 |
async for chunk in self.agents["analysis"].handle_streaming(user_id, prompt, file_path):
|
| 348 |
yield chunk
|
|
|
|
|
|
|
|
|
|
| 349 |
else:
|
| 350 |
# Fallback to regular handling
|
| 351 |
result = await self.handle_user_prompt(user_id, prompt, file_path, targets)
|
|
@@ -380,18 +490,16 @@ class MasterOrchestrator:
|
|
| 380 |
results["batch_results"].append(error_result)
|
| 381 |
results["failed"] += 1
|
| 382 |
|
| 383 |
-
# Create batch summary
|
| 384 |
if results["successful"] > 0:
|
| 385 |
successful_analyses = [r["analysis"] for r in results["batch_results"] if "error" not in r]
|
| 386 |
-
summary_prompt = f"Please provide a comprehensive summary of the following batch analysis results. Original prompt: {prompt}\n\nAnalyses:\n" + "\n\n---\n\n".join(successful_analyses)
|
| 387 |
|
| 388 |
try:
|
| 389 |
-
summary_response = await
|
|
|
|
|
|
|
| 390 |
model=Config.OPENAI_MODEL,
|
| 391 |
-
|
| 392 |
-
{"role": "user", "content": summary_prompt}],
|
| 393 |
-
temperature=Config.OPENAI_TEMPERATURE,
|
| 394 |
-
max_tokens=Config.OPENAI_MAX_TOKENS
|
| 395 |
)
|
| 396 |
results["summary"]["batch_analysis"] = summary_response
|
| 397 |
except Exception as e:
|
|
@@ -404,4 +512,4 @@ class MasterOrchestrator:
|
|
| 404 |
"success_rate": f"{(results['successful'] / len(file_paths)) * 100:.1f}%" if file_paths else "0%"
|
| 405 |
}
|
| 406 |
|
| 407 |
-
return results
|
|
|
|
| 5 |
from typing import Optional, Dict, Any, List, AsyncGenerator
|
| 6 |
import time
|
| 7 |
|
| 8 |
+
from utils import call_openai_chat, load_pdf_text_cached, load_pdf_text_chunked, get_document_metadata, create_hierarchical_summary
|
|
|
|
| 9 |
from config import Config
|
| 10 |
|
| 11 |
logger = logging.getLogger(__name__)
|
|
|
|
| 33 |
# Core Analysis Agent
|
| 34 |
# --------------------
|
| 35 |
class AnalysisAgent(BaseAgent):
|
|
|
|
|
|
|
|
|
|
|
|
|
| 36 |
async def handle(self, user_id: str, prompt: str, file_path: Optional[str] = None, context: Optional[Dict[str, Any]] = None):
|
| 37 |
start_time = time.time()
|
| 38 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 39 |
if file_path:
|
| 40 |
# Get document metadata
|
| 41 |
metadata = get_document_metadata(file_path)
|
| 42 |
|
| 43 |
+
# Load text with caching
|
| 44 |
+
text = load_pdf_text_cached(file_path)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 45 |
|
| 46 |
# Check if document needs chunking
|
| 47 |
if len(text) > Config.CHUNK_SIZE:
|
| 48 |
+
return await self._handle_large_document(prompt, text, metadata)
|
| 49 |
else:
|
| 50 |
content = f"User prompt: {prompt}\n\nDocument text:\n{text}"
|
|
|
|
| 51 |
else:
|
| 52 |
content = f"User prompt: {prompt}"
|
| 53 |
metadata = {}
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 54 |
|
| 55 |
+
system = "You are AnalysisAgent: produce concise insights and structured summaries. Adapt your language and complexity to the target audience. Provide clear, actionable insights with appropriate examples and analogies for complex topics."
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 56 |
|
| 57 |
try:
|
| 58 |
response = await call_openai_chat(
|
|
|
|
| 60 |
messages=[{"role": "system", "content": system},
|
| 61 |
{"role": "user", "content": content}],
|
| 62 |
temperature=Config.OPENAI_TEMPERATURE,
|
| 63 |
+
max_tokens=Config.OPENAI_MAX_TOKENS
|
| 64 |
)
|
| 65 |
except Exception as e:
|
| 66 |
logger.exception("AnalysisAgent failed")
|
| 67 |
response = f"Error during analysis: {str(e)}"
|
| 68 |
|
|
|
|
|
|
|
|
|
|
| 69 |
self.tasks_completed += 1
|
| 70 |
|
| 71 |
# Add processing metadata
|
| 72 |
processing_time = time.time() - start_time
|
| 73 |
result = {
|
| 74 |
+
"analysis": response,
|
| 75 |
"metadata": {
|
| 76 |
"processing_time": round(processing_time, 2),
|
| 77 |
"document_metadata": metadata,
|
| 78 |
"agent": self.name,
|
| 79 |
+
"tasks_completed": self.tasks_completed
|
|
|
|
|
|
|
| 80 |
}
|
| 81 |
}
|
| 82 |
|
|
|
|
| 84 |
|
| 85 |
async def _handle_large_document(self, prompt: str, text: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
|
| 86 |
"""Handle large documents by processing in chunks"""
|
|
|
|
| 87 |
from utils import chunk_text
|
| 88 |
chunks = chunk_text(text, Config.CHUNK_SIZE)
|
|
|
|
|
|
|
|
|
|
| 89 |
chunk_results = []
|
| 90 |
|
| 91 |
+
system = "You are AnalysisAgent: produce concise insights and structured summaries. Adapt your language and complexity to the target audience. Provide clear, actionable insights with appropriate examples and analogies for complex topics."
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 92 |
|
| 93 |
for i, chunk in enumerate(chunks):
|
| 94 |
content = f"User prompt: {prompt}\n\nDocument chunk {i+1}/{len(chunks)}:\n{chunk}"
|
|
|
|
| 111 |
|
| 112 |
# Create final summary using hierarchical approach to avoid token limits
|
| 113 |
try:
|
|
|
|
| 114 |
final_summary = await create_hierarchical_summary(
|
| 115 |
chunk_results=chunk_results,
|
| 116 |
prompt=prompt,
|
|
|
|
| 210 |
return {"conversation": response}
|
| 211 |
|
| 212 |
|
| 213 |
+
# --------------------
|
| 214 |
+
# Senior Research Analyst Agent
|
| 215 |
+
# --------------------
|
| 216 |
+
class ResearchAnalystAgent(BaseAgent):
|
| 217 |
+
async def handle(self, user_id: str, prompt: str, file_path: Optional[str] = None, context: Optional[Dict[str, Any]] = None):
|
| 218 |
+
start_time = time.time()
|
| 219 |
+
|
| 220 |
+
if file_path:
|
| 221 |
+
# Get document metadata
|
| 222 |
+
metadata = get_document_metadata(file_path)
|
| 223 |
+
|
| 224 |
+
# Load text with caching
|
| 225 |
+
text = load_pdf_text_cached(file_path)
|
| 226 |
+
|
| 227 |
+
# Check if document needs chunking
|
| 228 |
+
if len(text) > Config.CHUNK_SIZE:
|
| 229 |
+
return await self._handle_large_document_research(prompt, text, metadata)
|
| 230 |
+
else:
|
| 231 |
+
content = f"User prompt: {prompt}\n\nDocument text:\n{text}"
|
| 232 |
+
else:
|
| 233 |
+
content = f"User prompt: {prompt}"
|
| 234 |
+
metadata = {}
|
| 235 |
+
|
| 236 |
+
system = """You are a Senior Research Analyst with deep expertise in product and engineering R&D pipelines. Your role is to:
|
| 237 |
+
|
| 238 |
+
1. **Extract High-Value Insights**: Identify novel ideas, breakthrough concepts, and innovative approaches that could drive significant product/engineering impact.
|
| 239 |
+
|
| 240 |
+
2. **Assess Commercial Viability**: Evaluate the potential for practical application, market readiness, and competitive advantage.
|
| 241 |
+
|
| 242 |
+
3. **Generate R&D Pipeline Outcomes**: Convert insights into concrete, actionable items for:
|
| 243 |
+
- **Experiments**: Specific hypotheses to test, methodologies to validate
|
| 244 |
+
- **Prototypes**: Technical implementations to build and demonstrate
|
| 245 |
+
- **Product Decisions**: Strategic choices for development priorities and resource allocation
|
| 246 |
+
|
| 247 |
+
4. **Prioritize by Impact**: Focus on ideas with the highest potential for transformative change and measurable business value.
|
| 248 |
+
|
| 249 |
+
Provide structured analysis with clear next steps that engineering and product teams can immediately act upon."""
|
| 250 |
+
|
| 251 |
+
try:
|
| 252 |
+
response = await call_openai_chat(
|
| 253 |
+
model=self.model,
|
| 254 |
+
messages=[{"role": "system", "content": system},
|
| 255 |
+
{"role": "user", "content": content}],
|
| 256 |
+
temperature=0.1, # Lower temperature for more focused analysis
|
| 257 |
+
max_tokens=Config.OPENAI_MAX_TOKENS * 2 # More tokens for detailed research analysis
|
| 258 |
+
)
|
| 259 |
+
except Exception as e:
|
| 260 |
+
logger.exception("ResearchAnalystAgent failed")
|
| 261 |
+
response = f"Error during research analysis: {str(e)}"
|
| 262 |
+
|
| 263 |
+
self.tasks_completed += 1
|
| 264 |
+
|
| 265 |
+
# Add processing metadata
|
| 266 |
+
processing_time = time.time() - start_time
|
| 267 |
+
result = {
|
| 268 |
+
"research_analysis": response,
|
| 269 |
+
"metadata": {
|
| 270 |
+
"processing_time": round(processing_time, 2),
|
| 271 |
+
"document_metadata": metadata,
|
| 272 |
+
"agent": self.name,
|
| 273 |
+
"tasks_completed": self.tasks_completed,
|
| 274 |
+
"analysis_type": "research_and_development"
|
| 275 |
+
}
|
| 276 |
+
}
|
| 277 |
+
|
| 278 |
+
return result
|
| 279 |
+
|
| 280 |
+
async def _handle_large_document_research(self, prompt: str, text: str, metadata: Dict[str, Any]) -> Dict[str, Any]:
|
| 281 |
+
"""Handle large documents with research-focused chunking strategy"""
|
| 282 |
+
from utils import chunk_text
|
| 283 |
+
chunks = chunk_text(text, Config.CHUNK_SIZE)
|
| 284 |
+
chunk_results = []
|
| 285 |
+
|
| 286 |
+
system = """You are a Senior Research Analyst extracting high-value insights from document sections. Focus on:
|
| 287 |
+
- Novel technical concepts and methodologies
|
| 288 |
+
- Innovation opportunities and breakthrough potential
|
| 289 |
+
- Practical applications and commercial viability
|
| 290 |
+
- R&D pipeline implications
|
| 291 |
+
|
| 292 |
+
Provide structured insights that can feed into experiments, prototypes, and product decisions."""
|
| 293 |
+
|
| 294 |
+
for i, chunk in enumerate(chunks):
|
| 295 |
+
content = f"User prompt: {prompt}\n\nDocument section {i+1}/{len(chunks)}:\n{chunk}"
|
| 296 |
+
|
| 297 |
+
try:
|
| 298 |
+
response = await call_openai_chat(
|
| 299 |
+
model=self.model,
|
| 300 |
+
messages=[{"role": "system", "content": system},
|
| 301 |
+
{"role": "user", "content": content}],
|
| 302 |
+
temperature=0.1,
|
| 303 |
+
max_tokens=Config.OPENAI_MAX_TOKENS
|
| 304 |
+
)
|
| 305 |
+
chunk_results.append(f"--- Research Insights from Section {i+1} ---\n{response}")
|
| 306 |
+
except Exception as e:
|
| 307 |
+
logger.exception(f"ResearchAnalystAgent failed on chunk {i+1}")
|
| 308 |
+
chunk_results.append(f"--- Section {i+1} Analysis Error ---\nError: {str(e)}")
|
| 309 |
+
|
| 310 |
+
# Combine chunk results with research synthesis
|
| 311 |
+
try:
|
| 312 |
+
research_summary = await self._synthesize_research_insights(
|
| 313 |
+
chunk_results=chunk_results,
|
| 314 |
+
prompt=prompt,
|
| 315 |
+
model=self.model
|
| 316 |
+
)
|
| 317 |
+
except Exception as e:
|
| 318 |
+
logger.exception("ResearchAnalystAgent failed on research synthesis")
|
| 319 |
+
research_summary = f"Error creating research synthesis: {str(e)}\n\nSection Results:\n{chr(10).join(chunk_results)}"
|
| 320 |
+
|
| 321 |
+
return {
|
| 322 |
+
"research_analysis": research_summary,
|
| 323 |
+
"metadata": {
|
| 324 |
+
"processing_method": "research_chunked",
|
| 325 |
+
"chunks_processed": len(chunks),
|
| 326 |
+
"document_metadata": metadata,
|
| 327 |
+
"agent": self.name,
|
| 328 |
+
"tasks_completed": self.tasks_completed,
|
| 329 |
+
"analysis_type": "research_and_development"
|
| 330 |
+
}
|
| 331 |
+
}
|
| 332 |
+
|
| 333 |
+
async def _synthesize_research_insights(self, chunk_results: List[str], prompt: str, model: str) -> str:
|
| 334 |
+
"""Synthesize research insights from multiple document sections"""
|
| 335 |
+
synthesis_prompt = f"""
|
| 336 |
+
As a Senior Research Analyst, synthesize the following research insights into a comprehensive R&D pipeline strategy:
|
| 337 |
+
|
| 338 |
+
Original Analysis Request: {prompt}
|
| 339 |
+
|
| 340 |
+
Section Analysis Results:
|
| 341 |
+
{chr(10).join(chunk_results)}
|
| 342 |
+
|
| 343 |
+
Provide a structured synthesis that includes:
|
| 344 |
+
|
| 345 |
+
1. **Key Innovation Opportunities**: The most promising novel ideas with highest impact potential
|
| 346 |
+
2. **Technical Breakthroughs**: Specific technical concepts that could drive significant advancement
|
| 347 |
+
3. **R&D Pipeline Roadmap**:
|
| 348 |
+
- **Phase 1 Experiments**: Immediate hypotheses to test (3-5 specific experiments)
|
| 349 |
+
- **Phase 2 Prototypes**: Technical implementations to build (2-3 prototype concepts)
|
| 350 |
+
- **Phase 3 Product Decisions**: Strategic choices for development priorities (2-3 key decisions)
|
| 351 |
+
|
| 352 |
+
4. **Impact Assessment**: Expected outcomes and measurable business value
|
| 353 |
+
5. **Risk Mitigation**: Potential challenges and mitigation strategies
|
| 354 |
+
|
| 355 |
+
Focus on actionable outcomes that engineering and product teams can immediately implement.
|
| 356 |
+
"""
|
| 357 |
+
|
| 358 |
+
try:
|
| 359 |
+
response = await call_openai_chat(
|
| 360 |
+
model=model,
|
| 361 |
+
messages=[{"role": "user", "content": synthesis_prompt}],
|
| 362 |
+
temperature=0.1,
|
| 363 |
+
max_tokens=8000 # Larger context for comprehensive synthesis
|
| 364 |
+
)
|
| 365 |
+
return response
|
| 366 |
+
except Exception as e:
|
| 367 |
+
logger.exception("Research synthesis failed")
|
| 368 |
+
return f"Research synthesis error: {str(e)}"
|
| 369 |
+
|
| 370 |
+
async def handle_streaming(self, user_id: str, prompt: str, file_path: Optional[str] = None, context: Optional[Dict[str, Any]] = None) -> AsyncGenerator[str, None]:
|
| 371 |
+
"""Streaming version of research analysis"""
|
| 372 |
+
yield "π¬ Starting senior research analysis..."
|
| 373 |
+
|
| 374 |
+
if file_path:
|
| 375 |
+
metadata = get_document_metadata(file_path)
|
| 376 |
+
yield f"π Research document loaded: {metadata.get('page_count', 0)} pages, {metadata.get('file_size', 0) / 1024:.1f} KB"
|
| 377 |
+
|
| 378 |
+
text = load_pdf_text_cached(file_path)
|
| 379 |
+
|
| 380 |
+
if len(text) > Config.CHUNK_SIZE:
|
| 381 |
+
yield "π Large document detected, applying research-focused chunking strategy..."
|
| 382 |
+
from utils import chunk_text
|
| 383 |
+
chunks = chunk_text(text, Config.CHUNK_SIZE)
|
| 384 |
+
yield f"π Analyzing {len(chunks)} sections for innovation opportunities..."
|
| 385 |
+
|
| 386 |
+
# Process chunks with research focus
|
| 387 |
+
for i, chunk in enumerate(chunks):
|
| 388 |
+
yield f"βοΈ Extracting insights from research section {i+1}/{len(chunks)}..."
|
| 389 |
+
await asyncio.sleep(0.1) # Simulate processing time
|
| 390 |
+
|
| 391 |
+
yield "π Synthesizing research insights into R&D pipeline strategy..."
|
| 392 |
+
await asyncio.sleep(0.3)
|
| 393 |
+
yield "π― Generating concrete experiments, prototypes, and product decisions..."
|
| 394 |
+
await asyncio.sleep(0.2)
|
| 395 |
+
yield "β
Research analysis complete!"
|
| 396 |
+
else:
|
| 397 |
+
yield "β‘ Analyzing document for high-value R&D insights..."
|
| 398 |
+
await asyncio.sleep(0.3)
|
| 399 |
+
yield "π― Converting insights into actionable R&D pipeline outcomes..."
|
| 400 |
+
await asyncio.sleep(0.2)
|
| 401 |
+
yield "β
Research analysis complete!"
|
| 402 |
+
else:
|
| 403 |
+
yield "β‘ Processing research analysis request..."
|
| 404 |
+
await asyncio.sleep(0.2)
|
| 405 |
+
yield "β
Research analysis complete!"
|
| 406 |
+
|
| 407 |
+
# Get the actual result
|
| 408 |
+
result = await self.handle(user_id, prompt, file_path, context)
|
| 409 |
+
yield f"\nπ Research Analysis Result:\n{result.get('research_analysis', 'No result')}"
|
| 410 |
+
|
| 411 |
+
|
| 412 |
# --------------------
|
| 413 |
# Master Orchestrator - Focused on Analysis
|
| 414 |
# --------------------
|
|
|
|
| 438 |
if "collab" in self.agents:
|
| 439 |
asyncio.create_task(self.agents["collab"].handle(user_id, payload, file_path))
|
| 440 |
|
| 441 |
+
# Research analysis functionality
|
| 442 |
+
if "research" in targets and "research" in self.agents:
|
| 443 |
+
research_res = await self.agents["research"].handle(user_id, prompt, file_path)
|
| 444 |
+
results.update(research_res)
|
| 445 |
+
|
| 446 |
return results
|
| 447 |
|
| 448 |
async def handle_user_prompt_streaming(self, user_id: str, prompt: str, file_path: Optional[str] = None, targets: Optional[List[str]] = None) -> AsyncGenerator[str, None]:
|
|
|
|
| 453 |
if "analysis" in targets and "analysis" in self.agents:
|
| 454 |
async for chunk in self.agents["analysis"].handle_streaming(user_id, prompt, file_path):
|
| 455 |
yield chunk
|
| 456 |
+
elif "research" in targets and "research" in self.agents:
|
| 457 |
+
async for chunk in self.agents["research"].handle_streaming(user_id, prompt, file_path):
|
| 458 |
+
yield chunk
|
| 459 |
else:
|
| 460 |
# Fallback to regular handling
|
| 461 |
result = await self.handle_user_prompt(user_id, prompt, file_path, targets)
|
|
|
|
| 490 |
results["batch_results"].append(error_result)
|
| 491 |
results["failed"] += 1
|
| 492 |
|
| 493 |
+
# Create batch summary using hierarchical approach
|
| 494 |
if results["successful"] > 0:
|
| 495 |
successful_analyses = [r["analysis"] for r in results["batch_results"] if "error" not in r]
|
|
|
|
| 496 |
|
| 497 |
try:
|
| 498 |
+
summary_response = await create_hierarchical_summary(
|
| 499 |
+
chunk_results=successful_analyses,
|
| 500 |
+
prompt=f"Batch analysis summary for: {prompt}",
|
| 501 |
model=Config.OPENAI_MODEL,
|
| 502 |
+
max_tokens=6000
|
|
|
|
|
|
|
|
|
|
| 503 |
)
|
| 504 |
results["summary"]["batch_analysis"] = summary_response
|
| 505 |
except Exception as e:
|
|
|
|
| 512 |
"success_rate": f"{(results['successful'] / len(file_paths)) * 100:.1f}%" if file_paths else "0%"
|
| 513 |
}
|
| 514 |
|
| 515 |
+
return results
|
app.py
CHANGED
|
@@ -1,18 +1,18 @@
|
|
| 1 |
-
# PDF Analysis & Orchestrator
|
|
|
|
| 2 |
import os
|
| 3 |
import asyncio
|
| 4 |
import uuid
|
| 5 |
-
import re
|
| 6 |
from pathlib import Path
|
| 7 |
from typing import Optional, List, Tuple
|
| 8 |
import time
|
| 9 |
-
from datetime import datetime
|
| 10 |
|
| 11 |
import gradio as gr
|
| 12 |
from agents import (
|
| 13 |
AnalysisAgent,
|
| 14 |
CollaborationAgent,
|
| 15 |
ConversationAgent,
|
|
|
|
| 16 |
MasterOrchestrator,
|
| 17 |
)
|
| 18 |
from utils import load_pdf_text
|
|
@@ -25,27 +25,20 @@ from config import Config
|
|
| 25 |
# ------------------------
|
| 26 |
# Initialize Components
|
| 27 |
# ------------------------
|
| 28 |
-
|
| 29 |
-
Config.ensure_directories()
|
| 30 |
-
except Exception as e:
|
| 31 |
-
print(f"Warning: Could not ensure directories: {e}")
|
| 32 |
|
| 33 |
# Agent Roster - Focused on Analysis & Orchestration
|
| 34 |
AGENTS = {
|
| 35 |
"analysis": AnalysisAgent(name="AnalysisAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 36 |
"collab": CollaborationAgent(name="CollaborationAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 37 |
"conversation": ConversationAgent(name="ConversationAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
|
|
|
| 38 |
}
|
| 39 |
ORCHESTRATOR = MasterOrchestrator(agents=AGENTS)
|
| 40 |
|
| 41 |
# Initialize managers
|
| 42 |
-
|
| 43 |
-
|
| 44 |
-
EXPORT_MANAGER = ExportManager()
|
| 45 |
-
except Exception as e:
|
| 46 |
-
print(f"Warning: Could not initialize managers: {e}")
|
| 47 |
-
PROMPT_MANAGER = None
|
| 48 |
-
EXPORT_MANAGER = None
|
| 49 |
|
| 50 |
# ------------------------
|
| 51 |
# File Handling
|
|
@@ -85,160 +78,178 @@ def handle_analysis(file, prompt, username="anonymous", use_streaming=False):
|
|
| 85 |
if file is None:
|
| 86 |
return "Please upload a PDF.", None, None
|
| 87 |
|
| 88 |
-
|
| 89 |
-
|
| 90 |
-
|
| 91 |
-
|
| 92 |
-
|
| 93 |
-
|
| 94 |
-
|
| 95 |
-
|
| 96 |
-
|
| 97 |
-
|
| 98 |
-
|
| 99 |
-
|
| 100 |
-
|
| 101 |
-
|
| 102 |
-
|
| 103 |
-
|
| 104 |
-
|
| 105 |
-
|
| 106 |
-
|
| 107 |
-
|
| 108 |
-
ORCHESTRATOR.handle_user_prompt,
|
| 109 |
user_id=username,
|
| 110 |
prompt=prompt,
|
| 111 |
-
file_path=
|
| 112 |
targets=["analysis"]
|
| 113 |
-
)
|
| 114 |
-
|
| 115 |
-
metadata = result.get("metadata", {}) if isinstance(result, dict) else {}
|
| 116 |
-
|
| 117 |
-
if cached_content:
|
| 118 |
-
status = "β
**Analysis Complete** - Fresh analysis using cached document"
|
| 119 |
-
else:
|
| 120 |
-
status = "β
**Analysis Complete** - Fresh analysis generated and cached"
|
| 121 |
|
| 122 |
-
|
| 123 |
-
|
| 124 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 125 |
|
| 126 |
def handle_batch_analysis(files, prompt, username="anonymous"):
|
| 127 |
"""Handle batch analysis of multiple PDFs"""
|
| 128 |
if not files or len(files) == 0:
|
| 129 |
return "Please upload at least one PDF.", None, None
|
| 130 |
|
| 131 |
-
|
| 132 |
-
|
| 133 |
-
|
| 134 |
-
|
| 135 |
validate_file_size(file)
|
| 136 |
path = save_uploaded_file(file, username)
|
| 137 |
file_paths.append(path)
|
| 138 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 139 |
result = run_async(
|
| 140 |
-
ORCHESTRATOR.
|
| 141 |
user_id=username,
|
| 142 |
prompt=prompt,
|
| 143 |
-
|
| 144 |
-
targets=["
|
| 145 |
)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 146 |
|
| 147 |
-
#
|
| 148 |
-
|
| 149 |
-
|
| 150 |
-
|
| 151 |
-
|
| 152 |
-
|
| 153 |
-
|
| 154 |
-
|
| 155 |
-
|
| 156 |
-
|
| 157 |
-
|
| 158 |
-
|
| 159 |
-
|
| 160 |
-
|
| 161 |
-
|
| 162 |
-
formatted_output += f"\n--- File {i+1}: {Path(file_result.get('file_path', 'Unknown')).name} ---\n"
|
| 163 |
-
if "error" in file_result:
|
| 164 |
-
formatted_output += f"β Error: {file_result['error']}\n"
|
| 165 |
-
else:
|
| 166 |
-
formatted_output += f"β
{file_result.get('analysis', 'No analysis')}\n"
|
| 167 |
-
|
| 168 |
-
return formatted_output, None, None
|
| 169 |
-
except Exception as e:
|
| 170 |
-
return f"Error during batch analysis: {str(e)}", None, None
|
| 171 |
|
| 172 |
def handle_export(result_text, export_format, username="anonymous"):
|
| 173 |
-
"""Handle export of analysis results
|
| 174 |
if not result_text or result_text.strip() == "":
|
| 175 |
return "No content to export.", None
|
| 176 |
|
| 177 |
-
if not EXPORT_MANAGER:
|
| 178 |
-
return "Export functionality not available.", None
|
| 179 |
-
|
| 180 |
try:
|
| 181 |
-
# Create a unique filename
|
| 182 |
-
timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
|
| 183 |
-
filename = f"analysis_{username}_{timestamp}.{export_format}"
|
| 184 |
-
|
| 185 |
if export_format == "txt":
|
| 186 |
-
|
| 187 |
-
clean_text = re.sub(r'<[^>]+>', '', result_text) # Remove HTML tags
|
| 188 |
-
clean_text = re.sub(r'\n\s*\n', '\n\n', clean_text) # Clean up spacing
|
| 189 |
-
filepath = EXPORT_MANAGER.export_text(clean_text, filename=filename)
|
| 190 |
elif export_format == "json":
|
| 191 |
-
data = {
|
| 192 |
-
|
| 193 |
-
"exported_by": username,
|
| 194 |
-
"timestamp": time.time(),
|
| 195 |
-
"export_date": datetime.now().isoformat(),
|
| 196 |
-
"format": export_format
|
| 197 |
-
}
|
| 198 |
-
filepath = EXPORT_MANAGER.export_json(data, filename=filename)
|
| 199 |
elif export_format == "pdf":
|
| 200 |
-
filepath = EXPORT_MANAGER.export_pdf(result_text,
|
| 201 |
else:
|
| 202 |
return f"Unsupported export format: {export_format}", None
|
| 203 |
|
| 204 |
-
|
| 205 |
-
success_msg = f"""
|
| 206 |
-
<div style="background: #d4edda; border: 1px solid #c3e6cb; border-radius: 8px; padding: 15px; margin: 10px 0;">
|
| 207 |
-
<h4 style="color: #155724; margin: 0 0 10px 0;">β
Export Successful!</h4>
|
| 208 |
-
<p style="color: #155724; margin: 0 0 10px 0;">Your analysis has been exported as <strong>{export_format.upper()}</strong> format.</p>
|
| 209 |
-
<p style="color: #155724; margin: 0; font-size: 14px;">Filename: <code>{filename}</code></p>
|
| 210 |
-
</div>
|
| 211 |
-
"""
|
| 212 |
-
|
| 213 |
-
return success_msg, filepath
|
| 214 |
except Exception as e:
|
| 215 |
-
|
| 216 |
-
<div style="background: #f8d7da; border: 1px solid #f5c6cb; border-radius: 8px; padding: 15px; margin: 10px 0;">
|
| 217 |
-
<h4 style="color: #721c24; margin: 0 0 10px 0;">β Export Failed</h4>
|
| 218 |
-
<p style="color: #721c24; margin: 0;">Error: {str(e)}</p>
|
| 219 |
-
</div>
|
| 220 |
-
"""
|
| 221 |
-
return error_msg, None
|
| 222 |
|
| 223 |
def get_custom_prompts():
|
| 224 |
"""Get available custom prompts"""
|
| 225 |
-
if not PROMPT_MANAGER:
|
| 226 |
-
return []
|
| 227 |
prompts = PROMPT_MANAGER.get_all_prompts()
|
| 228 |
return list(prompts.keys())
|
| 229 |
|
| 230 |
def load_custom_prompt(prompt_id):
|
| 231 |
"""Load a custom prompt template"""
|
| 232 |
-
if not PROMPT_MANAGER:
|
| 233 |
-
return ""
|
| 234 |
return PROMPT_MANAGER.get_prompt(prompt_id) or ""
|
| 235 |
|
| 236 |
# ------------------------
|
| 237 |
-
# Gradio UI -
|
| 238 |
# ------------------------
|
| 239 |
with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as demo:
|
| 240 |
gr.Markdown("# π PDF Analysis & Orchestrator - Intelligent Document Processing")
|
| 241 |
-
gr.Markdown("Upload PDFs and provide instructions for analysis, summarization, or explanation.")
|
| 242 |
|
| 243 |
with gr.Tabs():
|
| 244 |
# Single Document Analysis Tab
|
|
@@ -256,6 +267,14 @@ with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as d
|
|
| 256 |
value=None
|
| 257 |
)
|
| 258 |
load_prompt_btn = gr.Button("Load Prompt", size="sm")
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 259 |
|
| 260 |
with gr.Column(scale=2):
|
| 261 |
gr.Markdown("### Analysis Instructions")
|
|
@@ -272,38 +291,79 @@ with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as d
|
|
| 272 |
# Results Section
|
| 273 |
with gr.Row():
|
| 274 |
with gr.Column(scale=2):
|
| 275 |
-
gr.
|
| 276 |
-
|
| 277 |
-
value="**Ready to analyze documents**\n\nUpload a PDF and enter your analysis instructions to get started.",
|
| 278 |
-
label="Analysis Result",
|
| 279 |
-
show_copy_button=True
|
| 280 |
-
)
|
| 281 |
-
status_box = gr.Markdown(
|
| 282 |
-
value="**π Status:** Ready to analyze documents\n\n**π‘ Tip:** Same document + same prompt = instant cached response!",
|
| 283 |
-
label="Status & Performance"
|
| 284 |
-
)
|
| 285 |
|
| 286 |
with gr.Column(scale=1):
|
| 287 |
# Export Section
|
| 288 |
-
with gr.Accordion("πΎ Export
|
| 289 |
-
gr.Markdown("**Download your analysis in multiple formats:**")
|
| 290 |
export_format = gr.Dropdown(
|
| 291 |
choices=["txt", "json", "pdf"],
|
| 292 |
-
label="
|
| 293 |
-
value="txt"
|
| 294 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 295 |
)
|
| 296 |
-
|
| 297 |
-
|
| 298 |
-
|
| 299 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 300 |
)
|
| 301 |
-
|
| 302 |
-
|
| 303 |
-
|
| 304 |
-
|
| 305 |
-
|
| 306 |
-
gr.
|
| 307 |
|
| 308 |
# Batch Processing Tab
|
| 309 |
with gr.Tab("π Batch Processing"):
|
|
@@ -327,19 +387,44 @@ with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as d
|
|
| 327 |
batch_output = gr.Textbox(label="Batch Results", lines=20, max_lines=30, show_copy_button=True)
|
| 328 |
batch_status = gr.Textbox(label="Batch Status", interactive=False)
|
| 329 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 330 |
# Event Handlers
|
| 331 |
# Single document analysis
|
| 332 |
-
def handle_analysis_with_markdown(file, prompt, username="anonymous", use_streaming=False):
|
| 333 |
-
result, status, doc_info = handle_analysis(file, prompt, username, use_streaming)
|
| 334 |
-
# Convert to markdown if it's a string
|
| 335 |
-
if isinstance(result, str):
|
| 336 |
-
return result, status, doc_info
|
| 337 |
-
return str(result), status, doc_info
|
| 338 |
-
|
| 339 |
submit_btn.click(
|
| 340 |
-
fn=
|
| 341 |
-
inputs=[pdf_in, prompt_input, username_input,
|
| 342 |
-
outputs=[output_box, status_box,
|
| 343 |
)
|
| 344 |
|
| 345 |
# Load custom prompt
|
|
@@ -363,12 +448,60 @@ with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as d
|
|
| 363 |
outputs=[pdf_in, prompt_input, output_box, status_box]
|
| 364 |
)
|
| 365 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 366 |
# Batch processing
|
| 367 |
batch_submit.click(
|
| 368 |
fn=handle_batch_analysis,
|
| 369 |
inputs=[batch_files, batch_prompt, batch_username],
|
| 370 |
outputs=[batch_output, batch_status, gr.State()]
|
| 371 |
)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 372 |
|
| 373 |
# Examples
|
| 374 |
gr.Examples(
|
|
@@ -382,6 +515,19 @@ with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as d
|
|
| 382 |
inputs=prompt_input,
|
| 383 |
label="Example Instructions"
|
| 384 |
)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 385 |
|
| 386 |
if __name__ == "__main__":
|
| 387 |
-
demo.launch(server_name="0.0.0.0", server_port=int(os.environ.get("PORT", 7860)))
|
|
|
|
| 1 |
+
# PDF Analysis & Orchestrator
|
| 2 |
+
# Extracted core functionality from Sharmaji ka PDF Blaster V1
|
| 3 |
import os
|
| 4 |
import asyncio
|
| 5 |
import uuid
|
|
|
|
| 6 |
from pathlib import Path
|
| 7 |
from typing import Optional, List, Tuple
|
| 8 |
import time
|
|
|
|
| 9 |
|
| 10 |
import gradio as gr
|
| 11 |
from agents import (
|
| 12 |
AnalysisAgent,
|
| 13 |
CollaborationAgent,
|
| 14 |
ConversationAgent,
|
| 15 |
+
ResearchAnalystAgent,
|
| 16 |
MasterOrchestrator,
|
| 17 |
)
|
| 18 |
from utils import load_pdf_text
|
|
|
|
| 25 |
# ------------------------
|
| 26 |
# Initialize Components
|
| 27 |
# ------------------------
|
| 28 |
+
Config.ensure_directories()
|
|
|
|
|
|
|
|
|
|
| 29 |
|
| 30 |
# Agent Roster - Focused on Analysis & Orchestration
|
| 31 |
AGENTS = {
|
| 32 |
"analysis": AnalysisAgent(name="AnalysisAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 33 |
"collab": CollaborationAgent(name="CollaborationAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 34 |
"conversation": ConversationAgent(name="ConversationAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 35 |
+
"research": ResearchAnalystAgent(name="ResearchAnalystAgent", model=Config.OPENAI_MODEL, tasks_completed=0),
|
| 36 |
}
|
| 37 |
ORCHESTRATOR = MasterOrchestrator(agents=AGENTS)
|
| 38 |
|
| 39 |
# Initialize managers
|
| 40 |
+
PROMPT_MANAGER = PromptManager()
|
| 41 |
+
EXPORT_MANAGER = ExportManager()
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 42 |
|
| 43 |
# ------------------------
|
| 44 |
# File Handling
|
|
|
|
| 78 |
if file is None:
|
| 79 |
return "Please upload a PDF.", None, None
|
| 80 |
|
| 81 |
+
validate_file_size(file)
|
| 82 |
+
path = save_uploaded_file(file, username)
|
| 83 |
+
|
| 84 |
+
if use_streaming:
|
| 85 |
+
return handle_analysis_streaming(path, prompt, username)
|
| 86 |
+
else:
|
| 87 |
+
result = run_async(
|
| 88 |
+
ORCHESTRATOR.handle_user_prompt,
|
| 89 |
+
user_id=username,
|
| 90 |
+
prompt=prompt,
|
| 91 |
+
file_path=path,
|
| 92 |
+
targets=["analysis"]
|
| 93 |
+
)
|
| 94 |
+
return result.get("analysis", "No analysis result."), None, None
|
| 95 |
+
|
| 96 |
+
def handle_analysis_streaming(file_path, prompt, username="anonymous"):
|
| 97 |
+
"""Handle analysis with streaming output"""
|
| 98 |
+
def stream_generator():
|
| 99 |
+
async def async_stream():
|
| 100 |
+
async for chunk in ORCHESTRATOR.handle_user_prompt_streaming(
|
|
|
|
| 101 |
user_id=username,
|
| 102 |
prompt=prompt,
|
| 103 |
+
file_path=file_path,
|
| 104 |
targets=["analysis"]
|
| 105 |
+
):
|
| 106 |
+
yield chunk
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 107 |
|
| 108 |
+
# Convert async generator to sync generator
|
| 109 |
+
loop = asyncio.new_event_loop()
|
| 110 |
+
asyncio.set_event_loop(loop)
|
| 111 |
+
try:
|
| 112 |
+
async_gen = async_stream()
|
| 113 |
+
while True:
|
| 114 |
+
try:
|
| 115 |
+
chunk = loop.run_until_complete(async_gen.__anext__())
|
| 116 |
+
yield chunk
|
| 117 |
+
except StopAsyncIteration:
|
| 118 |
+
break
|
| 119 |
+
finally:
|
| 120 |
+
loop.close()
|
| 121 |
+
|
| 122 |
+
return stream_generator(), None, None
|
| 123 |
|
| 124 |
def handle_batch_analysis(files, prompt, username="anonymous"):
|
| 125 |
"""Handle batch analysis of multiple PDFs"""
|
| 126 |
if not files or len(files) == 0:
|
| 127 |
return "Please upload at least one PDF.", None, None
|
| 128 |
|
| 129 |
+
# Validate all files
|
| 130 |
+
file_paths = []
|
| 131 |
+
for file in files:
|
| 132 |
+
try:
|
| 133 |
validate_file_size(file)
|
| 134 |
path = save_uploaded_file(file, username)
|
| 135 |
file_paths.append(path)
|
| 136 |
+
except Exception as e:
|
| 137 |
+
return f"Error with file {file}: {str(e)}", None, None
|
| 138 |
+
|
| 139 |
+
result = run_async(
|
| 140 |
+
ORCHESTRATOR.handle_batch_analysis,
|
| 141 |
+
user_id=username,
|
| 142 |
+
prompt=prompt,
|
| 143 |
+
file_paths=file_paths,
|
| 144 |
+
targets=["analysis"]
|
| 145 |
+
)
|
| 146 |
+
|
| 147 |
+
# Format batch results
|
| 148 |
+
batch_summary = result.get("summary", {})
|
| 149 |
+
batch_results = result.get("batch_results", [])
|
| 150 |
+
|
| 151 |
+
formatted_output = f"π Batch Analysis Results\n"
|
| 152 |
+
formatted_output += f"Total files: {batch_summary.get('processing_stats', {}).get('total_files', 0)}\n"
|
| 153 |
+
formatted_output += f"Successful: {batch_summary.get('processing_stats', {}).get('successful', 0)}\n"
|
| 154 |
+
formatted_output += f"Failed: {batch_summary.get('processing_stats', {}).get('failed', 0)}\n"
|
| 155 |
+
formatted_output += f"Success rate: {batch_summary.get('processing_stats', {}).get('success_rate', '0%')}\n\n"
|
| 156 |
+
|
| 157 |
+
if batch_summary.get("batch_analysis"):
|
| 158 |
+
formatted_output += f"π Batch Summary:\n{batch_summary['batch_analysis']}\n\n"
|
| 159 |
+
|
| 160 |
+
formatted_output += "π Individual Results:\n"
|
| 161 |
+
for i, file_result in enumerate(batch_results):
|
| 162 |
+
formatted_output += f"\n--- File {i+1}: {Path(file_result.get('file_path', 'Unknown')).name} ---\n"
|
| 163 |
+
if "error" in file_result:
|
| 164 |
+
formatted_output += f"β Error: {file_result['error']}\n"
|
| 165 |
+
else:
|
| 166 |
+
formatted_output += f"β
{file_result.get('analysis', 'No analysis')}\n"
|
| 167 |
+
|
| 168 |
+
return formatted_output, None, None
|
| 169 |
+
|
| 170 |
+
def handle_research_analysis(file, prompt, username="anonymous", use_streaming=False):
|
| 171 |
+
"""Handle research analysis with R&D pipeline focus"""
|
| 172 |
+
if file is None:
|
| 173 |
+
return "Please upload a PDF.", None, None
|
| 174 |
+
|
| 175 |
+
validate_file_size(file)
|
| 176 |
+
path = save_uploaded_file(file, username)
|
| 177 |
+
|
| 178 |
+
if use_streaming:
|
| 179 |
+
return handle_research_analysis_streaming(path, prompt, username)
|
| 180 |
+
else:
|
| 181 |
result = run_async(
|
| 182 |
+
ORCHESTRATOR.handle_user_prompt,
|
| 183 |
user_id=username,
|
| 184 |
prompt=prompt,
|
| 185 |
+
file_path=path,
|
| 186 |
+
targets=["research"]
|
| 187 |
)
|
| 188 |
+
return result.get("research_analysis", "No research analysis result."), None, None
|
| 189 |
+
|
| 190 |
+
def handle_research_analysis_streaming(file_path, prompt, username="anonymous"):
|
| 191 |
+
"""Handle research analysis with streaming output"""
|
| 192 |
+
def stream_generator():
|
| 193 |
+
async def async_stream():
|
| 194 |
+
async for chunk in ORCHESTRATOR.handle_user_prompt_streaming(
|
| 195 |
+
user_id=username,
|
| 196 |
+
prompt=prompt,
|
| 197 |
+
file_path=file_path,
|
| 198 |
+
targets=["research"]
|
| 199 |
+
):
|
| 200 |
+
yield chunk
|
| 201 |
|
| 202 |
+
# Convert async generator to sync generator
|
| 203 |
+
loop = asyncio.new_event_loop()
|
| 204 |
+
asyncio.set_event_loop(loop)
|
| 205 |
+
try:
|
| 206 |
+
async_gen = async_stream()
|
| 207 |
+
while True:
|
| 208 |
+
try:
|
| 209 |
+
chunk = loop.run_until_complete(async_gen.__anext__())
|
| 210 |
+
yield chunk
|
| 211 |
+
except StopAsyncIteration:
|
| 212 |
+
break
|
| 213 |
+
finally:
|
| 214 |
+
loop.close()
|
| 215 |
+
|
| 216 |
+
return stream_generator(), None, None
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 217 |
|
| 218 |
def handle_export(result_text, export_format, username="anonymous"):
|
| 219 |
+
"""Handle export of analysis results"""
|
| 220 |
if not result_text or result_text.strip() == "":
|
| 221 |
return "No content to export.", None
|
| 222 |
|
|
|
|
|
|
|
|
|
|
| 223 |
try:
|
|
|
|
|
|
|
|
|
|
|
|
|
| 224 |
if export_format == "txt":
|
| 225 |
+
filepath = EXPORT_MANAGER.export_text(result_text, username=username)
|
|
|
|
|
|
|
|
|
|
| 226 |
elif export_format == "json":
|
| 227 |
+
data = {"analysis": result_text, "exported_by": username, "timestamp": time.time()}
|
| 228 |
+
filepath = EXPORT_MANAGER.export_json(data, username=username)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 229 |
elif export_format == "pdf":
|
| 230 |
+
filepath = EXPORT_MANAGER.export_pdf(result_text, username=username)
|
| 231 |
else:
|
| 232 |
return f"Unsupported export format: {export_format}", None
|
| 233 |
|
| 234 |
+
return f"β
Export successful! File saved to: {filepath}", filepath
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 235 |
except Exception as e:
|
| 236 |
+
return f"β Export failed: {str(e)}", None
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 237 |
|
| 238 |
def get_custom_prompts():
|
| 239 |
"""Get available custom prompts"""
|
|
|
|
|
|
|
| 240 |
prompts = PROMPT_MANAGER.get_all_prompts()
|
| 241 |
return list(prompts.keys())
|
| 242 |
|
| 243 |
def load_custom_prompt(prompt_id):
|
| 244 |
"""Load a custom prompt template"""
|
|
|
|
|
|
|
| 245 |
return PROMPT_MANAGER.get_prompt(prompt_id) or ""
|
| 246 |
|
| 247 |
# ------------------------
|
| 248 |
+
# Gradio UI - Enhanced Interface
|
| 249 |
# ------------------------
|
| 250 |
with gr.Blocks(title="PDF Analysis & Orchestrator", theme=gr.themes.Soft()) as demo:
|
| 251 |
gr.Markdown("# π PDF Analysis & Orchestrator - Intelligent Document Processing")
|
| 252 |
+
gr.Markdown("Upload PDFs and provide instructions for analysis, summarization, or explanation. Now with enhanced features!")
|
| 253 |
|
| 254 |
with gr.Tabs():
|
| 255 |
# Single Document Analysis Tab
|
|
|
|
| 267 |
value=None
|
| 268 |
)
|
| 269 |
load_prompt_btn = gr.Button("Load Prompt", size="sm")
|
| 270 |
+
|
| 271 |
+
# Analysis Options
|
| 272 |
+
with gr.Accordion("βοΈ Analysis Options", open=False):
|
| 273 |
+
use_streaming = gr.Checkbox(label="Enable Streaming Output", value=False)
|
| 274 |
+
chunk_size = gr.Slider(
|
| 275 |
+
minimum=5000, maximum=30000, value=15000, step=1000,
|
| 276 |
+
label="Chunk Size (for large documents)"
|
| 277 |
+
)
|
| 278 |
|
| 279 |
with gr.Column(scale=2):
|
| 280 |
gr.Markdown("### Analysis Instructions")
|
|
|
|
| 291 |
# Results Section
|
| 292 |
with gr.Row():
|
| 293 |
with gr.Column(scale=2):
|
| 294 |
+
output_box = gr.Textbox(label="Analysis Result", lines=15, max_lines=25, show_copy_button=True)
|
| 295 |
+
status_box = gr.Textbox(label="Status", value="Ready to analyze documents", interactive=False)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 296 |
|
| 297 |
with gr.Column(scale=1):
|
| 298 |
# Export Section
|
| 299 |
+
with gr.Accordion("πΎ Export Results", open=False):
|
|
|
|
| 300 |
export_format = gr.Dropdown(
|
| 301 |
choices=["txt", "json", "pdf"],
|
| 302 |
+
label="Export Format",
|
| 303 |
+
value="txt"
|
| 304 |
+
)
|
| 305 |
+
export_btn = gr.Button("π₯ Export", variant="secondary")
|
| 306 |
+
export_status = gr.Textbox(label="Export Status", interactive=False)
|
| 307 |
+
|
| 308 |
+
# Document Info
|
| 309 |
+
with gr.Accordion("π Document Info", open=False):
|
| 310 |
+
doc_info = gr.Textbox(label="Document Information", interactive=False, lines=6)
|
| 311 |
+
|
| 312 |
+
# Senior Research Analyst Tab
|
| 313 |
+
with gr.Tab("π¬ Senior Research Analyst"):
|
| 314 |
+
gr.Markdown("### π― R&D Pipeline Analysis")
|
| 315 |
+
gr.Markdown("Act as a senior research analyst: extract high-value, novel ideas and convert them into concrete R&D pipeline outcomes (experiments β prototypes β product decisions)")
|
| 316 |
+
|
| 317 |
+
with gr.Row():
|
| 318 |
+
with gr.Column(scale=1):
|
| 319 |
+
research_pdf_in = gr.File(label="Upload Research Document", file_types=[".pdf"], elem_id="research_file_upload")
|
| 320 |
+
research_username_input = gr.Textbox(label="Username (optional)", placeholder="anonymous", elem_id="research_username")
|
| 321 |
+
|
| 322 |
+
# Research-Specific Prompts Section
|
| 323 |
+
with gr.Accordion("π― Research Prompts", open=False):
|
| 324 |
+
research_prompt_dropdown = gr.Dropdown(
|
| 325 |
+
choices=[pid for pid, prompt in PROMPT_MANAGER.get_all_prompts().items() if prompt.get("category") == "research"],
|
| 326 |
+
label="Select Research Prompt",
|
| 327 |
+
value="research_pipeline"
|
| 328 |
)
|
| 329 |
+
load_research_prompt_btn = gr.Button("Load Research Prompt", size="sm")
|
| 330 |
+
|
| 331 |
+
# Research Analysis Options
|
| 332 |
+
with gr.Accordion("βοΈ Research Options", open=False):
|
| 333 |
+
research_streaming = gr.Checkbox(label="Enable Streaming Output", value=True)
|
| 334 |
+
|
| 335 |
+
with gr.Column(scale=2):
|
| 336 |
+
gr.Markdown("### Research Analysis Instructions")
|
| 337 |
+
research_prompt_input = gr.Textbox(
|
| 338 |
+
lines=4,
|
| 339 |
+
placeholder="Focus on extracting novel ideas with high product/engineering impact...\nExamples:\n- Identify breakthrough concepts for R&D pipeline\n- Assess commercial viability of technical innovations\n- Design experimental frameworks for validation\n- Create prototype development roadmaps",
|
| 340 |
+
label="Research Instructions"
|
| 341 |
+
)
|
| 342 |
+
|
| 343 |
+
with gr.Row():
|
| 344 |
+
research_submit_btn = gr.Button("π¬ Research Analysis", variant="primary", size="lg")
|
| 345 |
+
research_clear_btn = gr.Button("ποΈ Clear", size="sm")
|
| 346 |
+
|
| 347 |
+
# Research Results Section
|
| 348 |
+
with gr.Row():
|
| 349 |
+
with gr.Column(scale=2):
|
| 350 |
+
research_output_box = gr.Textbox(label="Research Analysis Result", lines=20, max_lines=30, show_copy_button=True)
|
| 351 |
+
research_status_box = gr.Textbox(label="Research Status", value="Ready for research analysis", interactive=False)
|
| 352 |
+
|
| 353 |
+
with gr.Column(scale=1):
|
| 354 |
+
# Research Export Section
|
| 355 |
+
with gr.Accordion("πΎ Export Research Results", open=False):
|
| 356 |
+
research_export_format = gr.Dropdown(
|
| 357 |
+
choices=["txt", "json", "pdf"],
|
| 358 |
+
label="Export Format",
|
| 359 |
+
value="txt"
|
| 360 |
)
|
| 361 |
+
research_export_btn = gr.Button("π₯ Export Research", variant="secondary")
|
| 362 |
+
research_export_status = gr.Textbox(label="Export Status", interactive=False)
|
| 363 |
+
|
| 364 |
+
# Research Insights Summary
|
| 365 |
+
with gr.Accordion("π Research Insights", open=False):
|
| 366 |
+
research_insights = gr.Textbox(label="Key Insights Summary", interactive=False, lines=8)
|
| 367 |
|
| 368 |
# Batch Processing Tab
|
| 369 |
with gr.Tab("π Batch Processing"):
|
|
|
|
| 387 |
batch_output = gr.Textbox(label="Batch Results", lines=20, max_lines=30, show_copy_button=True)
|
| 388 |
batch_status = gr.Textbox(label="Batch Status", interactive=False)
|
| 389 |
|
| 390 |
+
# Custom Prompts Management Tab
|
| 391 |
+
with gr.Tab("π― Manage Prompts"):
|
| 392 |
+
with gr.Row():
|
| 393 |
+
with gr.Column(scale=1):
|
| 394 |
+
gr.Markdown("### Add New Prompt")
|
| 395 |
+
new_prompt_id = gr.Textbox(label="Prompt ID", placeholder="my_custom_prompt")
|
| 396 |
+
new_prompt_name = gr.Textbox(label="Prompt Name", placeholder="My Custom Analysis")
|
| 397 |
+
new_prompt_desc = gr.Textbox(label="Description", placeholder="What this prompt does")
|
| 398 |
+
new_prompt_template = gr.Textbox(
|
| 399 |
+
lines=4,
|
| 400 |
+
label="Prompt Template",
|
| 401 |
+
placeholder="Enter your custom prompt template..."
|
| 402 |
+
)
|
| 403 |
+
new_prompt_category = gr.Dropdown(
|
| 404 |
+
choices=["custom", "business", "technical", "explanation", "analysis"],
|
| 405 |
+
label="Category",
|
| 406 |
+
value="custom"
|
| 407 |
+
)
|
| 408 |
+
add_prompt_btn = gr.Button("β Add Prompt", variant="primary")
|
| 409 |
+
|
| 410 |
+
with gr.Column(scale=1):
|
| 411 |
+
gr.Markdown("### Existing Prompts")
|
| 412 |
+
prompt_list = gr.Dataframe(
|
| 413 |
+
headers=["ID", "Name", "Category", "Description"],
|
| 414 |
+
datatype=["str", "str", "str", "str"],
|
| 415 |
+
interactive=False,
|
| 416 |
+
label="Available Prompts"
|
| 417 |
+
)
|
| 418 |
+
refresh_prompts_btn = gr.Button("π Refresh List")
|
| 419 |
+
delete_prompt_id = gr.Textbox(label="Prompt ID to Delete", placeholder="prompt_id")
|
| 420 |
+
delete_prompt_btn = gr.Button("ποΈ Delete Prompt", variant="stop")
|
| 421 |
+
|
| 422 |
# Event Handlers
|
| 423 |
# Single document analysis
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 424 |
submit_btn.click(
|
| 425 |
+
fn=handle_analysis,
|
| 426 |
+
inputs=[pdf_in, prompt_input, username_input, use_streaming],
|
| 427 |
+
outputs=[output_box, status_box, doc_info]
|
| 428 |
)
|
| 429 |
|
| 430 |
# Load custom prompt
|
|
|
|
| 448 |
outputs=[pdf_in, prompt_input, output_box, status_box]
|
| 449 |
)
|
| 450 |
|
| 451 |
+
# Research analysis event handlers
|
| 452 |
+
research_submit_btn.click(
|
| 453 |
+
fn=handle_research_analysis,
|
| 454 |
+
inputs=[research_pdf_in, research_prompt_input, research_username_input, research_streaming],
|
| 455 |
+
outputs=[research_output_box, research_status_box, research_insights]
|
| 456 |
+
)
|
| 457 |
+
|
| 458 |
+
# Load research prompt
|
| 459 |
+
load_research_prompt_btn.click(
|
| 460 |
+
fn=load_custom_prompt,
|
| 461 |
+
inputs=[research_prompt_dropdown],
|
| 462 |
+
outputs=[research_prompt_input]
|
| 463 |
+
)
|
| 464 |
+
|
| 465 |
+
# Research export functionality
|
| 466 |
+
research_export_btn.click(
|
| 467 |
+
fn=handle_export,
|
| 468 |
+
inputs=[research_output_box, research_export_format, research_username_input],
|
| 469 |
+
outputs=[research_export_status, gr.State()]
|
| 470 |
+
)
|
| 471 |
+
|
| 472 |
+
# Research clear functionality
|
| 473 |
+
research_clear_btn.click(
|
| 474 |
+
fn=lambda: ("", "", "", "Ready for research analysis", ""),
|
| 475 |
+
inputs=[],
|
| 476 |
+
outputs=[research_pdf_in, research_prompt_input, research_output_box, research_status_box, research_insights]
|
| 477 |
+
)
|
| 478 |
+
|
| 479 |
# Batch processing
|
| 480 |
batch_submit.click(
|
| 481 |
fn=handle_batch_analysis,
|
| 482 |
inputs=[batch_files, batch_prompt, batch_username],
|
| 483 |
outputs=[batch_output, batch_status, gr.State()]
|
| 484 |
)
|
| 485 |
+
|
| 486 |
+
# Prompt management
|
| 487 |
+
add_prompt_btn.click(
|
| 488 |
+
fn=lambda id, name, desc, template, cat: PROMPT_MANAGER.add_prompt(id, name, desc, template, cat),
|
| 489 |
+
inputs=[new_prompt_id, new_prompt_name, new_prompt_desc, new_prompt_template, new_prompt_category],
|
| 490 |
+
outputs=[]
|
| 491 |
+
)
|
| 492 |
+
|
| 493 |
+
refresh_prompts_btn.click(
|
| 494 |
+
fn=lambda: [[pid, prompt["name"], prompt["category"], prompt["description"]]
|
| 495 |
+
for pid, prompt in PROMPT_MANAGER.get_all_prompts().items()],
|
| 496 |
+
inputs=[],
|
| 497 |
+
outputs=[prompt_list]
|
| 498 |
+
)
|
| 499 |
+
|
| 500 |
+
delete_prompt_btn.click(
|
| 501 |
+
fn=lambda pid: PROMPT_MANAGER.delete_prompt(pid),
|
| 502 |
+
inputs=[delete_prompt_id],
|
| 503 |
+
outputs=[]
|
| 504 |
+
)
|
| 505 |
|
| 506 |
# Examples
|
| 507 |
gr.Examples(
|
|
|
|
| 515 |
inputs=prompt_input,
|
| 516 |
label="Example Instructions"
|
| 517 |
)
|
| 518 |
+
|
| 519 |
+
# Research Examples
|
| 520 |
+
gr.Examples(
|
| 521 |
+
examples=[
|
| 522 |
+
["Identify breakthrough concepts with high product/engineering impact and design specific experiments to validate them"],
|
| 523 |
+
["Assess the commercial viability of technical innovations and create prototype development roadmaps"],
|
| 524 |
+
["Extract novel methodologies and convert them into concrete R&D pipeline outcomes"],
|
| 525 |
+
["Analyze technical concepts for transformative potential and generate strategic product decisions"],
|
| 526 |
+
["Design experimental frameworks to validate key hypotheses with measurable success criteria"],
|
| 527 |
+
],
|
| 528 |
+
inputs=research_prompt_input,
|
| 529 |
+
label="Research Analysis Examples"
|
| 530 |
+
)
|
| 531 |
|
| 532 |
if __name__ == "__main__":
|
| 533 |
+
demo.launch(server_name="0.0.0.0", server_port=int(os.environ.get("PORT", 7860)))
|
test_research_feature.py
ADDED
|
@@ -0,0 +1,116 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
#!/usr/bin/env python3
|
| 2 |
+
"""
|
| 3 |
+
Test script for the new Senior Research Analyst feature
|
| 4 |
+
"""
|
| 5 |
+
|
| 6 |
+
def test_imports():
|
| 7 |
+
"""Test that all new components can be imported"""
|
| 8 |
+
try:
|
| 9 |
+
from agents import ResearchAnalystAgent, MasterOrchestrator
|
| 10 |
+
print("β
ResearchAnalystAgent imported successfully")
|
| 11 |
+
|
| 12 |
+
from config import Config
|
| 13 |
+
print("β
Config imported successfully")
|
| 14 |
+
|
| 15 |
+
from utils.prompts import PromptManager
|
| 16 |
+
print("β
PromptManager imported successfully")
|
| 17 |
+
|
| 18 |
+
return True
|
| 19 |
+
except ImportError as e:
|
| 20 |
+
print(f"β Import error: {e}")
|
| 21 |
+
return False
|
| 22 |
+
|
| 23 |
+
def test_agent_initialization():
|
| 24 |
+
"""Test that the ResearchAnalystAgent can be initialized"""
|
| 25 |
+
try:
|
| 26 |
+
from agents import ResearchAnalystAgent
|
| 27 |
+
from config import Config
|
| 28 |
+
|
| 29 |
+
agent = ResearchAnalystAgent(name='TestResearchAgent', model=Config.OPENAI_MODEL)
|
| 30 |
+
print("β
ResearchAnalystAgent initialized successfully")
|
| 31 |
+
return True
|
| 32 |
+
except Exception as e:
|
| 33 |
+
print(f"β Agent initialization error: {e}")
|
| 34 |
+
return False
|
| 35 |
+
|
| 36 |
+
def test_research_prompts():
|
| 37 |
+
"""Test that research prompts are available"""
|
| 38 |
+
try:
|
| 39 |
+
from utils.prompts import PromptManager
|
| 40 |
+
|
| 41 |
+
pm = PromptManager()
|
| 42 |
+
all_prompts = pm.get_all_prompts()
|
| 43 |
+
research_prompts = [pid for pid, prompt in all_prompts.items() if prompt.get('category') == 'research']
|
| 44 |
+
|
| 45 |
+
print(f"β
Found {len(research_prompts)} research prompts:")
|
| 46 |
+
for prompt_id in research_prompts:
|
| 47 |
+
prompt_info = all_prompts[prompt_id]
|
| 48 |
+
print(f" - {prompt_id}: {prompt_info['name']}")
|
| 49 |
+
|
| 50 |
+
return len(research_prompts) > 0
|
| 51 |
+
except Exception as e:
|
| 52 |
+
print(f"β Research prompts test error: {e}")
|
| 53 |
+
return False
|
| 54 |
+
|
| 55 |
+
def test_orchestrator_integration():
|
| 56 |
+
"""Test that the orchestrator can handle research targets"""
|
| 57 |
+
try:
|
| 58 |
+
from agents import ResearchAnalystAgent, MasterOrchestrator
|
| 59 |
+
from config import Config
|
| 60 |
+
|
| 61 |
+
# Create agents dict with research agent
|
| 62 |
+
agents = {
|
| 63 |
+
"research": ResearchAnalystAgent(name="ResearchAnalystAgent", model=Config.OPENAI_MODEL)
|
| 64 |
+
}
|
| 65 |
+
|
| 66 |
+
orchestrator = MasterOrchestrator(agents=agents)
|
| 67 |
+
print("β
MasterOrchestrator initialized with research agent")
|
| 68 |
+
return True
|
| 69 |
+
except Exception as e:
|
| 70 |
+
print(f"β Orchestrator integration error: {e}")
|
| 71 |
+
return False
|
| 72 |
+
|
| 73 |
+
def main():
|
| 74 |
+
"""Run all tests"""
|
| 75 |
+
print("π§ͺ Testing Senior Research Analyst Feature Implementation")
|
| 76 |
+
print("=" * 60)
|
| 77 |
+
|
| 78 |
+
tests = [
|
| 79 |
+
("Import Tests", test_imports),
|
| 80 |
+
("Agent Initialization", test_agent_initialization),
|
| 81 |
+
("Research Prompts", test_research_prompts),
|
| 82 |
+
("Orchestrator Integration", test_orchestrator_integration),
|
| 83 |
+
]
|
| 84 |
+
|
| 85 |
+
results = []
|
| 86 |
+
for test_name, test_func in tests:
|
| 87 |
+
print(f"\nπ Running {test_name}...")
|
| 88 |
+
result = test_func()
|
| 89 |
+
results.append((test_name, result))
|
| 90 |
+
|
| 91 |
+
print("\n" + "=" * 60)
|
| 92 |
+
print("π Test Results Summary:")
|
| 93 |
+
|
| 94 |
+
all_passed = True
|
| 95 |
+
for test_name, result in results:
|
| 96 |
+
status = "β
PASS" if result else "β FAIL"
|
| 97 |
+
print(f" {status} {test_name}")
|
| 98 |
+
if not result:
|
| 99 |
+
all_passed = False
|
| 100 |
+
|
| 101 |
+
print("\n" + "=" * 60)
|
| 102 |
+
if all_passed:
|
| 103 |
+
print("π All tests passed! Senior Research Analyst feature is ready.")
|
| 104 |
+
print("\nπ New Features Available:")
|
| 105 |
+
print(" - Senior Research Analyst Agent with R&D pipeline focus")
|
| 106 |
+
print(" - 4 specialized research prompts")
|
| 107 |
+
print(" - Dedicated research analysis tab in UI")
|
| 108 |
+
print(" - Streaming support for research analysis")
|
| 109 |
+
print(" - Export functionality for research results")
|
| 110 |
+
else:
|
| 111 |
+
print("β οΈ Some tests failed. Please check the implementation.")
|
| 112 |
+
|
| 113 |
+
return all_passed
|
| 114 |
+
|
| 115 |
+
if __name__ == "__main__":
|
| 116 |
+
main()
|
utils/prompts.py
CHANGED
|
@@ -29,90 +29,65 @@ class PromptManager:
|
|
| 29 |
def _get_default_prompts(self) -> Dict[str, Dict[str, str]]:
|
| 30 |
"""Get default prompt templates"""
|
| 31 |
return {
|
| 32 |
-
# Basic Analysis
|
| 33 |
"summarize": {
|
| 34 |
-
"name": "
|
| 35 |
-
"description": "Create a
|
| 36 |
-
"template": "
|
| 37 |
"category": "basic"
|
| 38 |
},
|
| 39 |
"explain_simple": {
|
| 40 |
-
"name": "
|
| 41 |
-
"description": "Explain complex content for general audience",
|
| 42 |
-
"template": "Explain this document in simple
|
| 43 |
"category": "explanation"
|
| 44 |
},
|
| 45 |
-
|
| 46 |
-
|
| 47 |
-
|
| 48 |
-
"
|
| 49 |
-
"description": "Deep analysis of monetization opportunities and strategies",
|
| 50 |
-
"template": "Analyze this document for monetization opportunities and provide strategic recommendations:\n\n## π― Core Value Proposition\n- **Unique value**: What makes this monetizable?\n- **Target market**: Who would pay for this?\n- **Competitive advantage**: Why choose this over alternatives?\n- **Market timing**: Is the market ready for this?\n\n## π° Revenue Model Opportunities\n- **Direct monetization**: How to charge customers\n - Subscription models and pricing tiers\n - One-time purchases and licensing\n - Usage-based pricing strategies\n- **Indirect monetization**: Adjacent revenue streams\n - Data monetization opportunities\n - Partnership and affiliate models\n - Platform and ecosystem strategies\n\n## π Market Analysis & Sizing\n- **Total Addressable Market (TAM)**: Overall market size\n- **Serviceable Addressable Market (SAM)**: Realistic target\n- **Serviceable Obtainable Market (SOM)**: Achievable share\n- **Market growth trends**: Is the market expanding?\n- **Customer segments**: Different user types and needs\n\n## π Implementation Strategy\n- **Go-to-market approach**: How to reach customers\n- **Pricing strategy**: Optimal pricing models\n- **Sales channels**: How to sell and distribute\n- **Partnership opportunities**: Strategic alliances\n- **Resource requirements**: What's needed to execute\n\n## β‘ Quick Wins vs Long-term Plays\n- **Immediate opportunities**: Low-hanging fruit (0-6 months)\n- **Medium-term strategies**: Scalable approaches (6-18 months)\n- **Long-term vision**: Major market positions (18+ months)\n- **Implementation timeline**: Realistic milestones\n\n## β οΈ Risk Assessment & Mitigation\n- **Market risks**: Competition, regulation, adoption\n- **Technical risks**: Implementation challenges\n- **Financial risks**: Investment requirements\n- **Mitigation strategies**: How to reduce risks\n\n## π Success Metrics & KPIs\n- **Revenue targets**: Specific financial goals\n- **Customer metrics**: Acquisition, retention, growth\n- **Market metrics**: Market share, penetration\n- **Operational metrics**: Efficiency, scalability\n\n## π‘ Strategic Recommendations\n- **Priority ranking**: Which opportunities to pursue first\n- **Resource allocation**: Where to focus efforts\n- **Partnership strategy**: Who to work with\n- **Competitive positioning**: How to differentiate\n- **Next steps**: Specific actions to take",
|
| 51 |
"category": "business"
|
| 52 |
},
|
| 53 |
-
"
|
| 54 |
-
"name": "
|
| 55 |
-
"description": "
|
| 56 |
-
"template": "
|
| 57 |
-
"category": "business"
|
| 58 |
-
},
|
| 59 |
-
"business_plan": {
|
| 60 |
-
"name": "πΌ Business Plan Analysis",
|
| 61 |
-
"description": "Analyze business plans and strategic documents",
|
| 62 |
-
"template": "Analyze this business plan/strategic document:\n\n## π― Business Overview\n- **Mission & Vision**: Core business purpose\n- **Target Market**: Who are the customers?\n- **Value Proposition**: What makes this unique?\n\n## π Market Analysis\n- Market size and opportunity\n- Competitive landscape\n- Market trends and drivers\n- Customer segments\n\n## π° Financial Projections\n- Revenue model\n- Key financial metrics\n- Funding requirements\n- Break-even analysis\n\n## π Strategy & Execution\n- Go-to-market strategy\n- Key milestones\n- Risk factors\n- Success metrics\n\n## β οΈ Risk Assessment\n- Major risks identified\n- Mitigation strategies\n- Contingency plans",
|
| 63 |
-
"category": "business"
|
| 64 |
-
},
|
| 65 |
-
|
| 66 |
-
# Technical Documents
|
| 67 |
-
"user_manual": {
|
| 68 |
-
"name": "π User Manual Analysis",
|
| 69 |
-
"description": "Extract key information from user manuals and guides",
|
| 70 |
-
"template": "Analyze this user manual/guide:\n\n## π― Product Overview\n- **What it does**: Main functionality\n- **Target users**: Who is this for?\n- **Key features**: Primary capabilities\n\n## βοΈ Setup & Installation\n- Prerequisites\n- Step-by-step setup\n- Common issues and solutions\n\n## π§ How to Use\n- Main workflows\n- Key procedures\n- Best practices\n- Tips and tricks\n\n## β οΈ Important Warnings\n- Safety considerations\n- Common mistakes to avoid\n- Troubleshooting guide\n\n## π Support Information\n- Where to get help\n- Documentation references\n- Contact information",
|
| 71 |
"category": "technical"
|
| 72 |
},
|
| 73 |
-
"
|
| 74 |
-
"name": "
|
| 75 |
-
"description": "
|
| 76 |
-
"template": "
|
| 77 |
-
"category": "
|
| 78 |
},
|
| 79 |
-
|
| 80 |
-
|
| 81 |
-
|
| 82 |
-
"
|
| 83 |
-
"
|
| 84 |
-
"template": "Analyze this financial report:\n\n## π Financial Overview\n- **Revenue**: Total income and trends\n- **Expenses**: Major cost categories\n- **Profitability**: Net income and margins\n- **Cash Flow**: Operating, investing, financing\n\n## π Key Metrics\n- **Growth rates**: Revenue and profit growth\n- **Efficiency ratios**: How well resources are used\n- **Liquidity ratios**: Ability to meet short-term obligations\n- **Leverage ratios**: Debt levels and risk\n\n## π Performance Analysis\n- **Strengths**: What's working well\n- **Weaknesses**: Areas of concern\n- **Trends**: Changes over time\n- **Comparisons**: vs. industry benchmarks\n\n## β οΈ Risk Factors\n- **Financial risks**: Potential problems\n- **Market risks**: External factors\n- **Operational risks**: Internal challenges\n\n## π‘ Investment Insights\n- **Valuation**: Is this fairly valued?\n- **Outlook**: Future prospects\n- **Recommendations**: Buy, hold, or sell?",
|
| 85 |
-
"category": "financial"
|
| 86 |
},
|
| 87 |
-
"
|
| 88 |
-
"name": "
|
| 89 |
-
"description": "
|
| 90 |
-
"template": "
|
| 91 |
-
"category": "
|
| 92 |
},
|
| 93 |
-
|
| 94 |
-
|
| 95 |
-
|
| 96 |
-
"
|
| 97 |
-
"
|
| 98 |
-
"template": "Analyze this academic paper:\n\n## π― Research Overview\n- **Research Question**: What is being investigated?\n- **Hypothesis**: What is being tested?\n- **Significance**: Why is this important?\n\n## π¬ Methodology\n- **Study Design**: How was the research conducted?\n- **Participants**: Who was studied?\n- **Data Collection**: What data was gathered?\n- **Analysis Methods**: How was data analyzed?\n\n## π Results & Findings\n- **Key Results**: Main findings\n- **Statistical Significance**: Are results meaningful?\n- **Effect Sizes**: How large are the effects?\n- **Limitations**: What are the constraints?\n\n## π Critical Analysis\n- **Strengths**: What was done well?\n- **Weaknesses**: What could be improved?\n- **Bias Assessment**: Potential sources of bias\n- **Reproducibility**: Can this be replicated?\n\n## π‘ Implications\n- **Theoretical Impact**: How does this advance knowledge?\n- **Practical Applications**: Real-world uses\n- **Future Research**: What should be studied next?\n- **Policy Implications**: How might this influence policy?",
|
| 99 |
-
"category": "academic"
|
| 100 |
},
|
| 101 |
-
|
| 102 |
-
|
| 103 |
-
|
| 104 |
-
"
|
| 105 |
-
"
|
| 106 |
-
"template": "Analyze this legal document:\n\n## π Document Overview\n- **Document Type**: Contract, agreement, policy, etc.\n- **Parties Involved**: Who are the key parties?\n- **Purpose**: What is this document for?\n- **Effective Date**: When does it take effect?\n\n## π Key Terms & Conditions\n- **Obligations**: What must each party do?\n- **Rights**: What are each party's rights?\n- **Restrictions**: What is prohibited?\n- **Timeline**: Important dates and deadlines\n\n## π° Financial Terms\n- **Payment Terms**: How and when to pay\n- **Fees & Costs**: Associated expenses\n- **Penalties**: Consequences of non-compliance\n- **Termination**: How to end the agreement\n\n## β οΈ Risk Assessment\n- **Liability**: Who is responsible for what?\n- **Indemnification**: Protection clauses\n- **Force Majeure**: Unforeseen circumstances\n- **Dispute Resolution**: How conflicts are handled\n\n## π‘ Key Takeaways\n- **Critical Deadlines**: Important dates to remember\n- **Action Items**: What needs to be done\n- **Risks to Monitor**: Areas of concern\n- **Recommendations**: Suggested next steps",
|
| 107 |
-
"category": "legal"
|
| 108 |
},
|
| 109 |
-
|
| 110 |
-
|
| 111 |
-
|
| 112 |
-
"
|
| 113 |
-
"
|
| 114 |
-
"template": "Analyze this creative brief/marketing document:\n\n## π― Project Overview\n- **Objective**: What is the goal?\n- **Target Audience**: Who is this for?\n- **Brand Voice**: What tone should be used?\n- **Key Message**: What should people remember?\n\n## π¨ Creative Direction\n- **Visual Style**: Design preferences\n- **Color Palette**: Brand colors\n- **Typography**: Font choices\n- **Imagery**: Photo/video style\n\n## π± Deliverables\n- **Format Requirements**: Sizes, specifications\n- **Platform Considerations**: Where will this be used?\n- **Technical Specs**: File formats, resolution\n- **Timeline**: Deadlines and milestones\n\n## π Success Metrics\n- **KPIs**: How will success be measured?\n- **Performance Goals**: Specific targets\n- **Testing Strategy**: How to validate effectiveness\n- **Reporting**: How to track results\n\n## π‘ Recommendations\n- **Optimization Opportunities**: How to improve\n- **Best Practices**: Industry standards\n- **Risk Mitigation**: Potential issues to avoid\n- **Next Steps**: Immediate actions needed",
|
| 115 |
-
"category": "creative"
|
| 116 |
}
|
| 117 |
}
|
| 118 |
|
|
@@ -177,9 +152,10 @@ class PromptManager:
|
|
| 177 |
return False
|
| 178 |
return False
|
| 179 |
|
|
|
|
| 180 |
def get_categories(self) -> List[str]:
|
| 181 |
"""Get all available categories"""
|
| 182 |
categories = set()
|
| 183 |
for prompt in self.prompts.values():
|
| 184 |
categories.add(prompt.get("category", "uncategorized"))
|
| 185 |
-
return sorted(list(categories))
|
|
|
|
| 29 |
def _get_default_prompts(self) -> Dict[str, Dict[str, str]]:
|
| 30 |
"""Get default prompt templates"""
|
| 31 |
return {
|
|
|
|
| 32 |
"summarize": {
|
| 33 |
+
"name": "Summarize Document",
|
| 34 |
+
"description": "Create a concise summary of the document",
|
| 35 |
+
"template": "Summarize this document in 3-5 key points, highlighting the main ideas and conclusions.",
|
| 36 |
"category": "basic"
|
| 37 |
},
|
| 38 |
"explain_simple": {
|
| 39 |
+
"name": "Explain Simply",
|
| 40 |
+
"description": "Explain complex content for a general audience",
|
| 41 |
+
"template": "Explain this document in simple terms that a 10-year-old could understand. Use analogies and examples where helpful.",
|
| 42 |
"category": "explanation"
|
| 43 |
},
|
| 44 |
+
"executive_summary": {
|
| 45 |
+
"name": "Executive Summary",
|
| 46 |
+
"description": "Create an executive summary for decision makers",
|
| 47 |
+
"template": "Create an executive summary of this document, focusing on key findings, recommendations, and business implications.",
|
|
|
|
|
|
|
| 48 |
"category": "business"
|
| 49 |
},
|
| 50 |
+
"technical_analysis": {
|
| 51 |
+
"name": "Technical Analysis",
|
| 52 |
+
"description": "Provide detailed technical analysis",
|
| 53 |
+
"template": "Provide a detailed technical analysis of this document, including methodology, data analysis, and technical conclusions.",
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 54 |
"category": "technical"
|
| 55 |
},
|
| 56 |
+
"theme_segmentation": {
|
| 57 |
+
"name": "Theme Segmentation",
|
| 58 |
+
"description": "Break down document by themes and topics",
|
| 59 |
+
"template": "Segment this document by main themes and topics. Identify key themes and provide a brief summary of each section.",
|
| 60 |
+
"category": "organization"
|
| 61 |
},
|
| 62 |
+
"key_findings": {
|
| 63 |
+
"name": "Key Findings",
|
| 64 |
+
"description": "Extract key findings and insights",
|
| 65 |
+
"template": "Extract and analyze the key findings, insights, and recommendations from this document. Highlight the most important points.",
|
| 66 |
+
"category": "analysis"
|
|
|
|
|
|
|
| 67 |
},
|
| 68 |
+
"research_pipeline": {
|
| 69 |
+
"name": "R&D Pipeline Analysis",
|
| 70 |
+
"description": "Extract high-value insights for R&D pipeline development",
|
| 71 |
+
"template": "Act as a senior research analyst: identify novel ideas, breakthrough concepts, and innovative approaches with high product/engineering impact. Convert insights into concrete R&D pipeline outcomes: specific experiments to test, prototypes to build, and product decisions to make. Prioritize by transformative potential and measurable business value.",
|
| 72 |
+
"category": "research"
|
| 73 |
},
|
| 74 |
+
"innovation_assessment": {
|
| 75 |
+
"name": "Innovation Opportunity Assessment",
|
| 76 |
+
"description": "Assess commercial viability and innovation potential",
|
| 77 |
+
"template": "Analyze this document for breakthrough innovation opportunities. Identify novel technical concepts, assess their commercial viability, market readiness, and competitive advantage potential. Generate specific recommendations for experimental validation, prototype development, and strategic product decisions.",
|
| 78 |
+
"category": "research"
|
|
|
|
|
|
|
| 79 |
},
|
| 80 |
+
"experimental_design": {
|
| 81 |
+
"name": "Experimental Design Framework",
|
| 82 |
+
"description": "Design specific experiments and validation methodologies",
|
| 83 |
+
"template": "Extract technical concepts and methodologies from this document. Design specific experimental frameworks to validate key hypotheses, including success metrics, validation criteria, and implementation timelines. Focus on experiments that could drive significant product/engineering advancement.",
|
| 84 |
+
"category": "research"
|
|
|
|
|
|
|
| 85 |
},
|
| 86 |
+
"prototype_roadmap": {
|
| 87 |
+
"name": "Prototype Development Roadmap",
|
| 88 |
+
"description": "Create technical implementation roadmap for prototypes",
|
| 89 |
+
"template": "Identify technical concepts suitable for prototype development. Create a structured roadmap for building technical implementations that demonstrate key innovations. Include technical specifications, development phases, resource requirements, and success criteria for each prototype.",
|
| 90 |
+
"category": "research"
|
|
|
|
|
|
|
| 91 |
}
|
| 92 |
}
|
| 93 |
|
|
|
|
| 152 |
return False
|
| 153 |
return False
|
| 154 |
|
| 155 |
+
somewhere in the codebase
|
| 156 |
def get_categories(self) -> List[str]:
|
| 157 |
"""Get all available categories"""
|
| 158 |
categories = set()
|
| 159 |
for prompt in self.prompts.values():
|
| 160 |
categories.add(prompt.get("category", "uncategorized"))
|
| 161 |
+
return sorted(list(categories))
|