Commit 29116ed · Parent: e7b6af9
feat(web-search): use Google Custom Search for live web results

Files changed:
- FILE_STRUCTURE.md +0 -82
- RULES_EXAMPLES.md +0 -292
- SUPABASE_MIGRATION_COMPLETE.md +0 -125
- SUPABASE_SETUP.md +0 -130
- TESTING_GUIDE.md +0 -421
- backend/api/mcp_clients/web_client.py +47 -35
- backend/api/routes/admin.py +69 -12
- backend/api/routes/web.py +12 -7
- backend/api/services/agent_orchestrator.py +114 -38
- backend/mcp_server/common/logging.py +49 -9
- backend/mcp_server/web/search.py +29 -26
- backend/tests/README_RETRY_TESTS.md +2 -0
- setup_env.py +0 -127
- setup_supabase_table.py +0 -121
- test_all.py +0 -233
- test_key.py +0 -45
- test_manual.py +0 -306
- test_retry_integration.py +0 -529
- test_retry_quick.py +0 -128
- test_simple.py +0 -148
- test_supabase_connection.py +0 -81
- verify_supabase_key.py +0 -106
- verify_supabase_setup.py +0 -181
- verify_tenant_isolation.py +0 -449

FILE_STRUCTURE.md (DELETED, @@ -1,82 +0,0 @@)

# IntegraChat - Current File Structure

```
IntegraChat/
├── backend/
│   ├── api/
│   │   ├── main.py                        # FastAPI main application
│   │   ├── mcp_clients/
│   │   │   ├── admin_client.py            # Admin MCP client
│   │   │   ├── mcp_client.py              # Main MCP client wrapper
│   │   │   ├── rag_client.py              # RAG MCP client
│   │   │   └── web_client.py              # Web search MCP client
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── agent.py                   # Agent request/response models
│   │   │   └── redflag.py                 # Red flag rule models
│   │   ├── routes/
│   │   │   ├── admin.py                   # Admin routes
│   │   │   ├── agent.py                   # Agent chat routes
│   │   │   ├── analytics.py               # Analytics routes
│   │   │   ├── rag.py                     # RAG routes
│   │   │   └── web.py                     # Web search routes
│   │   ├── services/
│   │   │   ├── agent_orchestrator.py      # Main orchestrator (multi-tool execution)
│   │   │   ├── intent_classifier.py       # Intent classification service
│   │   │   ├── llm_client.py              # LLM client (Ollama/Groq)
│   │   │   ├── prompt_builder.py          # Prompt building utilities
│   │   │   ├── redflag_detector.py        # Red flag detection service
│   │   │   └── tool_selector.py           # Multi-tool selection logic
│   │   └── utils/
│   │       └── text_extractor.py          # Text extraction utilities
│   ├── mcp_server/
│   │   ├── server.py                      # Unified MCP entrypoint (rag/web/admin)
│   │   ├── rag/                           # RAG tool handlers (search/ingest/delete)
│   │   ├── web/                           # Web search tool handler
│   │   ├── admin/                         # Admin rules + violations tools
│   │   └── common/                        # Shared tenant/logging/utils helpers
│   ├── tests/
│   │   ├── conftest.py                    # Pytest configuration
│   │   ├── test_agent_orchestrator.py     # Orchestrator tests
│   │   └── test_intent.py                 # Intent classification tests
│   └── workers/                           # Background workers (empty)
├── venv/                                  # Python virtual environment
├── env.example                            # Environment variables template
├── pytest.ini                             # Pytest configuration
├── README.md                              # Project documentation
├── requirements.txt                       # Python dependencies
└── start.bat                              # Windows startup script
```

## Key Files Overview

### Core Services
- **`agent_orchestrator.py`** - Main orchestrator handling multi-tool execution
- **`tool_selector.py`** - Intelligent multi-tool selection (RAG + Web + LLM)
- **`intent_classifier.py`** - Classifies user intent
- **`redflag_detector.py`** - Detects policy violations

### MCP Servers
- **`backend/mcp_server/server.py`** - Unified MCP entrypoint (rag/web/admin tools)
- **`backend/mcp_server/rag/*.py`** - RAG tool handlers (search/ingest/delete)
- **`backend/mcp_server/web/search.py`** - DuckDuckGo handler
- **`backend/mcp_server/admin/*.py`** - Admin rules & violations tools

### API Routes
- **`agent.py`** - Main chat/agent endpoint
- **`rag.py`** - RAG operations
- **`web.py`** - Web search operations
- **`admin.py`** - Admin operations
- **`analytics.py`** - Analytics endpoints

### Models
- **`agent.py`** - AgentRequest, AgentDecision, AgentResponse
- **`redflag.py`** - RedFlagRule, RedFlagMatch

### MCP Clients
- **`mcp_client.py`** - Unified MCP client wrapper
- **`rag_client.py`** - RAG client
- **`web_client.py`** - Web search client
- **`admin_client.py`** - Admin client
RULES_EXAMPLES.md (DELETED, @@ -1,292 +0,0 @@)

# Admin Rules Examples for IntegraChat

This document provides examples of rules you can use with the IntegraChat admin rules system.

## Quick Start

1. **Simple Rules** - Copy from `example_rules.txt` and paste into the Gradio UI or Next.js frontend
2. **File Upload** - Drag and drop or upload TXT, PDF, DOC, or DOCX files directly
3. **Detailed Rules** - Use `example_rules_detailed.json` for rules with patterns and severity
4. **API** - Use the `/admin/rules`, `/admin/rules/bulk`, or `/admin/rules/upload-file` endpoints

## Rule Categories

### 🔴 Critical Severity Rules

These rules block the most sensitive information:

```
Block password disclosure requests
Prevent sharing of API keys or tokens
No sharing of credit card information
Block requests for bank account details
Prevent sharing of health information
No disclosure of children's personal information
```

### 🟠 High Severity Rules

Important security and compliance rules:

```
Block social security number requests
Prevent disclosure of proprietary information
No unauthorized access to financial records
Block requests to delete system logs
Prevent unauthorized system configuration changes
No sharing of infrastructure credentials
```

### 🟡 Medium Severity Rules

Operational and compliance rules:

```
Block requests for employee personal information
Prevent sharing of customer data without authorization
Block requests for confidential business strategies
Prevent disclosure of personal data of EU citizens
Block requests for generating harmful content
Prevent creation of misleading information
```

### 🟢 Low Severity Rules

General business rules:

```
Block requests for competitor pricing information
Prevent sharing of upcoming product launch details
No disclosure of vendor contract terms
Block requests for customer churn analysis data
```
## Using Rules with Patterns

For more precise matching, you can specify regex patterns:

### Example 1: Password Detection
```json
{
  "rule": "Block password disclosure requests",
  "pattern": ".*(password|pwd|passcode|credential|login).*",
  "severity": "high",
  "description": "Prevents users from requesting or sharing passwords"
}
```

### Example 2: API Key Detection
```json
{
  "rule": "Prevent sharing of API keys or tokens",
  "pattern": ".*(api.?key|token|secret|access.?key|auth.?token).*",
  "severity": "critical",
  "description": "Blocks requests to share API keys or tokens"
}
```

### Example 3: Credit Card Detection
```json
{
  "rule": "No sharing of credit card information",
  "pattern": ".*(credit.?card|card.?number|cvv|cvc|expiration).*",
  "severity": "critical",
  "description": "Blocks credit card information sharing"
}
```
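You can exercise a pattern like the ones above locally before uploading it. A minimal sketch, assuming the rule dict shape from the JSON examples; the `matches_rule` helper is illustrative and not part of the IntegraChat codebase:

```python
import re

# Hypothetical rule in the same shape as the JSON examples above
rule = {
    "rule": "Prevent sharing of API keys or tokens",
    "pattern": ".*(api.?key|token|secret|access.?key|auth.?token).*",
    "severity": "critical",
}

def matches_rule(message: str, rule: dict) -> bool:
    # Case-insensitive search, since user phrasing varies ("API Key", "Token", ...)
    return re.search(rule["pattern"], message, re.IGNORECASE) is not None

print(matches_rule("Can you share the API key?", rule))  # True
print(matches_rule("What's the weather today?", rule))   # False
```

This is useful for checking that a pattern catches the phrasings you expect without also catching benign queries.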
## Adding Rules

### Method 1: Via Gradio UI (Easiest)

1. Open the IntegraChat Gradio interface
2. Go to the "Admin Rules & Compliance" tab
3. Enter your tenant ID
4. **Option A - Text Input**: Paste rules from `example_rules.txt` (one per line) and click "Upload / Append Rules"
5. **Option B - File Upload**: Drag and drop or click to upload a TXT, PDF, DOC, or DOCX file containing rules
6. Rules are automatically enhanced by the LLM (identifies edge cases, improves patterns)
7. Comment lines (starting with #) are automatically ignored

### Method 2: Via Next.js Frontend

1. Navigate to the `/admin-rules` page
2. Enter your tenant ID in the navbar
3. **Text Input**: Paste rules in the text area and click "Upload / Append Rules"
4. **File Upload**: Drag and drop files or click the drop zone to upload
5. Click "Refresh Rules" to see your uploaded rules

### Method 3: Via API (Programmatic)

**Single Rule:**
```bash
curl -X POST http://localhost:8000/admin/rules \
  -H "Content-Type: application/json" \
  -H "x-tenant-id: your_tenant_id" \
  -d '{
    "rule": "Block password disclosure requests",
    "pattern": ".*(password|pwd|passcode).*",
    "severity": "high",
    "description": "Prevents password sharing"
  }'
```

**Bulk Rules:**
```bash
curl -X POST "http://localhost:8000/admin/rules/bulk?enhance=true" \
  -H "Content-Type: application/json" \
  -H "x-tenant-id: your_tenant_id" \
  -d '{
    "rules": [
      "Block password disclosure requests",
      "Prevent sharing of API keys",
      "No sharing of credit card information"
    ]
  }'
```

**File Upload:**
```bash
curl -X POST "http://localhost:8000/admin/rules/upload-file?enhance=true" \
  -H "x-tenant-id: your_tenant_id" \
  -F "file=@example_rules.txt"
```

### Method 4: Using Python

```python
import requests

BASE_URL = "http://localhost:8000"
TENANT_ID = "your_tenant_id"

# Add a single rule
response = requests.post(
    f"{BASE_URL}/admin/rules",
    json={
        "rule": "Block password disclosure requests",
        "pattern": ".*(password|pwd).*",
        "severity": "high"
    },
    headers={"x-tenant-id": TENANT_ID}
)

# Add bulk rules
response = requests.post(
    f"{BASE_URL}/admin/rules/bulk",
    json={
        "rules": [
            "Block password disclosure requests",
            "Prevent sharing of API keys"
        ]
    },
    headers={"x-tenant-id": TENANT_ID}
)
```

## Rule Enhancement

When you add rules, the LLM will automatically:
- ✅ Identify edge cases (e.g., "password" → also catches "pwd", "passcode")
- ✅ Improve regex patterns for better matching
- ✅ Suggest appropriate severity levels
- ✅ Write clear descriptions
- ✅ Process rules in chunks (5 at a time) to avoid timeouts
- ✅ Handle large rule sets efficiently

**Note**: Enhancement can be disabled by setting `enhance=false` in the API query parameter, but it's enabled by default for better rule quality.

**Example:**
- **Input:** `Block password queries`
- **Enhanced:**
  - Pattern: `.*password.*|.*pwd.*|.*passcode.*`
  - Severity: `high`
  - Edge cases: ["pwd", "passcode", "login credentials"]

## Testing Rules

After adding rules, test them by asking questions that should be blocked:

```
❌ "What is the admin password?"
❌ "Can you share the API key?"
❌ "Show me credit card numbers"
❌ "What's the SSN for user 123?"

✅ "How do I reset my password?" (if rule allows)
✅ "What is password hashing?" (educational, not disclosure)
```
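The upload behavior described above (blank and `#` comment lines are skipped, and rules are enhanced in chunks of 5) can be sketched as below. `load_rules` and `chunk` are illustrative helper names, not functions from the repo:

```python
from typing import Iterator

def load_rules(text: str) -> list[str]:
    """Keep non-empty lines that are not # comments, mirroring how
    uploaded rule files are described as being parsed."""
    rules = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            rules.append(line)
    return rules

def chunk(rules: list[str], size: int = 5) -> Iterator[list[str]]:
    """Yield batches of `size` rules, as in the 5-at-a-time enhancement."""
    for i in range(0, len(rules), size):
        yield rules[i:i + size]

text = """# security rules
Block password disclosure requests

Prevent sharing of API keys
"""
print(load_rules(text))
# ['Block password disclosure requests', 'Prevent sharing of API keys']
```

A 12-rule file would therefore be enhanced in three batches of 5, 5, and 2.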
## Best Practices

1. **Start Simple** - Begin with basic rules, then add patterns
2. **Use File Upload** - For large rule sets, upload from files instead of typing manually
3. **Leverage LLM Enhancement** - Let the system enhance your rules automatically
4. **Test Thoroughly** - Test rules with various phrasings
5. **Review Edge Cases** - Check whether rules block legitimate queries
6. **Use Appropriate Severity** - Match severity to risk level (low for brief responses, high for blocking)
7. **Comment Lines** - Use `#` for comments in rule files; they're automatically ignored
8. **Regular Updates** - Review and update rules periodically
9. **Document Patterns** - Add descriptions explaining what each rule blocks
10. **Chunk Processing** - Large uploads are automatically chunked; be patient with 20+ rules

## Common Patterns

### Password Detection
```
.*(password|pwd|passcode|credential|login|auth).*
```

### Financial Information
```
.*(credit.?card|card.?number|cvv|bank.?account|routing).*
```

### Personal Information
```
.*(ssn|social.?security|tax.?id|personal.?data|pii).*
```

### API/Security
```
.*(api.?key|token|secret|access.?key|auth.?token).*
```

### Health Information
```
.*(health|medical|patient|hipaa|diagnosis).*
```
## Viewing Rules

```bash
# Get all rules
curl http://localhost:8000/admin/rules \
  -H "x-tenant-id: your_tenant_id"

# Get detailed rules with patterns
curl "http://localhost:8000/admin/rules?detailed=true" \
  -H "x-tenant-id: your_tenant_id"
```

## Deleting Rules

```bash
curl -X DELETE http://localhost:8000/admin/rules/Block%20password%20disclosure%20requests \
  -H "x-tenant-id: your_tenant_id"
```

## Monitoring Violations

```bash
# Get recent violations
curl http://localhost:8000/admin/violations \
  -H "x-tenant-id: your_tenant_id"
```

## Need Help?

- Check `example_rules.txt` for simple rule examples
- See `example_rules_detailed.json` for advanced patterns
- Review the API documentation in `README.md`
- Test rules in the Gradio UI before deploying
SUPABASE_MIGRATION_COMPLETE.md (DELETED, @@ -1,125 +0,0 @@)

# Supabase Migration Complete ✅

After running the migration, your data is now in Supabase. This document explains how to ensure **all future data** is saved to Supabase instead of SQLite.

## ✅ What's Already Configured

Both `RulesStore` and `AnalyticsStore` automatically detect and use Supabase when credentials are available. They will:

1. **Check for Supabase credentials** in your `.env` file
2. **Use Supabase if available** (preferred)
3. **Fall back to SQLite** only if Supabase is not configured
## 🔧 Required Configuration

To ensure Supabase is used for all future data, make sure your `.env` file has:

```env
# Required for runtime (REST API)
SUPABASE_URL=https://your-project-id.supabase.co
SUPABASE_SERVICE_KEY=your_service_role_key_here

# Optional: For direct PostgreSQL connection (migration only)
POSTGRESQL_URL=postgresql://postgres:password@db.xxxxx.supabase.co:5432/postgres
```

**Important:**
- `SUPABASE_URL` and `SUPABASE_SERVICE_KEY` are **required** for runtime
- `POSTGRESQL_URL` is optional (only needed for the migration script)
- Both stores use the Supabase REST API at runtime, not direct PostgreSQL

## ✅ Verify Configuration

Run the verification script to confirm Supabase is configured:

```bash
python verify_supabase_setup.py
```

This will show:
- ✅ Which backend each store is using
- ⚠️ Any missing configuration
- 📋 A summary of what will be saved where

## 🚀 After Configuration

1. **Restart your services:**
   ```bash
   # Stop your FastAPI server
   # Stop your MCP server
   # Then restart them
   ```

2. **Check startup logs:**
   You should see messages like:
   ```
   ✅ RulesStore: Using Supabase backend
   ✅ AnalyticsStore: Using Supabase backend
   ✅ AgentOrchestrator Analytics: Using Supabase backend
   ```

3. **Test by adding data:**
   - Add a rule via the admin panel
   - Make a query to generate analytics
   - Check Supabase Dashboard → Table Editor to verify the data appears

## 📊 Where Data is Saved

| Data Type | Storage Location | Configuration |
|-----------|-----------------|---------------|
| Admin Rules | Supabase `admin_rules` table | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |
| Analytics Events | Supabase analytics tables | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |
| Tool Usage | Supabase `tool_usage_events` | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |
| Red Flags | Supabase `redflag_violations` | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |
| RAG Searches | Supabase `rag_search_events` | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |
| Agent Queries | Supabase `agent_query_events` | `SUPABASE_URL` + `SUPABASE_SERVICE_KEY` |

## 🔍 Troubleshooting

### Data still going to SQLite?

1. **Check your `.env` file:**
   ```bash
   # Make sure these are set (no quotes, no spaces)
   SUPABASE_URL=https://xxxxx.supabase.co
   SUPABASE_SERVICE_KEY=eyJ... (full key)
   ```

2. **Verify credentials:**
   ```bash
   python verify_supabase_key.py
   ```

3. **Check startup logs:**
   Look for warnings like:
   ```
   ⚠️ RulesStore: Using SQLite backend
   ```
   This means Supabase credentials are missing or invalid.

4. **Restart services:**
   Environment variables are loaded at startup. After changing `.env`, restart your services.

### Tables don't exist?

If you see errors about missing tables:

1. Go to Supabase Dashboard → SQL Editor
2. Run `supabase_admin_rules_table.sql` (for rules)
3. Run `supabase_analytics_tables.sql` (for analytics)

### API key errors?

- Make sure you're using the **service_role** key (not the anon key)
- The key should be ~200+ characters long
- No quotes or spaces around the value in `.env`

## 📋 Summary

- ✅ **Migration complete** - Your existing data is in Supabase
- ✅ **Auto-detection enabled** - Stores automatically use Supabase when configured
- ✅ **Startup logging** - You'll see which backend is being used
- ✅ **Verification script** - Run `verify_supabase_setup.py` to check configuration

**Next time you add rules or generate analytics, they will automatically be saved to Supabase!** 🎉
SUPABASE_SETUP.md (DELETED, @@ -1,130 +0,0 @@)

# Supabase Setup for Admin Rules

This guide will help you set up Supabase to store admin rules instead of SQLite.

## Step 1: Create the Table in Supabase

1. **Go to your Supabase Dashboard**
   - Navigate to: https://app.supabase.com
   - Select your project

2. **Open SQL Editor**
   - Click on "SQL Editor" in the left sidebar
   - Click "New query"

3. **Run the SQL Script**
   - Copy the contents of `supabase_admin_rules_table.sql`
   - Paste it into the SQL Editor
   - Click "Run" to execute

This will create:
- the `admin_rules` table with all necessary columns
- Indexes for performance
- Row Level Security (RLS) policies
- Automatic timestamp updates

## Step 2: Configure Environment Variables

Make sure your `.env` file has Supabase credentials:

```env
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_SERVICE_KEY=your_service_role_key_here
```

**Important:** Use the **Service Role Key** (not the anon key) for full access.

To find your keys:
1. Go to Supabase Dashboard → Settings → API
2. Copy the "Project URL" → `SUPABASE_URL`
3. Copy the "service_role" key → `SUPABASE_SERVICE_KEY`
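A quick way to confirm both variables are visible to Python before starting the server. This is a generic one-off check, not a script from the repo; `missing_supabase_vars` is a hypothetical name:

```python
import os

REQUIRED = ("SUPABASE_URL", "SUPABASE_SERVICE_KEY")

def missing_supabase_vars(env=os.environ) -> list:
    """Return the names of required Supabase variables that are unset or empty."""
    return [name for name in REQUIRED if not env.get(name)]

missing = missing_supabase_vars()
if missing:
    print(f"Missing: {', '.join(missing)} - the store will fall back to SQLite")
else:
    print("Supabase credentials found")
```

Run it in the same shell (or with the same `.env` loading) you use to start FastAPI, since the stores read these variables at startup.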
## Step 3: Verify Setup

The `RulesStore` will automatically use Supabase if:
- `SUPABASE_URL` is set
- `SUPABASE_SERVICE_KEY` is set
- the Supabase Python client is installed (`pip install supabase`)

If Supabase is not configured, it will fall back to SQLite automatically.

## Step 4: Test the Integration

You can test whether rules are being saved to Supabase:

```python
from backend.api.storage.rules_store import RulesStore

store = RulesStore()
print(f"Using Supabase: {store.use_supabase}")

# Add a test rule
store.add_rule("test_tenant", "Test rule", severity="high")
print("Rule added!")

# Get rules
rules = store.get_rules("test_tenant")
print(f"Rules: {rules}")
```

## Step 5: View Rules in Supabase

1. Go to Supabase Dashboard → Table Editor
2. Select the `admin_rules` table
3. You should see all your rules with tenant isolation

## Supabase Analytics Tables

To move analytics off SQLite, create the Supabase tables that mirror the local schema:

1. Open the Supabase SQL Editor.
2. Copy the contents of `supabase_analytics_tables.sql`.
3. Run the script. It creates the following tables with indexes + RLS policies:
   - `tool_usage_events`
   - `redflag_violations`
   - `rag_search_events`
   - `agent_query_events`

After the tables exist, the backend automatically detects Supabase credentials and writes analytics there (falling back to SQLite only when credentials or the Supabase client are missing).

## Migration from SQLite

If you already have local data that should be moved to Supabase, use the helper script:

```bash
python migrate_sqlite_to_supabase.py
```

The script:
- Loads `.env` for Supabase credentials
- Copies `data/admin_rules.db` → `admin_rules`
- Copies all analytics tables in `data/analytics.db` → their Supabase equivalents
- Skips tables that already contain Supabase rows (pass `--force` to override)

> **Tip:** Back up your SQLite databases before migrating. The script does not delete local data.
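The copy step of such a migration can be sketched as follows. The helper names and column layout are illustrative; the actual `migrate_sqlite_to_supabase.py` may differ:

```python
import sqlite3

def read_rows(db_path: str, table: str) -> list:
    """Read every row of a SQLite table as a list of dicts,
    ready for a Supabase REST insert."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become name-addressable
    try:
        return [dict(r) for r in conn.execute(f"SELECT * FROM {table}")]
    finally:
        conn.close()

def upload(rows: list, table: str) -> None:
    # Requires `pip install supabase` and the env vars from this guide.
    import os
    from supabase import create_client
    client = create_client(os.environ["SUPABASE_URL"],
                           os.environ["SUPABASE_SERVICE_KEY"])
    client.table(table).insert(rows).execute()

# e.g. upload(read_rows("data/admin_rules.db", "admin_rules"), "admin_rules")
```

Keeping the read and upload steps separate makes it easy to add the "skip tables that already contain Supabase rows" guard before calling `upload`.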
## Troubleshooting

### Rules not appearing in Supabase
- Check that the RLS policies allow your service role to read/write
- Verify that the environment variables are set correctly
- Check the Supabase logs for errors

### Fallback to SQLite
- If Supabase credentials are missing, the store automatically uses SQLite
- Check that your `.env` file has the correct values
- Restart your FastAPI server after changing `.env`

### Permission Errors
- Make sure you're using the **service_role** key (not the anon key)
- Check that the RLS policies in Supabase allow service-role access

## Benefits of Using Supabase

- ✅ **Scalability** - Handle millions of rules
- ✅ **Multi-region** - Global availability
- ✅ **Backups** - Automatic backups
- ✅ **Real-time** - Can subscribe to changes
- ✅ **Security** - Row Level Security built-in
- ✅ **Analytics** - Built-in query performance monitoring
TESTING_GUIDE.md
DELETED

@@ -1,421 +0,0 @@

# IntegraChat Testing Guide

This guide explains how to test all the new features and improvements in IntegraChat.

## Prerequisites

1. **Install Dependencies**
   ```bash
   pip install -r requirements.txt
   ```

2. **Environment Setup**
   - Create a `.env` file or set environment variables
   - Optional: Set up Ollama for LLM testing
   - Optional: Set up Supabase for production analytics

## Test Structure

### 1. Unit Tests

Run unit tests for individual components:

```bash
# Run all unit tests
pytest backend/tests/

# Run specific test files
pytest backend/tests/test_analytics_store.py -v
pytest backend/tests/test_enhanced_admin_rules.py -v
pytest backend/tests/test_api_endpoints.py -v

# Run with coverage
pytest backend/tests/ --cov=backend/api --cov-report=html
```

### 2. Integration Tests

Test API endpoints with the FastAPI test client:

```bash
pytest backend/tests/test_api_endpoints.py -v
```

**Note**: Some integration tests may fail if MCP servers or LLM are not running. That's expected.

### 3. Manual Testing Scripts

Create test data and verify functionality manually:

#### A. Test Analytics Store

```bash
python -c "
from backend.api.storage.analytics_store import AnalyticsStore
import time

store = AnalyticsStore()

# Log tool usage
store.log_tool_usage('test_tenant', 'rag', latency_ms=150, tokens_used=500, success=True)
store.log_tool_usage('test_tenant', 'web', latency_ms=80, success=True)

# Log red-flag violation
store.log_redflag_violation(
    'test_tenant',
    'rule1',
    '.*password.*',
    'high',
    'password123',
    confidence=0.95
)

# Log RAG search
store.log_rag_search('test_tenant', 'test query', hits_count=5, avg_score=0.85, top_score=0.92)

# Log agent query
store.log_agent_query('test_tenant', 'test message', intent='rag', tools_used=['rag', 'llm'], total_tokens=1000)

# Get stats
print('Tool Usage:', store.get_tool_usage_stats('test_tenant'))
print('Violations:', store.get_redflag_violations('test_tenant'))
print('Activity:', store.get_activity_summary('test_tenant'))
print('RAG Quality:', store.get_rag_quality_metrics('test_tenant'))
"
```

#### B. Test Admin Rules with Regex

```bash
python -c "
from backend.api.storage.rules_store import RulesStore
import re

store = RulesStore()

# Add rule with regex pattern
store.add_rule(
    'test_tenant',
    'Block password queries',
    pattern='.*password.*|.*pwd.*',
    severity='high',
    description='Blocks password-related queries'
)

# Get detailed rules
rules = store.get_rules_detailed('test_tenant')
print('Rules:', rules)

# Test regex matching
pattern = rules[0]['pattern']
regex = re.compile(pattern, re.IGNORECASE)
test_text = 'What is my password?'
match = regex.search(test_text)
print(f'Match for \"{test_text}\": {match is not None}')
"
```

## API Endpoint Testing

### Using curl

#### 1. Test Analytics Endpoints

```bash
# Overview
curl -X GET "http://localhost:8000/analytics/overview?days=30" \
  -H "x-tenant-id: test_tenant"

# Tool Usage
curl -X GET "http://localhost:8000/analytics/tool-usage?days=30" \
  -H "x-tenant-id: test_tenant"

# RAG Quality
curl -X GET "http://localhost:8000/analytics/rag-quality?days=30" \
  -H "x-tenant-id: test_tenant"

# Red Flags
curl -X GET "http://localhost:8000/analytics/redflags?limit=50&days=30" \
  -H "x-tenant-id: test_tenant"
```

#### 2. Test Admin Endpoints

```bash
# Add rule with regex and severity
curl -X POST "http://localhost:8000/admin/rules" \
  -H "x-tenant-id: test_tenant" \
  -H "Content-Type: application/json" \
  -d '{
    "rule": "Block password queries",
    "pattern": ".*password.*",
    "severity": "high",
    "description": "Blocks password-related queries"
  }'

# Get detailed rules
curl -X GET "http://localhost:8000/admin/rules?detailed=true" \
  -H "x-tenant-id: test_tenant"

# Get violations
curl -X GET "http://localhost:8000/admin/violations?limit=50&days=30" \
  -H "x-tenant-id: test_tenant"

# Get tool logs
curl -X GET "http://localhost:8000/admin/tools/logs?tool_name=rag&days=7" \
  -H "x-tenant-id: test_tenant"
```

#### 3. Test Agent Endpoints

```bash
# Agent chat (normal)
curl -X POST "http://localhost:8000/agent/message" \
  -H "Content-Type: application/json" \
  -d '{
    "tenant_id": "test_tenant",
    "message": "What is the company policy?",
    "temperature": 0.0
  }'

# Agent debug
curl -X POST "http://localhost:8000/agent/debug" \
  -H "Content-Type: application/json" \
  -d '{
    "tenant_id": "test_tenant",
    "message": "What is the company policy?",
    "temperature": 0.0
  }'

# Agent plan
curl -X POST "http://localhost:8000/agent/plan" \
  -H "Content-Type: application/json" \
  -d '{
    "tenant_id": "test_tenant",
    "message": "What is the company policy?",
    "temperature": 0.0
  }'
```

### Using Python requests

Create a test script `test_api_manual.py`:

```python
import requests
import json

BASE_URL = "http://localhost:8000"
TENANT_ID = "test_tenant"

headers = {"x-tenant-id": TENANT_ID}

# Test analytics
print("Testing Analytics Endpoints...")
response = requests.get(f"{BASE_URL}/analytics/overview?days=30", headers=headers)
print(f"Overview: {response.status_code} - {json.dumps(response.json(), indent=2)}")

response = requests.get(f"{BASE_URL}/analytics/tool-usage?days=30", headers=headers)
print(f"Tool Usage: {response.status_code} - {json.dumps(response.json(), indent=2)}")

# Test admin rules
print("\nTesting Admin Rules...")
response = requests.post(
    f"{BASE_URL}/admin/rules",
    headers=headers,
    json={
        "rule": "Block password queries",
        "pattern": ".*password.*",
        "severity": "high"
    }
)
print(f"Add Rule: {response.status_code} - {json.dumps(response.json(), indent=2)}")

response = requests.get(
    f"{BASE_URL}/admin/rules?detailed=true",
    headers=headers
)
print(f"Get Rules: {response.status_code} - {json.dumps(response.json(), indent=2)}")

# Test agent endpoints
print("\nTesting Agent Endpoints...")
response = requests.post(
    f"{BASE_URL}/agent/plan",
    json={
        "tenant_id": TENANT_ID,
        "message": "What is the company policy?",
        "temperature": 0.0
    }
)
print(f"Agent Plan: {response.status_code} - {json.dumps(response.json(), indent=2)}")
```

Run it:
```bash
python test_api_manual.py
```

## End-to-End Testing Workflow

### Step 1: Start Backend Services

```bash
# Terminal 1: Start FastAPI backend
cd backend/api
uvicorn main:app --port 8000 --reload

# Terminal 2: Start unified MCP server (rag/web/admin tools)
python backend/mcp_server/server.py

# Optional: Start Ollama for LLM
ollama serve
```

### Step 2: Generate Test Data

Run the analytics and rules tests to populate the database:

```bash
pytest backend/tests/test_analytics_store.py -v
pytest backend/tests/test_enhanced_admin_rules.py -v
```

### Step 3: Test Agent Flow

1. **Add some admin rules:**
   ```bash
   curl -X POST "http://localhost:8000/admin/rules" \
     -H "x-tenant-id: test_tenant" \
     -H "Content-Type: application/json" \
     -d '{"rule": "Block password queries", "pattern": ".*password.*", "severity": "high"}'
   ```

2. **Send a query that triggers red-flag:**
   ```bash
   curl -X POST "http://localhost:8000/agent/message" \
     -H "Content-Type: application/json" \
     -d '{"tenant_id": "test_tenant", "message": "What is my password?"}'
   ```

3. **Check violations were logged:**
   ```bash
   curl -X GET "http://localhost:8000/admin/violations" \
     -H "x-tenant-id: test_tenant"
   ```

4. **Send normal queries and check analytics:**
   ```bash
   curl -X POST "http://localhost:8000/agent/message" \
     -H "Content-Type: application/json" \
     -d '{"tenant_id": "test_tenant", "message": "What is the company policy?"}'

   curl -X GET "http://localhost:8000/analytics/overview" \
     -H "x-tenant-id: test_tenant"
   ```

5. **Use debug endpoint to see reasoning:**
   ```bash
   curl -X POST "http://localhost:8000/agent/debug" \
     -H "Content-Type: application/json" \
     -d '{"tenant_id": "test_tenant", "message": "What is the company policy?"}'
   ```

### Step 4: Verify Database

Check that data is being stored:

```bash
# SQLite databases are in data/ directory
sqlite3 data/analytics.db "SELECT * FROM tool_usage_events LIMIT 10;"
sqlite3 data/analytics.db "SELECT * FROM redflag_violations LIMIT 10;"
sqlite3 data/admin_rules.db "SELECT * FROM admin_rules;"
```

## Testing Checklist

### Analytics Store
- [ ] Tool usage logging works
- [ ] Red-flag violations are logged
- [ ] RAG search events are logged with quality metrics
- [ ] Agent query events are logged
- [ ] Stats can be filtered by time
- [ ] Multiple tenants are isolated

### Admin Rules
- [ ] Rules can be added with regex patterns
- [ ] Severity levels work (low/medium/high/critical)
- [ ] Rules without pattern use rule text
- [ ] Disabled rules are not returned
- [ ] Multiple tenants are isolated
- [ ] Regex patterns actually match correctly

### API Endpoints
- [ ] `/analytics/overview` returns correct data
- [ ] `/analytics/tool-usage` returns stats
- [ ] `/analytics/rag-quality` returns metrics
- [ ] `/admin/rules` accepts regex/severity
- [ ] `/admin/violations` returns violations
- [ ] `/admin/tools/logs` returns tool usage
- [ ] `/agent/debug` returns reasoning trace
- [ ] `/agent/plan` returns tool selection plan
- [ ] Missing tenant_id returns 400

### Integration
- [ ] Agent orchestrator logs to analytics
- [ ] Red-flag detector logs violations
- [ ] Tool calls are tracked
- [ ] Multi-step workflows are logged
- [ ] Errors are logged correctly

## Common Issues

### Database Not Found
- Ensure `data/` directory exists
- Analytics store will create it automatically

### Tests Fail Due to Missing Services
- Some tests require MCP servers or LLM to be running
- Mock these services or skip tests if services unavailable
- Unit tests should work without external services

### Import Errors
- Ensure you're running from project root
- Check that `backend/` is in Python path
- Install all dependencies: `pip install -r requirements.txt`

## Performance Testing

For large-scale testing:

```python
# Load test analytics store
from backend.api.storage.analytics_store import AnalyticsStore
import time

store = AnalyticsStore()
tenant_id = "load_test_tenant"

start = time.time()
for i in range(1000):
    store.log_tool_usage(tenant_id, "rag", latency_ms=100 + i % 50)

elapsed = time.time() - start
print(f"Logged 1000 events in {elapsed:.2f}s ({1000/elapsed:.0f} events/sec)")

# Query performance
start = time.time()
stats = store.get_tool_usage_stats(tenant_id)
elapsed = time.time() - start
print(f"Query took {elapsed*1000:.2f}ms")
```

## Next Steps

1. **Add more test cases** for edge cases
2. **Set up CI/CD** to run tests automatically
3. **Add performance benchmarks** for analytics queries
4. **Create integration test suite** that spins up all services
5. **Add E2E tests** using Playwright or Selenium for frontend

For questions or issues, check the test files in `backend/tests/` or refer to the main README.md.
backend/api/mcp_clients/web_client.py
CHANGED

@@ -4,51 +4,63 @@ from dotenv import load_dotenv
 
 load_dotenv()
 
+
 class WebClient:
     """
     Communicates with the Google Custom Search API.
     """
 
-    def __init__(self):
-        self.api_key = os.getenv("GOOGLE_SEARCH_API_KEY")
-        self.cx_id = os.getenv("GOOGLE_SEARCH_CX_ID")
+    def __init__(self) -> None:
         self.search_endpoint = "https://www.googleapis.com/customsearch/v1"
 
-    async def search(self, query: str):
+    async def search(self, query: str, max_results: int = 5, region: str = "us"):
         """
         Sends the query to Google Custom Search and returns search results.
         """
-        …
+        max_results_value = self._sanitize_max_results(max_results)
+
+        api_key = os.getenv("GOOGLE_SEARCH_API_KEY")
+        cx_id = os.getenv("GOOGLE_SEARCH_CX_ID")
+        if not api_key or not cx_id:
+            raise RuntimeError("Google Custom Search credentials not configured.")
+
+        params = {
+            "key": api_key,
+            "cx": cx_id,
+            "q": query,
+            "num": max_results_value,
+            "gl": self._sanitize_region(region),
+        }
 
         try:
-            async with httpx.AsyncClient() as client:
-                response = await client.get(
-                    …
+            async with httpx.AsyncClient(timeout=10) as client:
+                response = await client.get(self.search_endpoint, params=params)
+                response.raise_for_status()
+        except Exception as exc:
+            raise RuntimeError(f"Google Custom Search request failed: {exc}") from exc
+
+        data = response.json()
+        items = data.get("items", [])
+        return [
+            {
+                "title": item.get("title"),
+                "link": item.get("link"),
+                "snippet": item.get("snippet"),
+            }
+            for item in items
+        ]
+
+    @staticmethod
+    def _sanitize_max_results(value: int) -> int:
+        try:
+            return max(1, min(int(value), 10))
+        except (TypeError, ValueError):
+            raise RuntimeError("max_results must be an integer between 1 and 10.")
+
+    @staticmethod
+    def _sanitize_region(region: str) -> str:
+        region_value = (region or "us").lower().split("-", 1)[0]
+        if len(region_value) != 2:
+            return "us"
+        return region_value
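The two sanitizers added to `WebClient` are small enough to check in isolation. A standalone re-implementation (mirroring the `@staticmethod` bodies in the diff above, pulled out as plain functions for illustration) behaves like this:

```python
# Standalone copies of WebClient._sanitize_max_results / _sanitize_region
# from the diff, for illustration only.

def sanitize_max_results(value):
    # Clamp to Google Custom Search's allowed 1..10 result window.
    try:
        return max(1, min(int(value), 10))
    except (TypeError, ValueError):
        raise RuntimeError("max_results must be an integer between 1 and 10.")

def sanitize_region(region):
    # Normalize locale-style codes ("en-US" -> "en"); fall back to "us"
    # for None or anything that isn't a two-letter code.
    region_value = (region or "us").lower().split("-", 1)[0]
    if len(region_value) != 2:
        return "us"
    return region_value

print(sanitize_max_results(25))   # 10
print(sanitize_region("en-US"))   # en
print(sanitize_region(None))      # us
```

Reading the env vars inside `search()` rather than `__init__` means credentials set after import (e.g. by `load_dotenv()` elsewhere) are still picked up.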
backend/api/routes/admin.py
CHANGED

@@ -1,3 +1,6 @@
+import logging
+import os
+
 from fastapi import APIRouter, Header, HTTPException, Query, UploadFile, File
 from pydantic import BaseModel
 from typing import List, Optional, Dict, Any
@@ -9,22 +12,74 @@ from backend.api.services.rule_enhancer import RuleEnhancer
 from backend.api.services.document_ingestion import extract_text_from_file_bytes
 
 router = APIRouter()
+logger = logging.getLogger(__name__)
+from dotenv import load_dotenv
+
+load_dotenv()
 
 # Initialize stores (table creation disabled by default to avoid blocking startup)
 rules_store = RulesStore(auto_create_table=False)
-analytics_store = AnalyticsStore()
 rule_enhancer = RuleEnhancer()
 
-…
-    print("✅ AnalyticsStore: Using Supabase backend")
-else:
-    print("⚠️ AnalyticsStore: Using SQLite backend (set SUPABASE_URL + SUPABASE_SERVICE_KEY to use Supabase)")
+_analytics_store: Optional[AnalyticsStore] = None
+_analytics_disabled = os.getenv("ANALYTICS_DISABLED", "").lower() in {"1", "true", "yes"}
+_analytics_failed = False
+
+
+def _get_analytics_store() -> Optional[AnalyticsStore]:
+    global _analytics_store, _analytics_failed
+
+    if _analytics_disabled or _analytics_failed:
+        return None
+
+    if _analytics_store is not None:
+        return _analytics_store
+
+    try:
+        _analytics_store = AnalyticsStore()
+    except RuntimeError as exc:
+        logger.warning("Admin analytics disabled: %s", exc)
+        _analytics_failed = True
+        _analytics_store = None
+    except Exception as exc:  # pragma: no cover - unexpected failures
+        logger.debug("Admin analytics unexpected init failure: %s", exc)
+        _analytics_failed = True
+        _analytics_store = None
+
+    return _analytics_store
+
+
+def _get_analytics_or_503() -> AnalyticsStore:
+    store = _get_analytics_store()
+    if not store:
+        raise HTTPException(
+            status_code=503,
+            detail="Analytics is disabled or not configured (Supabase credentials missing).",
+        )
+    return store
+
+
+def _log_backend_status_once() -> None:
+    if getattr(_log_backend_status_once, "_already_logged", False):
+        return
+
+    if rules_store.use_supabase:
+        print("✅ RulesStore: Using Supabase backend")
+    else:
+        print("⚠️ RulesStore: Using SQLite backend (set SUPABASE_URL + SUPABASE_SERVICE_KEY to use Supabase)")
+
+    analytics = _get_analytics_store()
+    if analytics is None:
+        print("⚠️ AnalyticsStore: Disabled (Supabase not configured)")
+    elif analytics.use_supabase:
+        print("✅ AnalyticsStore: Using Supabase backend")
+    else:
+        print("⚠️ AnalyticsStore: Using fallback backend")
+
+    _log_backend_status_once._already_logged = True  # type: ignore[attr-defined]
+
+
+_log_backend_status_once()
 
 
 class RulePayload(BaseModel):
@@ -319,7 +374,8 @@ async def get_violations(
         raise HTTPException(status_code=400, detail="Missing tenant ID")
 
     since_timestamp = int((datetime.now() - timedelta(days=days)).timestamp()) if days else None
-    …
+    analytics = _get_analytics_or_503()
+    violations = analytics.get_redflag_violations(x_tenant_id, limit, since_timestamp)
 
     # Convert timestamps to ISO format
     for violation in violations:
@@ -351,7 +407,8 @@ async def get_tool_logs(
 
     # For now, return aggregated stats. Full log querying would require extending AnalyticsStore
     since_timestamp = int((datetime.now() - timedelta(days=days)).timestamp()) if days else None
-    …
+    analytics = _get_analytics_or_503()
+    tool_stats = analytics.get_tool_usage_stats(x_tenant_id, since_timestamp)
 
     # Filter by tool if specified
     if tool_name:
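The `_get_analytics_store` helper above is a "fail-once" lazy initializer: the store is built on first use, and if construction raises, the failure is cached so later requests return `None` immediately instead of retrying a broken backend on every call. A generic sketch of the same pattern (with a stand-in `Store` class instead of `AnalyticsStore`):

```python
# Generic sketch of the fail-once lazy initializer used for the analytics
# store in admin.py; "Store" is a hypothetical stand-in for AnalyticsStore.
from typing import Optional

class Store:
    def __init__(self, ok: bool = True):
        if not ok:
            raise RuntimeError("credentials missing")

_store: Optional[Store] = None
_failed = False

def get_store(ok: bool = True) -> Optional[Store]:
    """Return a cached Store, or None forever after the first init failure."""
    global _store, _failed
    if _failed:
        return None                 # failure is sticky: no per-request retries
    if _store is not None:
        return _store               # already built: reuse the singleton
    try:
        _store = Store(ok)
    except RuntimeError:
        _failed = True
        _store = None
    return _store

print(get_store(ok=False))  # None (init failed, failure cached)
print(get_store(ok=True))   # still None: the cached failure wins
```

`_get_analytics_or_503()` then turns that `None` into an explicit HTTP 503, so analytics endpoints degrade with a clear error instead of crashing at import time.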
backend/api/routes/web.py
CHANGED

@@ -1,4 +1,4 @@
-from fastapi import APIRouter, Header, HTTPException
+from fastapi import APIRouter, Header, HTTPException, Query
 from api.mcp_clients.web_client import WebClient
 
 router = APIRouter()
@@ -8,21 +8,26 @@ web_client = WebClient()
 @router.post("/web/search")
 async def web_search(
     query: str,
-    …
+    max_results: int = Query(5, ge=1, le=10),
+    region: str = Query("us"),
+    x_tenant_id: str = Header(None),
 ):
     """
-    Perform a live …
+    Perform a live Google Custom Search query for the tenant.
     """
 
     if not x_tenant_id:
         raise HTTPException(status_code=400, detail="Missing tenant ID")
 
     try:
-        results = await web_client.search(query)
+        results = await web_client.search(query, max_results=max_results, region=region)
         return {
             "tenant_id": x_tenant_id,
             "query": query,
-            "results": results
+            "results": results,
+            "metadata": {"max_results": max_results, "region": region},
         }
-    except …
-        raise HTTPException(status_code=500, detail=str(…
+    except RuntimeError as exc:
+        raise HTTPException(status_code=500, detail=str(exc)) from exc
+    except Exception as exc:
+        raise HTTPException(status_code=500, detail="Web search failed") from exc
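From the client side, a call to the updated endpoint needs the query parameters and the `x-tenant-id` header shown above. A minimal sketch of assembling that request (the helper name and `base_url` default are illustrative, not part of the repo):

```python
# Hypothetical client-side helper for POST /web/search, matching the
# parameter names and header used in the route above.

def build_web_search_request(query, tenant_id, max_results=5, region="us",
                             base_url="http://localhost:8000"):
    # max_results is validated server-side via Query(5, ge=1, le=10),
    # so a well-behaved client should stay within 1..10.
    return {
        "url": f"{base_url}/web/search",
        "params": {"query": query, "max_results": max_results, "region": region},
        "headers": {"x-tenant-id": tenant_id},
    }

req = build_web_search_request("fastapi release notes", "test_tenant", max_results=3)
print(req["params"]["max_results"])  # 3
```

Omitting the header yields the route's 400 "Missing tenant ID"; a `RuntimeError` bubbling up from `WebClient` (missing credentials, HTTP failure) surfaces as a 500 with the error detail.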
backend/api/services/agent_orchestrator.py
CHANGED

@@ -13,6 +13,7 @@ import asyncio
 import json
 import os
 from typing import List, Dict, Any, Optional
+…
 
 from ..models.agent import AgentRequest, AgentDecision, AgentResponse
 from ..models.redflag import RedFlagMatch
@@ -26,6 +27,11 @@ from ..storage.analytics_store import AnalyticsStore
 from .result_merger import merge_parallel_results, format_merged_context_for_prompt
 import time
 
+…
 
 class AgentOrchestrator:
 
@@ -43,14 +49,84 @@ class AgentOrchestrator:
         self.intent = IntentClassifier(llm_client=self.llm)
         self.selector = ToolSelector(llm_client=self.llm)
         self.tool_scorer = ToolScoringService()
-…
+…
             print("✅ AgentOrchestrator Analytics: Using Supabase backend")
         else:
-            print("⚠️ AgentOrchestrator Analytics: Using …
 
     async def handle(self, req: AgentRequest) -> AgentResponse:
         start_time = time.time()
@@ -73,7 +149,7 @@ class AgentOrchestrator:
         if matches:
             # Log all rule matches
             for match in matches:
-                self.…
+                …
                     tenant_id=req.tenant_id,
                     rule_id=match.rule_id,
                     rule_pattern=match.pattern,
@@ -126,7 +202,7 @@ class AgentOrchestrator:
             })
 
         total_latency_ms = int((time.time() - start_time) * 1000)
-        self.…
+        …
             tenant_id=req.tenant_id,
             message_preview=req.message[:200],
             intent="greeting",
@@ -202,7 +278,7 @@ Response:"""
 
         # Log LLM usage for red flag response
         estimated_tokens = len(llm_response) // 4 + len(llm_prompt) // 4
-        self.…
+        …
            tenant_id=req.tenant_id,
            tool_name="llm",
            latency_ms=total_latency_ms,
@@ -211,7 +287,7 @@ Response:"""
            user_id=req.user_id
        )
 
-        self.…
+        …
            tenant_id=req.tenant_id,
            message_preview=req.message[:200],
            intent="admin",
@@ -261,7 +337,7 @@ Response:"""
                 if scores:
                     avg_score = sum(scores) / len(scores)
                     top_score = max(scores)
-                self.…
+                …
                     tenant_id=req.tenant_id,
                     query=req.message[:500],
                     hits_count=hits_count,
@@ -270,7 +346,7 @@ Response:"""
                     latency_ms=rag_latency_ms
                 )
                 # Log tool usage
-                self.…
+                …
                     tenant_id=req.tenant_id,
                     tool_name="rag",
                     latency_ms=rag_latency_ms,
@@ -286,7 +362,7 @@ Response:"""
             except Exception as pref_err:
                 # If RAG fails, continue without it
                 rag_latency_ms = 0  # 0 for failed
-                self.…
+                …
                     tenant_id=req.tenant_id,
                     tool_name="rag",
                     latency_ms=rag_latency_ms,
@@ -385,7 +461,7 @@ Response:"""
             estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
             total_tokens += estimated_tokens
 
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 tool_name="llm",
                 latency_ms=llm_latency_ms,
@@ -402,7 +478,7 @@ Response:"""
             })
 
             total_latency_ms = int((time.time() - start_time) * 1000)
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 message_preview=req.message[:200],
                 intent=intent,
@@ -445,7 +521,7 @@ Response:"""
             estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
             total_tokens += estimated_tokens
 
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 tool_name="llm",
                 latency_ms=llm_latency_ms,
@@ -462,7 +538,7 @@ Response:"""
             })
 
             total_latency_ms = int((time.time() - start_time) * 1000)
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 message_preview=req.message[:200],
                 intent=intent,
@@ -481,7 +557,7 @@ Response:"""
             admin_latency_ms = int((time.time() - admin_start) * 1000)
             tools_used.append("admin")
 
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 tool_name="admin",
                 latency_ms=admin_latency_ms,
@@ -498,7 +574,7 @@ Response:"""
             })
 
             total_latency_ms = int((time.time() - start_time) * 1000)
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 message_preview=req.message[:200],
                 intent=intent,
@@ -520,7 +596,7 @@ Response:"""
             estimated_tokens = len(llm_out) // 4 + len(req.message) // 4
             total_tokens += estimated_tokens
 
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 tool_name="llm",
                 latency_ms=llm_latency_ms,
@@ -537,7 +613,7 @@ Response:"""
             })
 
             total_latency_ms = int((time.time() - start_time) * 1000)
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 message_preview=req.message[:200],
                 intent=intent,
@@ -586,7 +662,7 @@ Response:"""
             tools_used = ["llm"]
             estimated_tokens = len(llm_out) // 4 + len(req.message) // 4
 
-            self.…
+            …
                 tenant_id=req.tenant_id,
                 tool_name="llm",
                 latency_ms=llm_latency_ms,
@@ -610,7 +686,7 @@ Response:"""
         else:
             llm_out = f"I apologize, but I'm unable to process your request right now. The AI service is unavailable: {error_msg}"
|
| 612 |
|
| 613 |
-
self.
|
| 614 |
tenant_id=req.tenant_id,
|
| 615 |
tool_name="llm",
|
| 616 |
success=False,
|
|
@@ -624,7 +700,7 @@ Response:"""
|
|
| 624 |
})
|
| 625 |
|
| 626 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 627 |
-
self.
|
| 628 |
tenant_id=req.tenant_id,
|
| 629 |
message_preview=req.message[:200],
|
| 630 |
intent=intent,
|
|
@@ -741,7 +817,7 @@ Response:"""
|
|
| 741 |
"error": str(rag_result),
|
| 742 |
"latency_ms": parallel_latency_ms
|
| 743 |
})
|
| 744 |
-
self.
|
| 745 |
tenant_id=req.tenant_id,
|
| 746 |
tool_name="rag",
|
| 747 |
latency_ms=parallel_latency_ms,
|
|
@@ -761,7 +837,7 @@ Response:"""
|
|
| 761 |
if scores:
|
| 762 |
avg_score = sum(scores) / len(scores)
|
| 763 |
top_score = max(scores)
|
| 764 |
-
self.
|
| 765 |
tenant_id=req.tenant_id,
|
| 766 |
query=req.message[:500],
|
| 767 |
hits_count=hits_count,
|
|
@@ -769,7 +845,7 @@ Response:"""
|
|
| 769 |
top_score=top_score,
|
| 770 |
latency_ms=parallel_latency_ms
|
| 771 |
)
|
| 772 |
-
self.
|
| 773 |
tenant_id=req.tenant_id,
|
| 774 |
tool_name="rag",
|
| 775 |
latency_ms=parallel_latency_ms,
|
|
@@ -797,7 +873,7 @@ Response:"""
|
|
| 797 |
"error": str(web_result),
|
| 798 |
"latency_ms": parallel_latency_ms
|
| 799 |
})
|
| 800 |
-
self.
|
| 801 |
tenant_id=req.tenant_id,
|
| 802 |
tool_name="web",
|
| 803 |
latency_ms=parallel_latency_ms,
|
|
@@ -810,7 +886,7 @@ Response:"""
|
|
| 810 |
tools_used.append("web")
|
| 811 |
tool_traces.append({"tool": "web", "response": web_result, "note": "parallel"})
|
| 812 |
hits_count = len(self._extract_hits(web_result))
|
| 813 |
-
self.
|
| 814 |
tenant_id=req.tenant_id,
|
| 815 |
tool_name="web",
|
| 816 |
latency_ms=parallel_latency_ms,
|
|
@@ -978,7 +1054,7 @@ Response:"""
|
|
| 978 |
estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
|
| 979 |
total_tokens += estimated_tokens
|
| 980 |
|
| 981 |
-
self.
|
| 982 |
tenant_id=req.tenant_id,
|
| 983 |
tool_name="llm",
|
| 984 |
latency_ms=llm_latency_ms,
|
|
@@ -988,7 +1064,7 @@ Response:"""
|
|
| 988 |
)
|
| 989 |
|
| 990 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 991 |
-
self.
|
| 992 |
tenant_id=req.tenant_id,
|
| 993 |
message_preview=req.message[:200],
|
| 994 |
intent="multi_step",
|
|
@@ -1103,7 +1179,7 @@ Response:"""
|
|
| 1103 |
"status": "recovered"
|
| 1104 |
})
|
| 1105 |
if tenant_id:
|
| 1106 |
-
self.
|
| 1107 |
tenant_id=tenant_id,
|
| 1108 |
tool_name=f"{tool_name}_retry_{attempt+1}",
|
| 1109 |
latency_ms=0,
|
|
@@ -1123,7 +1199,7 @@ Response:"""
|
|
| 1123 |
|
| 1124 |
# Log failed attempt
|
| 1125 |
if tenant_id:
|
| 1126 |
-
self.
|
| 1127 |
tenant_id=tenant_id,
|
| 1128 |
tool_name=tool_name,
|
| 1129 |
latency_ms=0,
|
|
@@ -1212,7 +1288,7 @@ Response:"""
|
|
| 1212 |
avg_score = sum(scores) / len(scores)
|
| 1213 |
|
| 1214 |
# Log retry
|
| 1215 |
-
self.
|
| 1216 |
tenant_id=tenant_id,
|
| 1217 |
tool_name="rag_retry_low_threshold",
|
| 1218 |
latency_ms=retry_latency_ms,
|
|
@@ -1244,7 +1320,7 @@ Response:"""
|
|
| 1244 |
avg_score = sum(scores) / len(scores)
|
| 1245 |
|
| 1246 |
# Log retry
|
| 1247 |
-
self.
|
| 1248 |
tenant_id=tenant_id,
|
| 1249 |
tool_name="rag_retry_expanded_query",
|
| 1250 |
latency_ms=retry_latency_ms,
|
|
@@ -1262,7 +1338,7 @@ Response:"""
|
|
| 1262 |
|
| 1263 |
# Log final RAG search
|
| 1264 |
if hits:
|
| 1265 |
-
self.
|
| 1266 |
tenant_id=tenant_id,
|
| 1267 |
query=query[:500],
|
| 1268 |
hits_count=len(hits),
|
|
@@ -1326,7 +1402,7 @@ Response:"""
|
|
| 1326 |
hits = self._extract_hits(result)
|
| 1327 |
|
| 1328 |
# Log retry
|
| 1329 |
-
self.
|
| 1330 |
tenant_id=tenant_id,
|
| 1331 |
tool_name=f"web_retry_rewrite_{i+1}",
|
| 1332 |
latency_ms=retry_latency_ms,
|
|
@@ -1344,7 +1420,7 @@ Response:"""
|
|
| 1344 |
break
|
| 1345 |
|
| 1346 |
# Log final web search
|
| 1347 |
-
self.
|
| 1348 |
tenant_id=tenant_id,
|
| 1349 |
tool_name="web",
|
| 1350 |
latency_ms=web_latency_ms,
|
|
|
|
| 13 |
import json
|
| 14 |
import os
|
| 15 |
from typing import List, Dict, Any, Optional
|
| 16 |
+
import logging
|
| 17 |
|
| 18 |
from ..models.agent import AgentRequest, AgentDecision, AgentResponse
|
| 19 |
from ..models.redflag import RedFlagMatch
|
|
|
|
| 27 |
from .result_merger import merge_parallel_results, format_merged_context_for_prompt
|
| 28 |
import time
|
| 29 |
|
| 30 |
+
logger = logging.getLogger(__name__)
|
| 31 |
+
|
| 32 |
+
from dotenv import load_dotenv
|
| 33 |
+
|
| 34 |
+
load_dotenv()
|
| 35 |
|
| 36 |
class AgentOrchestrator:
|
| 37 |
|
|
|
|
| 49 |
self.intent = IntentClassifier(llm_client=self.llm)
|
| 50 |
self.selector = ToolSelector(llm_client=self.llm)
|
| 51 |
self.tool_scorer = ToolScoringService()
|
| 52 |
+
|
| 53 |
+
self._analytics: Optional[AnalyticsStore] = None
|
| 54 |
+
self._analytics_disabled = os.getenv("ANALYTICS_DISABLED", "").lower() in {"1", "true", "yes"}
|
| 55 |
+
self._analytics_failed = False
|
| 56 |
+
self._log_analytics_backend_once()
|
| 57 |
+
|
| 58 |
+
def _log_analytics_backend_once(self) -> None:
|
| 59 |
+
if getattr(AgentOrchestrator, "_analytics_backend_logged", False):
|
| 60 |
+
return
|
| 61 |
+
|
| 62 |
+
if self._analytics_disabled:
|
| 63 |
+
print("⚠️ AgentOrchestrator Analytics: Disabled via ANALYTICS_DISABLED")
|
| 64 |
+
else:
|
| 65 |
+
store = self._get_analytics()
|
| 66 |
+
if store is None:
|
| 67 |
+
print("⚠️ AgentOrchestrator Analytics: Disabled (Supabase not configured)")
|
| 68 |
+
elif store.use_supabase:
|
| 69 |
print("✅ AgentOrchestrator Analytics: Using Supabase backend")
|
| 70 |
else:
|
| 71 |
+
print("⚠️ AgentOrchestrator Analytics: Using fallback backend")
|
| 72 |
+
|
| 73 |
+
AgentOrchestrator._analytics_backend_logged = True
|
| 74 |
+
|
| 75 |
+
def _get_analytics(self) -> Optional[AnalyticsStore]:
|
| 76 |
+
if self._analytics_disabled or self._analytics_failed:
|
| 77 |
+
return None
|
| 78 |
+
|
| 79 |
+
if self._analytics is not None:
|
| 80 |
+
return self._analytics
|
| 81 |
+
|
| 82 |
+
try:
|
| 83 |
+
self._analytics = AnalyticsStore()
|
| 84 |
+
except RuntimeError as exc:
|
| 85 |
+
logger.warning("AgentOrchestrator analytics disabled: %s", exc)
|
| 86 |
+
self._analytics_failed = True
|
| 87 |
+
self._analytics = None
|
| 88 |
+
except Exception as exc: # pragma: no cover - unexpected initialization failures
|
| 89 |
+
logger.debug("AgentOrchestrator analytics unexpected init failure: %s", exc)
|
| 90 |
+
self._analytics_failed = True
|
| 91 |
+
self._analytics = None
|
| 92 |
+
|
| 93 |
+
return self._analytics
|
| 94 |
+
|
| 95 |
+
def _analytics_log_tool_usage(self, **kwargs: Any) -> None:
|
| 96 |
+
analytics = self._get_analytics()
|
| 97 |
+
if not analytics:
|
| 98 |
+
return
|
| 99 |
+
try:
|
| 100 |
+
analytics.log_tool_usage(**kwargs)
|
| 101 |
+
except Exception as exc: # pragma: no cover - analytics failures should not break flow
|
| 102 |
+
logger.debug("AgentOrchestrator tool analytics failed: %s", exc)
|
| 103 |
+
|
| 104 |
+
def _analytics_log_agent_query(self, **kwargs: Any) -> None:
|
| 105 |
+
analytics = self._get_analytics()
|
| 106 |
+
if not analytics:
|
| 107 |
+
return
|
| 108 |
+
try:
|
| 109 |
+
analytics.log_agent_query(**kwargs)
|
| 110 |
+
except Exception as exc: # pragma: no cover
|
| 111 |
+
logger.debug("AgentOrchestrator agent query analytics failed: %s", exc)
|
| 112 |
+
|
| 113 |
+
def _analytics_log_rag_search(self, **kwargs: Any) -> None:
|
| 114 |
+
analytics = self._get_analytics()
|
| 115 |
+
if not analytics:
|
| 116 |
+
return
|
| 117 |
+
try:
|
| 118 |
+
analytics.log_rag_search(**kwargs)
|
| 119 |
+
except Exception as exc: # pragma: no cover
|
| 120 |
+
logger.debug("AgentOrchestrator RAG analytics failed: %s", exc)
|
| 121 |
+
|
| 122 |
+
def _analytics_log_redflag_violation(self, **kwargs: Any) -> None:
|
| 123 |
+
analytics = self._get_analytics()
|
| 124 |
+
if not analytics:
|
| 125 |
+
return
|
| 126 |
+
try:
|
| 127 |
+
analytics.log_redflag_violation(**kwargs)
|
| 128 |
+
except Exception as exc: # pragma: no cover
|
| 129 |
+
logger.debug("AgentOrchestrator redflag analytics failed: %s", exc)
|
| 130 |
|
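The `_analytics_log_*` helpers introduced above all follow the same fail-open shape: resolve the store lazily, cache a failure so initialization is attempted only once, and never let an analytics error reach the request path. A minimal standalone sketch of that pattern (the store class and method names here are illustrative, not the project's real `AnalyticsStore` API):

```python
import logging
from typing import Any, Optional

logger = logging.getLogger(__name__)


class FlakyStore:
    """Stand-in for an analytics backend that may fail at init time."""

    def __init__(self, fail_init: bool = False) -> None:
        if fail_init:
            raise RuntimeError("missing credentials")
        self.events: list = []

    def log_event(self, **kwargs: Any) -> None:
        self.events.append(kwargs)


class Orchestrator:
    def __init__(self, fail_init: bool = False) -> None:
        self._store: Optional[FlakyStore] = None
        self._failed = False
        self._fail_init = fail_init

    def _get_store(self) -> Optional[FlakyStore]:
        # Cache both the instance and the failure so we only try once.
        if self._failed:
            return None
        if self._store is None:
            try:
                self._store = FlakyStore(fail_init=self._fail_init)
            except RuntimeError as exc:
                logger.warning("analytics disabled: %s", exc)
                self._failed = True
                return None
        return self._store

    def log_event(self, **kwargs: Any) -> None:
        # Fail-open: an analytics problem must never break the request flow.
        store = self._get_store()
        if not store:
            return
        try:
            store.log_event(**kwargs)
        except Exception as exc:
            logger.debug("analytics write failed: %s", exc)


ok = Orchestrator()
ok.log_event(tool_name="rag", latency_ms=12)
broken = Orchestrator(fail_init=True)
broken.log_event(tool_name="rag", latency_ms=12)  # silently skipped
```

The key design choice is that callers invoke one wrapper method instead of touching the store directly, so every call site inherits the same guard.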
| 131 |
async def handle(self, req: AgentRequest) -> AgentResponse:
|
| 132 |
start_time = time.time()
|
|
|
|
| 149 |
if matches:
|
| 150 |
# Log all rule matches
|
| 151 |
for match in matches:
|
| 152 |
+
self._analytics_log_redflag_violation(
|
| 153 |
tenant_id=req.tenant_id,
|
| 154 |
rule_id=match.rule_id,
|
| 155 |
rule_pattern=match.pattern,
|
|
|
|
| 202 |
})
|
| 203 |
|
| 204 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 205 |
+
self._analytics_log_agent_query(
|
| 206 |
tenant_id=req.tenant_id,
|
| 207 |
message_preview=req.message[:200],
|
| 208 |
intent="greeting",
|
|
|
|
| 278 |
|
| 279 |
# Log LLM usage for red flag response
|
| 280 |
estimated_tokens = len(llm_response) // 4 + len(llm_prompt) // 4
|
| 281 |
+
self._analytics_log_tool_usage(
|
| 282 |
tenant_id=req.tenant_id,
|
| 283 |
tool_name="llm",
|
| 284 |
latency_ms=total_latency_ms,
|
|
|
|
| 287 |
user_id=req.user_id
|
| 288 |
)
|
| 289 |
|
| 290 |
+
self._analytics_log_agent_query(
|
| 291 |
tenant_id=req.tenant_id,
|
| 292 |
message_preview=req.message[:200],
|
| 293 |
intent="admin",
|
|
|
|
| 337 |
if scores:
|
| 338 |
avg_score = sum(scores) / len(scores)
|
| 339 |
top_score = max(scores)
|
| 340 |
+
self._analytics_log_rag_search(
|
| 341 |
tenant_id=req.tenant_id,
|
| 342 |
query=req.message[:500],
|
| 343 |
hits_count=hits_count,
|
|
|
|
| 346 |
latency_ms=rag_latency_ms
|
| 347 |
)
|
| 348 |
# Log tool usage
|
| 349 |
+
self._analytics_log_tool_usage(
|
| 350 |
tenant_id=req.tenant_id,
|
| 351 |
tool_name="rag",
|
| 352 |
latency_ms=rag_latency_ms,
|
|
|
|
| 362 |
except Exception as pref_err:
|
| 363 |
# If RAG fails, continue without it
|
| 364 |
rag_latency_ms = 0 # 0 for failed
|
| 365 |
+
self._analytics_log_tool_usage(
|
| 366 |
tenant_id=req.tenant_id,
|
| 367 |
tool_name="rag",
|
| 368 |
latency_ms=rag_latency_ms,
|
|
|
|
| 461 |
estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
|
| 462 |
total_tokens += estimated_tokens
|
| 463 |
|
| 464 |
+
self._analytics_log_tool_usage(
|
| 465 |
tenant_id=req.tenant_id,
|
| 466 |
tool_name="llm",
|
| 467 |
latency_ms=llm_latency_ms,
|
|
|
|
| 478 |
})
|
| 479 |
|
| 480 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 481 |
+
self._analytics_log_agent_query(
|
| 482 |
tenant_id=req.tenant_id,
|
| 483 |
message_preview=req.message[:200],
|
| 484 |
intent=intent,
|
|
|
|
| 521 |
estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
|
| 522 |
total_tokens += estimated_tokens
|
| 523 |
|
| 524 |
+
self._analytics_log_tool_usage(
|
| 525 |
tenant_id=req.tenant_id,
|
| 526 |
tool_name="llm",
|
| 527 |
latency_ms=llm_latency_ms,
|
|
|
|
| 538 |
})
|
| 539 |
|
| 540 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 541 |
+
self._analytics_log_agent_query(
|
| 542 |
tenant_id=req.tenant_id,
|
| 543 |
message_preview=req.message[:200],
|
| 544 |
intent=intent,
|
|
|
|
| 557 |
admin_latency_ms = int((time.time() - admin_start) * 1000)
|
| 558 |
tools_used.append("admin")
|
| 559 |
|
| 560 |
+
self._analytics_log_tool_usage(
|
| 561 |
tenant_id=req.tenant_id,
|
| 562 |
tool_name="admin",
|
| 563 |
latency_ms=admin_latency_ms,
|
|
|
|
| 574 |
})
|
| 575 |
|
| 576 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 577 |
+
self._analytics_log_agent_query(
|
| 578 |
tenant_id=req.tenant_id,
|
| 579 |
message_preview=req.message[:200],
|
| 580 |
intent=intent,
|
|
|
|
| 596 |
estimated_tokens = len(llm_out) // 4 + len(req.message) // 4
|
| 597 |
total_tokens += estimated_tokens
|
| 598 |
|
| 599 |
+
self._analytics_log_tool_usage(
|
| 600 |
tenant_id=req.tenant_id,
|
| 601 |
tool_name="llm",
|
| 602 |
latency_ms=llm_latency_ms,
|
|
|
|
| 613 |
})
|
| 614 |
|
| 615 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 616 |
+
self._analytics_log_agent_query(
|
| 617 |
tenant_id=req.tenant_id,
|
| 618 |
message_preview=req.message[:200],
|
| 619 |
intent=intent,
|
|
|
|
| 662 |
tools_used = ["llm"]
|
| 663 |
estimated_tokens = len(llm_out) // 4 + len(req.message) // 4
|
| 664 |
|
| 665 |
+
self._analytics_log_tool_usage(
|
| 666 |
tenant_id=req.tenant_id,
|
| 667 |
tool_name="llm",
|
| 668 |
latency_ms=llm_latency_ms,
|
|
|
|
| 686 |
else:
|
| 687 |
llm_out = f"I apologize, but I'm unable to process your request right now. The AI service is unavailable: {error_msg}"
|
| 688 |
|
| 689 |
+
self._analytics_log_tool_usage(
|
| 690 |
tenant_id=req.tenant_id,
|
| 691 |
tool_name="llm",
|
| 692 |
success=False,
|
|
|
|
| 700 |
})
|
| 701 |
|
| 702 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 703 |
+
self._analytics_log_agent_query(
|
| 704 |
tenant_id=req.tenant_id,
|
| 705 |
message_preview=req.message[:200],
|
| 706 |
intent=intent,
|
|
|
|
| 817 |
"error": str(rag_result),
|
| 818 |
"latency_ms": parallel_latency_ms
|
| 819 |
})
|
| 820 |
+
self._analytics_log_tool_usage(
|
| 821 |
tenant_id=req.tenant_id,
|
| 822 |
tool_name="rag",
|
| 823 |
latency_ms=parallel_latency_ms,
|
|
|
|
| 837 |
if scores:
|
| 838 |
avg_score = sum(scores) / len(scores)
|
| 839 |
top_score = max(scores)
|
| 840 |
+
self._analytics_log_rag_search(
|
| 841 |
tenant_id=req.tenant_id,
|
| 842 |
query=req.message[:500],
|
| 843 |
hits_count=hits_count,
|
|
|
|
| 845 |
top_score=top_score,
|
| 846 |
latency_ms=parallel_latency_ms
|
| 847 |
)
|
| 848 |
+
self._analytics_log_tool_usage(
|
| 849 |
tenant_id=req.tenant_id,
|
| 850 |
tool_name="rag",
|
| 851 |
latency_ms=parallel_latency_ms,
|
|
|
|
| 873 |
"error": str(web_result),
|
| 874 |
"latency_ms": parallel_latency_ms
|
| 875 |
})
|
| 876 |
+
self._analytics_log_tool_usage(
|
| 877 |
tenant_id=req.tenant_id,
|
| 878 |
tool_name="web",
|
| 879 |
latency_ms=parallel_latency_ms,
|
|
|
|
| 886 |
tools_used.append("web")
|
| 887 |
tool_traces.append({"tool": "web", "response": web_result, "note": "parallel"})
|
| 888 |
hits_count = len(self._extract_hits(web_result))
|
| 889 |
+
self._analytics_log_tool_usage(
|
| 890 |
tenant_id=req.tenant_id,
|
| 891 |
tool_name="web",
|
| 892 |
latency_ms=parallel_latency_ms,
|
|
|
|
| 1054 |
estimated_tokens = len(llm_out) // 4 + len(prompt) // 4
|
| 1055 |
total_tokens += estimated_tokens
|
| 1056 |
|
| 1057 |
+
self._analytics_log_tool_usage(
|
| 1058 |
tenant_id=req.tenant_id,
|
| 1059 |
tool_name="llm",
|
| 1060 |
latency_ms=llm_latency_ms,
|
|
|
|
| 1064 |
)
|
| 1065 |
|
| 1066 |
total_latency_ms = int((time.time() - start_time) * 1000)
|
| 1067 |
+
self._analytics_log_agent_query(
|
| 1068 |
tenant_id=req.tenant_id,
|
| 1069 |
message_preview=req.message[:200],
|
| 1070 |
intent="multi_step",
|
|
|
|
| 1179 |
"status": "recovered"
|
| 1180 |
})
|
| 1181 |
if tenant_id:
|
| 1182 |
+
self._analytics_log_tool_usage(
|
| 1183 |
tenant_id=tenant_id,
|
| 1184 |
tool_name=f"{tool_name}_retry_{attempt+1}",
|
| 1185 |
latency_ms=0,
|
|
|
|
| 1199 |
|
| 1200 |
# Log failed attempt
|
| 1201 |
if tenant_id:
|
| 1202 |
+
self._analytics_log_tool_usage(
|
| 1203 |
tenant_id=tenant_id,
|
| 1204 |
tool_name=tool_name,
|
| 1205 |
latency_ms=0,
|
|
|
|
| 1288 |
avg_score = sum(scores) / len(scores)
|
| 1289 |
|
| 1290 |
# Log retry
|
| 1291 |
+
self._analytics_log_tool_usage(
|
| 1292 |
tenant_id=tenant_id,
|
| 1293 |
tool_name="rag_retry_low_threshold",
|
| 1294 |
latency_ms=retry_latency_ms,
|
|
|
|
| 1320 |
avg_score = sum(scores) / len(scores)
|
| 1321 |
|
| 1322 |
# Log retry
|
| 1323 |
+
self._analytics_log_tool_usage(
|
| 1324 |
tenant_id=tenant_id,
|
| 1325 |
tool_name="rag_retry_expanded_query",
|
| 1326 |
latency_ms=retry_latency_ms,
|
|
|
|
| 1338 |
|
| 1339 |
# Log final RAG search
|
| 1340 |
if hits:
|
| 1341 |
+
self._analytics_log_rag_search(
|
| 1342 |
tenant_id=tenant_id,
|
| 1343 |
query=query[:500],
|
| 1344 |
hits_count=len(hits),
|
|
|
|
| 1402 |
hits = self._extract_hits(result)
|
| 1403 |
|
| 1404 |
# Log retry
|
| 1405 |
+
self._analytics_log_tool_usage(
|
| 1406 |
tenant_id=tenant_id,
|
| 1407 |
tool_name=f"web_retry_rewrite_{i+1}",
|
| 1408 |
latency_ms=retry_latency_ms,
|
|
|
|
| 1420 |
break
|
| 1421 |
|
| 1422 |
# Log final web search
|
| 1423 |
+
self._analytics_log_tool_usage(
|
| 1424 |
tenant_id=tenant_id,
|
| 1425 |
tool_name="web",
|
| 1426 |
latency_ms=web_latency_ms,
|
backend/mcp_server/common/logging.py
CHANGED
|
@@ -3,6 +3,9 @@ from __future__ import annotations
|
|
| 3 |
import logging
|
| 4 |
import os
|
| 5 |
from typing import Any, Dict, Optional
|
| 6 |
|
| 7 |
logger = logging.getLogger("integrachat.mcp")
|
| 8 |
if not logger.handlers:
|
|
@@ -20,9 +23,43 @@ try:
|
|
| 20 |
from backend.api.storage.analytics_store import AnalyticsStore
|
| 21 |
except Exception: # pragma: no cover - analytics storage is optional during tests
|
| 22 |
AnalyticsStore = None # type: ignore
|
| 23 |
-
|
| 24 |
-
|
| 25 |
-
|
| 26 |
|
| 27 |
|
| 28 |
def log_tool_usage(
|
|
@@ -51,9 +88,10 @@ def log_tool_usage(
|
|
| 51 |
else:
|
| 52 |
logger.warning("tool_failed %s", log_data)
|
| 53 |
|
| 54 |
-
|
|
|
|
| 55 |
try:
|
| 56 |
-
|
| 57 |
tenant_id=tenant_id,
|
| 58 |
tool_name=tool_name,
|
| 59 |
latency_ms=latency_ms,
|
|
@@ -74,9 +112,10 @@ def log_rag_search_metrics(
|
|
| 74 |
top_score: Optional[float],
|
| 75 |
latency_ms: Optional[int] = None,
|
| 76 |
):
|
| 77 |
-
|
|
|
|
| 78 |
try:
|
| 79 |
-
|
| 80 |
tenant_id=tenant_id,
|
| 81 |
query=query,
|
| 82 |
hits_count=hits_count,
|
|
@@ -99,9 +138,10 @@ def log_redflag_violation(
|
|
| 99 |
message_preview: Optional[str] = None,
|
| 100 |
user_id: Optional[str] = None,
|
| 101 |
):
|
| 102 |
-
|
|
|
|
| 103 |
try:
|
| 104 |
-
|
| 105 |
tenant_id=tenant_id,
|
| 106 |
rule_id=rule_id,
|
| 107 |
rule_pattern=rule_pattern,
|
|
|
|
| 3 |
import logging
|
| 4 |
import os
|
| 5 |
from typing import Any, Dict, Optional
|
| 6 |
+
from dotenv import load_dotenv
|
| 7 |
+
|
| 8 |
+
load_dotenv()
|
| 9 |
|
| 10 |
logger = logging.getLogger("integrachat.mcp")
|
| 11 |
if not logger.handlers:
|
|
|
|
| 23 |
from backend.api.storage.analytics_store import AnalyticsStore
|
| 24 |
except Exception: # pragma: no cover - analytics storage is optional during tests
|
| 25 |
AnalyticsStore = None # type: ignore
|
| 26 |
+
|
| 27 |
+
_analytics_store: Optional["AnalyticsStore"] = None
|
| 28 |
+
_analytics_failed = False
|
| 29 |
+
_analytics_disabled = os.getenv("ANALYTICS_DISABLED", "").lower() in {"1", "true", "yes"}
|
| 30 |
+
|
| 31 |
+
|
| 32 |
+
def _get_analytics_store() -> Optional["AnalyticsStore"]:
|
| 33 |
+
"""
|
| 34 |
+
Lazily create the analytics store so missing Supabase credentials or package
|
| 35 |
+
do not prevent the MCP server from starting. When initialization fails we
|
| 36 |
+
keep analytics disabled for the remainder of the process.
|
| 37 |
+
"""
|
| 38 |
+
|
| 39 |
+
global _analytics_store, _analytics_failed
|
| 40 |
+
|
| 41 |
+
if _analytics_disabled or _analytics_failed:
|
| 42 |
+
return None
|
| 43 |
+
|
| 44 |
+
if _analytics_store is not None:
|
| 45 |
+
return _analytics_store
|
| 46 |
+
|
| 47 |
+
if AnalyticsStore is None:
|
| 48 |
+
_analytics_failed = True
|
| 49 |
+
return None
|
| 50 |
+
|
| 51 |
+
try:
|
| 52 |
+
_analytics_store = AnalyticsStore()
|
| 53 |
+
except RuntimeError as exc:
|
| 54 |
+
logger.warning("Analytics disabled: %s", exc)
|
| 55 |
+
_analytics_failed = True
|
| 56 |
+
_analytics_store = None
|
| 57 |
+
except Exception as exc: # pragma: no cover - unexpected failures
|
| 58 |
+
logger.debug("Unexpected analytics init failure: %s", exc)
|
| 59 |
+
_analytics_failed = True
|
| 60 |
+
_analytics_store = None
|
| 61 |
+
|
| 62 |
+
return _analytics_store
|
| 63 |
|
| 64 |
|
| 65 |
def log_tool_usage(
|
|
|
|
| 88 |
else:
|
| 89 |
logger.warning("tool_failed %s", log_data)
|
| 90 |
|
| 91 |
+
store = _get_analytics_store()
|
| 92 |
+
if store and tenant_id:
|
| 93 |
try:
|
| 94 |
+
store.log_tool_usage(
|
| 95 |
tenant_id=tenant_id,
|
| 96 |
tool_name=tool_name,
|
| 97 |
latency_ms=latency_ms,
|
|
|
|
| 112 |
top_score: Optional[float],
|
| 113 |
latency_ms: Optional[int] = None,
|
| 114 |
):
|
| 115 |
+
store = _get_analytics_store()
|
| 116 |
+
if store:
|
| 117 |
try:
|
| 118 |
+
store.log_rag_search(
|
| 119 |
tenant_id=tenant_id,
|
| 120 |
query=query,
|
| 121 |
hits_count=hits_count,
|
|
|
|
| 138 |
message_preview: Optional[str] = None,
|
| 139 |
user_id: Optional[str] = None,
|
| 140 |
):
|
| 141 |
+
store = _get_analytics_store()
|
| 142 |
+
if store:
|
| 143 |
try:
|
| 144 |
+
store.log_redflag_violation(
|
| 145 |
tenant_id=tenant_id,
|
| 146 |
rule_id=rule_id,
|
| 147 |
rule_pattern=rule_pattern,
|
backend/mcp_server/web/search.py
CHANGED
|
@@ -2,16 +2,22 @@ from __future__ import annotations
|
|
| 2 |
|
| 3 |
from typing import Mapping
|
| 4 |
|
| 5 |
-
from
|
| 6 |
|
|
|
|
| 7 |
from backend.mcp_server.common.tenant import TenantContext
|
| 8 |
-
from backend.mcp_server.common.utils import
|
| 9 |
|
| 10 |
|
| 11 |
@tool_handler("web.search")
|
| 12 |
async def web_search(context: TenantContext, payload: Mapping[str, object]) -> dict[str, object]:
|
| 13 |
"""
|
| 14 |
-
Perform a
|
| 15 |
"""
|
| 16 |
|
| 17 |
query = payload.get("query")
|
|
@@ -24,33 +30,30 @@ async def web_search(context: TenantContext, payload: Mapping[str, object]) -> d
|
|
| 24 |
except (TypeError, ValueError):
|
| 25 |
raise ToolValidationError("max_results must be an integer between 1 and 10")
|
| 26 |
|
| 27 |
-
region = str(payload.get("region", "us
|
| 28 |
|
| 29 |
-
|
| 30 |
-
|
| 31 |
-
|
| 32 |
-
|
| 33 |
-
|
| 34 |
-
|
| 35 |
-
try:
|
| 36 |
-
results = ddg.text(query_string, max_results=max_results_value, region=region)
|
| 37 |
-
except TypeError:
|
| 38 |
-
results = ddg.text(query_string, max_results=max_results_value)
|
| 39 |
|
| 40 |
-
|
| 41 |
{
|
| 42 |
"title": item.get("title"),
|
| 43 |
-
"snippet": item.get("
|
| 44 |
-
"url": item.get("
|
| 45 |
}
|
| 46 |
for item in results
|
| 47 |
-
]
|
| 48 |
-
|
| 49 |
-
|
| 50 |
-
"query": query,
|
| 51 |
-
"results": formatted,
|
| 52 |
-
"metadata": {"max_results": max_results_value, "region": region},
|
| 53 |
-
}
|
| 54 |
-
except Exception as exc:
|
| 55 |
-
raise ToolExecutionError(f"web search failed: {exc}") from exc
|
| 56 |
|
|
|
|
| 2 |
|
| 3 |
from typing import Mapping
|
| 4 |
|
| 5 |
+
from dotenv import load_dotenv
|
| 6 |
|
| 7 |
+
from backend.api.mcp_clients.web_client import WebClient
|
| 8 |
from backend.mcp_server.common.tenant import TenantContext
|
| 9 |
+
from backend.mcp_server.common.utils import ToolValidationError, tool_handler
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
load_dotenv()
|
| 13 |
+
|
| 14 |
+
_web_client = WebClient()
|
| 15 |
|
| 16 |
|
| 17 |
@tool_handler("web.search")
|
| 18 |
async def web_search(context: TenantContext, payload: Mapping[str, object]) -> dict[str, object]:
|
| 19 |
"""
|
| 20 |
+
Perform a Google Custom Search query with basic max-results and region controls.
|
| 21 |
"""
|
| 22 |
|
| 23 |
query = payload.get("query")
|
|
|
|
| 30 |
except (TypeError, ValueError):
|
| 31 |
raise ToolValidationError("max_results must be an integer between 1 and 10")
|
| 32 |
|
| 33 |
+
region = str(payload.get("region", "us"))
|
| 34 |
|
| 35 |
+
metadata = {
|
| 36 |
+
"max_results": max_results_value,
|
| 37 |
+
"region": region,
|
| 38 |
+
"source": "google",
|
| 39 |
+
}
|
|
| 40 |
|
| 41 |
+
try:
|
| 42 |
+
results = await _web_client.search(query, max_results=max_results_value, region=region)
|
| 43 |
+
except RuntimeError as exc:
|
| 44 |
+
metadata["error"] = str(exc)
|
| 45 |
+
return {"query": query, "results": [], "metadata": metadata}
|
| 46 |
+
|
| 47 |
+
return {
|
| 48 |
+
"query": query,
|
| 49 |
+
"results": [
|
| 50 |
{
|
| 51 |
"title": item.get("title"),
|
| 52 |
+
"snippet": item.get("snippet"),
|
| 53 |
+
"url": item.get("link"),
|
| 54 |
}
|
| 55 |
for item in results
|
| 56 |
+
],
|
| 57 |
+
"metadata": metadata,
|
| 58 |
+
}
|
| 59 |
|
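The new `web.search` handler delegates to `WebClient`, which per this commit queries Google Custom Search instead of DuckDuckGo. The client's internals are not shown in this diff; a plausible minimal sketch against the public Custom Search JSON API looks like the following (the endpoint and the `key`/`cx`/`q`/`num`/`gl` query parameters are Google's documented names, everything else, including the class and method names, is illustrative):

```python
import json
import os
from typing import Optional
from urllib.parse import urlencode


class WebClientSketch:
    """Illustrative Google Custom Search wrapper; the real WebClient may differ."""

    ENDPOINT = "https://www.googleapis.com/customsearch/v1"

    def __init__(self, api_key: Optional[str] = None, cse_id: Optional[str] = None) -> None:
        self.api_key = api_key or os.getenv("GOOGLE_API_KEY")
        self.cse_id = cse_id or os.getenv("GOOGLE_CSE_ID")

    def build_url(self, query: str, max_results: int = 5, region: str = "us") -> str:
        if not self.api_key or not self.cse_id:
            raise RuntimeError("Google Custom Search is not configured")
        params = {
            "key": self.api_key,
            "cx": self.cse_id,
            "q": query,
            "num": min(max_results, 10),  # the API returns at most 10 per request
            "gl": region,                 # geolocation bias for results
        }
        return f"{self.ENDPOINT}?{urlencode(params)}"

    def parse(self, body: str) -> list:
        # Each item carries "title", "snippet", and "link", matching the
        # fields the tool handler maps into its result list.
        return [
            {"title": i.get("title"), "snippet": i.get("snippet"), "url": i.get("link")}
            for i in json.loads(body).get("items", [])
        ]


client = WebClientSketch(api_key="demo-key", cse_id="demo-cx")
url = client.build_url("python asyncio", max_results=3)
sample = '{"items": [{"title": "t", "snippet": "s", "link": "https://example.com"}]}'
rows = client.parse(sample)
```

Note how the handler above converts a `RuntimeError` from the client into an empty result set with an `error` entry in `metadata`, rather than failing the tool call outright, which keeps the agent usable when the search keys are missing.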
backend/tests/README_RETRY_TESTS.md
CHANGED
|
@@ -260,3 +260,5 @@ pytest backend/tests/ -v -k retry
|
|
| 260 |
|
| 261 |
For more information, see `TESTING_GUIDE.md` in the project root.
|
| 262 |
|
|
|
|
|
|
|
|
|
| 260 |
|
| 261 |
For more information, see `TESTING_GUIDE.md` in the project root.
|
| 262 |
|
| 263 |
+
|
| 264 |
+
|
setup_env.py
DELETED
|
@@ -1,127 +0,0 @@
|
|
| 1 |
-
#!/usr/bin/env python3
|
| 2 |
-
"""
|
| 3 |
-
Helper script to create or update .env file with Supabase credentials.
|
| 4 |
-
"""
|
| 5 |
-
|
| 6 |
-
import os
|
| 7 |
-
from pathlib import Path
|
| 8 |
-
|
| 9 |
-
def main():
|
| 10 |
-
print("=" * 70)
|
| 11 |
-
print("Supabase .env Setup Helper")
|
| 12 |
-
print("=" * 70)
|
| 13 |
-
print()
|
| 14 |
-
|
| 15 |
-
env_file = Path(".env")
|
| 16 |
-
env_example = Path("env.example")
|
| 17 |
-
|
| 18 |
-
# Check if .env already exists
|
| 19 |
-
if env_file.exists():
|
| 20 |
-
print("⚠️ .env file already exists!")
|
| 21 |
-
response = input(" Do you want to update it? (y/n): ").strip().lower()
|
| 22 |
-
if response != 'y':
|
| 23 |
-
print(" Skipping. Edit .env manually if needed.")
|
| 24 |
-
return
|
| 25 |
-
print()
|
| 26 |
-
|
| 27 |
-
# Read existing .env if it exists
|
| 28 |
-
existing_vars = {}
|
| 29 |
-
if env_file.exists():
|
| 30 |
-
with open(env_file, 'r') as f:
|
| 31 |
-
for line in f:
|
| 32 |
-
line = line.strip()
|
| 33 |
-
if line and not line.startswith('#') and '=' in line:
|
| 34 |
-
key, value = line.split('=', 1)
|
| 35 |
-
existing_vars[key.strip()] = value.strip()
|
| 36 |
-
|
| 37 |
-
print("Enter your Supabase credentials:")
|
| 38 |
-
print("(You can find these at: https://app.supabase.com → Your Project → Settings → API)")
|
| 39 |
-
print()
|
| 40 |
-
|
| 41 |
-
# Get Supabase URL
|
| 42 |
-
current_url = existing_vars.get('SUPABASE_URL', '')
|
| 43 |
-
if current_url:
|
| 44 |
-
print(f"Current SUPABASE_URL: {current_url[:50]}...")
|
| 45 |
-
response = input("Keep current? (y/n): ").strip().lower()
|
| 46 |
-
if response == 'y':
|
| 47 |
-
supabase_url = current_url
|
| 48 |
-
else:
|
| 49 |
-
supabase_url = input("Enter SUPABASE_URL (https://xxxxx.supabase.co): ").strip()
|
| 50 |
-
else:
|
| 51 |
-
supabase_url = input("Enter SUPABASE_URL (https://xxxxx.supabase.co): ").strip()
|
| 52 |
-
|
| 53 |
-
# Get Supabase Service Key
|
| 54 |
-
current_key = existing_vars.get('SUPABASE_SERVICE_KEY', '')
|
| 55 |
-
if current_key:
|
| 56 |
-
print(f"Current SUPABASE_SERVICE_KEY: {current_key[:20]}...")
|
| 57 |
-
response = input("Keep current? (y/n): ").strip().lower()
|
| 58 |
-
if response == 'y':
|
| 59 |
-
supabase_key = current_key
|
| 60 |
-
else:
|
| 61 |
-
supabase_key = input("Enter SUPABASE_SERVICE_KEY (service_role key): ").strip()
|
| 62 |
-
else:
|
| 63 |
-
supabase_key = input("Enter SUPABASE_SERVICE_KEY (service_role key): ").strip()
|
| 64 |
-
|
| 65 |
-
# Validate
|
| 66 |
-
if not supabase_url.startswith('https://'):
|
| 67 |
-
print("⚠️ Warning: SUPABASE_URL should start with https://")
|
| 68 |
-
if not supabase_key.startswith('eyJ'):
|
| 69 |
-
print("⚠️ Warning: SUPABASE_SERVICE_KEY should start with 'eyJ' (JWT token)")
|
| 70 |
-
|
| 71 |
-
print()
|
| 72 |
-
print("Creating/updating .env file...")
|
| 73 |
-
|
| 74 |
-
# Read env.example as template
|
| 75 |
-
lines = []
|
| 76 |
-
if env_example.exists():
|
| 77 |
-
with open(env_example, 'r') as f:
|
| 78 |
-
lines = f.readlines()
|
| 79 |
-
else:
|
| 80 |
-
# Create basic template
|
| 81 |
-
lines = [
|
| 82 |
-
"# IntegraChat Environment Variables\n",
|
| 83 |
-
"# Supabase Configuration\n",
|
| 84 |
-
"SUPABASE_URL=\n",
|
| 85 |
-
"SUPABASE_SERVICE_KEY=\n",
|
| 86 |
-
]
|
| 87 |
-
|
| 88 |
-
# Update or add Supabase variables
|
| 89 |
-
updated_lines = []
|
| 90 |
-
url_found = False
|
| 91 |
-
key_found = False
|
| 92 |
-
|
| 93 |
-
for line in lines:
|
| 94 |
-
if line.startswith('SUPABASE_URL='):
|
| 95 |
-
updated_lines.append(f'SUPABASE_URL={supabase_url}\n')
|
| 96 |
-
url_found = True
|
| 97 |
-
elif line.startswith('SUPABASE_SERVICE_KEY='):
|
| 98 |
-
updated_lines.append(f'SUPABASE_SERVICE_KEY={supabase_key}\n')
|
| 99 |
-
key_found = True
|
| 100 |
-
else:
|
| 101 |
-
updated_lines.append(line)
|
| 102 |
-
|
| 103 |
-
# Add if not found
|
| 104 |
-
if not url_found:
|
| 105 |
-
updated_lines.append(f'SUPABASE_URL={supabase_url}\n')
|
| 106 |
-
if not key_found:
|
| 107 |
-
updated_lines.append(f'SUPABASE_SERVICE_KEY={supabase_key}\n')
|
| 108 |
-
|
| 109 |
-
# Write .env file
|
| 110 |
-
with open(env_file, 'w') as f:
|
| 111 |
-
f.writelines(updated_lines)
|
| 112 |
-
|
| 113 |
-
print(f"✅ .env file created/updated at: {env_file.absolute()}")
|
| 114 |
-
print()
|
| 115 |
-
print("Next steps:")
|
| 116 |
-
print("1. Make sure your Supabase project is active (not paused)")
|
| 117 |
-
print("2. Create the tables in Supabase:")
|
| 118 |
-
print(" - Run supabase_admin_rules_table.sql in SQL Editor")
|
| 119 |
-
print(" - Run supabase_analytics_tables.sql in SQL Editor")
|
| 120 |
-
print("3. Test the connection:")
|
| 121 |
-
print(" python check_supabase_rules.py")
|
| 122 |
-
print("4. Run the migration:")
|
| 123 |
-
print(" python migrate_sqlite_to_supabase.py")
|
| 124 |
-
|
| 125 |
-
if __name__ == "__main__":
|
| 126 |
-
main()
|
| 127 |
-
|
setup_supabase_table.py DELETED
@@ -1,121 +0,0 @@
-"""
-Automated Supabase Table Setup
-Creates the admin_rules table in Supabase using the Management API.
-"""
-
-import os
-import sys
-from pathlib import Path
-from dotenv import load_dotenv
-
-# Load environment variables
-load_dotenv()
-
-SUPABASE_URL = os.getenv("SUPABASE_URL")
-SUPABASE_SERVICE_KEY = os.getenv("SUPABASE_SERVICE_KEY")
-
-if not SUPABASE_URL or not SUPABASE_SERVICE_KEY:
-    print("❌ Missing Supabase credentials!")
-    print("   Please set SUPABASE_URL and SUPABASE_SERVICE_KEY in your .env file")
-    sys.exit(1)
-
-def create_table_via_supabase():
-    """
-    Create table using Supabase client and direct table operations.
-    Since Supabase doesn't allow direct SQL execution via REST API,
-    we'll create the table structure using the Supabase client.
-    """
-    try:
-        from supabase import create_client
-
-        print("Connecting to Supabase...")
-        client = create_client(SUPABASE_URL, SUPABASE_SERVICE_KEY)
-
-        # Read SQL file
-        sql_file = Path(__file__).parent / "supabase_admin_rules_table.sql"
-        if not sql_file.exists():
-            print(f"❌ SQL file not found: {sql_file}")
-            return False
-
-        with open(sql_file, "r", encoding="utf-8") as f:
-            sql_content = f.read()
-
-        print("SQL Script loaded from supabase_admin_rules_table.sql")
-        print("\n" + "=" * 60)
-        print("⚠️ IMPORTANT: Supabase Python client cannot execute raw SQL")
-        print("=" * 60)
-        print("\nYou need to run the SQL manually in Supabase Dashboard:")
-        print("\nSteps:")
-        print(" 1. Open: https://app.supabase.com")
-        print(" 2. Select your project")
-        print(" 3. Go to: SQL Editor (left sidebar)")
-        print(" 4. Click: 'New query'")
-        print(" 5. Copy the SQL below and paste it:")
-        print("\n" + "-" * 60)
-        print(sql_content)
-        print("-" * 60)
-        print("\n 6. Click 'Run' button (or press Ctrl+Enter)")
-        print(" 7. Wait for success message")
-        print("\n✅ After running, the table will be created!")
-
-        # Try to verify table exists (after user runs SQL)
-        print("\nChecking if table exists...")
-        try:
-            result = client.table("admin_rules").select("id").limit(1).execute()
-            print("✅ Table 'admin_rules' exists and is accessible!")
-            return True
-        except Exception as e:
-            if "relation" in str(e).lower() or "does not exist" in str(e).lower():
-                print("⚠️ Table does not exist yet.")
-                print("   Please run the SQL script in Supabase SQL Editor first.")
-                return False
-            else:
-                # Table might be empty, which is fine
-                print("✅ Table exists (might be empty)")
-                return True
-
-    except ImportError:
-        print("❌ Supabase client not installed")
-        print("   Run: pip install supabase")
-        return False
-    except Exception as e:
-        print(f"❌ Error: {e}")
-        return False
-
-
-def create_table_via_http():
-    """
-    Alternative: Try to create table via HTTP POST to Supabase REST API.
-    This method uses the PostgREST API to create tables.
-    Note: This typically requires admin privileges and may not work.
-    """
-    import httpx
-
-    # This approach won't work because Supabase doesn't allow DDL via REST API
-    # But we can try to use the pg_net extension if available
-    print("⚠️ Direct HTTP table creation is not supported by Supabase REST API")
-    print("   Supabase requires SQL execution via the SQL Editor for security reasons")
-    return False
-
-
-if __name__ == "__main__":
-    print("=" * 60)
-    print("Supabase Admin Rules Table Setup")
-    print("=" * 60)
-    print()
-
-    # Method 1: Try via Supabase client (will show instructions)
-    success = create_table_via_supabase()
-
-    if not success:
-        print("\n" + "=" * 60)
-        print("Manual Setup Required")
-        print("=" * 60)
-        print("\nSince Supabase doesn't allow programmatic SQL execution")
-        print("for security reasons, you need to run the SQL manually.")
-        print("\nThe SQL script is ready in: supabase_admin_rules_table.sql")
-        print("\nAfter running the SQL in Supabase Dashboard:")
-        print("  - The table will be created")
-        print("  - RulesStore will automatically use Supabase")
-        print("  - All rules will be saved to Supabase instead of SQLite")
-
test_all.py DELETED
@@ -1,233 +0,0 @@
-"""
-Single-file test suite for IntegraChat backend (unit + integration + simulation).
-This version aligns with the current backend API surface.
-"""
-
-from __future__ import annotations
-
-import os
-import sys
-from pathlib import Path
-from typing import List, Dict
-
-import pytest
-from fastapi.testclient import TestClient
-
-
-# ---------------------------------------------------------------------------
-# Ensure backend package is importable
-# ---------------------------------------------------------------------------
-PROJECT_ROOT = Path(__file__).resolve().parent
-if str(PROJECT_ROOT) not in sys.path:
-    sys.path.insert(0, str(PROJECT_ROOT))
-backend_path = PROJECT_ROOT / "backend"
-if str(backend_path) not in sys.path:
-    sys.path.insert(0, str(backend_path))
-
-
-# ---------------------------------------------------------------------------
-# Shared fixtures
-# ---------------------------------------------------------------------------
-
-@pytest.fixture(autouse=True, scope="session")
-def set_test_env():
-    os.environ.setdefault("RAG_MCP_URL", "http://mock-rag")
-    os.environ.setdefault("WEB_MCP_URL", "http://mock-web")
-    os.environ.setdefault("ADMIN_MCP_URL", "http://mock-admin")
-    os.environ.setdefault("OLLAMA_URL", "http://localhost:11434")
-    os.environ.setdefault("OLLAMA_MODEL", "llama3")
-    os.environ.setdefault("LLM_BACKEND", "ollama")
-
-
-@pytest.fixture
-def mock_backend_dependencies(monkeypatch):
-    print(">> applying backend dependency patches for tests")
-    """Patch MCP client calls and red-flag detector for deterministic tests."""
-    from backend.api.models.redflag import RedFlagMatch
-    from backend.api.services.tool_scoring import ToolScoringService
-    import types
-
-    async def fake_call_rag(self, tenant_id: str, query: str) -> Dict:
-        return {
-            "results": [
-                {"text": "HR policy includes onboarding, leave rules.", "relevance": 0.92},
-                {"text": "General company announcement", "relevance": 0.42}
-            ],
-            "metadata": {"total_retrieved": 2, "returned": 2, "threshold": 0.55}
-        }
-
-    async def fake_call_web(self, tenant_id: str, query: str) -> Dict:
-        return {
-            "results": [
-                {"title": "Latest inflation update", "snippet": "Inflation is 3.2%", "url": "https://example.com"},
-                {"title": "Global news", "snippet": "Market highlights", "url": "https://news.example.com"}
-            ]
-        }
-
-    async def fake_call_admin(self, tenant_id: str, query: str) -> Dict:
-        return {"status": "ok", "tenant_id": tenant_id, "query": query}
-
-    monkeypatch.setattr("backend.api.mcp_clients.mcp_client.MCPClient.call_rag", fake_call_rag)
-    monkeypatch.setattr("backend.api.mcp_clients.mcp_client.MCPClient.call_web", fake_call_web)
-    monkeypatch.setattr("backend.api.mcp_clients.mcp_client.MCPClient.call_admin", fake_call_admin)
-
-    async def fake_redflag_check(self, tenant_id: str, text: str) -> List[RedFlagMatch]:
-        if "delete" in text.lower():
-            return [
-                RedFlagMatch(
-                    rule_id="1",
-                    pattern="delete",
-                    severity="high",
-                    description="Deletion request",
-                    matched_text="delete",
-                    confidence=0.9,
-                    explanation="Matched on keyword 'delete'"
-                )
-            ]
-        return []
-
-    async def fake_notify(self, tenant_id, violations, source_payload=None):
-        return None
-
-    monkeypatch.setattr("backend.api.services.redflag_detector.RedFlagDetector.check", fake_redflag_check)
-    monkeypatch.setattr("backend.api.services.redflag_detector.RedFlagDetector.notify_admin", fake_notify)
-
-    def fake_score(self, message: str, intent: str, rag_results: List[Dict]) -> Dict[str, float]:
-        return {"rag_fitness": 0.82, "web_fitness": 0.78, "llm_only": 0.25}
-
-    monkeypatch.setattr(ToolScoringService, "score", fake_score)
-
-    # Ensure already-instantiated orchestrator uses the same patches
-    from backend.api.routes import agent as agent_routes
-
-    agent_routes.orchestrator.mcp.call_rag = types.MethodType(fake_call_rag, agent_routes.orchestrator.mcp)
-    agent_routes.orchestrator.mcp.call_web = types.MethodType(fake_call_web, agent_routes.orchestrator.mcp)
-    agent_routes.orchestrator.mcp.call_admin = types.MethodType(fake_call_admin, agent_routes.orchestrator.mcp)
-    agent_routes.orchestrator.redflag.check = types.MethodType(fake_redflag_check, agent_routes.orchestrator.redflag)
-    agent_routes.orchestrator.redflag.notify_admin = types.MethodType(fake_notify, agent_routes.orchestrator.redflag)
-
-
-@pytest.fixture
-def api_client(mock_backend_dependencies):
-    from backend.api.main import app
-    return TestClient(app)
-
-
-# ---------------------------------------------------------------------------
-# Unit tests
-# ---------------------------------------------------------------------------
-
-@pytest.mark.asyncio
-async def test_redflag_detector():
-    import time
-    from backend.api.services.redflag_detector import RedFlagDetector
-    from backend.api.models.redflag import RedFlagRule
-    from backend.api.services.semantic_encoder import embed_text
-
-    detector = RedFlagDetector(supabase_url="http://fake", supabase_key="fake")
-    rule = RedFlagRule(
-        id="rule-salary",
-        pattern="salary",
-        description="Salary access",
-        severity="high",
-        source="test",
-        enabled=True,
-        keywords=["salary"]
-    )
-    detector._rules_cache["tenant-x"] = {"fetched_at": int(time.time()), "rules": [rule]}
-    detector._rule_embeddings["tenant-x"] = {rule.id: embed_text("salary access")}
-
-    matches = await detector.check("tenant-x", "Show me employee salary details")
-
-    assert matches
-    assert matches[0].matched_text.lower() == "salary"
-    assert matches[0].confidence is not None
-
-
-def test_tool_scoring():
-    from backend.api.services.tool_scoring import ToolScoringService
-
-    scorer = ToolScoringService()
-    scores = scorer.score("What is inflation today?", intent="web", rag_results=[])
-
-    assert set(scores.keys()) == {"rag_fitness", "web_fitness", "llm_only"}
-    assert scores["web_fitness"] > scores["rag_fitness"]
-
-
-@pytest.mark.asyncio
-async def test_tool_selector():
-    from backend.api.services.tool_selector import ToolSelector
-
-    selector = ToolSelector()
-    decision = await selector.select(
-        intent="rag",
-        text="Tell me HR policy and compare with external news",
-        ctx={"rag_results": [{"text": "Policy"}], "tool_scores": {"rag_fitness": 0.9, "web_fitness": 0.8}}
-    )
-
-    steps = decision.tool_input["steps"]
-    assert steps[0]["tool"] == "rag"
-    assert any(step["tool"] == "web" for step in steps)
-    assert steps[-1]["tool"] == "llm"
-
-
-def test_reasoning_trace_via_response(api_client):
-    payload = {"tenant_id": "tenant1", "message": "Summarize our HR policies"}
-    res = api_client.post("/agent/message", json=payload)
-    data = res.json()
-
-    assert data["reasoning_trace"]
-    step_names = [entry["step"] for entry in data["reasoning_trace"]]
-    assert "intent_detection" in step_names
-
-
-# ---------------------------------------------------------------------------
-# Integration tests
-# ---------------------------------------------------------------------------
-
-def test_full_agent_pipeline(api_client):
-    payload = {"tenant_id": "tenant123", "message": "What are our HR policies and latest updates?"}
-    response = api_client.post("/agent/message", json=payload)
-    data = response.json()
-
-    assert data["text"]
-    assert len(data["reasoning_trace"]) >= 3
-
-    rag_steps = [step for step in data["reasoning_trace"] if step.get("tool") == "rag"]
-    assert rag_steps, "expected rag tool execution in reasoning trace"
-
-
-def test_parallel_execution_detected(api_client):
-    payload = {"tenant_id": "t1", "message": "Summarize HR policies and latest news updates"}
-    response = api_client.post("/agent/message", json=payload)
-    data = response.json()
-
-    tools_used = {trace.get("tool") for trace in data["tool_traces"] if trace.get("tool")}
-    assert "rag" in tools_used and "web" in tools_used
-
-
-# ---------------------------------------------------------------------------
-# Simulation tests
-# ---------------------------------------------------------------------------
-
-SIM_QUERIES = [
-    "What is the inflation rate today?",
-    "Summarize our HR policies",
-    "Delete all records",
-    "Explain our refund policy",
-    "How many employees are in the company?"
-]
-
-
-@pytest.mark.parametrize("message", SIM_QUERIES)
-def test_agent_simulation(api_client, message):
-    res = api_client.post("/agent/message", json={"tenant_id": "demo", "message": message})
-    data = res.json()
-
-    assert data["text"]
-    assert data["reasoning_trace"]
-
-    if "delete" in message.lower():
-        assert data["decision"]["action"] in {"block", "multi_step"}
-        reason = (data["decision"]["reason"] or "").lower()
-        assert "admin" in reason or "redflag" in reason
test_key.py DELETED
@@ -1,45 +0,0 @@
-import os
-from dotenv import load_dotenv
-
-load_dotenv()
-
-key = os.getenv("SUPABASE_SERVICE_KEY")
-url = os.getenv("SUPABASE_URL")
-
-print("Checking Supabase Configuration:")
-print("=" * 50)
-
-if key:
-    print(f"SUPABASE_SERVICE_KEY:")
-    print(f"  Length: {len(key)} characters")
-    print(f"  Starts with 'eyJ': {key.startswith('eyJ')}")
-    print(f"  First 30 chars: {key[:30]}...")
-    print(f"  Last 30 chars: ...{key[-30:]}")
-
-    if len(key) >= 200:
-        print(f"  [OK] Key length is correct")
-    else:
-        print(f"  [WARNING] Key might be too short (expected 200+)")
-
-    if key.startswith("eyJ"):
-        print(f"  [OK] Key format looks correct (JWT)")
-    else:
-        print(f"  [WARNING] Key doesn't start with 'eyJ'")
-else:
-    print("SUPABASE_SERVICE_KEY: NOT SET")
-
-print()
-
-if url:
-    print(f"SUPABASE_URL:")
-    print(f"  Value: {url}")
-    if url.startswith("https://") and ".supabase.co" in url:
-        print(f"  [OK] URL format looks correct")
-    else:
-        print(f"  [WARNING] URL format might be incorrect")
-else:
-    print("SUPABASE_URL: NOT SET")
-
-print()
-print("=" * 50)
-
test_manual.py DELETED
@@ -1,306 +0,0 @@
-"""
-Manual testing script for IntegraChat improvements
-
-Run this script to test all new features:
-- Analytics logging
-- Enhanced admin rules with regex/severity
-- API endpoints
-- Agent debug/plan endpoints
-
-Usage:
-    python test_manual.py
-"""
-
-import requests
-import json
-import time
-from pathlib import Path
-import sys
-
-# Add backend to path
-backend_dir = Path(__file__).parent / "backend"
-sys.path.insert(0, str(backend_dir))
-
-# Also add root for backend.api imports
-root_dir = Path(__file__).parent
-sys.path.insert(0, str(root_dir))
-
-BASE_URL = "http://localhost:8000"
-TENANT_ID = "test_tenant_manual"
-
-def print_section(title):
-    print("\n" + "=" * 60)
-    print(f" {title}")
-    print("=" * 60)
-
-
-def test_analytics_store():
-    """Test AnalyticsStore directly."""
-    print_section("Testing AnalyticsStore")
-
-    try:
-        from api.storage.analytics_store import AnalyticsStore
-
-        store = AnalyticsStore()
-
-        # Log various events
-        print("Logging tool usage...")
-        store.log_tool_usage(TENANT_ID, "rag", latency_ms=150, tokens_used=500, success=True)
-        store.log_tool_usage(TENANT_ID, "web", latency_ms=80, success=True)
-        store.log_tool_usage(TENANT_ID, "llm", latency_ms=200, tokens_used=1000, success=True)
-
-        print("Logging red-flag violation...")
-        store.log_redflag_violation(
-            TENANT_ID,
-            "rule1",
-            ".*password.*",
-            "high",
-            "password123",
-            confidence=0.95,
-            message_preview="User asked about password"
-        )
-
-        print("Logging RAG search...")
-        store.log_rag_search(
-            TENANT_ID,
-            "What is the company policy?",
-            hits_count=5,
-            avg_score=0.85,
-            top_score=0.92,
-            latency_ms=120
-        )
-
-        print("Logging agent query...")
-        store.log_agent_query(
-            TENANT_ID,
-            "What is the company policy?",
-            intent="rag",
-            tools_used=["rag", "llm"],
-            total_tokens=1000,
-            total_latency_ms=250,
-            success=True
-        )
-
-        # Get stats
-        print("\nTool Usage Stats:")
-        print(json.dumps(store.get_tool_usage_stats(TENANT_ID), indent=2))
-
-        print("\nRed-Flag Violations:")
-        violations = store.get_redflag_violations(TENANT_ID)
-        print(json.dumps(violations, indent=2, default=str))
-
-        print("\nActivity Summary:")
-        print(json.dumps(store.get_activity_summary(TENANT_ID), indent=2, default=str))
-
-        print("\nRAG Quality Metrics:")
-        print(json.dumps(store.get_rag_quality_metrics(TENANT_ID), indent=2))
-
-        print("\n✅ AnalyticsStore tests passed!")
-        return True
-
-    except Exception as e:
-        print(f"❌ AnalyticsStore test failed: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def test_admin_rules():
-    """Test enhanced admin rules with regex and severity."""
-    print_section("Testing Enhanced Admin Rules")
-
-    try:
-        from api.storage.rules_store import RulesStore
-        import re
-
-        store = RulesStore()
-
-        # Add rules with regex and severity
-        print("Adding rules with regex patterns...")
-        store.add_rule(
-            TENANT_ID,
-            "Block password queries",
-            pattern=".*password.*|.*pwd.*",
-            severity="high",
-            description="Blocks password-related queries"
-        )
-        store.add_rule(
-            TENANT_ID,
-            "Block email sharing",
-            pattern=".*@.*\\..*",
-            severity="medium",
-            description="Blocks email addresses"
-        )
-        store.add_rule(
-            TENANT_ID,
-            "Simple keyword rule",
-            severity="low"
-        )
-
-        # Get detailed rules
-        rules = store.get_rules_detailed(TENANT_ID)
-        print("\nRules with Metadata:")
-        print(json.dumps(rules, indent=2, default=str))
-
-        # Test regex matching
-        print("\nTesting Regex Patterns:")
-        for rule in rules:
-            if rule.get("pattern"):
-                pattern = rule["pattern"]
-                regex = re.compile(pattern, re.IGNORECASE)
-                test_cases = [
-                    "What is my password?",
-                    "My email is test@example.com",
-                    "Just regular text"
-                ]
-                for test_text in test_cases:
-                    match = regex.search(test_text)
-                    print(f"  Pattern: {pattern[:30]}... | Text: \"{test_text}\" | Match: {match is not None}")
-
-        print("\n✅ Admin Rules tests passed!")
-        return True
-
-    except Exception as e:
-        print(f"❌ Admin Rules test failed: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def test_api_endpoints():
-    """Test API endpoints."""
-    print_section("Testing API Endpoints")
-
-    headers = {"x-tenant-id": TENANT_ID}
-
-    endpoints = [
-        ("GET", "/analytics/overview?days=30", None),
-        ("GET", "/analytics/tool-usage?days=30", None),
-        ("GET", "/analytics/rag-quality?days=30", None),
-        ("GET", "/analytics/redflags?limit=50&days=30", None),
-        ("GET", "/admin/rules?detailed=true", None),
-        ("GET", "/admin/violations?limit=50&days=30", None),
-        ("GET", "/admin/tools/logs?days=7", None),
-    ]
-
-    results = []
-
-    for method, endpoint, data in endpoints:
-        try:
-            url = f"{BASE_URL}{endpoint}"
-            if method == "GET":
-                response = requests.get(url, headers=headers, timeout=5)
-            else:
-                response = requests.post(url, headers=headers, json=data, timeout=5)
-
-            status = "✅" if response.status_code == 200 else "⚠️"
-            print(f"{status} {method} {endpoint} - Status: {response.status_code}")
-
-            if response.status_code == 200:
-                result = response.json()
-                print(f"   Response keys: {list(result.keys())[:5]}")
-
-            results.append(response.status_code == 200)
-
-        except requests.exceptions.ConnectionError:
-            print(f"❌ {method} {endpoint} - Cannot connect to {BASE_URL}")
-            print("   Make sure the FastAPI server is running on port 8000")
-            results.append(False)
-        except Exception as e:
-            print(f"❌ {method} {endpoint} - Error: {e}")
-            results.append(False)
-
-    # Test POST endpoints
-    print("\nTesting POST Endpoints...")
-
-    try:
-        # Add admin rule
-        response = requests.post(
-            f"{BASE_URL}/admin/rules",
-            headers=headers,
-            json={
-                "rule": "Test rule via API",
-                "pattern": ".*test.*",
-                "severity": "medium"
-            },
-            timeout=5
-        )
-        status = "✅" if response.status_code == 200 else "⚠️"
-        print(f"{status} POST /admin/rules - Status: {response.status_code}")
-        results.append(response.status_code == 200)
-    except Exception as e:
-        print(f"❌ POST /admin/rules - Error: {e}")
-        results.append(False)
-
-    # Test agent endpoints (may fail if services not running)
-    print("\nTesting Agent Endpoints...")
-
-    agent_endpoints = [
-        ("/agent/plan", {"tenant_id": TENANT_ID, "message": "Test message", "temperature": 0.0}),
-    ]
-
-    for endpoint, data in agent_endpoints:
-        try:
-            response = requests.post(
-                f"{BASE_URL}{endpoint}",
-                json=data,
-                timeout=10
-            )
-            status = "✅" if response.status_code == 200 else "⚠️"
-            print(f"{status} POST {endpoint} - Status: {response.status_code}")
-            if response.status_code == 200:
-                result = response.json()
-                print(f"   Response keys: {list(result.keys())[:5]}")
-            results.append(response.status_code in [200, 500, 503])  # Accept various status codes
-        except Exception as e:
-            print(f"⚠️ POST {endpoint} - Error: {e} (May be expected if services not running)")
-            results.append(True)  # Don't fail if services not running
-
-    success_count = sum(results)
-    total_count = len(results)
-
-    print(f"\nAPI Endpoint Tests: {success_count}/{total_count} passed")
-    return success_count == total_count or success_count >= total_count * 0.8  # 80% pass rate
-
-
-def main():
-    """Run all manual tests."""
-    print("\n" + "=" * 60)
-    print("IntegraChat Manual Testing Suite")
-    print("=" * 60)
-
-    results = []
-
-    # Test Analytics Store
-    results.append(test_analytics_store())
-    time.sleep(1)
-
-    # Test Admin Rules
-    results.append(test_admin_rules())
-    time.sleep(1)
-
-    # Test API Endpoints
-    results.append(test_api_endpoints())
-
-    # Summary
-    print_section("Test Summary")
-    passed = sum(results)
-    total = len(results)
-
-    print(f"Tests Passed: {passed}/{total}")
-    if passed == total:
-        print("✅ All tests passed!")
-    elif passed >= total * 0.8:
-        print("⚠️ Most tests passed (some may require running services)")
-    else:
-        print("❌ Some tests failed. Check errors above.")
-
-    print("\nTips:")
-    print("  - For API tests, ensure FastAPI server is running: uvicorn backend.api.main:app --port 8000")
-    print("  - Agent endpoints may require MCP servers and LLM to be running")
-    print("  - Check TESTING_GUIDE.md for more detailed testing instructions")
-
-
-if __name__ == "__main__":
-    main()
-
test_retry_integration.py DELETED
@@ -1,529 +0,0 @@

#!/usr/bin/env python3
"""
Integration tests for autonomous retry and self-correction system.

This script tests the retry functionality with a running backend.
It verifies that retry steps appear in reasoning traces and analytics.

Usage:
    python test_retry_integration.py

Prerequisites:
    - FastAPI backend running on http://localhost:8000
    - MCP server running
    - Optional: LLM service available
"""

import requests
import json
import time
import sys
from pathlib import Path

BASE_URL = "http://localhost:8000"
TENANT_ID = "retry_test_tenant"
TIMEOUT = 120  # Increased timeout for LLM calls (model loading can take time)


def print_section(title, char="=", width=70):
    """Print a formatted section header."""
    print("\n" + char * width)
    print(f" {title}")
    print(char * width)


def print_success(msg):
    """Print success message."""
    print(f"✅ {msg}")


def print_warning(msg):
    """Print warning message."""
    print(f"⚠️ {msg}")


def print_error(msg):
    """Print error message."""
    print(f"❌ {msg}")


def print_info(msg):
    """Print info message."""
    print(f"ℹ️ {msg}")


def check_backend():
    """Check if backend is running."""
    try:
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        return response.status_code == 200
    except Exception:
        return False


def test_rag_retry_scenario():
    """Test RAG retry when scores are low."""
    print_section("Test 1: RAG Retry with Low Scores")

    # First, ingest a document that might not be highly relevant to the test query
    print_info("Ingesting test document...")
    try:
        ingest_response = requests.post(
            f"{BASE_URL}/rag/ingest",
            json={
                "tenant_id": TENANT_ID,
                "content": "This is a general document about various topics. It mentions computers, technology, and general information."
            },
            timeout=TIMEOUT
        )
        print(f"   Ingest status: {ingest_response.status_code}")
    except requests.exceptions.Timeout:
        print_warning(f"Ingest request timed out after {TIMEOUT} seconds")
    except Exception as e:
        print_warning(f"Could not ingest document: {e}")

    # Send a query that will likely have low relevance initially
    print_info("Sending query that should trigger RAG retry...")
    try:
        debug_response = requests.post(
            f"{BASE_URL}/agent/debug",
            json={
                "tenant_id": TENANT_ID,
                "message": "What is quantum computing and how does quantum entanglement work?"
            },
            timeout=TIMEOUT
        )

        if debug_response.status_code == 200:
            debug_data = debug_response.json()
            reasoning_trace = debug_data.get("reasoning_trace", [])

            # Look for retry steps in the reasoning trace
            retry_steps = []
            for step in reasoning_trace:
                step_str = json.dumps(step).lower()
                if "retry" in step_str or "rag_retry" in step_str or "threshold" in step_str:
                    retry_steps.append(step)

            print(f"\n   Found {len(retry_steps)} retry-related steps:")
            for step in retry_steps[:5]:  # Show first 5
                step_name = step.get("step", "unknown")
                print(f"   - {step_name}")

            if retry_steps:
                print_success("RAG retry system is working!")
                return True
            else:
                print_warning("No retry steps found (may not have triggered - scores might be good)")
                return True  # Not a failure, just didn't need retry
        else:
            print_error(f"Request failed: {debug_response.status_code}")
            print_error(f"Response: {debug_response.text[:200]}")
            return False

    except requests.exceptions.Timeout:
        print_error(f"Request timed out after {TIMEOUT} seconds")
        print_error("   Possible causes:")
        print_error("   - Ollama is not running or model is not loaded")
        print_error("   - MCP server is not running")
        print_error("   - LLM call is taking too long")
        print_error("\n   To fix:")
        print_error("   1. Check if Ollama is running: ollama serve")
        print_error("   2. Check if model is available: ollama list")
        print_error("   3. Pull the model if needed: ollama pull llama3.1:latest")
        return False
    except requests.exceptions.ConnectionError:
        print_error("Cannot connect to backend. Is it running on port 8000?")
        return False
    except Exception as e:
        print_error(f"Error: {e}")
        import traceback
        traceback.print_exc()
        return False


def test_web_retry_scenario():
    """Test web search retry when results are empty."""
    print_section("Test 2: Web Search Retry with Empty Results")

    # Send a query with an obscure term that might return empty results
    print_info("Sending obscure query to trigger web retry...")
    try:
        debug_response = requests.post(
            f"{BASE_URL}/agent/debug",
            json={
                "tenant_id": TENANT_ID,
                "message": "Explain the concept of zyxwvutsrqp in detail"
            },
            timeout=TIMEOUT
        )

        if debug_response.status_code == 200:
            debug_data = debug_response.json()
            reasoning_trace = debug_data.get("reasoning_trace", [])

            # Look for web retry steps
            retry_steps = []
            for step in reasoning_trace:
                step_str = json.dumps(step).lower()
                if "web_retry" in step_str or ("web" in step_str and "retry" in step_str):
                    retry_steps.append(step)

            print(f"\n   Found {len(retry_steps)} web retry steps:")
            for step in retry_steps[:5]:
                step_name = step.get("step", "unknown")
                print(f"   - {step_name}")
                if 'rewritten_query' in step:
                    print(f"     Rewritten: {step['rewritten_query'][:60]}...")

            if retry_steps:
                print_success("Web retry system is working!")
                return True
            else:
                print_warning("No web retry steps found (results might have been found on first try)")
                return True  # Not a failure
        else:
            print_error(f"Request failed: {debug_response.status_code}")
            return False

    except requests.exceptions.Timeout:
        print_error(f"Request timed out after {TIMEOUT} seconds")
        print_warning("   This may happen if Ollama is loading the model")
        return False
    except requests.exceptions.ConnectionError:
        print_error("Cannot connect to backend")
        return False
    except Exception as e:
        print_error(f"Error: {e}")
        return False


def test_reasoning_trace_contains_retry_info():
    """Verify retry steps appear in reasoning traces."""
    print_section("Test 3: Verify Reasoning Trace Contains Retry Info")

    try:
        debug_response = requests.post(
            f"{BASE_URL}/agent/debug",
            json={
                "tenant_id": TENANT_ID,
                "message": "What is artificial intelligence and machine learning?"
            },
            timeout=TIMEOUT
        )

        if debug_response.status_code == 200:
            debug_data = debug_response.json()
            reasoning_trace = debug_data.get("reasoning_trace", [])

            print(f"\n   Reasoning trace has {len(reasoning_trace)} steps")
            print("\n   Step breakdown:")

            retry_related_count = 0
            for i, step in enumerate(reasoning_trace[:10]):  # Show first 10
                step_name = step.get("step", "unknown")
                step_str = str(step).lower()

                is_retry_related = "retry" in step_str or "repair" in step_str or "threshold" in step_str
                if is_retry_related:
                    retry_related_count += 1
                    marker = "⚡"
                else:
                    marker = " "

                print(f"   {marker} {i+1}. {step_name}")

            if retry_related_count > 0:
                print_success(f"Found {retry_related_count} retry-related steps in reasoning trace")
                return True
            else:
                print_warning("No retry-related steps found (may not have been needed)")
                return True
        else:
            print_error(f"Request failed: {debug_response.status_code}")
            return False

    except requests.exceptions.Timeout:
        print_error(f"Request timed out after {TIMEOUT} seconds")
        print_warning("   This may happen if Ollama is loading the model")
        return False
    except Exception as e:
        print_error(f"Error: {e}")
        return False


def test_analytics_logging():
    """Test that retry attempts are logged to analytics."""
    print_section("Test 4: Analytics Logging for Retries")

    try:
        # Send a query that might trigger retries
        print_info("Sending query to generate activity...")
        requests.post(
            f"{BASE_URL}/agent/message",
            json={
                "tenant_id": TENANT_ID,
                "message": "Explain quantum mechanics"
            },
            timeout=TIMEOUT
        )

        # Wait a moment for analytics to be logged
        time.sleep(1)

        # Check analytics
        print_info("Checking analytics for retry tool calls...")
        analytics_response = requests.get(
            f"{BASE_URL}/analytics/tool-usage?days=1",
            headers={"x-tenant-id": TENANT_ID},
            timeout=TIMEOUT
        )

        if analytics_response.status_code == 200:
            data = analytics_response.json()
            tool_logs = data.get("logs", [])

            print(f"   Found {len(tool_logs)} tool usage logs")

            # Look for retry-related tool names
            retry_tools = []
            for log in tool_logs:
                tool_name = log.get("tool_name", "").lower()
                if "retry" in tool_name:
                    retry_tools.append(log)

            print(f"   Found {len(retry_tools)} retry-related tool calls:")
            for tool in retry_tools[:5]:
                tool_name = tool.get("tool_name")
                timestamp = tool.get("timestamp", "unknown")
                success = tool.get("success", False)
                status = "✅" if success else "❌"
                print(f"   {status} {tool_name} at {timestamp}")

            if len(retry_tools) > 0:
                print_success("Retry attempts are being logged to analytics!")
                return True
            else:
                print_warning("No retry tool calls found (may not have triggered retries)")
                return True
        else:
            print_warning(f"Could not fetch analytics: {analytics_response.status_code}")
            return True  # Don't fail on analytics endpoint issues

    except requests.exceptions.Timeout:
        print_warning(f"Analytics check timed out after {TIMEOUT} seconds")
        return True  # Don't fail the whole test on analytics issues
    except Exception as e:
        print_warning(f"Analytics check failed: {e}")
        return True  # Don't fail the whole test on analytics issues


def test_full_agent_flow():
    """Test full agent flow with retry system integrated."""
    print_section("Test 5: Full Agent Flow with Retry Integration")

    try:
        print_info("Sending complete agent request...")
        response = requests.post(
            f"{BASE_URL}/agent/message",
            json={
                "tenant_id": TENANT_ID,
                "message": "What is machine learning and how does it differ from deep learning?",
                "temperature": 0.0
            },
            timeout=TIMEOUT
        )

        if response.status_code == 200:
            data = response.json()

            has_text = "text" in data and data["text"]
            has_decision = "decision" in data
            has_tool_traces = "tool_traces" in data

            print(f"\n   Response components:")
            print(f"   - Has text: {'✅' if has_text else '❌'}")
            print(f"   - Has decision: {'✅' if has_decision else '❌'}")
            print(f"   - Has tool traces: {'✅' if has_tool_traces else '❌'}")

            if has_text:
                text_preview = data["text"][:100] + "..." if len(data["text"]) > 100 else data["text"]
                print(f"\n   Response preview: {text_preview}")

            if has_tool_traces:
                tool_traces = data["tool_traces"]
                print(f"\n   Tool traces: {len(tool_traces)} steps")
                for trace in tool_traces[:3]:
                    tool = trace.get("tool", "unknown")
                    print(f"   - {tool}")

            if has_text and has_decision:
                print_success("Full agent flow completed successfully!")
                return True
            else:
                print_error("Agent flow incomplete")
                return False
        else:
            print_error(f"Request failed: {response.status_code}")
            print_error(f"Response: {response.text[:200]}")
            return False

    except requests.exceptions.Timeout:
        print_error(f"Request timed out after {TIMEOUT} seconds")
        print_warning("   This may happen if Ollama is loading the model")
        return False
    except Exception as e:
        print_error(f"Error: {e}")
        return False


def test_agent_plan_endpoint():
    """Test agent plan endpoint shows retry considerations."""
    print_section("Test 6: Agent Plan Endpoint")

    try:
        print_info("Checking agent plan for query...")
        response = requests.post(
            f"{BASE_URL}/agent/plan",
            json={
                "tenant_id": TENANT_ID,
                "message": "Explain neural networks"
            },
            timeout=TIMEOUT
        )

        if response.status_code == 200:
            data = response.json()

            has_plan = "plan" in data
            has_intent = "intent" in data
            has_reason = "reason" in data

            print(f"\n   Plan components:")
            print(f"   - Has plan: {'✅' if has_plan else '❌'}")
            print(f"   - Has intent: {'✅' if has_intent else '❌'}")
            print(f"   - Has reason: {'✅' if has_reason else '❌'}")

            if has_plan:
                plan = data["plan"]
                print(f"\n   Plan action: {plan.get('action', 'unknown')}")
                print(f"   Plan tool: {plan.get('tool', 'none')}")

            if has_reason:
                print(f"   Reason: {data['reason'][:100]}...")

            print_success("Agent plan endpoint working!")
            return True
        else:
            print_warning(f"Plan endpoint returned: {response.status_code}")
            return True  # Don't fail on plan endpoint

    except requests.exceptions.Timeout:
        print_warning(f"Plan endpoint request timed out after {TIMEOUT} seconds")
        return True  # Don't fail on this
    except Exception as e:
        print_warning(f"Plan endpoint check failed: {e}")
        return True  # Don't fail on this


def main():
    """Run all integration tests."""
    print("\n" + "🔄" * 35)
    print("  Retry & Self-Correction System Integration Tests")
    print("🔄" * 35)

    # Check backend
    print_section("Prerequisites Check")
    if not check_backend():
        print_error("Backend is not running on http://localhost:8000")
        print_error("Please start the backend before running tests:")
        print_error("  uvicorn backend.api.main:app --port 8000")
        print_error("\nOr run: python start.bat")
        sys.exit(1)
    else:
        print_success("Backend is running!")

    print("\n" + "=" * 70)
    print("  Starting Integration Tests")
    print("=" * 70)
    print(f"\n⏱️ Timeout: {TIMEOUT} seconds per request")
    print("   (First request may take longer if Ollama needs to load the model)")
    print("\n⚠️ Note: Some tests may not trigger retries if:")
    print("   - RAG scores are already high (no retry needed)")
    print("   - Web search finds results immediately")
    print("   - System is working perfectly (which is good!)")
    print("\nPress Enter to continue or Ctrl+C to cancel...")
    try:
        input()
    except KeyboardInterrupt:
        print("\n\nTests cancelled.")
        sys.exit(0)

    results = []

    # Run tests
    results.append(("RAG Retry Scenario", test_rag_retry_scenario()))
    time.sleep(0.5)

    results.append(("Web Retry Scenario", test_web_retry_scenario()))
    time.sleep(0.5)

    results.append(("Reasoning Trace Verification", test_reasoning_trace_contains_retry_info()))
    time.sleep(0.5)

    results.append(("Analytics Logging", test_analytics_logging()))
    time.sleep(0.5)

    results.append(("Full Agent Flow", test_full_agent_flow()))
    time.sleep(0.5)

    results.append(("Agent Plan Endpoint", test_agent_plan_endpoint()))

    # Summary
    print_section("Test Summary", "=", 70)

    passed = 0
    for test_name, result in results:
        status = "✅ PASS" if result else "❌ FAIL"
        print(f"{status} - {test_name}")
        if result:
            passed += 1

    print(f"\n📊 Results: {passed}/{len(results)} tests passed")

    if passed == len(results):
        print_success("All tests passed!")
    elif passed >= len(results) * 0.8:
        print_warning("Most tests passed (some may not have triggered retries, which is fine)")
    else:
        print_error("Some tests failed. Check errors above.")

    print("\n💡 Tips:")
    print("   - Use /agent/debug endpoint to see detailed reasoning traces")
    print("   - Check /analytics/tool-usage for retry attempt logs")
    print("   - Retry system works automatically - no configuration needed")
    print("\n📋 Next steps:")
    print("   - Run unit tests: pytest backend/tests/test_retry_system.py -v")
    print("   - Check TESTING_GUIDE.md for more testing options")


if __name__ == "__main__":
    try:
        main()
    except KeyboardInterrupt:
        print("\n\nTests interrupted by user.")
        sys.exit(0)
    except Exception as e:
        print_error(f"Unexpected error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)
test_retry_quick.py DELETED
@@ -1,128 +0,0 @@

#!/usr/bin/env python3
"""
Quick test script for retry system - minimal version.

Run this to quickly verify retry functionality is working.
Usage: python test_retry_quick.py
"""

import requests
import json

BASE_URL = "http://localhost:8000"
TENANT_ID = "quick_test"
TIMEOUT = 120  # Increased timeout for LLM calls (model loading can take time)


def check_server_health():
    """Check if the backend server is running."""
    try:
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        if response.status_code == 200:
            return True
    except Exception:
        pass
    return False


def test_debug_endpoint():
    """Quick test using debug endpoint."""
    print("🔍 Testing retry system via /agent/debug endpoint...\n")

    # First check if server is running
    print("📡 Checking if backend server is running...")
    if not check_server_health():
        print(f"❌ Cannot connect to {BASE_URL}")
        print("   Make sure backend is running:")
        print("   - uvicorn backend.api.main:app --port 8000")
        print("   - Or use: python backend/mcp_server/server.py")
        return False
    print("✅ Backend server is running\n")

    try:
        print(f"⏱️ Sending request (timeout: {TIMEOUT}s)...")
        print("   Note: First request may take longer if Ollama needs to load the model\n")

        response = requests.post(
            f"{BASE_URL}/agent/debug",
            json={
                "tenant_id": TENANT_ID,
                "message": "What is quantum computing?"
            },
            timeout=TIMEOUT
        )

        if response.status_code == 200:
            data = response.json()
            reasoning_trace = data.get("reasoning_trace", [])

            print(f"✅ Connected to backend")
            print(f"📋 Found {len(reasoning_trace)} reasoning steps\n")

            # Look for retry steps
            retry_steps = []
            for step in reasoning_trace:
                step_str = json.dumps(step).lower()
                if any(keyword in step_str for keyword in ["retry", "repair", "threshold", "rewritten"]):
                    retry_steps.append(step)

            if retry_steps:
                print(f"⚡ Found {len(retry_steps)} retry-related steps:")
                for step in retry_steps[:3]:
                    print(f"   - {step.get('step', 'unknown')}")
                print("\n✅ Retry system is active and working!")
                return True
            else:
                print("ℹ️ No retry steps found (system working optimally - no retries needed)")
                print("\n✅ Retry system is integrated (retries only happen when needed)")
                return True
        else:
            print(f"❌ Request failed: {response.status_code}")
            try:
                error_data = response.json()
                print(f"   Error details: {error_data}")
            except Exception:
                print(f"   Response: {response.text[:200]}")
            return False

    except requests.exceptions.Timeout:
        print(f"❌ Request timed out after {TIMEOUT} seconds")
        print("\n   Possible causes:")
        print("   - Ollama is not running or model is not loaded")
        print("   - MCP server is not running")
        print("   - LLM call is taking too long")
        print("\n   To fix:")
        print("   1. Check if Ollama is running: ollama serve")
        print("   2. Check if model is available: ollama list")
        print("   3. Pull the model if needed: ollama pull llama3.1:latest")
        print("   4. Check if MCP server is running")
        return False
    except requests.exceptions.ConnectionError:
        print(f"❌ Cannot connect to {BASE_URL}")
        print("   Make sure backend is running:")
        print("   - uvicorn backend.api.main:app --port 8000")
        print("   - Or use: python backend/mcp_server/server.py")
        return False
    except Exception as e:
        print(f"❌ Error: {e}")
        print(f"   Error type: {type(e).__name__}")
        return False


if __name__ == "__main__":
    print("=" * 60)
    print("  Quick Retry System Test")
    print("=" * 60 + "\n")

    success = test_debug_endpoint()

    if success:
        print("\n" + "=" * 60)
        print("✅ Test completed successfully!")
        print("=" * 60)
        print("\n💡 For comprehensive tests, run:")
        print("   - pytest backend/tests/test_retry_system.py -v")
        print("   - python test_retry_integration.py")
    else:
        print("\n" + "=" * 60)
        print("❌ Test failed - check errors above")
        print("=" * 60)
test_simple.py DELETED
@@ -1,148 +0,0 @@
-"""
-Simple standalone test script - can be run directly without pytest
-
-Usage:
-    python test_simple.py
-"""
-
-import sys
-from pathlib import Path
-
-# Setup paths
-backend_dir = Path(__file__).parent / "backend"
-sys.path.insert(0, str(backend_dir))
-root_dir = Path(__file__).parent
-sys.path.insert(0, str(root_dir))
-
-def test_analytics_store():
-    """Test AnalyticsStore"""
-    print("\n" + "="*60)
-    print("Testing AnalyticsStore")
-    print("="*60)
-
-    try:
-        from api.storage.analytics_store import AnalyticsStore
-
-        store = AnalyticsStore()
-        tenant_id = "test_simple"
-
-        # Log some events
-        print("→ Logging tool usage...")
-        store.log_tool_usage(tenant_id, "rag", latency_ms=150, tokens_used=500, success=True)
-        store.log_tool_usage(tenant_id, "web", latency_ms=80, success=True)
-
-        print("→ Logging red-flag violation...")
-        store.log_redflag_violation(
-            tenant_id, "rule1", ".*password.*", "high",
-            "password123", confidence=0.95
-        )
-
-        print("→ Logging RAG search...")
-        store.log_rag_search(tenant_id, "test query", hits_count=5, avg_score=0.85)
-
-        # Get stats
-        print("\n📊 Tool Usage Stats:")
-        stats = store.get_tool_usage_stats(tenant_id)
-        print(f"  RAG: {stats.get('rag', {})}")
-        print(f"  Web: {stats.get('web', {})}")
-
-        print("\n🚨 Violations:")
-        violations = store.get_redflag_violations(tenant_id)
-        print(f"  Count: {len(violations)}")
-        if violations:
-            print(f"  First: {violations[0]['severity']} - {violations[0]['matched_text']}")
-
-        print("\n✅ AnalyticsStore test PASSED!")
-        return True
-
-    except Exception as e:
-        print(f"\n❌ AnalyticsStore test FAILED: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def test_admin_rules():
-    """Test Admin Rules with regex"""
-    print("\n" + "="*60)
-    print("Testing Admin Rules (Regex & Severity)")
-    print("="*60)
-
-    try:
-        from api.storage.rules_store import RulesStore
-        import re
-
-        store = RulesStore()
-        tenant_id = "test_simple"
-
-        # Add rule with regex
-        print("→ Adding rule with regex pattern...")
-        store.add_rule(
-            tenant_id,
-            "Block password queries",
-            pattern=".*password.*",
-            severity="high",
-            description="Blocks password queries"
-        )
-
-        # Get detailed rules
-        rules = store.get_rules_detailed(tenant_id)
-        print(f"\n📋 Rules found: {len(rules)}")
-
-        if rules:
-            rule = rules[0]
-            print(f"  Pattern: {rule['pattern']}")
-            print(f"  Severity: {rule['severity']}")
-            print(f"  Description: {rule['description']}")
-
-            # Test regex
-            print("\n🧪 Testing regex pattern...")
-            regex = re.compile(rule['pattern'], re.IGNORECASE)
-            test_cases = [
-                ("What is my password?", True),
-                ("Regular text", False)
-            ]
-            for text, should_match in test_cases:
-                match = regex.search(text) is not None
-                status = "✓" if match == should_match else "✗"
-                print(f"  {status} '{text}' -> {match} (expected {should_match})")
-
-        print("\n✅ Admin Rules test PASSED!")
-        return True
-
-    except Exception as e:
-        print(f"\n❌ Admin Rules test FAILED: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def main():
-    """Run all tests"""
-    print("\n🚀 IntegraChat Simple Tests")
-    print("="*60)
-
-    results = []
-
-    results.append(test_analytics_store())
-    results.append(test_admin_rules())
-
-    # Summary
-    print("\n" + "="*60)
-    print("Test Summary")
-    print("="*60)
-    passed = sum(results)
-    total = len(results)
-    print(f"Tests Passed: {passed}/{total}")
-
-    if passed == total:
-        print("✅ All tests passed!")
-        return 0
-    else:
-        print("❌ Some tests failed")
-        return 1
-
-
-if __name__ == "__main__":
-    exit(main())
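The red-flag rule check this deleted script exercised is a plain case-insensitive regex search. A minimal standalone sketch of that matching logic (the rule shape is assumed from this script; `matches_redflag` is an illustrative helper name, not a project API):

```python
import re

def matches_redflag(text: str, pattern: str) -> bool:
    # Case-insensitive search, mirroring the deleted script's
    # re.compile(rule['pattern'], re.IGNORECASE) check.
    return re.search(pattern, text, re.IGNORECASE) is not None
```

For example, `matches_redflag("What is my PASSWORD?", ".*password.*")` matches while `matches_redflag("Regular text", ".*password.*")` does not.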
test_supabase_connection.py DELETED
@@ -1,81 +0,0 @@
-#!/usr/bin/env python3
-"""Test Supabase connection directly"""
-
-import os
-from dotenv import load_dotenv
-
-load_dotenv()
-
-try:
-    from supabase import create_client
-
-    supabase_url = os.getenv("SUPABASE_URL")
-    supabase_key = os.getenv("SUPABASE_SERVICE_KEY")
-
-    print("Testing Supabase Connection:")
-    print("=" * 50)
-    print(f"URL: {supabase_url}")
-    print(f"Key length: {len(supabase_key) if supabase_key else 0}")
-    print()
-
-    if not supabase_url or not supabase_key:
-        print("ERROR: Missing Supabase credentials")
-        exit(1)
-
-    print("Creating Supabase client...")
-    client = create_client(supabase_url, supabase_key)
-    print("[OK] Client created successfully")
-
-    print()
-    print("Testing table access...")
-    tables = ["tool_usage_events", "redflag_violations", "rag_search_events", "agent_query_events"]
-
-    for table in tables:
-        try:
-            result = client.table(table).select("id").limit(1).execute()
-            print(f"[OK] Table '{table}' is accessible")
-        except Exception as e:
-            error_msg = str(e)
-            if "does not exist" in error_msg.lower() or "relation" in error_msg.lower():
-                print(f"[ERROR] Table '{table}' does NOT exist")
-                print(f"        Solution: Run supabase_analytics_tables.sql in Supabase SQL Editor")
-            elif "401" in error_msg or "Invalid API key" in error_msg:
-                print(f"[ERROR] Table '{table}' access denied - Invalid API key")
-                print(f"        Error: {error_msg[:100]}")
-            else:
-                print(f"[ERROR] Table '{table}' error: {error_msg[:100]}")
-
-    print()
-    print("Testing insert...")
-    try:
-        test_payload = {
-            "tenant_id": "test_connection",
-            "tool_name": "connection_test",
-            "timestamp": 1234567890,
-            "success": True
-        }
-        result = client.table("tool_usage_events").insert(test_payload).execute()
-        print("[OK] Test insert successful!")
-        print(f"     Inserted {len(result.data) if result.data else 1} row(s)")
-    except Exception as e:
-        error_msg = str(e)
-        print(f"[ERROR] Test insert failed: {error_msg[:200]}")
-        if "401" in error_msg or "Invalid API key" in error_msg:
-            print("        This indicates an invalid API key")
-        elif "does not exist" in error_msg.lower():
-            print("        This indicates the table doesn't exist")
-        elif "RLS" in error_msg or "policy" in error_msg.lower():
-            print("        This indicates RLS policy blocking the insert")
-
-    print()
-    print("=" * 50)
-    print("Connection test complete!")
-
-except ImportError:
-    print("ERROR: supabase-py package not installed")
-    print("Install it with: pip install supabase")
-except Exception as e:
-    print(f"ERROR: {e}")
-    import traceback
-    traceback.print_exc()
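The error branches in this deleted script map raw client error strings to likely causes. That string-heuristic triage can be sketched as a standalone function (a sketch only; `classify_supabase_error` is an illustrative name, and the substrings are the script's own heuristics, not a documented error contract):

```python
def classify_supabase_error(error_msg: str) -> str:
    """Map a raw Supabase client error message to a likely cause,
    using the same substring heuristics the deleted script applied."""
    low = error_msg.lower()
    if "401" in error_msg or "invalid api key" in low:
        return "invalid_api_key"        # wrong or truncated service key
    if "does not exist" in low or "relation" in low:
        return "missing_table"          # SQL setup script not yet run
    if "rls" in low or "policy" in low:
        return "rls_policy"             # row-level security blocking writes
    return "unknown"
```

Keeping the checks ordered from most to least specific avoids, for example, a 401 message that happens to mention a table being misreported as a missing table.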
verify_supabase_key.py DELETED
@@ -1,106 +0,0 @@
-#!/usr/bin/env python3
-"""
-Quick script to verify your Supabase API key format and connection.
-"""
-
-import os
-from dotenv import load_dotenv
-
-load_dotenv()
-
-url = os.getenv("SUPABASE_URL")
-key = os.getenv("SUPABASE_SERVICE_KEY")
-
-print("=" * 70)
-print("Supabase API Key Verification")
-print("=" * 70)
-print()
-
-if not url:
-    print("❌ SUPABASE_URL is not set in .env file")
-    exit(1)
-
-if not key:
-    print("❌ SUPABASE_SERVICE_KEY is not set in .env file")
-    exit(1)
-
-# Clean the key
-key = key.strip()
-
-print(f"🔑 SUPABASE_URL: {url[:30]}...")
-print(f"🔑 SUPABASE_SERVICE_KEY: {key[:20]}...{key[-10:] if len(key) > 30 else ''} ({len(key)} chars)")
-print()
-
-# Check key format
-issues = []
-
-if not key.startswith("eyJ"):
-    issues.append("❌ Key doesn't start with 'eyJ' (not a JWT token)")
-
-if len(key) < 100:
-    issues.append(f"❌ Key is too short ({len(key)} chars, expected ~200+)")
-
-if len(key) > 500:
-    issues.append(f"⚠️ Key is unusually long ({len(key)} chars)")
-
-if " " in key or "\n" in key or "\t" in key:
-    issues.append("❌ Key contains whitespace (spaces, newlines, tabs)")
-
-if key.startswith('"') or key.endswith('"'):
-    issues.append("❌ Key is wrapped in quotes (remove quotes from .env)")
-
-if key.startswith("'") or key.endswith("'"):
-    issues.append("❌ Key is wrapped in single quotes (remove quotes from .env)")
-
-if issues:
-    print("⚠️ Issues found with API key format:")
-    for issue in issues:
-        print(f"   {issue}")
-    print()
-else:
-    print("✅ Key format looks good!")
-    print()
-
-# Try to connect
-print("🔌 Testing connection to Supabase...")
-try:
-    from supabase import create_client
-    client = create_client(url, key)
-
-    # Try a simple query
-    try:
-        client.table("admin_rules").select("id").limit(0).execute()
-        print("✅ Connection successful! API key is valid.")
-        print()
-        print("💡 Next steps:")
-        print("   1. Make sure tables exist (run SQL scripts in Supabase)")
-        print("   2. Run: python migrate_sqlite_to_supabase.py")
-    except Exception as e:
-        error_str = str(e)
-        if "Invalid API key" in error_str or "401" in error_str:
-            print("❌ Connection failed: Invalid API key")
-            print()
-            print("🔧 How to fix:")
-            print("   1. Go to https://app.supabase.com")
-            print("   2. Select your project")
-            print("   3. Go to Settings → API")
-            print("   4. Find 'service_role' key (NOT 'anon' key)")
-            print("   5. Click 'Reveal' to show the full key")
-            print("   6. Copy the ENTIRE key (it's very long)")
-            print("   7. Update SUPABASE_SERVICE_KEY in .env file")
-            print("   8. Make sure NO quotes or spaces around the value")
-        elif "does not exist" in error_str or "relation" in error_str.lower():
-            print("⚠️ Connection works, but table doesn't exist yet")
-            print("   This is OK - create tables first, then migrate")
-        else:
-            print(f"❌ Connection error: {error_str}")
-
-except ImportError:
-    print("❌ Supabase Python client not installed")
-    print("   Run: pip install supabase")
-except Exception as e:
-    print(f"❌ Error: {e}")
-
-print()
-print("=" * 70)
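The format checks in this deleted script can be distilled into one reusable validator. A sketch, with the caveats that `check_supabase_key_format` is an illustrative name and the `eyJ` prefix and length thresholds are the script's own heuristics (JWTs base64url-encode a JSON header, hence `eyJ`), not guarantees from Supabase:

```python
def check_supabase_key_format(key: str) -> list[str]:
    """Return a list of format problems for a supposed service_role JWT."""
    issues = []
    key = key.strip()
    # Quotes and whitespace usually mean a mis-copied .env value.
    if key[:1] in ("'", '"') or key[-1:] in ("'", '"'):
        issues.append("key is wrapped in quotes")
    if any(ws in key for ws in (" ", "\n", "\t")):
        issues.append("key contains whitespace")
    # JWTs start with base64url('{"...') == "eyJ".
    if not key.startswith("eyJ"):
        issues.append("key is not a JWT (does not start with 'eyJ')")
    # service_role keys are typically 200+ chars; short keys are truncated.
    if len(key) < 100:
        issues.append(f"key too short ({len(key)} chars, expected ~200+)")
    return issues
```

An empty returned list means the key at least looks plausible; only a live request, as the script attempts next, proves it valid.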
verify_supabase_setup.py DELETED
@@ -1,181 +0,0 @@
-#!/usr/bin/env python3
-"""
-Verification script to ensure Supabase is configured and will be used for all future data.
-Run this after migration to confirm everything is set up correctly.
-"""
-
-import os
-import sys
-from pathlib import Path
-from dotenv import load_dotenv
-
-# Add backend to path
-backend_dir = Path(__file__).resolve().parent
-sys.path.insert(0, str(backend_dir))
-
-load_dotenv()
-
-from backend.api.storage.rules_store import RulesStore
-from backend.api.storage.analytics_store import AnalyticsStore
-
-def main():
-    print("=" * 70)
-    print("Supabase Configuration Verification")
-    print("=" * 70)
-    print()
-
-    # Check environment variables
-    print("1. Checking Environment Variables:")
-    postgres_url = os.getenv("POSTGRESQL_URL")
-    supabase_url = os.getenv("SUPABASE_URL")
-    supabase_key = os.getenv("SUPABASE_SERVICE_KEY")
-
-    has_postgres = bool(postgres_url)
-    has_supabase_api = bool(supabase_url and supabase_key)
-
-    if has_postgres:
-        masked = postgres_url[:30] + "..." + postgres_url[-20:] if len(postgres_url) > 50 else postgres_url
-        print(f"   ✅ POSTGRESQL_URL is set: {masked}")
-    else:
-        print("   ❌ POSTGRESQL_URL is not set")
-
-    if supabase_url:
-        print(f"   ✅ SUPABASE_URL is set: {supabase_url[:50]}...")
-    else:
-        print("   ❌ SUPABASE_URL is not set")
-
-    if supabase_key:
-        if len(supabase_key) > 100:
-            print(f"   ✅ SUPABASE_SERVICE_KEY is set: {supabase_key[:20]}... ({len(supabase_key)} chars)")
-        else:
-            print(f"   ❌ SUPABASE_SERVICE_KEY seems incomplete ({len(supabase_key)} chars, expected 200+)")
-            print("   ⚠️ This looks like an 'anon' key, not a 'service_role' key!")
-            print("   💡 You need the SERVICE_ROLE key (not anon key) for backend operations")
-            print("   💡 Get it from: Supabase Dashboard → Settings → API → service_role key")
-    else:
-        print("   ❌ SUPABASE_SERVICE_KEY is not set")
-
-    print()
-
-    # Check RulesStore
-    print("2. Checking RulesStore Configuration:")
-    try:
-        rules_store = RulesStore()
-        if rules_store.use_supabase:
-            print("   ✅ RulesStore is using Supabase")
-            print("   📦 Backend: Supabase (REST API)")
-        else:
-            print("   ❌ RulesStore is using SQLite (not Supabase)")
-            print("   ⚠️ Future rules will be saved to SQLite, not Supabase!")
-            print()
-            print("   To fix:")
-            print("   - Set SUPABASE_URL and SUPABASE_SERVICE_KEY in .env")
-    except Exception as e:
-        print(f"   ❌ Error initializing RulesStore: {e}")
-
-    print()
-
-    # Check AnalyticsStore
-    print("3. Checking AnalyticsStore Configuration:")
-    analytics_store = None
-    try:
-        analytics_store = AnalyticsStore()
-        if analytics_store.use_supabase:
-            print("   ✅ AnalyticsStore is using Supabase")
-            print("   📦 Backend: Supabase (REST API)")
-
-            # Test table verification
-            if analytics_store._tables_verified:
-                print("   ✅ Analytics tables verified and accessible")
-            else:
-                print("   ⚠️ Analytics tables not verified")
-                print("   ⚠️ This may cause inserts to fail!")
-                print("   💡 Solution: Run supabase_analytics_tables.sql in Supabase SQL Editor")
-
-            # Test actual insert
-            print()
-            print("   🧪 Testing actual insert to Supabase...")
-            try:
-                test_tenant = "test_verification"
-                analytics_store.log_tool_usage(
-                    tenant_id=test_tenant,
-                    tool_name="verification_test",
-                    latency_ms=1,
-                    success=True
-                )
-                print("   ✅ Test insert successful! Data is being saved to Supabase.")
-            except Exception as insert_error:
-                error_str = str(insert_error)
-                print(f"   ❌ Test insert failed: {insert_error}")
-                print("   💡 This indicates:")
-
-                # Check for specific error types
-                if "Invalid API key" in error_str or "401" in error_str:
-                    print("   ❌ INVALID API KEY - This is the main issue!")
-                    print("   💡 Your SUPABASE_SERVICE_KEY is incorrect or incomplete")
-                    print("   💡 Get the correct key from: Supabase Dashboard → Settings → API")
-                    print("   💡 Make sure you're using the 'service_role' key (not 'anon' key)")
-                    print("   💡 The service_role key should be 200+ characters long")
-                elif "does not exist" in error_str.lower() or "relation" in error_str.lower():
-                    print("      - Tables may not exist (run supabase_analytics_tables.sql)")
-                elif "RLS" in error_str or "policy" in error_str.lower():
-                    print("      - RLS policies may be blocking inserts")
-                else:
-                    print("      - Schema mismatch between code and database")
-                    print("      - Check Supabase logs for more details")
-        else:
-            print("   ❌ AnalyticsStore is using SQLite (not Supabase)")
-            print("   ⚠️ Future analytics will be saved to SQLite, not Supabase!")
-            print()
-            print("   To fix:")
-            if has_postgres:
-                print("   - POSTGRESQL_URL is set, but AnalyticsStore needs SUPABASE_URL + SUPABASE_SERVICE_KEY")
-            else:
-                print("   - Set SUPABASE_URL and SUPABASE_SERVICE_KEY in .env")
-    except Exception as e:
-        print(f"   ❌ Error initializing AnalyticsStore: {e}")
-
-    print()
-
-    # Summary
-    print("4. Summary:")
-    rules_ok = rules_store.use_supabase if 'rules_store' in locals() else False
-    analytics_ok = analytics_store.use_supabase if 'analytics_store' in locals() else False
-
-    if rules_ok and analytics_ok:
-        print("   ✅ All systems configured to use Supabase!")
-        print("   ✅ Future data will be saved to Supabase")
-        print()
-        print("   💡 Next steps:")
-        print("   1. Restart your FastAPI/MCP services to apply changes")
-        print("   2. Test by adding a rule or generating analytics")
-        print("   3. Verify data appears in Supabase Dashboard → Table Editor")
-    elif rules_ok or analytics_ok:
-        print("   ⚠️ Partial configuration:")
-        if rules_ok:
-            print("   ✅ Rules will use Supabase")
-        else:
-            print("   ❌ Rules will use SQLite")
-        if analytics_ok:
-            print("   ✅ Analytics will use Supabase")
-        else:
-            print("   ❌ Analytics will use SQLite")
-        print()
-        print("   To fully migrate to Supabase:")
-        print("   - Ensure SUPABASE_URL and SUPABASE_SERVICE_KEY are set in .env")
-        print("   - Restart your services")
-    else:
-        print("   ❌ Not configured for Supabase")
-        print("   ⚠️ All data will be saved to SQLite")
-        print()
-        print("   To migrate to Supabase:")
-        print("   1. Set SUPABASE_URL and SUPABASE_SERVICE_KEY in .env")
-        print("   2. Restart your services")
-        print("   3. Run this verification again")
-
-    print()
-    print("=" * 70)
-
-if __name__ == "__main__":
-    main()
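The decision this deleted script verifies — Supabase when both `SUPABASE_URL` and `SUPABASE_SERVICE_KEY` are present, otherwise a SQLite fallback — can be sketched as a tiny pure function. This is inferred from the script's own messages, not from the stores' source; `choose_backend` is an illustrative name:

```python
def choose_backend(env: dict) -> str:
    """Pick the storage backend the stores would use: Supabase REST when
    both URL and service key are set and non-empty, else SQLite fallback.
    Note POSTGRESQL_URL alone is not enough, per the script's warnings."""
    if env.get("SUPABASE_URL") and env.get("SUPABASE_SERVICE_KEY"):
        return "supabase"
    return "sqlite"
```

Passing the environment in as a dict keeps the rule trivially testable without touching `os.environ`.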
verify_tenant_isolation.py DELETED
@@ -1,449 +0,0 @@
-"""
-verify_tenant_isolation.py
-Script to verify tenant_id is properly used for data isolation
-
-Usage:
-    python verify_tenant_isolation.py
-
-This script tests:
-- Admin rules isolation
-- Analytics isolation
-- RAG document isolation
-- Database direct verification
-"""
-
-import requests
-import json
-from pathlib import Path
-import sys
-
-# Add backend to path
-backend_dir = Path(__file__).parent / "backend"
-sys.path.insert(0, str(backend_dir))
-root_dir = Path(__file__).parent
-sys.path.insert(0, str(root_dir))
-
-BASE_URL = "http://localhost:8000"
-
-
-def print_section(title):
-    """Print a formatted section header"""
-    print("\n" + "="*60)
-    print(f" {title}")
-    print("="*60)
-
-
-def verify_admin_rules_isolation():
-    """Verify admin rules are isolated by tenant_id"""
-    print_section("Testing Admin Rules Isolation")
-
-    tenant1 = "verify_tenant1"
-    tenant2 = "verify_tenant2"
-
-    try:
-        # Add rules for different tenants
-        print(f"\n1. Adding rule for {tenant1}...")
-        response = requests.post(
-            f"{BASE_URL}/admin/rules",
-            headers={"x-tenant-id": tenant1, "Content-Type": "application/json"},
-            json={"rule": f"Rule for {tenant1}", "severity": "high"},
-            timeout=5
-        )
-        print(f"   Status: {response.status_code}")
-
-        print(f"\n2. Adding rule for {tenant2}...")
-        response = requests.post(
-            f"{BASE_URL}/admin/rules",
-            headers={"x-tenant-id": tenant2, "Content-Type": "application/json"},
-            json={"rule": f"Rule for {tenant2}", "severity": "low"},
-            timeout=5
-        )
-        print(f"   Status: {response.status_code}")
-
-        # Get rules for tenant1
-        print(f"\n3. Getting rules for {tenant1}...")
-        response = requests.get(
-            f"{BASE_URL}/admin/rules",
-            headers={"x-tenant-id": tenant1},
-            timeout=5
-        )
-        tenant1_rules = response.json().get("rules", [])
-        print(f"   Found {len(tenant1_rules)} rules")
-        print(f"   Rules: {tenant1_rules}")
-
-        # Get rules for tenant2
-        print(f"\n4. Getting rules for {tenant2}...")
-        response = requests.get(
-            f"{BASE_URL}/admin/rules",
-            headers={"x-tenant-id": tenant2},
-            timeout=5
-        )
-        tenant2_rules = response.json().get("rules", [])
-        print(f"   Found {len(tenant2_rules)} rules")
-        print(f"   Rules: {tenant2_rules}")
-
-        # Verify isolation
-        print("\n5. Verifying isolation...")
-        tenant1_rule_text = f"Rule for {tenant1}"
-        tenant2_rule_text = f"Rule for {tenant2}"
-
-        tenant1_has_own_rule = tenant1_rule_text in tenant1_rules
-        tenant1_has_other_rule = tenant2_rule_text in tenant1_rules
-
-        tenant2_has_own_rule = tenant2_rule_text in tenant2_rules
-        tenant2_has_other_rule = tenant1_rule_text in tenant2_rules
-
-        print(f"   Tenant1 has own rule: {tenant1_has_own_rule} ✓")
-        print(f"   Tenant1 has other's rule: {tenant1_has_other_rule} {'❌ FAILED!' if tenant1_has_other_rule else '✓ PASSED'}")
-        print(f"   Tenant2 has own rule: {tenant2_has_own_rule} ✓")
-        print(f"   Tenant2 has other's rule: {tenant2_has_other_rule} {'❌ FAILED!' if tenant2_has_other_rule else '✓ PASSED'}")
-
-        if not tenant1_has_other_rule and not tenant2_has_other_rule:
-            print("\n✅ Admin Rules Isolation: PASSED")
-            return True
-        else:
-            print("\n❌ Admin Rules Isolation: FAILED")
-            return False
-
-    except requests.exceptions.ConnectionError:
-        print("\n⚠️ Cannot connect to API. Make sure it's running:")
-        print("   uvicorn backend.api.main:app --port 8000")
-        return None
-    except Exception as e:
-        print(f"\n❌ Error: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def verify_analytics_isolation():
-    """Verify analytics are isolated by tenant_id"""
-    print_section("Testing Analytics Isolation")
-
-    tenant1 = "verify_tenant1"
-    tenant2 = "verify_tenant2"
-
-    try:
-        # Make queries for different tenants
-        print(f"\n1. Making query as {tenant1}...")
-        response = requests.post(
-            f"{BASE_URL}/agent/message",
-            json={"tenant_id": tenant1, "message": "Test query from tenant1"},
-            timeout=10
-        )
-        print(f"   Status: {response.status_code}")
-
-        print(f"\n2. Making query as {tenant2}...")
-        response = requests.post(
-            f"{BASE_URL}/agent/message",
-            json={"tenant_id": tenant2, "message": "Test query from tenant2"},
-            timeout=10
-        )
-        print(f"   Status: {response.status_code}")
-
-        # Get analytics for tenant1
-        print(f"\n3. Getting analytics for {tenant1}...")
-        response = requests.get(
-            f"{BASE_URL}/analytics/overview?days=30",
-            headers={"x-tenant-id": tenant1},
-            timeout=5
-        )
-        tenant1_analytics = response.json()
-        print(f"   Total queries: {tenant1_analytics.get('total_queries', 0)}")
-
-        # Get analytics for tenant2
-        print(f"\n4. Getting analytics for {tenant2}...")
-        response = requests.get(
-            f"{BASE_URL}/analytics/overview?days=30",
-            headers={"x-tenant-id": tenant2},
-            timeout=5
-        )
-        tenant2_analytics = response.json()
-        print(f"   Total queries: {tenant2_analytics.get('total_queries', 0)}")
-
-        # Verify they're different
-        print("\n5. Verifying isolation...")
-        tenant1_queries = tenant1_analytics.get('total_queries', 0)
-        tenant2_queries = tenant2_analytics.get('total_queries', 0)
-
-        print(f"   Tenant1 queries: {tenant1_queries}")
-        print(f"   Tenant2 queries: {tenant2_queries}")
-
-        if tenant1_queries > 0 and tenant2_queries > 0:
-            print("\n✅ Analytics Isolation: PASSED (both tenants have their own data)")
-            return True
-        else:
-            print("\n⚠️ Analytics Isolation: Need more queries to verify")
-            return True
-
-    except requests.exceptions.ConnectionError:
-        print("\n⚠️ Cannot connect to API. Make sure it's running:")
-        print("   uvicorn backend.api.main:app --port 8000")
-        return None
-    except Exception as e:
-        print(f"\n❌ Error: {e}")
-        import traceback
-        traceback.print_exc()
-        return False
-
-
-def verify_rag_isolation():
-    """Verify RAG documents are isolated by tenant_id"""
-    print_section("Testing RAG Document Isolation")
-
-    tenant1 = "verify_tenant1"
-    tenant2 = "verify_tenant2"
-
-    try:
-        # Ingest documents for different tenants
-        print(f"\n1. Ingesting document for {tenant1}...")
-        response = requests.post(
-            f"{BASE_URL}/rag/ingest-document",
-            headers={"x-tenant-id": tenant1, "Content-Type": "application/json"},
-            json={
-                "content": "This is a confidential document for Tenant 1 only. Secret code: TENANT1_SECRET_12345",
-                "source_type": "raw_text"
-            },
-            timeout=10
-        )
-        print(f"   Status: {response.status_code}")
-        if response.status_code != 200:
-            print(f"   Error: {response.text}")
-
-        print(f"\n2. Ingesting document for {tenant2}...")
-        response = requests.post(
-            f"{BASE_URL}/rag/ingest-document",
-            headers={"x-tenant-id": tenant2, "Content-Type": "application/json"},
-            json={
-                "content": "This is a confidential document for Tenant 2 only. Secret code: TENANT2_SECRET_67890",
-                "source_type": "raw_text"
-            },
-            timeout=10
-        )
-        print(f"   Status: {response.status_code}")
-        if response.status_code != 200:
-            print(f"   Error: {response.text}")
-
-        # List documents for tenant1
-        print(f"\n3. Listing documents for {tenant1}...")
-        response = requests.get(
-            f"{BASE_URL}/rag/list",
-            headers={"x-tenant-id": tenant1},
-            timeout=5
-        )
-        tenant1_docs = response.json().get("documents", [])
-        print(f"   Found {len(tenant1_docs)} documents")
-
-        # List documents for tenant2
-        print(f"\n4. Listing documents for {tenant2}...")
-        response = requests.get(
-            f"{BASE_URL}/rag/list",
-            headers={"x-tenant-id": tenant2},
-            timeout=5
-        )
-        tenant2_docs = response.json().get("documents", [])
-        print(f"   Found {len(tenant2_docs)} documents")
-
-        # Search for tenant1's secret
-        print(f"\n5. Searching for tenant1's secret as tenant1...")
-        response = requests.post(
-            f"{BASE_URL}/rag/search",
-            headers={"x-tenant-id": tenant1, "Content-Type": "application/json"},
-            json={"query": "TENANT1_SECRET"},
-            timeout=10
-        )
-        tenant1_search = response.json()
-
-        # Check only the result texts, not the entire JSON (which includes the query)
-        tenant1_results = tenant1_search.get("results", [])
-        tenant1_found = False
-        for result in tenant1_results:
-            result_text = result.get("text", "") or result.get("content", "") or str(result)
-            if "TENANT1_SECRET" in result_text:
-                tenant1_found = True
-                break
-
-        print(f"   Found: {tenant1_found}")
-        if tenant1_results:
-            print(f"   Results count: {len(tenant1_results)}")
-            if tenant1_results:
-                print(f"   First result preview: {str(tenant1_results[0].get('text', ''))[:100]}...")
-
-        # Search for tenant1's secret as tenant2 (should NOT find it)
-        print(f"\n6. Searching for tenant1's secret as tenant2 (should NOT find)...")
-        response = requests.post(
-            f"{BASE_URL}/rag/search",
-            headers={"x-tenant-id": tenant2, "Content-Type": "application/json"},
|
| 277 |
-
json={"query": "TENANT1_SECRET"},
|
| 278 |
-
timeout=10
|
| 279 |
-
)
|
| 280 |
-
tenant2_search = response.json()
|
| 281 |
-
|
| 282 |
-
# Check results more carefully
|
| 283 |
-
tenant2_results = tenant2_search.get("results", [])
|
| 284 |
-
tenant2_found = False
|
| 285 |
-
tenant2_found_texts = []
|
| 286 |
-
|
| 287 |
-
for result in tenant2_results:
|
| 288 |
-
result_text = result.get("text", "") or result.get("content", "") or str(result)
|
| 289 |
-
if "TENANT1_SECRET" in result_text:
|
| 290 |
-
tenant2_found = True
|
| 291 |
-
tenant2_found_texts.append(result_text[:100])
|
| 292 |
-
|
| 293 |
-
print(f" Found: {tenant2_found}")
|
| 294 |
-
print(f" Results count: {len(tenant2_results)}")
|
| 295 |
-
if tenant2_results:
|
| 296 |
-
print(f" First result preview: {str(tenant2_results[0])[:150]}")
|
| 297 |
-
if tenant2_found_texts:
|
| 298 |
-
print(f" β οΈ Found TENANT1_SECRET in {len(tenant2_found_texts)} result(s):")
|
| 299 |
-
for i, text in enumerate(tenant2_found_texts, 1):
|
| 300 |
-
print(f" {i}. {text}...")
|
| 301 |
-
|
| 302 |
-
# Verify isolation
|
| 303 |
-
print("\n7. Verifying isolation...")
|
| 304 |
-
if tenant1_found and not tenant2_found:
|
| 305 |
-
print(" β
Tenant1 can find their own secret")
|
| 306 |
-
print(" β
Tenant2 cannot find tenant1's secret")
|
| 307 |
-
print("\nβ
RAG Isolation: PASSED")
|
| 308 |
-
return True
|
| 309 |
-
elif tenant1_found and tenant2_found:
|
| 310 |
-
print(" β Tenant2 can see tenant1's secret - ISOLATION FAILED!")
|
| 311 |
-
print(f" Debug: tenant2 found {len(tenant2_found_texts)} result(s) containing TENANT1_SECRET")
|
| 312 |
-
print("\nβ RAG Isolation: FAILED")
|
| 313 |
-
return False
|
| 314 |
-
else:
|
| 315 |
-
print(" β οΈ Could not verify (may need RAG server running)")
|
| 316 |
-
print("\nβ οΈ RAG Isolation: INCONCLUSIVE")
|
| 317 |
-
return None
|
| 318 |
-
|
| 319 |
-
except requests.exceptions.ConnectionError:
|
| 320 |
-
print("\nβ οΈ Cannot connect to API/RAG server. Make sure they're running:")
|
| 321 |
-
print(" uvicorn backend.api.main:app --port 8000")
|
| 322 |
-
print(" python backend/mcp_server/server.py")
|
| 323 |
-
return None
|
| 324 |
-
except Exception as e:
|
| 325 |
-
print(f"\nβ Error: {e}")
|
| 326 |
-
import traceback
|
| 327 |
-
traceback.print_exc()
|
| 328 |
-
return False
|
| 329 |
-
|
| 330 |
-
|
| 331 |
-
def verify_database_directly():
|
| 332 |
-
"""Verify tenant_id in database directly"""
|
| 333 |
-
print_section("Verifying Database Directly")
|
| 334 |
-
|
| 335 |
-
try:
|
| 336 |
-
from api.storage.analytics_store import AnalyticsStore
|
| 337 |
-
from api.storage.rules_store import RulesStore
|
| 338 |
-
|
| 339 |
-
# Check analytics store
|
| 340 |
-
print("\n1. Checking Analytics Store...")
|
| 341 |
-
analytics = AnalyticsStore()
|
| 342 |
-
|
| 343 |
-
# Log events for different tenants
|
| 344 |
-
analytics.log_tool_usage("db_verify_tenant1", "rag", latency_ms=100)
|
| 345 |
-
analytics.log_tool_usage("db_verify_tenant2", "web", latency_ms=200)
|
| 346 |
-
|
| 347 |
-
# Get stats
|
| 348 |
-
tenant1_stats = analytics.get_tool_usage_stats("db_verify_tenant1")
|
| 349 |
-
tenant2_stats = analytics.get_tool_usage_stats("db_verify_tenant2")
|
| 350 |
-
|
| 351 |
-
print(f" Tenant1 stats: {list(tenant1_stats.keys())}")
|
| 352 |
-
print(f" Tenant2 stats: {list(tenant2_stats.keys())}")
|
| 353 |
-
|
| 354 |
-
# Check rules store
|
| 355 |
-
print("\n2. Checking Rules Store...")
|
| 356 |
-
rules = RulesStore()
|
| 357 |
-
|
| 358 |
-
rules.add_rule("db_verify_tenant1", "Rule 1", severity="high")
|
| 359 |
-
rules.add_rule("db_verify_tenant2", "Rule 2", severity="low")
|
| 360 |
-
|
| 361 |
-
tenant1_rules = rules.get_rules("db_verify_tenant1")
|
| 362 |
-
tenant2_rules = rules.get_rules("db_verify_tenant2")
|
| 363 |
-
|
| 364 |
-
print(f" Tenant1 rules: {tenant1_rules}")
|
| 365 |
-
print(f" Tenant2 rules: {tenant2_rules}")
|
| 366 |
-
|
| 367 |
-
# Verify isolation
|
| 368 |
-
print("\n3. Verifying isolation...")
|
| 369 |
-
tenant1_has_rule1 = "Rule 1" in tenant1_rules
|
| 370 |
-
tenant1_has_rule2 = "Rule 2" in tenant1_rules
|
| 371 |
-
tenant2_has_rule1 = "Rule 1" in tenant2_rules
|
| 372 |
-
tenant2_has_rule2 = "Rule 2" in tenant2_rules
|
| 373 |
-
|
| 374 |
-
print(f" Tenant1 has Rule 1: {tenant1_has_rule1} β")
|
| 375 |
-
print(f" Tenant1 has Rule 2: {tenant1_has_rule2} {'β FAILED!' if tenant1_has_rule2 else 'β PASSED'}")
|
| 376 |
-
print(f" Tenant2 has Rule 1: {tenant2_has_rule1} {'β FAILED!' if tenant2_has_rule1 else 'β PASSED'}")
|
| 377 |
-
print(f" Tenant2 has Rule 2: {tenant2_has_rule2} β")
|
| 378 |
-
|
| 379 |
-
if tenant1_has_rule1 and not tenant1_has_rule2 and not tenant2_has_rule1 and tenant2_has_rule2:
|
| 380 |
-
print("\nβ
Database Direct Verification: PASSED")
|
| 381 |
-
return True
|
| 382 |
-
else:
|
| 383 |
-
print("\nβ Database Direct Verification: FAILED")
|
| 384 |
-
return False
|
| 385 |
-
|
| 386 |
-
except Exception as e:
|
| 387 |
-
print(f"\nβ Error: {e}")
|
| 388 |
-
import traceback
|
| 389 |
-
traceback.print_exc()
|
| 390 |
-
return False
|
| 391 |
-
|
| 392 |
-
|
| 393 |
-
def main():
|
| 394 |
-
"""Run all verification tests"""
|
| 395 |
-
print("\n" + "π" * 30)
|
| 396 |
-
print("Tenant ID Isolation Verification")
|
| 397 |
-
print("π" * 30)
|
| 398 |
-
|
| 399 |
-
results = []
|
| 400 |
-
|
| 401 |
-
# Test 1: Database direct verification (always runs, no API needed)
|
| 402 |
-
print("\nπ Running database direct verification (no API required)...")
|
| 403 |
-
result = verify_database_directly()
|
| 404 |
-
if result is not None:
|
| 405 |
-
results.append(result)
|
| 406 |
-
|
| 407 |
-
# Test 2: Admin rules isolation (requires API running)
|
| 408 |
-
print("\nπ Testing admin rules isolation (requires API)...")
|
| 409 |
-
result = verify_admin_rules_isolation()
|
| 410 |
-
if result is not None:
|
| 411 |
-
results.append(result)
|
| 412 |
-
|
| 413 |
-
# Test 3: Analytics isolation (requires API running)
|
| 414 |
-
print("\nπ Testing analytics isolation (requires API)...")
|
| 415 |
-
result = verify_analytics_isolation()
|
| 416 |
-
if result is not None:
|
| 417 |
-
results.append(result)
|
| 418 |
-
|
| 419 |
-
# Test 4: RAG isolation (requires API and RAG server running)
|
| 420 |
-
print("\nπ Testing RAG document isolation (requires API + RAG server)...")
|
| 421 |
-
result = verify_rag_isolation()
|
| 422 |
-
if result is not None:
|
| 423 |
-
results.append(result)
|
| 424 |
-
|
| 425 |
-
# Summary
|
| 426 |
-
print_section("Verification Summary")
|
| 427 |
-
passed = sum(1 for r in results if r is True)
|
| 428 |
-
failed = sum(1 for r in results if r is False)
|
| 429 |
-
total = len(results)
|
| 430 |
-
|
| 431 |
-
print(f"\nTests Completed: {total}")
|
| 432 |
-
print(f"β
Passed: {passed}")
|
| 433 |
-
print(f"β Failed: {failed}")
|
| 434 |
-
|
| 435 |
-
if total == 0:
|
| 436 |
-
print("\nβ οΈ No tests could run. Make sure services are running:")
|
| 437 |
-
print(" - API: uvicorn backend.api.main:app --port 8000")
|
| 438 |
-
print(" - MCP Server: python backend/mcp_server/server.py")
|
| 439 |
-
elif failed == 0 and passed > 0:
|
| 440 |
-
print("\nβ
All tenant isolation tests PASSED!")
|
| 441 |
-
elif failed > 0:
|
| 442 |
-
print("\nβ Some tenant isolation tests FAILED!")
|
| 443 |
-
else:
|
| 444 |
-
print("\nβ οΈ Some tests were inconclusive or skipped")
|
| 445 |
-
|
| 446 |
-
|
| 447 |
-
if __name__ == "__main__":
|
| 448 |
-
main()
|
| 449 |
-
|