# Universal Deep Research Backend (UDR-B)

A FastAPI-based backend service that provides intelligent research and reporting capabilities using large language models and web search APIs. The system can perform comprehensive research on user queries, aggregate findings, and generate detailed reports.

This software is provided exclusively for research and demonstration purposes. It is intended solely as a prototype to demonstrate research concepts and methodologies in artificial intelligence and automated research systems.

- This software is not intended for production deployment, commercial use, or any real-world application where reliability, accuracy, or safety is required.
- This software contains experimental features, unproven methodologies, and research-grade implementations that may contain bugs, security vulnerabilities, or other issues.
- The software is provided "AS IS" without any warranties. Neither NVIDIA Corporation nor the authors shall be liable for any damages arising from the use of this software to the fullest extent permitted by law.

By using this software, you acknowledge that you have read and understood the complete DISCLAIMER file and agree to be bound by its terms. For the complete legal disclaimer, please see the [DISCLAIMER](DISCLAIMER.txt) file in this directory.

## Features

- **Intelligent Research**: Automated web search and content analysis using the Tavily API
- **Multi-Model Support**: Configurable LLM backends (OpenAI, NVIDIA, local vLLM)
- **Streaming Responses**: Real-time progress updates via Server-Sent Events
- **Session Management**: Persistent research sessions with unique identifiers
- **Flexible Architecture**: Modular design with configurable components
- **Dry Run Mode**: Testing capabilities with mock data
- **Advanced Framing**: Custom FrameV4 system that improves the reliability of instruction following across all models

## Architecture

The backend consists of several key components:

- **`main.py`**: FastAPI application with research endpoints
- **`scan_research.py`**: Core research and reporting logic
- **`clients.py`**: LLM and search API client management
- **`frame/`**: Advanced reliability framework (FrameV4)
- **`items.py`**: Data persistence utilities
- **`sessions.py`**: Session key generation and management
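
Together, these modules implement a streaming research pipeline: `main.py` exposes the HTTP endpoints, `scan_research.py` drives the research and reporting logic, and progress is streamed back to the client as Server-Sent Events. As a rough, illustrative sketch (not the project's actual `main.py`), a minimal FastAPI endpoint streaming SSE progress events might look like this:

```python
# Illustrative sketch only, not the project's actual main.py. It shows the
# general FastAPI + Server-Sent Events pattern the research endpoints use.
import asyncio

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def progress_events():
    # Placeholder generator; the real backend streams research progress here.
    for step in ("searching", "reading sources", "writing report"):
        yield f"data: {step}\n\n"  # each SSE event is a "data: ..." block
        await asyncio.sleep(0.1)


@app.post("/api/research")
async def research():
    return StreamingResponse(progress_events(), media_type="text/event-stream")
```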

## Quick Start

### Prerequisites

- Python 3.8+
- API keys for your chosen LLM provider
- Tavily API key for web search functionality

### Installation

#### Option 1: Automated Setup (Recommended)

The easiest way to set up the backend is using the provided `setup.py` script:

1. **Clone the repository**:

   ```bash
   git clone <repository-url>
   cd backend
   ```

2. **Run the setup script**:

   ```bash
   python3 setup.py
   ```

   The setup script will:

   - Check Python version compatibility
   - Create necessary directories (`logs/`, `instances/`, `mock_instances/`)
   - Set up environment configuration (`.env` file)
   - Check for required API key files
   - Install Python dependencies
   - Validate the setup

3. **Configure API keys**:
   Create the following files with your API keys:

   ```bash
   echo "your-tavily-api-key" > tavily_api.txt
   echo "your-llm-api-key" > nvdev_api.txt  # or openai_api.txt
   ```

4. **Start the server**:

   ```bash
   ./launch_server.sh
   ```

   **Note**: The `launch_server.sh` script is the recommended way to start the server as it:

   - Automatically loads environment variables from `.env`
   - Sets proper default configurations
   - Runs the server in the background with logging
   - Provides process management information

#### Option 2: Manual Setup

If you prefer to set up the backend manually, follow these steps:

1. **Clone the repository**:

   ```bash
   git clone <repository-url>
   cd backend
   ```

2. **Create virtual environment**:

   ```bash
   python3 -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

3. **Install dependencies**:

   ```bash
   pip install -r requirements.txt
   ```

4. **Create necessary directories**:

   ```bash
   mkdir -p logs instances mock_instances
   ```

5. **Set up environment configuration**:
   Copy the example environment file and configure it:

   ```bash
   cp env.example .env
   # Edit .env file with your configuration
   ```

6. **Configure API keys**:
   Create the following files with your API keys:

   ```bash
   echo "your-tavily-api-key" > tavily_api.txt
   echo "your-llm-api-key" > nvdev_api.txt  # or e.g., openai_api.txt
   ```

7. **Start the server**:

   ```bash
   ./launch_server.sh
   ```

   **Note**: As in Option 1, the `launch_server.sh` script is the recommended way to start the server because it:

   - Automatically loads environment variables from `.env`
   - Sets proper default configurations
   - Runs the server in the background with logging
   - Provides process management information

The server will be available at `http://localhost:8000`.

You can now quickly test it:

```bash
curl -X POST http://localhost:8000/api/research \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "What are the latest developments in quantum computing?",
    "start_from": "research"
  }'
```

## Configuration

### Environment Variables

Create a `.env` file in the backend directory:

```env
# Server Configuration
HOST=0.0.0.0
PORT=8000
LOG_LEVEL=info

# CORS Configuration
FRONTEND_URL=http://localhost:3000

# Model Configuration
DEFAULT_MODEL=llama-3.1-nemotron-253b
LLM_BASE_URL=https://integrate.api.nvidia.com/v1
LLM_API_KEY_FILE=nvdev_api.txt

# Search Configuration
TAVILY_API_KEY_FILE=tavily_api.txt

# Research Configuration
MAX_TOPICS=1
MAX_SEARCH_PHRASES=1
MOCK_DIRECTORY=mock_instances/stocks_24th_3_sections

# Logging Configuration
LOG_DIR=logs
TRACE_ENABLED=true
```
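
For reference, here is a minimal sketch of reading these settings from Python. It assumes the `python-dotenv` package; in practice, `launch_server.sh` already exports the variables from `.env` before starting the server.

```python
# Minimal sketch (assumes python-dotenv); launch_server.sh normally exports
# these variables from .env before the server starts.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory, if present

HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8000"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "info")
FRONTEND_URL = os.getenv("FRONTEND_URL", "http://localhost:3000")
MAX_TOPICS = int(os.getenv("MAX_TOPICS", "1"))

print(f"Serving on {HOST}:{PORT} (log level {LOG_LEVEL}, CORS origin {FRONTEND_URL})")
```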

### Model Configuration

The system supports multiple LLM providers. Configure models in `clients.py`:

```python
MODEL_CONFIGS = {
    "llama-3.1-8b": {
        "base_url": "https://integrate.api.nvidia.com/v1",
        "api_type": "nvdev",
        "completion_config": {
            "model": "nvdev/meta/llama-3.1-8b-instruct",
            "temperature": 0.2,
            "top_p": 0.7,
            "max_tokens": 2048,
            "stream": True
        }
    },
    # Add more models as needed
}
```
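
For example, a hypothetical entry for a locally hosted vLLM server exposing an OpenAI-compatible API could mirror the layout above. The key names follow the example; the `api_type` value, local URL, and model name below are assumptions, not project defaults.

```python
# Hypothetical MODEL_CONFIGS entry for a local vLLM server. The "api_type"
# value, URL, and model name are assumptions; adapt them to your setup.
MODEL_CONFIGS["llama-3.1-8b-local"] = {
    "base_url": "http://localhost:8001/v1",  # wherever your vLLM server listens
    "api_type": "openai",                    # assumed OpenAI-compatible client
    "completion_config": {
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "temperature": 0.2,
        "top_p": 0.7,
        "max_tokens": 2048,
        "stream": True,
    },
}
```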

### API Key Files

The system expects API keys in text files:

- `tavily_api.txt`: Tavily search API key
- `nvdev_api.txt`: NVIDIA API key
- `openai_api.txt`: OpenAI API key
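
A minimal sketch of how such a key file can be read (the backend's own helper may differ): the whole file content, stripped of surrounding whitespace.

```python
# Minimal sketch of reading an API key file; the backend's own helper may differ.
from pathlib import Path


def read_api_key(path: str) -> str:
    key = Path(path).read_text().strip()
    if not key:
        raise ValueError(f"API key file {path} is empty")
    return key


tavily_key = read_api_key("tavily_api.txt")
```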

## API Endpoints

### GET `/`

Health check endpoint that returns a status message.
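
A quick way to verify the server is up from Python (assumes the `requests` package):

```python
# Simple liveness check against the health endpoint.
import requests

resp = requests.get("http://localhost:8000/")
print(resp.status_code, resp.text)
```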

### POST `/api/research`

Main research endpoint that performs research and generates reports.

**Request Body**:

```json
{
  "dry": false,
  "session_key": "optional-session-key",
  "start_from": "research",
  "prompt": "Your research query here",
  "mock_directory": "mock_instances/stocks_24th_3_sections"
}
```

**Parameters**:

- `dry` (boolean): Use mock data for testing
- `session_key` (string, optional): Existing session to continue
- `start_from` (string): "research" or "reporting"
- `prompt` (string): Research query (required for research phase)
- `mock_directory` (string): Directory for mock data

**Response**: Server-Sent Events stream with research progress
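
A minimal Python client for consuming this stream might look like the sketch below (assumes the `requests` package; the exact payload of each event is defined by the backend and not documented here):

```python
# Sketch of streaming progress events from /api/research with requests.
import requests

payload = {
    "prompt": "What are the latest developments in quantum computing?",
    "start_from": "research",
}

with requests.post(
    "http://localhost:8000/api/research", json=payload, stream=True
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if line:  # skip the blank lines separating SSE events
            print(line)
```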

### POST `/api/research2`

Research endpoint built on the advanced FrameV4 reliability framework. This is the endpoint that supports custom, user-defined deep research strategies.

**Request Body**:

```json
{
  "prompt": "Your research query",
  "strategy_id": "custom-strategy",
  "strategy_content": "Custom research strategy"
}
```
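
A corresponding Python call might look like this sketch (field names are taken from the request body above; treating the response as a stream, like `/api/research`, is an assumption):

```python
# Sketch of submitting a custom deep research strategy to /api/research2.
import requests

payload = {
    "prompt": "Summarize recent progress in solid-state batteries",
    "strategy_id": "custom-strategy",
    "strategy_content": (
        "1. Search for recent review articles. "
        "2. Extract key findings per topic. "
        "3. Draft a sectioned report with citations."
    ),
}

with requests.post(
    "http://localhost:8000/api/research2", json=payload, stream=True
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if line:
            print(line)
```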

## Usage Examples

### Basic Research Request

```bash
curl -X POST http://localhost:8000/api/research \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "What are the latest developments in quantum computing?",
    "start_from": "research"
  }'
```

### Dry Run Testing

```bash
curl -X POST http://localhost:8000/api/research \
  -H "Content-Type: application/json" \
  -d '{
    "dry": true,
    "prompt": "Test research query",
    "start_from": "research"
  }'
```

### Continue from Reporting Phase

```bash
# Replace the session key with the key of a session you started previously
curl -X POST http://localhost:8000/api/research \
  -H "Content-Type: application/json" \
  -d '{
    "session_key": "20241201T120000Z-abc12345",
    "start_from": "reporting"
  }'
```

## Development

### Logging

Logs are stored in the `logs/` directory:

- `comms_YYYYMMDD_HH-MM-SS.log`: Communication traces
- `{instance_id}_compilation.log`: Frame compilation logs
- `{instance_id}_execution.log`: Frame execution logs

### Mock Data

Mock research data is available in `mock_instances/`:

- `stocks_24th_3_sections/`: Stock market research data
- `stocks_30th_short/`: Short stock market data

## Deployment

### Production Deployment

1. **Set up environment**:

   ```bash
   export HOST=0.0.0.0
   export PORT=8000
   export LOG_LEVEL=info
   ```

2. **Run with gunicorn**:

   ```bash
   pip install gunicorn
   gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:8000
   ```

   **Note**: For development, prefer using `./launch_server.sh`, which provides better process management and logging.

### Docker Deployment

Create a `Dockerfile`:

```dockerfile
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

## Troubleshooting

### Common Issues

1. **API Key Errors**: Ensure API key files exist and contain valid keys
2. **CORS Errors**: Check `FRONTEND_URL` configuration
3. **Model Errors**: Verify model configuration in `clients.py`
4. **Permission Errors**: Ensure write permissions for `logs/` and `instances/` directories

### Debug Mode

Enable debug logging by setting the `LOG_LEVEL` environment variable:

```bash
export LOG_LEVEL=debug
./launch_server.sh
```

Or run uvicorn directly for debugging:

```bash
uvicorn main:app --reload --log-level=debug
```

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## License and Disclaimer

This software is provided for research and demonstration purposes only. Please refer to the [DISCLAIMER](DISCLAIMER.txt) file for complete terms and conditions regarding the use of this software. You can find the license in [LICENSE](LICENSE.txt).

**Do not use this code in production.**

## Support

For issues and questions:

- Create an issue in the repository
- Check the logs in the `logs/` directory
- Review the configuration settings