Commit: c02d04e
Parent(s): 75758a5
adding the intelligence agent

README.md CHANGED
@@ -47,43 +47,41 @@ Retail investors are at a massive disadvantage. They lack the sophisticated tool
## Local Setup & Installation

Follow these steps to run the project locally.

Prerequisites:

- Docker & Docker Compose
- Python 3.10+
- Node.js & npm
1. Clone the repository:

```bash
git clone https://github.com/your-username/quantitative-analysis-platform.git
cd quantitative-analysis-platform
```
2. Set up environment variables:

Create a .env file in the root of the project by copying the example:

```bash
cp .env.example .env
```
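The project's notes say .env.example should mirror .env with secrets replaced by placeholders like your_key_here. A minimal hypothetical sketch of such a file — every variable name below is an assumption for illustration, not the project's actual configuration:

```bash
# .env.example (hypothetical) — copy to .env and fill in real values
DATABASE_URL=postgresql://user:password@localhost:5432/app
REDIS_URL=redis://localhost:6379/0
GEMINI_API_KEY=your_key_here
```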
3. Build and run the services:

```bash
docker-compose up --build -d
```
4. Access the applications:

- Frontend: http://localhost:5173
- Backend API Docs: http://localhost:8000/docs
## Key Challenges & Learnings

- Asynchronous Workflow: Building a resilient, multi-stage pipeline with Celery required careful state management and error handling so the process could continue even if one of the scraping agents failed.
- Database Session Management: The most challenging bug was ensuring that SQLAlchemy database sessions were correctly handled within the forked processes of the Celery workers. The final solution used a "one task, multiple commits" pattern for maximum reliability.
- AI Prompt Engineering: Crafting an effective prompt for the Gemini Analyst Agent was an iterative process: structuring the input data and giving the LLM a clear "persona" and a required output format (Markdown) produced consistent, high-quality results.
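The failure-isolation idea behind the asynchronous workflow can be sketched in plain Python — the Celery task machinery is omitted, and the agent functions here are hypothetical stand-ins for the project's scraping agents:

```python
# Sketch: run every agent independently so one failure does not abort the run.
# In the real pipeline these would be Celery tasks; plain functions stand in.

def run_pipeline(agents, ticker):
    """Run every agent, collecting successes and failures separately."""
    results, errors = {}, {}
    for name, agent in agents.items():
        try:
            results[name] = agent(ticker)
        except Exception as exc:  # a failed agent is recorded, not fatal
            errors[name] = str(exc)
    return results, errors

def news_agent(ticker):
    return f"news for {ticker}"

def filings_agent(ticker):
    raise RuntimeError("scrape failed")

results, errors = run_pipeline(
    {"news": news_agent, "filings": filings_agent}, "AAPL"
)
# results == {"news": "news for AAPL"}; errors == {"filings": "scrape failed"}
```

The pipeline's final state then reflects partial success explicitly, which is what lets a downstream stage proceed with whatever data was gathered.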
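The "one task, multiple commits" pattern can also be illustrated without Celery or SQLAlchemy: inside a single worker task, each stage commits its own work, so a later failure cannot roll back earlier progress. The StubSession below is a hypothetical stand-in for a SQLAlchemy session:

```python
# Sketch of committing after every pipeline stage inside one task.

class StubSession:
    """Minimal stand-in exposing the add/commit/rollback surface used here."""
    def __init__(self):
        self.pending, self.committed = [], []
    def add(self, row):
        self.pending.append(row)
    def commit(self):
        self.committed.extend(self.pending)
        self.pending.clear()
    def rollback(self):
        self.pending.clear()

def analysis_task(session, stages):
    for stage in stages:
        try:
            session.add(stage())
            session.commit()      # commit after every stage
        except Exception:
            session.rollback()    # only the failed stage's work is lost

session = StubSession()
analysis_task(session, [lambda: "scraped", lambda: 1 / 0, lambda: "report"])
# session.committed == ["scraped", "report"]
```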
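The persona-plus-required-format prompting pattern described above can be sketched as a small prompt builder — the wording, metric names, and section headings below are illustrative assumptions, and the actual Gemini API call is not shown:

```python
# Sketch: structure the input data and pin down persona and output format.

def build_analyst_prompt(ticker, metrics):
    """Return a prompt with a clear persona and a required Markdown format."""
    data_lines = "\n".join(f"- {k}: {v}" for k, v in metrics.items())
    return (
        "You are a senior quantitative equity analyst.\n"   # persona
        f"Analyze the following data for {ticker}:\n"
        f"{data_lines}\n"
        "Respond in Markdown with sections: Summary, Risks, Verdict."  # format
    )

prompt = build_analyst_prompt("AAPL", {"P/E": 29.4, "Revenue growth": "8%"})
```

Fixing the persona and the output skeleton in the prompt is what makes the model's responses consistent enough to render directly in the dashboard.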