Commit 27d7a12 (parent: 2b83a21): adding the readme file

README.md (CHANGED)
```diff
@@ -48,5 +48,44 @@ graph TD
     B -->|13. Read Status/Result| F;
   end

-## Local Setup
```
## Local Setup & Installation

Follow these steps to run the project locally.

**Prerequisites:**

- Docker & Docker Compose
- Python 3.10+
- Node.js & npm
1. **Clone the repository:**

   ```bash
   git clone https://github.com/your-username/quantitative-analysis-platform.git
   cd quantitative-analysis-platform
   ```
2. **Set up environment variables:**

   Create a `.env` file in the root of the project by copying the example:

   ```bash
   cp .env.example .env
   ```
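As an illustrative sketch only, a `.env.example` might look like the fragment below; the variable names are assumptions based on the stack described in this README (a database, a Celery broker, and the Gemini API), not the project's actual keys.

```env
# Hypothetical variable names -- check the project's real .env.example
DATABASE_URL=postgresql://user:password@db:5432/app
CELERY_BROKER_URL=redis://redis:6379/0
GEMINI_API_KEY=your_key_here
```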
3. **Build and run the services:**

   ```bash
   docker-compose up --build -d
   ```
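For orientation, a minimal `docker-compose.yml` matching the ports used below might be sketched as follows; the service names, build paths, Celery worker, and Redis broker are assumptions drawn from the pipeline described in this README, not the project's actual file.

```yaml
# Hypothetical sketch -- the real compose file may differ
services:
  backend:            # assumed FastAPI app serving /docs on 8000
    build: ./backend
    ports:
      - "8000:8000"
    env_file: .env
  worker:             # assumed Celery worker sharing the backend image
    build: ./backend
    command: celery -A app.worker worker
    env_file: .env
  redis:              # assumed Celery broker / result backend
    image: redis:7
  frontend:
    build: ./frontend
    ports:
      - "5173:5173"
```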
4. **Access the applications:**

   - Frontend: http://localhost:5173
   - Backend API Docs: http://localhost:8000/docs
## 💡 Key Challenges & Learnings

- **Asynchronous Workflow:** Building a resilient, multi-stage pipeline with Celery required careful state management and error handling so that the process could continue even if one of the scraping agents failed.
- **Database Session Management:** The most challenging bug was ensuring that SQLAlchemy database sessions were handled correctly within the forked processes of the Celery workers. The final solution used a "one task, multiple commits" pattern for maximum reliability.
- **AI Prompt Engineering:** Crafting an effective prompt for the Gemini Analyst Agent was an iterative process: structuring the input data and giving the LLM a clear persona and a required output format (Markdown) produced consistent, high-quality results.
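The "one task, multiple commits" idea can be sketched in miniature: each pipeline stage commits its own result, so a failure in a later stage does not lose work that earlier stages already persisted. This sketch uses the stdlib `sqlite3` module in place of SQLAlchemy/Celery, and `run_pipeline`, the stage names, and the `stages` table are all illustrative, not the project's actual code.

```python
import sqlite3

def run_pipeline(conn, fail_at=None):
    """Illustrative 'one task, multiple commits' pattern: commit after
    every stage instead of once at the end of the task."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS stages (name TEXT, status TEXT)")
    conn.commit()
    for stage in ("scrape", "analyze", "report"):
        if stage == fail_at:
            # A crashing stage only loses its own work.
            raise RuntimeError(f"stage {stage} failed")
        cur.execute("INSERT INTO stages VALUES (?, ?)", (stage, "done"))
        conn.commit()  # per-stage commit: earlier results survive later failures

conn = sqlite3.connect(":memory:")
try:
    run_pipeline(conn, fail_at="report")
except RuntimeError:
    pass  # the task failed mid-pipeline...
done = [row[0] for row in conn.execute("SELECT name FROM stages")]
print(done)  # ...but the committed stages are still in the database
```

Running this prints `['scrape', 'analyze']`: the failed `report` stage rolled nothing back, which is the reliability property the pattern buys.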
## Fill in the Blanks

- Take a screenshot of the final dashboard, save it in the project, and update the image path in `README.md`.
- Create a `.env.example` file in the root directory: copy your `.env` file, but remove the actual secret keys and replace them with placeholders like `your_key_here`. This is a professional standard.