---
language: en
library_name: pytorch
license: mit
tags:
- reinforcement-learning
- tabular-classification
- pytorch
- trading
- finance
- pim
---

# Risk_qdrant - Layer 2 RL Agent

Part of the PassiveIncomeMaximizer (PIM) trading system.

## Model Description

A PPO-trained Layer 2 RL agent for signal filtering.

This is a Proximal Policy Optimization (PPO) reinforcement learning agent trained to filter trading signals derived from FinColl predictions. The agent evaluates prediction confidence and signal quality against risk criteria.

## Architecture

- **Algorithm**: Proximal Policy Optimization (PPO)
- **Input**: 414-dimensional SymVectors from FinColl
- **Output**: Confidence score (0-1) and action recommendation
- **Training**: Historical market data with profit-based rewards
- **Framework**: PyTorch with a custom RL implementation

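The card does not publish the network topology, so as an illustration only, a PPO-style actor-critic over 414-dimensional inputs could be sketched as below. Layer sizes, the `PPOAgent` class name, and the two-head layout are assumptions, not the released weights:

```python
import torch
import torch.nn as nn


class PPOAgent(nn.Module):
    """Hypothetical actor-critic MLP for a 414-D SymVector input.

    Hidden sizes and depth are illustrative assumptions; the released
    checkpoints may use a different topology.
    """

    def __init__(self, input_dim: int = 414, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Actor head: confidence score squashed into (0, 1)
        self.actor = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
        # Critic head: state-value estimate used by PPO during training
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.actor(h).squeeze(-1), self.critic(h).squeeze(-1)


agent = PPOAgent()
confidence, value = agent(torch.rand(1, 414))
```

At inference time only the actor head's confidence score would be consumed; the critic head exists solely to stabilize PPO updates during training.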
## Layer 2 System

PIM uses 9 Layer 2 RL agents that collaborate to filter predictions:
1. MomentumAgent - Price momentum patterns
2. TechnicalAgent - Chart patterns and indicators
3. RiskAgent - Volatility and drawdown assessment
4. OptionsAgent - Options flow analysis
5. MacroAgent - Economic indicators
6. SentimentAgent - News and social sentiment
7. VolumeAgent - Trading volume patterns
8. SectorRotationAgent - Sector strength
9. MeanReversionAgent - Overbought/oversold detection

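The card does not specify how the nine agents' scores are combined. One plausible scheme, shown purely as a hedged sketch (the real `Layer2MLPAgents.aggregate_scores` may weight agents differently), is a mean composite with confidence graded by how tightly the agents agree:

```python
import statistics


def aggregate_scores(scores: dict[str, float]) -> tuple[float, str]:
    """Hypothetical aggregation: mean of agent scores as the composite,
    with a confidence label based on inter-agent agreement.

    Thresholds here are illustrative assumptions only.
    """
    values = list(scores.values())
    composite = statistics.fmean(values)          # average across agents
    spread = statistics.pstdev(values)            # disagreement measure
    if spread < 0.10:
        confidence = "high"
    elif spread < 0.25:
        confidence = "medium"
    else:
        confidence = "low"
    return composite, confidence


scores = {"momentum": 0.72, "technical": 0.68, "risk": 0.55}
composite, confidence = aggregate_scores(scores)
```

A disagreement-based confidence is a common design choice for agent ensembles: a high composite score is only trusted when most agents independently agree.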
## Usage

```python
import numpy as np
import torch  # Layer2MLPAgents runs its networks on the selected torch device

from pim.learning.agents.layer2_mlp import Layer2MLPAgents

# Load trained agents
agents = Layer2MLPAgents(device='cuda')
agents.load_trained_agents('path/to/trained_agents/')

# Evaluate a SymVector
symvector = np.random.rand(414)  # 414-D feature vector from FinColl
scores = agents.evaluate(symvector)  # Returns dict of agent scores

# Aggregate scores
composite, confidence = agents.aggregate_scores(scores)
print(f"Composite score: {composite:.3f}, Confidence: {confidence}")
```

## Training Data

- **Period**: 2024 historical equity data (35,084 SymVectors)
- **Symbols**: 332 equities from a diversified portfolio
- **Features**: 414-dimensional vectors (price, sentiment, fundamentals, technical indicators)
- **Source**: FinColl API with TradeStation market data

## Performance Metrics

Based on January 2024 backtests:
- **Directional Accuracy**: 71.88% (10-day horizon)
- **Sharpe Ratio**: 7.24 (annualized)
- **Profit Factor**: 3.45
- **Win Rate**: 71.9%

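For reference, these metrics follow standard definitions and can be recomputed from any backtest return series. The function below is a minimal sketch using those standard formulas (the function name and sample returns are illustrative, not from the PIM codebase):

```python
import math


def backtest_metrics(daily_returns: list[float], periods_per_year: int = 252) -> dict:
    """Standard definitions of the metrics reported above.

    - Sharpe ratio: annualized mean/std of daily returns (risk-free rate ~0)
    - Profit factor: gross profits divided by gross losses
    - Win rate: fraction of profitable periods
    """
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / n
    sharpe = mean / math.sqrt(var) * math.sqrt(periods_per_year) if var > 0 else float("inf")
    gains = sum(r for r in daily_returns if r > 0)
    losses = -sum(r for r in daily_returns if r < 0)
    profit_factor = gains / losses if losses > 0 else float("inf")
    win_rate = sum(1 for r in daily_returns if r > 0) / n
    return {"sharpe": sharpe, "profit_factor": profit_factor, "win_rate": win_rate}


metrics = backtest_metrics([0.01, -0.005, 0.02, 0.015, -0.01])
```

Note that a Sharpe ratio of 7.24 is far above what most strategies sustain out of sample; it reflects a single month (January 2024) of backtesting, as stated above.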
## Limitations

- Trained on 2024 equity data only (not tested on other asset classes)
- Requires FinColl SymVectors (414D) as input
- Performance may degrade in unprecedented market conditions
- Best used as part of the complete PIM dual-layer system

## Intended Use

This model is intended for:
- Signal filtering in automated trading systems
- Research into RL-based trading strategies
- Educational purposes in quantitative finance

**Not intended for**:
- Standalone trading decisions (use the full PIM system)
- Financial advice or recommendations
- Unmonitored autonomous trading

## Citation

```bibtex
@software{pim_layer2_risk,
  author = {PassiveIncomeMaximizer Team},
  title = {Risk_qdrant - Layer 2 RL Agent},
  year = {2025},
  url = {https://github.com/yourusername/PassiveIncomeMaximizer}
}
```

## More Information

- **Repository**: https://github.com/yourusername/PassiveIncomeMaximizer
- **Documentation**: See LAYER2_README.md in docs/architecture/layer2/
- **License**: MIT