---
language: en
library_name: pytorch
license: mit
tags:
- reinforcement-learning
- tabular-classification
- pytorch
- trading
- finance
- pim
---
# Risk_qdrant - Layer 2 RL Agent
Part of the PassiveIncomeMaximizer (PIM) trading system.
## Model Description
A Layer 2 reinforcement learning agent for signal filtering, trained with Proximal Policy Optimization (PPO). The agent filters trading signals from FinColl predictions, evaluating prediction confidence and signal quality against risk criteria.
## Architecture
- **Algorithm**: Proximal Policy Optimization (PPO)
- **Input**: 414-dimensional SymVectors from FinColl
- **Output**: Confidence score (0-1) and action recommendation
- **Training**: Trained on historical market data with profit-based rewards
- **Framework**: PyTorch with custom RL implementation
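
The released weights define the exact network, but the architecture bullets above can be sketched as a small PyTorch actor-critic module. Everything here is illustrative: the class name `Layer2PolicyNet`, the hidden width, and the two-layer backbone are assumptions, not the shipped implementation; only the 414-D input and the sigmoid confidence output come from this card.

```python
import torch
import torch.nn as nn

class Layer2PolicyNet(nn.Module):
    """Hypothetical PPO actor-critic head for one Layer 2 agent.

    Maps a 414-D FinColl SymVector to a confidence score in [0, 1]
    (actor) plus a scalar state-value estimate (critic). Layer sizes
    are illustrative; the trained agents may differ.
    """

    def __init__(self, input_dim: int = 414, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Actor head: confidence score squashed into [0, 1]
        self.actor = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
        # Critic head: state value used in the PPO advantage estimate
        self.critic = nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.actor(h), self.critic(h)

net = Layer2PolicyNet()
symvector = torch.rand(1, 414)  # stand-in for a FinColl SymVector
confidence, value = net(symvector)
```

In PPO the actor and critic are trained jointly against profit-based rewards; at inference only the actor's confidence output is needed for signal filtering.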
## Layer 2 System
PIM uses 9 Layer 2 RL agents that collaborate to filter predictions:
1. MomentumAgent - Price momentum patterns
2. TechnicalAgent - Chart patterns and indicators
3. RiskAgent - Volatility and drawdown assessment
4. OptionsAgent - Options flow analysis
5. MacroAgent - Economic indicators
6. SentimentAgent - News and social sentiment
7. VolumeAgent - Trading volume patterns
8. SectorRotationAgent - Sector strength
9. MeanReversionAgent - Overbought/oversold detection
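
How the nine scores are combined is not specified on this card; a plausible minimal sketch is a mean composite with an agreement-based confidence, shown below. The function name `aggregate_scores`, the 0.6 threshold, and the example score values are all assumptions for illustration, not the PIM system's actual aggregation rule.

```python
import numpy as np

def aggregate_scores(scores: dict[str, float], threshold: float = 0.6):
    """Combine per-agent scores into (composite, confidence).

    Composite is the mean agent score; confidence is the fraction of
    agents that fall on the same side of the threshold as the composite.
    """
    values = np.array(list(scores.values()))
    composite = float(values.mean())
    bullish = composite >= threshold
    agreeing = (values >= threshold) if bullish else (values < threshold)
    confidence = float(agreeing.mean())
    return composite, confidence

# Hypothetical per-agent outputs for one SymVector
scores = {
    "MomentumAgent": 0.82, "TechnicalAgent": 0.74, "RiskAgent": 0.55,
    "OptionsAgent": 0.68, "MacroAgent": 0.61, "SentimentAgent": 0.79,
    "VolumeAgent": 0.70, "SectorRotationAgent": 0.66,
    "MeanReversionAgent": 0.48,
}
composite, confidence = aggregate_scores(scores)
```

A design like this lets a single contrarian agent (e.g. RiskAgent) lower the confidence of a signal without vetoing it outright.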
## Usage
```python
import numpy as np
import torch
from pim.learning.agents.layer2_mlp import Layer2MLPAgents

# Load the trained Layer 2 agents
agents = Layer2MLPAgents(device='cuda')
agents.load_trained_agents('path/to/trained_agents/')

# Evaluate a SymVector
symvector = np.random.rand(414)  # 414-D feature vector from FinColl
scores = agents.evaluate(symvector)  # Returns dict of agent scores

# Aggregate scores into a composite score and confidence
composite, confidence = agents.aggregate_scores(scores)
print(f"Composite score: {composite:.3f}, Confidence: {confidence}")
```
## Training Data
- **Period**: 2024 historical equity data (35,084 SymVectors)
- **Symbols**: 332 equities from diversified portfolio
- **Features**: 414-dimensional vectors (price, sentiment, fundamentals, technical indicators)
- **Source**: FinColl API with TradeStation market data
## Performance Metrics
Based on January 2024 backtests:
- **Directional Accuracy**: 71.88% (10-day horizon)
- **Sharpe Ratio**: 7.24 (annualized)
- **Profit Factor**: 3.45
- **Win Rate**: 71.9%
## Limitations
- Trained on 2024 equity data only (not tested on other asset classes)
- Requires FinColl SymVectors (414D) as input
- Performance may degrade in unprecedented market conditions
- Best used as part of complete PIM dual-layer system
## Intended Use
This model is intended for:
- Signal filtering in automated trading systems
- Research into RL-based trading strategies
- Educational purposes in quantitative finance
**Not intended for**:
- Standalone trading decisions (use full PIM system)
- Financial advice or recommendations
- Unmonitored autonomous trading
## Citation
```bibtex
@software{pim_layer2_risk,
  author = {PassiveIncomeMaximizer Team},
  title  = {Risk_qdrant - Layer 2 RL Agent},
  year   = {2025},
  url    = {https://github.com/yourusername/PassiveIncomeMaximizer}
}
```
## More Information
- **Repository**: https://github.com/yourusername/PassiveIncomeMaximizer
- **Documentation**: See LAYER2_README.md in docs/architecture/layer2/
- **License**: MIT