# Technical_qdrant - Layer 2 RL Agent
Part of the PassiveIncomeMaximizer (PIM) trading system.
## Model Description

Layer 2 RL agent for signal filtering (PPO-trained).

This is a Proximal Policy Optimization (PPO) reinforcement learning agent trained to filter trading signals from FinColl predictions. The agent evaluates prediction confidence and signal quality against technical criteria.
## Architecture
- Algorithm: Proximal Policy Optimization (PPO)
- Input: 414-dimensional SymVectors from FinColl
- Output: Confidence score (0-1) and action recommendation
- Training: Trained on historical market data with profit-based rewards
- Framework: PyTorch with custom RL implementation
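The exact network layout is not published here. As a rough sketch of the architecture described above, a PPO actor-critic for this setup might map a 414D SymVector to an accept/reject distribution (whose "accept" probability serves as the 0-1 confidence score) plus a value estimate; the class name, hidden sizes, and two-way action space below are illustrative assumptions, not the released model:

```python
import torch
import torch.nn as nn

class Layer2PolicyNet(nn.Module):
    """Illustrative PPO actor-critic head for a 414D SymVector input.

    Hidden sizes and the binary reject/accept action space are
    assumptions for this sketch, not the released architecture.
    """

    def __init__(self, input_dim: int = 414, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.actor = nn.Linear(hidden_dim, 2)   # action logits: reject / accept
        self.critic = nn.Linear(hidden_dim, 1)  # state-value estimate for PPO

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        logits = self.actor(h)
        # Confidence score in (0, 1): probability of the "accept" action
        confidence = torch.softmax(logits, dim=-1)[..., 1]
        return logits, confidence, self.critic(h).squeeze(-1)

net = Layer2PolicyNet()
logits, confidence, value = net(torch.randn(8, 414))  # batch of 8 SymVectors
```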
## Layer 2 System
PIM uses 9 Layer 2 RL agents that collaborate to filter predictions:
- MomentumAgent - Price momentum patterns
- TechnicalAgent - Chart patterns and indicators
- RiskAgent - Volatility and drawdown assessment
- OptionsAgent - Options flow analysis
- MacroAgent - Economic indicators
- SentimentAgent - News and social sentiment
- VolumeAgent - Trading volume patterns
- SectorRotationAgent - Sector strength
- MeanReversionAgent - Overbought/oversold detection
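How the nine agent scores are combined is not specified in this card. One plausible sketch treats each agent's output as a score in [0, 1], takes an equal-weight mean as the composite, and derives confidence from cross-agent agreement; the scores, weights, and agreement rule below are all assumptions:

```python
import statistics

# Hypothetical per-agent scores in [0, 1]; names mirror the nine agents above.
scores = {
    "MomentumAgent": 0.74, "TechnicalAgent": 0.81, "RiskAgent": 0.55,
    "OptionsAgent": 0.62, "MacroAgent": 0.58, "SentimentAgent": 0.70,
    "VolumeAgent": 0.66, "SectorRotationAgent": 0.60, "MeanReversionAgent": 0.52,
}

def aggregate(scores: dict[str, float]) -> tuple[float, float]:
    """Equal-weight composite plus an agreement-based confidence.

    The real PIM aggregation rule is not documented here; this sketch
    treats low dispersion across agents as high confidence.
    """
    values = list(scores.values())
    composite = statistics.mean(values)
    confidence = max(0.0, 1.0 - statistics.pstdev(values))
    return composite, confidence

composite, confidence = aggregate(scores)
```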
## Usage

```python
import numpy as np
import torch

from pim.learning.agents.layer2_mlp import Layer2MLPAgents

# Load the trained agents
agents = Layer2MLPAgents(device='cuda')
agents.load_trained_agents('path/to/trained_agents/')

# Evaluate a SymVector
symvector = np.random.rand(414)  # 414D feature vector from FinColl
scores = agents.evaluate(symvector)  # Returns dict of agent scores

# Aggregate scores
composite, confidence = agents.aggregate_scores(scores)
print(f"Composite score: {composite:.3f}, Confidence: {confidence}")
```
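Downstream, a caller would typically gate a FinColl signal on both outputs before acting on it. The helper and thresholds below are illustrative placeholders, not tuned PIM values:

```python
def passes_filter(composite: float, confidence: float,
                  min_composite: float = 0.6,
                  min_confidence: float = 0.5) -> bool:
    """Accept a signal only when both score and confidence clear a floor.

    The 0.6 / 0.5 thresholds are placeholders; in practice they would
    be calibrated on held-out backtests.
    """
    return composite >= min_composite and confidence >= min_confidence

print(passes_filter(0.72, 0.80))  # strong signal, high agreement -> True
print(passes_filter(0.72, 0.30))  # low agent agreement -> False
```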
## Training Data
- Period: 2024 historical equity data (35,084 SymVectors)
- Symbols: 332 equities from diversified portfolio
- Features: 414-dimensional vectors (price, sentiment, fundamentals, technical indicators)
- Source: FinColl API with TradeStation market data
## Performance Metrics

Based on January 2024 backtests:
- Directional Accuracy: 71.88% (10-day horizon)
- Sharpe Ratio: 7.24 (annualized)
- Profit Factor: 3.45
- Win Rate: 71.9%
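For reference, the reported metrics reduce to standard formulas over per-trade (or per-period) returns. The sample returns below are made up, and the Sharpe annualization assumes daily periods and a zero risk-free rate:

```python
import math

# Hypothetical per-trade returns (fractions, not percent)
returns = [0.012, -0.004, 0.020, 0.007, -0.009, 0.015, 0.003, -0.002, 0.010, 0.006]

wins = [r for r in returns if r > 0]
losses = [r for r in returns if r < 0]

win_rate = len(wins) / len(returns)           # fraction of profitable trades
profit_factor = sum(wins) / abs(sum(losses))  # gross profit / gross loss

mean = sum(returns) / len(returns)
std = math.sqrt(sum((r - mean) ** 2 for r in returns) / (len(returns) - 1))
sharpe_annualized = (mean / std) * math.sqrt(252)  # daily periods, zero risk-free rate
```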
## Limitations
- Trained on 2024 equity data only (not tested on other asset classes)
- Requires FinColl SymVectors (414D) as input
- Performance may degrade in unprecedented market conditions
- Best used as part of complete PIM dual-layer system
## Intended Use
This model is intended for:
- Signal filtering in automated trading systems
- Research into RL-based trading strategies
- Educational purposes in quantitative finance
Not intended for:
- Standalone trading decisions (use full PIM system)
- Financial advice or recommendations
- Unmonitored autonomous trading
## Citation

```bibtex
@software{pim_layer2_technical,
  author = {PassiveIncomeMaximizer Team},
  title  = {Technical_qdrant - Layer 2 RL Agent},
  year   = {2025},
  url    = {https://github.com/yourusername/PassiveIncomeMaximizer}
}
```
## More Information
- Repository: https://github.com/yourusername/PassiveIncomeMaximizer
- Documentation: See LAYER2_README.md in docs/architecture/layer2/
- License: MIT