# PPO-LSTM Model
This model was trained with Proximal Policy Optimization (PPO) using a custom multi-layer LSTM policy network.

**Training Data**: Custom sequence dataset
**Algorithm**: Proximal Policy Optimization (PPO) with a custom LSTM
**Library**: Stable-Baselines3
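
For reference, the core of PPO is its clipped surrogate objective. The sketch below is illustrative only and is not taken from this model's training code; the function name and the default `clip_eps=0.2` are assumptions (the actual hyperparameters used here are not stated in the card).

```python
import math

def ppo_clip_objective(logp_new, logp_old, advantage, clip_eps=0.2):
    """Per-sample PPO clipped surrogate objective (to be maximized).

    ratio = pi_new(a|s) / pi_old(a|s), computed from log-probabilities.
    The clip keeps the policy update close to the old policy.
    Note: clip_eps=0.2 is a common default, not this model's setting.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    # Take the pessimistic (smaller) of the unclipped and clipped terms.
    return min(ratio * advantage, clipped * advantage)
```

With a positive advantage, a ratio above `1 + clip_eps` is clipped, so the objective stops rewarding further movement in that direction; with a negative advantage, the unclipped term dominates when the ratio grows.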