
# PPO-LSTM Model

This model was trained using RecurrentPPO with LSTMs for sequence learning.

- **Training Data:** Custom sequence dataset
- **Algorithm:** Proximal Policy Optimization (PPO) with LSTM
- **Library:** Stable-Baselines3