
# PPO-LSTM Model

This model was trained using a custom multi-layer LSTM with PPO.

- **Training Data:** Custom sequence dataset
- **Algorithm:** Proximal Policy Optimization (PPO) with a custom LSTM
- **Library:** Stable-Baselines3
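
The exact network architecture is not documented here. As a rough illustration only, a multi-layer LSTM actor-critic of the kind used with PPO might look like the following PyTorch sketch; all dimensions (`obs_dim`, `act_dim`, `hidden`, `layers`) and the class name are hypothetical, not taken from this model:

```python
import torch
import torch.nn as nn

class LSTMActorCritic(nn.Module):
    """Illustrative multi-layer LSTM actor-critic for PPO (dims are made up)."""
    def __init__(self, obs_dim=8, act_dim=4, hidden=64, layers=2):
        super().__init__()
        # Multi-layer LSTM shared between the policy and value heads
        self.lstm = nn.LSTM(obs_dim, hidden, num_layers=layers, batch_first=True)
        self.actor = nn.Linear(hidden, act_dim)   # policy logits
        self.critic = nn.Linear(hidden, 1)        # state-value estimate

    def forward(self, obs_seq, state=None):
        # obs_seq: (batch, seq_len, obs_dim); state carries LSTM hidden/cell
        out, state = self.lstm(obs_seq, state)
        last = out[:, -1]                         # features at the final timestep
        return self.actor(last), self.critic(last), state

model = LSTMActorCritic()
obs = torch.zeros(3, 5, 8)                        # (batch=3, seq_len=5, obs_dim=8)
logits, value, state = model(obs)
print(logits.shape, value.shape)                  # torch.Size([3, 4]) torch.Size([3, 1])
```

In practice, recurrent PPO in the Stable-Baselines3 ecosystem is typically provided by `RecurrentPPO` in the companion `sb3-contrib` package rather than by a hand-rolled module like this one.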