emxia18 committed · Commit fa0ae89 · verified · 1 Parent(s): 5cf8576

Upload README.md with huggingface_hub

Files changed (1): README.md +6 -0
README.md ADDED
@@ -0,0 +1,6 @@
+ # PPO-LSTM Model
+ This model was trained using RecurrentPPO with LSTMs for sequence learning.
+
+ **Training Data**: Custom sequence dataset
+ **Algorithm**: Proximal Policy Optimization (PPO) with LSTM
+ **Library**: Stable-Baselines3