JonusNattapong committed on
Commit 610e83c · verified · 1 Parent(s): 62c08ad

Add comprehensive README with model details, metrics, and usage instructions

Files changed (1): README.md (+96, -0)

---
license: mit
language: en
library_name: stable-baselines3
tags:
- reinforcement-learning
- finance
- gold-trading
- xauusd
- ppo
metrics:
- sharpe_ratio
- win_rate
pipeline_tag: reinforcement-learning
---

# PPO Model for XAUUSD Gold Trading

This repository contains a reinforcement learning model trained with Proximal Policy Optimization (PPO) to trade XAUUSD (Gold vs. US Dollar) on the 15-minute timeframe.

## Model Details

- **Model Type**: PPO (Proximal Policy Optimization)
- **Framework**: Stable-Baselines3
- **Environment**: Custom Gym environment for XAUUSD trading
- **Training Data**: Historical XAUUSD data from 2004 to 2025 (resampled to 15-min bars)
- **Total Timesteps**: 1,000,000
- **Position Sizing**: Base 5.0 oz, Max 7.5 oz
- **Initial Capital**: 200 USD
- **Transaction Cost**: 0.65 USD per oz

## Performance Metrics (Test Set)

- **Average Daily Profit**: 51.46 USD
- **Win Rate**: 69.0%
- **Max Drawdown**: 12.0%
- **Sharpe Ratio**: 7.56
- **Average Trades per Day**: 2.66

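If you want to sanity-check figures like these against your own backtests, the sketch below shows one common way to compute an annualized Sharpe ratio and maximum drawdown from a daily equity curve. The evaluation script is not included in this repository, so the exact definitions used for the numbers above are assumptions; the `equity` series here is purely hypothetical.

```python
import math

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio of daily returns, assuming a ~0 risk-free rate."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    std = math.sqrt(sum((r - mean) ** 2 for r in daily_returns) / n)
    return (mean / std) * math.sqrt(periods_per_year) if std > 0 else 0.0

def max_drawdown(equity_curve):
    """Largest peak-to-trough decline, as a fraction of the running peak."""
    peak, worst = equity_curve[0], 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# Hypothetical equity curve in USD, starting from the 200 USD initial capital
equity = [200.0, 230.0, 220.0, 260.0, 250.0, 300.0]
returns = [(b - a) / a for a, b in zip(equity, equity[1:])]
```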
## Features Used

- Log Return
- RSI (14-period)
- Moving Averages (short/long)
- Bollinger Bands
- MACD
- Volume indicators

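As an illustration of the indicator set, here is a dependency-free RSI sketch. The repository does not publish its feature pipeline, so the Wilder-style smoothing and seeding below are assumptions about how the 14-period RSI might be computed.

```python
def rsi(closes, period=14):
    """Wilder-style RSI over a list of closing prices; returns the latest value."""
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages over the first `period` changes,
    # then apply Wilder's exponential smoothing to the rest.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
```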
## Usage

### Loading the Model

```python
from safetensors.torch import load_file
from stable_baselines3 import PPO

# Instantiate PPO on the trading environment so the policy network is built
# with the correct observation/action shapes (assumes the default MlpPolicy
# architecture used during training), then load the released weights.
model = PPO("MlpPolicy", env)  # env: the XAUUSD trading environment
state_dict = load_file("ppo_xauusd.safetensors")
model.policy.load_state_dict(state_dict)
```

### For Full Inference

To use the model for trading, you'll need to:
1. Set up the trading environment (`XAUUSDTradingEnv`)
2. Load VecNormalize stats
3. Run predictions

Note: This is a simulation model. Use with caution in real trading.

## Training Configuration

- Learning Rate: 0.0003
- Batch Size: 256
- Gamma: 0.99
- GAE Lambda: 0.95
- Clip Range: 0.2
- Entropy Coefficient: 0.01

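For reference, these settings map onto `stable_baselines3.PPO` constructor arguments as follows; any parameter not listed above is assumed to be left at its SB3 default.

```python
# Training hyperparameters above, expressed as stable_baselines3.PPO kwargs
ppo_kwargs = dict(
    learning_rate=3e-4,  # Learning Rate: 0.0003
    batch_size=256,      # Batch Size
    gamma=0.99,          # Gamma (discount factor)
    gae_lambda=0.95,     # GAE Lambda
    clip_range=0.2,      # Clip Range
    ent_coef=0.01,       # Entropy Coefficient
)
# Usage: model = PPO("MlpPolicy", env, **ppo_kwargs)
```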
## Files

- `ppo_xauusd.safetensors`: Model weights in SafeTensors format
- `vecnormalize.pkl`: VecNormalize statistics for observation normalization

## License

MIT License

## Disclaimer

This model is for educational and research purposes only. Trading involves risk, and past performance does not guarantee future results. Always backtest and validate before using it in live trading.