TJIET committed on
Commit fa993b6 · verified · 1 Parent(s): 6346c1a

Update README.md

Files changed (1)
  1. README.md +1 -7
README.md CHANGED
@@ -23,7 +23,6 @@ configs:
  - Multiple D3RLpy-compatible algorithm configs
  - Custom reward function and thermal modeling
 
- ---
 
  ## 📦 Project Structure
 
@@ -38,7 +37,6 @@ configs:
  └── README.md # You are here
  ```
 
- ---
 
  ## ⚙️ Environment Setup
 
@@ -63,7 +61,6 @@ cd eDriveMORL
  pip install -r requirements.txt
  ```
 
- ---
 
  ## 🗂️ Step-by-Step Usage
 
@@ -77,7 +74,7 @@ python register_minari_dataset.py
 
  This ensures that your local offline dataset (e.g., collected via MPC) is discoverable by `minari.load_dataset()`.
 
- ---
+
 
  ### 2️⃣ (Optional) Regenerate Offline Dataset
 
@@ -95,7 +92,6 @@ datasets/fcev-mpc-v1.h5
 
  You can switch to different reward shaping or normalization settings inside `train.py`.
 
- ---
 
  ### 3️⃣ Run Offline RL Benchmarks
 
@@ -126,7 +122,6 @@ python run.py \
 
  Use `--wandb` to enable logging to Weights & Biases.
 
- ---
 
  ## 📊 Dataset: `eDriveMORL`
 
@@ -138,7 +133,6 @@ Dataset fields include:
  - Reward: Custom function reflecting energy and thermal efficiency
  - Termination: Episode end or infeasibility
 
- ---
 
  ## 📈 Logging & Evaluation
 
 