Datasets: Update README.md

README.md CHANGED

@@ -23,7 +23,6 @@ configs:
 - Multiple D3RLpy-compatible algorithm configs
 - Custom reward function and thermal modeling
 
----
 
 ## 📦 Project Structure
 
@@ -38,7 +37,6 @@ configs:
 └── README.md  # You are here
 ```
 
----
 
 ## ⚙️ Environment Setup
 
@@ -63,7 +61,6 @@ cd eDriveMORL
 pip install -r requirements.txt
 ```
 
----
 
 ## 🎛️ Step-by-Step Usage
 
@@ -77,7 +74,7 @@ python register_minari_dataset.py
 
 This ensures that your local offline dataset (e.g., collected via MPC) is discoverable by `minari.load_dataset()`.
 
----
+
 
 ### 2️⃣ (Optional) Regenerate Offline Dataset
 
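Once `register_minari_dataset.py` has run, the dataset id resolves to a local file. As a toy sketch of that idea (a hypothetical stand-in, not the actual Minari API — the real entry point is `minari.load_dataset()` as noted above), registration amounts to a mapping from dataset ids to on-disk paths:

```python
from pathlib import Path

# Toy registry standing in for Minari's local dataset index (illustration only).
_REGISTRY: dict[str, Path] = {}

def register_dataset(dataset_id: str, path: str) -> None:
    """Record where the HDF5 file behind a dataset id lives."""
    _REGISTRY[dataset_id] = Path(path)

def load_dataset(dataset_id: str) -> Path:
    """Resolve a registered id; unknown ids fail, like an unregistered Minari id."""
    if dataset_id not in _REGISTRY:
        raise KeyError(f"dataset '{dataset_id}' is not registered")
    return _REGISTRY[dataset_id]

register_dataset("fcev-mpc-v1", "datasets/fcev-mpc-v1.h5")
print(load_dataset("fcev-mpc-v1").as_posix())  # datasets/fcev-mpc-v1.h5
```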
@@ -95,7 +92,6 @@ datasets/fcev-mpc-v1.h5
 
 You can switch to different reward shaping or normalization settings inside `train.py`.
 
----
 
 ### 3️⃣ Run Offline RL Benchmarks
 
@@ -126,7 +122,6 @@ python run.py \
 
 Use `--wandb` to enable logging to Weights & Biases.
 
----
 
 ## 📊 Dataset: `eDriveMORL`
 
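The `run.py` invocation above selects an algorithm and optional W&B logging via command-line flags. A minimal argparse sketch of such a CLI (flag names other than `--wandb` are assumptions, not the repository's actual interface):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical benchmark CLI; only --wandb is documented in the README.
    parser = argparse.ArgumentParser(description="Offline RL benchmark runner")
    parser.add_argument("--algo", choices=["bc", "cql", "iql", "td3bc"],
                        default="cql", help="d3rlpy algorithm config to run")
    parser.add_argument("--dataset", default="fcev-mpc-v1",
                        help="Minari dataset id to train on")
    parser.add_argument("--wandb", action="store_true",
                        help="enable logging to Weights & Biases")
    return parser

args = build_parser().parse_args(["--algo", "iql", "--wandb"])
print(args.algo, args.wandb)  # iql True
```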
@@ -138,7 +133,6 @@ Dataset fields include:
 - Reward: Custom function reflecting energy and thermal efficiency
 - Termination: Episode end or infeasibility
 
----
 
 ## 📈 Logging & Evaluation
 
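The reward field above scalarizes energy use and thermal stress into a single signal. A hedged sketch of what such a multi-objective reward can look like (the weights, signal names, and 80 °C limit are illustrative assumptions, not the repository's actual function):

```python
def morl_reward(energy_kwh: float, temp_c: float,
                temp_limit_c: float = 80.0,
                w_energy: float = 1.0, w_thermal: float = 0.5) -> float:
    """Scalarize energy and thermal objectives into one reward.

    Both terms are penalties, so the reward is <= 0; illustrative only.
    """
    energy_penalty = w_energy * energy_kwh
    # Penalize only the temperature excess over the thermal limit.
    thermal_penalty = w_thermal * max(0.0, temp_c - temp_limit_c)
    return -(energy_penalty + thermal_penalty)

print(morl_reward(0.2, 75.0))  # -0.2  (below the limit: no thermal penalty)
print(morl_reward(0.2, 90.0))  # -5.2  (10 degrees over adds 0.5 * 10)
```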