|
|
---
language:
- en
tags:
- code
pretty_name: eDriveMORL
size_categories:
- 100K<n<1M
configs:
- config_name: default
  data_files:
  - split: mpc_trajectories
    path:
    - minari_export/minari_MPC/data/dataset.json
  - split: rule_trajectories
    path:
    - minari_export/minari_Rule/data/dataset.json
---
|
|
# eDriveMORL: Offline Reinforcement Learning Dataset and Benchmark for FCEVs
|
|
|
|
|
**eDriveMORL** is a benchmark suite for offline reinforcement learning on fuel cell electric vehicle (FCEV) systems. It includes:

- High-fidelity FCEV dynamic simulation
- Minari-compatible offline datasets
- Multiple D3RLpy-compatible algorithm configs
- Custom reward function and thermal modeling
|
|
|
|
|
|
|
|
## 📦 Project Structure
|
|
|
|
|
```
.
├── run.py                      # Run benchmark for all algorithms via CLI
├── train.py                    # Generate offline dataset from Minari
├── register_minari_dataset.py  # Register Minari-compatible FCEV dataset
├── datasets/                   # Stores generated D3RLpy HDF5 datasets
├── requirements.txt            # Python dependency list
├── fcev/                       # Core model implementation
└── README.md                   # You are here
```
|
|
|
|
|
|
|
|
## ⚙️ Environment Setup
|
|
|
|
|
0. **Download the code and dataset via Git**
|
|
|
|
|
```bash
git lfs install
git clone git@hf.co:datasets/TJIET/eDriveMORL
```
|
|
|
|
|
1. **Create a Conda environment** (Python 3.9 recommended):
|
|
|
|
|
```bash
conda create -n fcev-benchmark python=3.9
conda activate fcev-benchmark
```
|
|
|
|
|
2. **Install required dependencies**:
|
|
|
|
|
```bash
cd eDriveMORL
pip install -r requirements.txt
```
|
|
|
|
|
|
|
|
## 🛠️ Step-by-Step Usage
|
|
|
|
|
### 1️⃣ Register the Minari Dataset
|
|
|
|
|
Before any training or dataset generation, register the Minari dataset:
|
|
|
|
|
```bash
python register_minari_dataset.py
```
|
|
|
|
|
This ensures that your local offline dataset (e.g., collected via MPC) is discoverable by `minari.load_dataset()`.
|
|
|
|
|
|
|
|
|
|
|
### 2️⃣ (Optional) Regenerate Offline Dataset
|
|
|
|
|
If you want to regenerate a D3RLpy-compatible dataset (HDF5 format), modify and run:
|
|
|
|
|
```bash
python train.py
```
|
|
|
|
|
This will create a `.h5` dataset under the `datasets/` folder, such as:
|
|
|
|
|
```
datasets/fcev-mpc-v1.h5
```
|
|
|
|
|
You can switch to different reward shaping or normalization settings inside `train.py`.
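Conceptually, the generated HDF5 dataset holds four aligned arrays per trajectory. The sketch below only illustrates those shapes using the state and action dimensions documented in the Dataset section; the actual field names and layout produced by `train.py` may differ.

```python
import numpy as np

# Illustrative shapes only -- the step count is arbitrary.
n_steps = 1000
observations = np.zeros((n_steps, 6), dtype=np.float32)  # [SOC, T_fc, T_core, T_surf, speed, acc]
actions = np.zeros((n_steps, 3), dtype=np.float32)       # [fc_ratio, cooling_level, coolant_split_ratio]
rewards = np.zeros(n_steps, dtype=np.float32)
terminals = np.zeros(n_steps, dtype=bool)
terminals[-1] = True  # mark the end of the (single) episode

# All four arrays must stay aligned step-for-step.
assert observations.shape[0] == actions.shape[0] == rewards.shape[0] == terminals.shape[0]
```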
|
|
|
|
|
|
|
|
### 3️⃣ Run Offline RL Benchmarks
|
|
|
|
|
Run the benchmark suite using different algorithms (TD3+BC, CQL, AWAC, etc.):
|
|
|
|
|
```bash
python run.py \
  --algo CQL \
  --dataset-path datasets/fcev-mpc-v1.h5 \
  --drive-cycle CLTC-P-PartI.csv \
  --n-steps 10000 \
  --wandb-project fcev-offline-benchmark
```
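If you are adapting `run.py`, the flags above map onto a straightforward `argparse` parser. The following is an illustrative reconstruction, not the actual `run.py` source; defaults and help strings are assumptions.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of run.py's CLI, inferred from the flags shown above.
    p = argparse.ArgumentParser(description="FCEV offline RL benchmark")
    p.add_argument("--algo", default="CQL", help="Algorithm name, e.g. CQL, IQL, TD3PlusBC")
    p.add_argument("--dataset-path", default="datasets/fcev-mpc-v1.h5")
    p.add_argument("--drive-cycle", default="CLTC-P-PartI.csv")
    p.add_argument("--n-steps", type=int, default=10000)
    p.add_argument("--wandb-project", default=None)
    p.add_argument("--wandb", action="store_true", help="Enable Weights & Biases logging")
    return p

# Example: override only the algorithm and step count.
args = build_parser().parse_args(["--algo", "IQL", "--n-steps", "5000"])
```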
|
|
|
|
|
🧠 Available algorithms:
|
|
|
|
|
- TD3PlusBC
- IQL
- CQL
- BCQ
- CalQL
- AWAC
- ReBRAC
- TACR
- PLAS
- PRDC
- BEAR
|
|
|
|
|
Use `--wandb` to enable logging to Weights & Biases.
|
|
|
|
|
|
|
|
## 📊 Dataset: `eDriveMORL`
|
|
|
|
|
All offline training is based on the **eDriveMORL** dataset, registered through Minari. It captures state-action-reward sequences collected via expert controllers (e.g., MPC) from a simulated FCEV model.
|
|
|
|
|
Dataset fields include:

- State: `[SOC, T_fc, T_core, T_surf, speed, acc]`
- Action: `[fc_ratio, cooling_level, coolant_split_ratio]`
- Reward: Custom function reflecting energy and thermal efficiency
- Termination: Episode end or infeasibility
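The actual reward implemented in `fcev/` is not reproduced here. The toy function below only illustrates the general shape of a reward that trades off energy consumption against thermal constraint violations; the weights, the temperature threshold, and the function name are all invented for illustration.

```python
def toy_reward(h2_consumed_g: float, t_fc: float,
               t_fc_max: float = 80.0, w_energy: float = 1.0,
               w_thermal: float = 0.5) -> float:
    """Illustrative only: penalize hydrogen use and fuel-cell over-temperature."""
    energy_penalty = w_energy * h2_consumed_g
    thermal_penalty = w_thermal * max(0.0, t_fc - t_fc_max)
    return -(energy_penalty + thermal_penalty)

# Within the temperature limit only the energy term contributes.
r_cool = toy_reward(h2_consumed_g=0.2, t_fc=70.0)
# Over the limit, an additional thermal penalty applies.
r_hot = toy_reward(h2_consumed_g=0.2, t_fc=90.0)
assert r_hot < r_cool
```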
|
|
|
|
|
|
|
|
## 📈 Logging & Evaluation
|
|
|
|
|
- 📈 TensorBoard logs saved under `tensorboard_logs/{algo}`
- 📁 File logs (e.g., model snapshots) under `d3rlpy_logs/{algo}`
- 🌐 WandB metrics (optional): view your experiment dashboard online.