---
language:
- en
tags:
- code
pretty_name: eDriveMORL
size_categories:
- 100K<n<1M
configs:
- config_name: default
data_files:
- split: mpc_trajectories
path:
- minari_export/minari_MPC/data/dataset.json
- split: rule_trajectories
path:
- minari_export/minari_Rule/data/dataset.json
---
# eDriveMORL: Offline Reinforcement Learning Dataset and Benchmark for FCEVs
**eDriveMORL** is a benchmark suite for offline reinforcement learning on fuel cell electric vehicle (FCEV) systems. It includes:
- High-fidelity FCEV dynamic simulation
- Minari-compatible offline datasets
- Multiple D3RLpy-compatible algorithm configs
- Custom reward function and thermal modeling
## 📦 Project Structure
```
.
├── run.py                      # Run benchmark for all algorithms via CLI
├── train.py                    # Generate offline dataset from Minari
├── register_minari_dataset.py  # Register Minari-compatible FCEV dataset
├── datasets/                   # Stores generated D3RLpy HDF5 datasets
├── requirements.txt            # Python dependency list
├── fcev/                       # Core model implementation
└── README.md                   # You are here
```
## ⚙️ Environment Setup
0. **Download the code and dataset** (requires Git LFS):
```bash
git lfs install
git clone git@hf.co:datasets/TJIET/eDriveMORL
```
1. **Create a Conda environment** (Python 3.9 recommended):
```bash
conda create -n fcev-benchmark python=3.9
conda activate fcev-benchmark
```
2. **Install required dependencies**:
```bash
cd eDriveMORL
pip install -r requirements.txt
```
## Step-by-Step Usage
### 1️⃣ Register the Minari Dataset
Before any training or dataset generation, register the Minari dataset:
```bash
python register_minari_dataset.py
```
This ensures that your local offline dataset (e.g., collected via MPC) is discoverable by `minari.load_dataset()`.
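As a rough sketch of where the exported trajectories live, the split paths from the dataset card above can be read directly as JSON. Note that `load_split` is a hypothetical helper for illustration, not code from the repo:

```python
# Split paths taken from the dataset card's YAML config above.
# load_split is a hypothetical convenience wrapper, not repo code.
import json
from pathlib import Path

SPLITS = {
    "mpc_trajectories": "minari_export/minari_MPC/data/dataset.json",
    "rule_trajectories": "minari_export/minari_Rule/data/dataset.json",
}

def load_split(name: str, root: str = ".") -> dict:
    """Load one trajectory split from its JSON export under the repo root."""
    path = Path(root) / SPLITS[name]
    with path.open() as f:
        return json.load(f)
```

For training, prefer the registered Minari dataset via `minari.load_dataset()`; the raw JSON is mainly useful for inspection.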
### 2️⃣ (Optional) Regenerate Offline Dataset
If you want to regenerate a D3RLpy-compatible dataset (HDF5 format), modify and run:
```bash
python train.py
```
This will create a `.h5` dataset under the `datasets/` folder, such as:
```
datasets/fcev-mpc-v1.h5
```
You can switch to different reward shaping or normalization settings inside `train.py`.
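The actual reward shaping and normalization settings live in `train.py`; as a minimal illustration of what such knobs control, here is a min-max normalizer and a weighted energy/thermal reward trade-off. Both functions and their weights are assumptions, not the repo's implementation:

```python
# Hypothetical examples of normalization and reward shaping; the real
# settings are defined inside train.py.

def minmax_normalize(x: float, lo: float, hi: float) -> float:
    """Clip x to [lo, hi] and rescale into [0, 1]."""
    x = min(max(x, lo), hi)
    return (x - lo) / (hi - lo)

def shaped_reward(energy_cost: float, thermal_penalty: float,
                  w_energy: float = 1.0, w_thermal: float = 0.5) -> float:
    """Weighted trade-off between energy use and thermal stress (illustrative weights)."""
    return -(w_energy * energy_cost + w_thermal * thermal_penalty)
```

Changing the weights shifts the learned policy between fuel economy and thermal protection, which is the kind of trade-off the custom reward in this benchmark encodes.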
### 3️⃣ Run Offline RL Benchmarks
Run the benchmark suite using different algorithms (TD3+BC, CQL, AWAC, etc.):
```bash
python run.py \
--algo CQL \
--dataset-path datasets/fcev-mpc-v1.h5 \
--drive-cycle CLTC-P-PartI.csv \
--n-steps 10000 \
--wandb-project fcev-offline-benchmark
```
🔧 Available algorithms:
- TD3PlusBC
- IQL
- CQL
- BCQ
- CalQL
- AWAC
- ReBRAC
- TACR
- PLAS
- PRDC
- BEAR
Use `--wandb` to enable logging to Weights & Biases.
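The flags above sketch the CLI contract. The parser below mirrors the example command; the flag names come from it, but the choices, defaults, and structure are assumptions rather than `run.py`'s actual implementation:

```python
# Hedged sketch of run.py's CLI, reconstructed from the example command.
import argparse

ALGOS = ["TD3PlusBC", "IQL", "CQL", "BCQ", "CalQL", "AWAC",
         "ReBRAC", "TACR", "PLAS", "PRDC", "BEAR"]

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="FCEV offline RL benchmark")
    parser.add_argument("--algo", choices=ALGOS, required=True,
                        help="Offline RL algorithm to benchmark")
    parser.add_argument("--dataset-path", default="datasets/fcev-mpc-v1.h5",
                        help="Path to the D3RLpy HDF5 dataset")
    parser.add_argument("--drive-cycle", default="CLTC-P-PartI.csv",
                        help="Drive-cycle CSV used for evaluation")
    parser.add_argument("--n-steps", type=int, default=10000,
                        help="Number of training steps")
    parser.add_argument("--wandb", action="store_true",
                        help="Enable Weights & Biases logging")
    parser.add_argument("--wandb-project", default=None,
                        help="WandB project name")
    return parser
```

Consult `python run.py --help` for the authoritative flag list.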
## Dataset: `eDriveMORL`
All offline training is based on the **eDriveMORL** dataset, registered through Minari. It captures state-action-reward sequences collected via expert controllers (e.g., MPC) from a simulated FCEV model.
Dataset fields include:
- State: `[SOC, T_fc, T_core, T_surf, speed, acc]`
- Action: `[fc_ratio, cooling_level, coolant_split_ratio]`
- Reward: Custom function reflecting energy and thermal efficiency
- Termination: Episode end or infeasibility
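A typed view of one record, mirroring the fields listed above, can make the layout concrete. The attribute names below are illustrative; consult the dataset export for the exact keys and units:

```python
# Illustrative record layout for one transition in the eDriveMORL dataset.
# Field names follow the state/action lists above but are assumptions.
from dataclasses import dataclass

@dataclass
class FCEVTransition:
    # State: battery state of charge, fuel-cell / core / surface temperatures,
    # vehicle speed, and acceleration
    soc: float
    t_fc: float
    t_core: float
    t_surf: float
    speed: float
    acc: float
    # Action: fuel-cell power ratio, cooling level, coolant split ratio
    fc_ratio: float
    cooling_level: float
    coolant_split_ratio: float
    # Reward and termination flag
    reward: float
    terminal: bool
```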
## Logging & Evaluation
- TensorBoard logs are saved under `tensorboard_logs/{algo}`
- File logs (e.g., model snapshots) are saved under `d3rlpy_logs/{algo}`
- WandB metrics (optional): view your experiment dashboard online.