---
language:
  - en
tags:
  - code
pretty_name: eDriveMORL
size_categories:
  - 100K<n<1M
configs:
  - config_name: default
    data_files:
      - split: mpc_trajectories
        path:
          - minari_export/minari_MPC/data/dataset.json
      - split: rule_trajectories
        path:
          - minari_export/minari_Rule/data/dataset.json
---

# eDriveMORL: Offline Reinforcement Learning Dataset and Benchmark for FCEVs

eDriveMORL is a benchmark suite for offline reinforcement learning on fuel cell electric vehicle (FCEV) systems. It includes:

- High-fidelity FCEV dynamic simulation
- Minari-compatible offline datasets
- Multiple D3RLpy-compatible algorithm configs
- Custom reward function and thermal modeling

## 📦 Project Structure

```
.
├── run.py                      # Run benchmark for all algorithms via CLI
├── train.py                    # Generate offline dataset from Minari
├── register_minari_dataset.py  # Register Minari-compatible FCEV dataset
├── datasets/                   # Stores generated D3RLpy HDF5 datasets
├── requirements.txt            # Python dependency list
├── fcev/                       # Core model implementation
└── README.md                   # You are here
```

βš™οΈ Environment Setup

  1. Git Download the code and dataset
git lfs install
git clone git@hf.co:datasets/TJIET/eDriveMORL
  1. Create a Conda environment (Python 3.9 recommended):
conda create -n fcev-benchmark python=3.9
conda activate fcev-benchmark
  1. Install required dependencies:
cd eDriveMORL
pip install -r requirements.txt

πŸ—‚οΈ Step-by-Step Usage

1️⃣ Register the Minari Dataset

Before any training or dataset generation, register the Minari dataset:

python register_minari_dataset.py

This ensures that your local offline dataset (e.g., collected via MPC) is discoverable by minari.load_dataset().
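Conceptually, registration maps a dataset ID to the exported files so that `minari.load_dataset()` can find them. Below is a minimal stdlib sketch of that lookup, using the split paths from the metadata above; the dataset IDs are placeholders, and the real IDs and mechanics live in `register_minari_dataset.py`:

```python
from pathlib import Path

# Placeholder IDs mapped to the export paths from the dataset config above.
REGISTERED = {
    "fcev-mpc-v0": Path("minari_export/minari_MPC/data/dataset.json"),
    "fcev-rule-v0": Path("minari_export/minari_Rule/data/dataset.json"),
}

def resolve(dataset_id: str) -> Path:
    """Mimic the ID -> files lookup that happens after registration."""
    if dataset_id not in REGISTERED:
        raise KeyError(
            f"{dataset_id!r} not registered; run register_minari_dataset.py first"
        )
    return REGISTERED[dataset_id]

print(resolve("fcev-mpc-v0"))
```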

### 2️⃣ (Optional) Regenerate the Offline Dataset

If you want to regenerate a D3RLpy-compatible dataset (HDF5 format), modify and run:

```bash
python train.py
```

This creates an `.h5` dataset under the `datasets/` folder, such as:

```
datasets/fcev-mpc-v1.h5
```

You can switch between different reward-shaping and normalization settings inside `train.py`.
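As a rough illustration of the kind of reward shaping that can be swapped in `train.py`, here is a hedged sketch of a weighted energy/thermal reward; the weights, units, and term definitions are assumptions for illustration, not the actual function used by the benchmark:

```python
def shaped_reward(h2_consumption_g: float, temp_deviation_k: float,
                  w_energy: float = 0.7, w_thermal: float = 0.3) -> float:
    """Illustrative multi-objective reward: penalize hydrogen use and
    deviation from a thermal setpoint (weights/units are assumptions)."""
    r_energy = -h2_consumption_g        # less fuel per step is better
    r_thermal = -abs(temp_deviation_k)  # stay close to the target temperature
    return w_energy * r_energy + w_thermal * r_thermal

print(shaped_reward(0.5, 2.0))
```

Changing the weights trades off fuel economy against thermal comfort, which is the multi-objective aspect the dataset name refers to.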

### 3️⃣ Run Offline RL Benchmarks

Run the benchmark suite with different algorithms (TD3+BC, CQL, AWAC, etc.):

```bash
python run.py \
  --algo CQL \
  --dataset-path datasets/fcev-mpc-v1.h5 \
  --drive-cycle CLTC-P-PartI.csv \
  --n-steps 10000 \
  --wandb-project fcev-offline-benchmark
```

🔧 Available algorithms:

- TD3PlusBC
- IQL
- CQL
- BCQ
- CalQL
- AWAC
- ReBRAC
- TACR
- PLAS
- PRDC
- BEAR

Use `--wandb` to enable logging to Weights & Biases.

## 📊 Dataset: eDriveMORL

All offline training is based on the eDriveMORL dataset, registered through Minari. It captures state-action-reward sequences collected by expert controllers (e.g., MPC) from a simulated FCEV model.

Dataset fields include:

- **State**: `[SOC, T_fc, T_core, T_surf, speed, acc]`
- **Action**: `[fc_ratio, cooling_level, coolant_split_ratio]`
- **Reward**: custom function reflecting energy and thermal efficiency
- **Termination**: episode end or infeasibility
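To make the schema concrete, here is a sketch of one transition laid out with those fields; the example values, units, and interpretations in the comments are illustrative assumptions, not values from the dataset:

```python
# Field order follows the lists above; values and units are assumptions.
state = {
    "SOC": 0.62,       # battery state of charge (assumed fraction, 0..1)
    "T_fc": 348.0,     # fuel cell temperature (assumed K)
    "T_core": 310.0,   # core temperature of the thermal model (assumed K)
    "T_surf": 305.0,   # surface temperature (assumed K)
    "speed": 13.9,     # vehicle speed (assumed m/s)
    "acc": 0.4,        # acceleration (assumed m/s^2)
}
action = {
    "fc_ratio": 0.55,            # fuel cell output ratio (assumed 0..1)
    "cooling_level": 0.30,       # cooling actuator command (assumed 0..1)
    "coolant_split_ratio": 0.50, # coolant flow split (assumed 0..1)
}

# Flatten into the vectors an offline RL library consumes.
obs = [state[k] for k in ("SOC", "T_fc", "T_core", "T_surf", "speed", "acc")]
act = [action[k] for k in ("fc_ratio", "cooling_level", "coolant_split_ratio")]
print(len(obs), len(act))  # 6 3
```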

## 📈 Logging & Evaluation

- 📁 TensorBoard logs are saved under `tensorboard_logs/{algo}`
- 📁 File logs (e.g., model snapshots) are saved under `d3rlpy_logs/{algo}`
- 📊 WandB metrics (optional): view your experiment dashboard online.