Population Based Training
=========================
What PBT Does
-------------
* Trains *N* policies in parallel (a "population") on the **same task**.
* Every ``interval_steps``:

  #. Save each policy's checkpoint and objective.
  #. Score the population and identify **leaders** and **underperformers**.
  #. For underperformers, replace their weights with those of a random leader and **mutate** selected hyperparameters.
  #. Restart training automatically with the new weights and parameters.

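The steps above can be sketched as a single worker's control flow. Note that ``train``, ``score_population``, and ``replace_and_mutate`` are hypothetical callables standing in for the actual Isaac Lab internals, not its API:

```python
def pbt_worker(train, score_population, replace_and_mutate,
               interval_steps, total_steps):
    """Sketch of one PBT worker's cadence (illustrative names only)."""
    trained = 0
    restarts = 0
    while trained < total_steps:
        train(interval_steps)                 # train, then checkpoint (step 1)
        trained += interval_steps
        verdict, leader = score_population()  # rank the population (step 2)
        if verdict == "underperformer":
            replace_and_mutate(leader)        # adopt leader weights, mutate (step 3)
            restarts += 1                     # training resumes automatically (step 4)
    return restarts
```

Each worker runs this loop independently; coordination happens only through the shared checkpoint workspace.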
Leader / Underperformer Selection
---------------------------------
Let ``o_i`` be the objective of policy *i*, with population mean ``μ`` and standard deviation ``σ``.
Upper and lower performance cuts are::

   upper_cut = max(μ + threshold_std * σ, μ + threshold_abs)
   lower_cut = min(μ - threshold_std * σ, μ - threshold_abs)

* **Leaders**: ``o_i > upper_cut``
* **Underperformers**: ``o_i < lower_cut``
The "Natural-Selection" rules:
1. Only underperformers are acted on (mutated or replaced).
2. If leaders exist, replace an underperformer with a random leader; otherwise, self-mutate.
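Under these definitions, the split can be sketched as follows (an illustrative reimplementation of the formulas above, not the actual Isaac Lab code):

```python
def classify_population(objectives, threshold_std, threshold_abs):
    """Return (leader indices, underperformer indices) for a list of objectives."""
    n = len(objectives)
    mu = sum(objectives) / n
    sigma = (sum((o - mu) ** 2 for o in objectives) / n) ** 0.5
    # Whichever bound lies farther from the mean wins, so a tiny sigma
    # cannot collapse both cuts onto the mean.
    upper_cut = max(mu + threshold_std * sigma, mu + threshold_abs)
    lower_cut = min(mu - threshold_std * sigma, mu - threshold_abs)
    leaders = [i for i, o in enumerate(objectives) if o > upper_cut]
    underperformers = [i for i, o in enumerate(objectives) if o < lower_cut]
    return leaders, underperformers
```

Policies between the two cuts are left untouched, which keeps the middle of the population training undisturbed.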
Mutation (Hyperparameters)
--------------------------
* Each param has a mutation function (e.g., ``mutate_float``, ``mutate_discount``, etc.).
* A param is mutated with probability ``mutation_rate``.
* When mutated, its value is perturbed within ``change_range = (min, max)``.
* Only whitelisted keys (from the PBT config) are considered.
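A minimal sketch of this scheme is shown below. The exact perturbation used by ``mutate_float`` is implementation-defined; here we assume a multiplicative factor drawn from ``change_range``, applied randomly up or down:

```python
import random

def mutate_float(value, change_range=(1.1, 2.0)):
    """Scale a value by a random factor from change_range, up or down
    (an assumed perturbation, for illustration only)."""
    factor = random.uniform(*change_range)
    return value * factor if random.random() < 0.5 else value / factor

def mutate_params(params, whitelist, mutation_rate, change_range):
    """Mutate each whitelisted parameter with probability mutation_rate."""
    out = dict(params)
    for key in whitelist:  # only whitelisted keys are considered
        if key in out and random.random() < mutation_rate:
            out[key] = mutate_float(out[key], change_range)
    return out
```

Non-whitelisted keys and parameters that lose the ``mutation_rate`` coin flip pass through unchanged.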
Example Config
--------------
.. code-block:: yaml

   pbt:
     enabled: True
     policy_idx: 0
     num_policies: 8
     directory: .
     workspace: "pbt_workspace"
     objective: episode.Curriculum/difficulty_level
     interval_steps: 50000000
     threshold_std: 0.1
     threshold_abs: 0.025
     mutation_rate: 0.25
     change_range: [1.1, 2.0]
     mutation:
       agent.params.config.learning_rate: "mutate_float"
       agent.params.config.grad_norm: "mutate_float"
       agent.params.config.entropy_coef: "mutate_float"
       agent.params.config.critic_coef: "mutate_float"
       agent.params.config.bounds_loss_coef: "mutate_float"
       agent.params.config.kl_threshold: "mutate_float"
       agent.params.config.gamma: "mutate_discount"
       agent.params.config.tau: "mutate_discount"

``objective: episode.Curriculum/difficulty_level`` is the dotted expression that uses
``infos["episode"]["Curriculum/difficulty_level"]`` as the scalar to **rank policies** (higher is better).
With ``num_policies: 8``, launch eight processes sharing the same ``workspace`` and unique ``policy_idx`` (0-7).
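The dotted lookup can be sketched as a simple nested-dict resolver (a simplified illustration; the actual parsing in Isaac Lab may differ):

```python
def resolve_objective(infos, expr):
    """Resolve a dotted expression like "episode.Curriculum/difficulty_level"
    against a nested infos dict by indexing one level per dot-separated key."""
    value = infos
    for key in expr.split("."):
        value = value[key]
    return value
```

For example, ``resolve_objective(infos, "episode.Curriculum/difficulty_level")`` returns ``infos["episode"]["Curriculum/difficulty_level"]``.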
Launching PBT
-------------
You must start **one process per policy** and point them to the **same workspace**. Set a unique
``policy_idx`` for each process and the common ``num_policies``.
Minimal flags you need:
* ``agent.pbt.enabled=True``
* ``agent.pbt.directory=<path/to/shared_folder>``
* ``agent.pbt.policy_idx=<0..num_policies-1>``
.. note::

   All processes must use the same ``agent.pbt.workspace`` so they can see each other's checkpoints.

.. caution::

   PBT is currently supported **only** with the **rl_games** library. Other RL libraries are not supported yet.

Tips
----
* Keep checkpointing overhead reasonable: reduce ``interval_steps`` only if you really need a tighter PBT cadence.
* Use larger ``threshold_std`` and ``threshold_abs`` for greater population diversity.
* Run 6 or more workers to see the benefits of PBT.
Training Example
----------------
We provide a reference PPO config for the task
`Isaac-Dexsuite-Kuka-Allegro-Lift-v0 <https://github.com/isaac-sim/IsaacLab/blob/main/source/isaaclab_tasks/isaaclab_tasks/manager_based/manipulation/dexsuite/config/kuka_allegro/agents/rl_games_ppo_cfg.yaml>`_.
For the best logging experience, we recommend enabling wandb logging in the script.
Launch *N* workers, where *n* denotes the index of each worker:
.. code-block:: bash

   # Run this once per worker (n = 0..N-1), all pointing to the same directory/workspace
   ./isaaclab.sh -p scripts/reinforcement_learning/rl_games/train.py \
       --seed=<n> \
       --task=Isaac-Dexsuite-Kuka-Allegro-Lift-v0 \
       --num_envs=8192 \
       --headless \
       --track \
       --wandb-name=idx<n> \
       --wandb-entity=<**entity**> \
       --wandb-project-name=<**project**> \
       agent.pbt.enabled=True \
       agent.pbt.num_policies=<N> \
       agent.pbt.policy_idx=<n> \
       agent.pbt.workspace=<**pbt_workspace_name**> \
       agent.pbt.directory=<**/path/to/shared_folder**>

References
----------
This PBT implementation is a reimplementation of, and inspired by, *DexPBT: Scaling up dexterous manipulation for hand-arm systems with population based training* (Petrenko et al., 2023).
.. code-block:: bibtex

   @article{petrenko2023dexpbt,
     title={Dexpbt: Scaling up dexterous manipulation for hand-arm systems with population based training},
     author={Petrenko, Aleksei and Allshire, Arthur and State, Gavriel and Handa, Ankur and Makoviychuk, Viktor},
     journal={arXiv preprint arXiv:2305.12127},
     year={2023}
   }