ReinFlow committed commit 22f0139 (verified; parent: 721ba13)

Update README.md

Files changed (1): README.md (+2, -2)
README.md CHANGED
@@ -3,7 +3,7 @@ license: mit
 ---
 
 ## This repository...
-Contains the core data, checkpoints, and training records for the **paper: "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning"**.
+Contains the core data, checkpoints, and training records for the **[paper](https://huggingface.co/papers/2505.22094): "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning"**.
 
 Here are the links to more assets of this project:
 
@@ -16,7 +16,7 @@ Here are the links to more assets of this project:
 
 * **data-offline**: This directory includes the train.npz and normalization.npz files for OpenAI Gym tasks, derived and normalized from the official D4RL datasets. An exception is the Humanoid-v3 environment, where the data was collected from our pre-trained SAC agent.
 Note that these datasets differ from those used in our reference paper DPPO, as the DPPO dataset curation process was not disclosed. Instead, we opted to use the official D4RL datasets, which include offline RL rewards, unlike the DPPO datasets. These rewards enable the data to support training offline-to-online RL algorithms for flow matching policies, such as FQL.
-Datasets for other tasks can be automatically downloaded using the pre-training scripts in our repository and are not uploaded to this Hugging Face repository.
+Datasets for other tasks can be automatically downloaded using the pre-training scripts in our repository and are not uploaded to this Hugging Face repository to save storage.
 
 * **log**: This directory contains pre-trained and fine-tuned model checkpoints. The pre-trained checkpoints include DDPM, 1-Rectified Flow, and Shortcut Models trained on the OpenAI Gym-D4RL dataset, Franka Kitchen dataset, and robomimic datasets (processed as per DPPO).
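
The README describes the data-offline directory as containing `train.npz` and `normalization.npz` archives. A minimal sketch of inspecting such a file with NumPy follows; the key names (`states`, `actions`) and array shapes are illustrative assumptions, not the documented layout of the repository's files, and a tiny stand-in archive is created so the snippet is self-contained.

```python
import numpy as np

# Create a small stand-in archive mimicking a train.npz file.
# NOTE: the keys "states" and "actions" and the shapes below are
# hypothetical; check the actual archive's contents with `.files`.
np.savez("train.npz", states=np.zeros((10, 17)), actions=np.zeros((10, 6)))

# Load the archive the same way one would load the repository's
# train.npz or normalization.npz, then list the stored arrays.
data = np.load("train.npz")
print(sorted(data.files))        # names of the arrays in the archive
print(data["states"].shape)      # shape of one stored array
```

Calling `.files` first is the safe way to discover what a given `.npz` actually contains before indexing into it.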