ReinFlow committed (verified)
Commit 29c009f · 1 parent: de40974

Update README.md

Files changed (1): README.md (+16, -23 lines)
license: mit
---

## This Repository
This repository contains the core data, checkpoints, and training records for the paper: **"ReinFlow: Fine-Tuning Flow Matching Policy with Online Reinforcement Learning"**.

## What Is in This Dataset?

- **data-offline**: This directory includes the `train.npz` and `normalization.npz` files for OpenAI Gym tasks, derived and normalized from the official D4RL datasets. An exception is the Humanoid-v3 environment, where the data was collected from our pre-trained SAC agent.
  Note that these datasets differ from those used in our reference paper, DPPO, as the DPPO dataset curation process was not disclosed. Instead, we opted to use the official D4RL datasets, which, unlike the DPPO datasets, include offline RL rewards. These rewards enable the data to support training offline-to-online RL algorithms for flow-matching policies, such as FQL.
  Datasets for other tasks can be downloaded automatically by the pre-training scripts in our repository and are not uploaded to this Hugging Face repository, to save storage.

- **log**: This directory contains pre-trained and fine-tuned model checkpoints. The pre-trained checkpoints include DDPM, 1-Rectified Flow, and Shortcut Models trained on the OpenAI Gym-D4RL, Franka Kitchen, and Robomimic datasets (processed as in DPPO).

- **visualize**: This directory includes the figures presented in the paper and the corresponding training records (in `.csv` files) required to reproduce them. These records can serve as a baseline for developing new algorithms and benchmarking against our method.
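Once downloaded, the offline `.npz` archives can be inspected with NumPy. A minimal sketch of the loading pattern, assuming hypothetical array names (`states`, `actions`) and a synthesized file in place of the real `data-offline` archives, since the stored keys are not documented here:

```python
import numpy as np

# Hypothetical example: the real archives live under data-offline/<task>/.
# We synthesize a tiny train.npz just to show the loading pattern; the
# array names ("states", "actions") are assumptions, not documented keys.
np.savez("train.npz",
         states=np.zeros((10, 11), dtype=np.float32),
         actions=np.zeros((10, 3), dtype=np.float32))

data = np.load("train.npz")
# NpzFile.files lists the names of the arrays stored in the archive,
# so you can discover the real keys before relying on them.
print(data.files)
print(data["states"].shape)
```

The same pattern applies to `normalization.npz`: inspect its contents via `.files` rather than assuming key names.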

To make use of these data, please refer to [the first document](https://github.com/ReinFlow/ReinFlow/blob/release/docs/ReproduceExps.md) for instructions on pre-training with the offline datasets and fine-tuning with the pre-trained checkpoints, and to [the second document](https://github.com/ReinFlow/ReinFlow/blob/release/docs/ReproduceFigs.md) to extract and visualize the training records in the `./visualize` folder.
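The `.csv` training records can be parsed with the standard library alone. A hedged sketch, using an in-memory record and hypothetical column names (`step`, `episode_reward`) that may differ from the real headers under `./visualize`:

```python
import csv
import io

# Illustrative in-memory record standing in for a file under ./visualize/;
# the column names "step" and "episode_reward" are assumptions.
record = io.StringIO(
    "step,episode_reward\n"
    "0,10.5\n"
    "1000,250.0\n"
    "2000,2890.0\n"
)

rows = list(csv.DictReader(record))
steps = [int(row["step"]) for row in rows]
rewards = [float(row["episode_reward"]) for row in rows]

# A learning curve is then just (steps, rewards), ready for plotting.
print(f"final step {steps[-1]}: reward {rewards[-1]}")
```

Check each file's actual header row first; the records backing different figures may log different metrics.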

## Other Assets
Here are links to other important assets of this project:

- (**De-anonymized**) Hugging Face Paper:
  **Note:** If you are a reviewer of this paper, please do **not** click the link below, as it was not submitted for review and may expose the authors' identities. Thank you.
  If you are not a reviewer, [this URL](https://huggingface.co/papers/2505.22094) leads to the Hugging Face paper page.

- (Anonymized) Project Website: [this URL](https://reinflow.github.io/).

- (Anonymized) Code: [this repository](https://github.com/ReinFlow/ReinFlow).