ReinFlow committed
Commit 9b497bc · verified · 1 Parent(s): 367156f

Update README.md

Files changed (1)
  1. README.md +10 -7
README.md CHANGED
@@ -2,20 +2,23 @@
  license: mit
  ---

- This repository contains the core data, checkpoints, and training records for the paper "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning".
- Project website: [this url](https://reinflow.github.io/).
- Code: [this repository](https://github.com/ReinFlow/ReinFlow).

- Here is a summary of what is contained in this Hugging Face dataset:

- **data-offline**: This directory includes the train.npz and normalization.npz files for OpenAI Gym tasks, derived and normalized from the official D4RL datasets. An exception is the Humanoid-v3 environment, where the data was collected from our pre-trained SAC agent.
  Note that these datasets differ from those used in our reference paper DPPO, as the DPPO dataset curation process was not disclosed. Instead, we opted to use the official D4RL datasets, which include offline RL rewards, unlike the DPPO datasets. These rewards enable the data to support training offline-to-online RL algorithms for flow matching policies, such as FQL.
  Datasets for other tasks can be automatically downloaded using the pre-training scripts in our repository and are not uploaded to this Hugging Face repository.

- **log**: This directory contains pre-trained and fine-tuned model checkpoints. The pre-trained checkpoints include DDPM, 1-Rectified Flow, and Shortcut Models trained on the OpenAI Gym-D4RL dataset, Franka Kitchen dataset, and robomimic datasets (processed as per DPPO).

- **visualize**: This directory includes the figures presented in the paper and the corresponding training records (in .csv files) required to reproduce them. These data can serve as a baseline for developing new algorithms and benchmarking against our method.

  For how to make use of these data,
  please refer to [the first document](https://github.com/ReinFlow/ReinFlow/blob/release/docs/ReproduceExps.md) to see how to do pre-training on the offline datasets and fine-tuning with the pre-trained checkpoints,
 
  license: mit
  ---

+ ## About this repository
+ This repository contains the core data, checkpoints, and training records for the paper "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning".

+ * Project website: [reinflow.github.io](https://reinflow.github.io/)
+ * Code: [ReinFlow/ReinFlow](https://github.com/ReinFlow/ReinFlow)
+
+ ## What this Hugging Face dataset contains
+
+ * **data-offline**: This directory includes the train.npz and normalization.npz files for OpenAI Gym tasks, derived and normalized from the official D4RL datasets. An exception is the Humanoid-v3 environment, where the data was collected from our pre-trained SAC agent.
  Note that these datasets differ from those used in our reference paper DPPO, as the DPPO dataset curation process was not disclosed. Instead, we opted to use the official D4RL datasets, which include offline RL rewards, unlike the DPPO datasets. These rewards enable the data to support training offline-to-online RL algorithms for flow matching policies, such as FQL.
  Datasets for other tasks can be automatically downloaded using the pre-training scripts in our repository and are not uploaded to this Hugging Face repository.

+ * **log**: This directory contains pre-trained and fine-tuned model checkpoints. The pre-trained checkpoints include DDPM, 1-Rectified Flow, and Shortcut Models trained on the OpenAI Gym-D4RL, Franka Kitchen, and robomimic datasets (processed as in DPPO).

+ * **visualize**: This directory includes the figures presented in the paper and the corresponding training records (.csv files) required to reproduce them. These data can serve as a baseline for developing new algorithms and benchmarking against our method.

  For how to make use of these data,
  please refer to [the first document](https://github.com/ReinFlow/ReinFlow/blob/release/docs/ReproduceExps.md) to see how to do pre-training on the offline datasets and fine-tuning with the pre-trained checkpoints,
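As a quick way to inspect the data-offline files described in the README, the sketch below defines a small helper (`summarize_npz`, a name of our own choosing, not part of the ReinFlow codebase) that lists the arrays stored in an .npz archive such as train.npz or normalization.npz. The array names inside these files are not documented above, so the helper enumerates whatever keys are present rather than assuming any specific layout.

```python
# Sketch: inspect the contents of one of the dataset's .npz files
# (e.g. data-offline/<task>/train.npz). The key names inside the
# archives are not documented here, so we enumerate them instead of
# hard-coding any particular array names.
import numpy as np

def summarize_npz(path):
    """Return {array_name: shape} for every array in an .npz archive."""
    with np.load(path, allow_pickle=False) as archive:
        return {name: archive[name].shape for name in archive.files}
```

Calling `summarize_npz("train.npz")` on a downloaded file returns a dict mapping each stored array name to its shape, which is usually enough to see how the transitions and normalization statistics are laid out before wiring the data into a training script.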