ReinFlow committed on
Commit 721ba13 · verified · 1 Parent(s): 9b497bc

Update README.md

Files changed (1): README.md +5 -3
README.md CHANGED
```diff
@@ -2,15 +2,17 @@
 license: mit
 ---
 
-## This repository contains...
-The core data, checkpoints, and training records for the paper "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning".
+## This repository...
+Contains the core data, checkpoints, and training records for the **paper: "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning"**.
+
+Here are the links to more assets of this project:
 
 * Project Website: [this url](https://reinflow.github.io/).
 
 * Code: [this repository](https://github.com/ReinFlow/ReinFlow).
 
 
-## Here is a summary of what is contained in this Hugging Face dataset:
+## What is in this dataset?
 
 * **data-offline**: This directory includes the train.npz and normalization.npz files for OpenAI Gym tasks, derived and normalized from the official D4RL datasets. An exception is the Humanoid-v3 environment, where the data was collected from our pre-trained SAC agent.
 Note that these datasets differ from those used in our reference paper DPPO, as the DPPO dataset curation process was not disclosed. Instead, we opted to use the official D4RL datasets, which include offline RL rewards, unlike the DPPO datasets. These rewards enable the data to support training offline-to-online RL algorithms for flow matching policies, such as FQL.
```
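The README above describes `train.npz` and `normalization.npz` archives in the **data-offline** directory. As a minimal sketch of how such `.npz` files can be inspected with NumPy: the key names (`observations`, `actions`) and array shapes below are illustrative assumptions, not documented contents of this dataset, so a synthetic archive stands in for the real file.

```python
import numpy as np
import tempfile, os

# Hypothetical layout: the actual keys inside train.npz are not documented
# in this README, so "observations" and "actions" are assumptions.
tmpdir = tempfile.mkdtemp()
train_path = os.path.join(tmpdir, "train.npz")
np.savez(train_path, observations=np.zeros((4, 11)), actions=np.zeros((4, 3)))

# Loading and listing the keys of an .npz archive works the same way
# for the real files downloaded from this dataset.
with np.load(train_path) as data:
    keys = sorted(data.files)
    obs_shape = data["observations"].shape

print(keys)       # → ['actions', 'observations']
print(obs_shape)  # → (4, 11)
```

Opening the archive with a `with` block ensures the underlying file handle is closed after inspection.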