---
license: apache-2.0
---

# Dataset Card for Pong-v4-expert-MCTS

## Table of Contents

- [Supported Tasks and Baseline](#supported-tasks-and-baseline)
- [Data Usage](#data-usage)
- [Contributions](#contributions)

## Supported Tasks and Baseline

- This dataset supports training the [Procedure Cloning (PC)](https://arxiv.org/abs/2205.10816) algorithm.
- Baseline results when the decision sequence length is 0:

| Train loss | Test Acc | Reward |
| ---------- | -------- | ------ |
|  | 0.90 | 20 |

| Train Acc | Eval Acc | Loss | Reward |
| --------- | -------- | ---- | ------ |
|  |  |  | -21 |

## Data Usage

### Data Description

This dataset includes 8 episodes of the Pong-v4 environment. The expert policy is [EfficientZero](https://arxiv.org/abs/2111.00210), which generates MCTS hidden states. Because each observation is paired with a hidden state, this dataset is suitable for imitation learning methods that learn from sequences, such as PC.

### Data Fields

- `obs`: An Array3D of observations from the 8 trajectories of an evaluated agent. The data type is uint8 and each value lies in [0, 255]. The tensor shape is [96, 96, 3], i.e. the channel dimension comes last.
- `actions`: An integer action from the 8 trajectories of an evaluated agent, taking values from 0 to 5. Details about this environment can be found in the [Pong - Gym Documentation](https://www.gymlibrary.dev/environments/atari/pong/).
- `hidden_state`: An Array3D of the corresponding hidden states generated by EfficientZero for the 8 trajectories of an evaluated agent. The data type is float32.
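As an illustration, the field layout above can be checked against a synthetic sample. This is a sketch using NumPy: the `obs` and `actions` shapes and dtypes are taken from the descriptions above, while the `hidden_state` shape is a placeholder, since the card does not specify it.

```python
import numpy as np

# A synthetic sample mirroring the documented schema (not real data;
# the hidden_state shape is a placeholder -- the card does not specify it).
sample = {
    "obs": np.random.randint(0, 256, size=(96, 96, 3), dtype=np.uint8),
    "actions": int(np.random.randint(0, 6)),  # Pong-v4 has 6 discrete actions
    "hidden_state": np.zeros((64, 6, 6), dtype=np.float32),
}

assert sample["obs"].shape == (96, 96, 3) and sample["obs"].dtype == np.uint8
assert 0 <= sample["actions"] <= 5
assert sample["hidden_state"].dtype == np.float32
```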
There is only a training set for this dataset, as evaluation is undertaken by interacting with a simulator.
### Initial Data Collection and Normalization
- This dataset is collected by an EfficientZero policy.
- The standard for expert data is that the return of each of the 8 episodes exceeds 20.
- No normalization is applied beforehand (i.e. each observation value is a uint8 scalar in [0, 255]).
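The expert-data criterion above can be sketched as a simple filter. The helper below is hypothetical (not part of the dataset's tooling); in practice the episode returns would come from evaluating the EfficientZero policy.

```python
def is_expert_quality(episode_returns, threshold=20):
    """Return True only if every episode's return exceeds the expert threshold."""
    return all(r > threshold for r in episode_returns)

# Example: returns of 8 evaluated Pong-v4 episodes (the maximum score is 21).
returns = [21, 20.5, 21, 21, 20.2, 21, 21, 20.8]
assert is_expert_quality(returns)               # all 8 returns are over 20
assert not is_expert_quality([21] * 7 + [19])   # one episode falls short
```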