# CPR-Coach
# Package Check
The complete file list is shown below.
```shell
.
├── CPR_Dataset_S0.tar.gz.00 # 10 GB
├── CPR_Dataset_S0.tar.gz.01 # 10 GB
├── CPR_Dataset_S0.tar.gz.02 # 10 GB
├── CPR_Dataset_S0.tar.gz.03 # 10 GB
├── CPR_Dataset_S0.tar.gz.04 # 10 GB
├── CPR_Dataset_S0.tar.gz.05 # 10 GB
├── CPR_Dataset_S0.tar.gz.06 # 10 GB
├── CPR_Dataset_S0.tar.gz.07 # 5.4 GB
├── CPR_Dataset_S1.tar.gz.00 # 10 GB
├── CPR_Dataset_S1.tar.gz.01 # 10 GB
├── CPR_Dataset_S1.tar.gz.02 # 10 GB
├── CPR_Dataset_S1.tar.gz.03 # 10 GB
├── CPR_Dataset_S1.tar.gz.04 # 10 GB
├── CPR_Dataset_S1.tar.gz.05 # 10 GB
├── CPR_Dataset_S1.tar.gz.06 # 10 GB
├── CPR_Dataset_S1.tar.gz.07 # 4.9 GB
├── CPR_Double_Dataset_S0.tar.gz.00 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.01 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.02 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.03 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.04 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.05 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.06 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.07 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.08 # 10 GB
├── CPR_Double_Dataset_S0.tar.gz.09 # 2.7 GB
├── CPR_Double_Dataset_S1.tar.gz.00 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.01 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.02 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.03 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.04 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.05 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.06 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.07 # 10 GB
├── CPR_Double_Dataset_S1.tar.gz.08 # 9.6 GB
├── SupC14_Dataset.tar.gz # 9.1 GB
├── SupDC59_Dataset.tar.gz.00 # 10 GB
├── SupDC59_Dataset.tar.gz.01 # 10 GB
├── SupDC59_Dataset.tar.gz.02 # 10 GB
├── SupDC59_Dataset.tar.gz.03 # 8.9 GB
├── SupMC15_Dataset.tar.gz.00 # 10 GB
├── SupMC15_Dataset.tar.gz.01 # 10 GB
├── SupMC15_Dataset.tar.gz.02 # 6.0 GB
├── Keypoints.tar.gz # 2.5 GB
└── ann.tar.gz # 128 KB
```
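Before merging, it is worth confirming that every split part finished downloading. The sketch below (POSIX sh) prints any part missing from the current directory; `list_parts` and `seq_parts` are hypothetical helpers, with the part counts taken from the listing above:

```shell
# Print the expected file names, then report any that are absent.
list_parts() {
    # seq_parts PREFIX N: print PREFIX.00 .. PREFIX.(N-1)
    seq_parts() {
        i=0
        while [ "$i" -lt "$2" ]; do
            printf '%s.%02d\n' "$1" "$i"
            i=$((i + 1))
        done
    }
    seq_parts CPR_Dataset_S0.tar.gz 8
    seq_parts CPR_Dataset_S1.tar.gz 8
    seq_parts CPR_Double_Dataset_S0.tar.gz 10
    seq_parts CPR_Double_Dataset_S1.tar.gz 9
    seq_parts SupDC59_Dataset.tar.gz 4
    seq_parts SupMC15_Dataset.tar.gz 3
    printf '%s\n' SupC14_Dataset.tar.gz Keypoints.tar.gz ann.tar.gz
}

# Report every part that has not been downloaded yet.
list_parts | while IFS= read -r f; do
    [ -f "$f" ] || echo "MISSING: $f"
done
```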
# Run the Preprocessing Script
We provide preprocessing code for the CPR-Coach dataset. The preprocessing consists of a series of merge, decompression, and deletion operations.
The whole process may take 20-30 minutes.
Because parallel processing is used, avoid interrupting the program while it runs.
After running `process.sh`, you will have the entire dataset.
```shell
sh ./process.sh
```
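For reference, the core merge-and-extract step presumably resembles the sketch below. `merge_and_extract` is a hypothetical helper, not the actual contents of `process.sh`; it relies on the numbered suffixes (`.00`, `.01`, ...) sorting lexicographically, so `cat` concatenates them in order:

```shell
# Hypothetical helper: merge the split parts of one archive, extract it,
# then delete the intermediates to reclaim disk space.
merge_and_extract() {
    base=$1                      # e.g. CPR_Dataset_S0.tar.gz
    cat "$base".* > "$base"      # parts .00, .01, ... sort in order
    tar -xzf "$base"
    rm -f "$base" "$base".*     # the deletion step mentioned above
}

# Example: merge_and_extract CPR_Dataset_S0.tar.gz
```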
# Dataset Check
After preprocessing, make sure that the structure of the dataset matches the tree below.
The total size of the dataset should be `451 GB`.
```shell
.
├── CPR_Dataset_S0
│   ├── C00_S0
│   ├── C01_S0
│   ├── C02_S0
│   └── ...
├── CPR_Dataset_S1
│   ├── C00_S1
│   ├── C01_S1
│   ├── C02_S1
│   └── ...
├── CPR_Double_Dataset_S0
│   ├── DC00_S0
│   ├── DC01_S0
│   ├── DC02_S0
│   └── ...
├── CPR_Double_Dataset_S1
│   ├── DC00_S1
│   ├── DC01_S1
│   ├── DC02_S1
│   └── ...
├── SupC14_Dataset
│   ├── SupC00
│   ├── SupC01
│   ├── SupC02
│   └── ...
├── SupDC59_Dataset
│   ├── SupDC00
│   ├── SupDC01
│   ├── SupDC02
│   └── ...
├── SupMC15_Dataset
│   ├── SupMC00
│   ├── SupMC01
│   ├── SupMC02
│   └── ...
├── Keypoints
│   ├── all_errors_keypoints.pkl
│   ├── double_errors_keypoints.pkl
│   ├── test_keypoints.pkl
│   └── train_keypoints.pkl
├── ann
│   ├── ActionList.txt
│   ├── all_errors_testlist.txt # All double-, triple-, and quad-error samples
│   ├── double_errors_testlist.txt # Double-error samples only
│   ├── testlist.txt # Used for single-class testing
│   ├── testlist_of.txt # Used for single-class testing (optical flow)
│   ├── trainlist.txt # Used for single-class training
│   ├── trainlist_of.txt # Used for single-class training (optical flow)
│   └── triquad_errors_testlist.txt # Triple- and quad-error samples
└── process.sh
```
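A quick sanity check is to verify that all nine top-level directories exist before measuring the total size. `check_dirs` is a hypothetical helper; the directory names are taken from the tree above:

```shell
# Verify that every top-level directory from the tree above exists.
check_dirs() {
    bad=0
    for d in CPR_Dataset_S0 CPR_Dataset_S1 \
             CPR_Double_Dataset_S0 CPR_Double_Dataset_S1 \
             SupC14_Dataset SupDC59_Dataset SupMC15_Dataset \
             Keypoints ann; do
        [ -d "$d" ] || { echo "missing: $d"; bad=$((bad + 1)); }
    done
    [ "$bad" -eq 0 ]            # exit status 0 only if nothing is missing
}

# check_dirs && du -sh .       # total size should be about 451 GB
```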
# Dataset Structure Description
In the initial version of the dataset, we collected videos of only a single person dressed in two different outfits,
and explored only 59 types of double-error composite actions.
These data are stored in `CPR_Dataset_S0`, `CPR_Dataset_S1`, `CPR_Double_Dataset_S0`, and `CPR_Double_Dataset_S1`.
We submitted the initial version of this work to CVPR 2023, and the reviewers gave us a valuable suggestion: further enrich the types of samples and composite errors.
Therefore, we subsequently improved the dataset and recruited several volunteers for supplementary data collection. These data are stored in `SupC14_Dataset`, `SupDC59_Dataset`, and `SupMC15_Dataset`. The splits for these samples are defined in the `ann` directory.