mathewsaji and sayantan47 committed
Commit 87d5a2d
0 Parent(s):

Duplicate from sayantan47/POLAR-Posture-Level-Action-Recognition-Dataset

Co-authored-by: synthwave <sayantan47@users.noreply.huggingface.co>

Files changed (4):
  1. .gitattributes +59 -0
  2. POLAR.zip +3 -0
  3. README.md +146 -0
  4. cropped.zip +3 -0
.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
POLAR.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:62daf3a8c6adc2959deddba077500221e1a8607d5328862950dd8ea7737c03ab
+ size 3181629223
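The three `+` lines above are not the archive itself but a Git LFS pointer file: `version` names the pointer spec, `oid` is the SHA-256 digest of the real content, and `size` is its byte length (about 3 GB here). A minimal parser sketch (a hypothetical helper, not part of this repo):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # size is the byte count of the real blob; oid is "sha256:<hex digest>"
    fields["size"] = int(fields["size"])
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:62daf3a8c6adc2959deddba077500221e1a8607d5328862950dd8ea7737c03ab
size 3181629223
"""
info = parse_lfs_pointer(pointer)
```

Cloning the repo with `git lfs` installed replaces these pointers with the actual archives.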
README.md ADDED
@@ -0,0 +1,146 @@
+ ---
+ language:
+   - en
+ license: other  # Original dataset license not explicitly stated; refer to Mendeley terms at https://data.mendeley.com/datasets/hvnsh7rwz7/1
+ pretty_name: "POLAR: Posture-Level Action Recognition Dataset"
+ size_categories: "10K<n<100K"
+ tags:
+   - computer-vision
+   - image-classification
+   - object-detection
+   - action-recognition
+   - human-pose-estimation
+ dataset_info:
+   features:
+     - name: image
+       dtype: image
+     - name: objects
+       list_of:
+         - name: id
+           dtype: int64
+         - name: bbox
+           list_of:
+             dtype: float64
+         - name: category
+           dtype: int64
+   splits:
+     - name: train
+       # num_examples: 28259  # Approximate; adjust based on your splits/train.txt count
+     - name: val
+       # num_examples: 3532  # Approximate (10% of total)
+     - name: test
+       # num_examples: 3533  # Approximate (10% of total)
+   supervised_keys:
+     - image
+     - objects
+   task_templates:
+     - task: image-object-detection
+ citations:
+   - title: "POLAR: Posture-level Action Recognition Dataset"
+     authors:
+       - Wentao Ma
+       - Shuang Liang
+     year: 2021
+     doi: 10.17632/hvnsh7rwz7.1
+     url: https://data.mendeley.com/datasets/hvnsh7rwz7/1
+ ---
+
+ # POLAR: Posture-Level Action Recognition Dataset
+
+ ## Disclaimer
+
+ This dataset is a restructured, YOLO-formatted version of the original **POsture-Level Action Recognition (POLAR)** dataset. I do not claim ownership or licensing rights over it. For full details, including original licensing and usage terms, please refer to the [original dataset on Mendeley Data](https://data.mendeley.com/datasets/hvnsh7rwz7/1).
+
+ ## Motivation
+
+ The original POLAR dataset, while comprehensive, has a complex structure that makes it hard to navigate and to integrate with modern object detection frameworks such as YOLO. To address this, I reorganized the dataset into a clean, split-based layout and converted the annotations to YOLO-compatible labels, so it can be used directly for training action recognition models.
+
+ ## Description
+
+ The **POLAR (POsture-Level Action Recognition)** dataset covers nine categories of human actions directly tied to posture: **bending**, **jumping**, **lying**, **running**, **sitting**, **squatting**, **standing**, **stretching**, and **walking**. It contains a total of **35,324 images** and, according to the authors' analysis of the PASCAL VOC dataset, covers approximately **99% of posture-level human actions** in daily life.
+
+ This dataset is suitable for tasks such as:
+ - **Image Classification**
+ - **Action Recognition**
+ - **Object Detection** (with YOLO-formatted bounding boxes around persons)
+
+ Each image contains one or more persons, each annotated with a bounding box labeled by their primary action/pose.
+
+ ## Dataset Structure
+
+ The dataset is pre-split into **train**, **val**, and **test** sets. The directory structure is as follows:
+
+ ```
+ POLAR/
+ ├── Annotations/   # Original JSON annotation files (for reference)
+ │   ├── test/
+ │   ├── train/
+ │   └── val/
+ ├── images/        # Original images (.jpg)
+ │   ├── test/
+ │   ├── train/
+ │   └── val/
+ ├── labels/        # YOLO-formatted .txt label files
+ │   ├── test/
+ │   ├── train/
+ │   └── val/
+ ├── splits/        # Split definition files
+ │   ├── test.txt
+ │   ├── train.txt
+ │   └── val.txt
+ └── dataset.yaml   # YOLO configuration file (for training)
+ ```
+
+ - **splits/**: Text files listing image filenames (one per line, without extensions) for each split.
+ - **labels/**: For each image (e.g., `images/train/p1_00001.jpg`), there is a corresponding `labels/train/p1_00001.txt` with YOLO-format annotations (class ID + normalized bounding box coordinates).
+ - **dataset.yaml**: Pre-configured for Ultralytics YOLO training (see [YOLO Dataset Format](https://docs.ultralytics.com/datasets/detect/#ultralytics-yolo-format) for details).
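For this layout, the `dataset.yaml` would look roughly like the following. This is a sketch only: the relative paths mirror the tree above and the class names follow the 0-8 ID mapping documented in the README, but the shipped file may differ in detail.

```yaml
# Hypothetical sketch of dataset.yaml; verify against the file shipped in the repo.
path: .              # dataset root (POLAR/)
train: images/train
val: images/val
test: images/test

names:
  0: bending
  1: jumping
  2: lying
  3: running
  4: sitting
  5: squatting
  6: standing
  7: stretching
  8: walking
```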
+
+ ## Changes Made
+
+ Compared to the original dataset, the following modifications were applied:
+
+ 1. **Restructured Splits**:
+    - Organized images and annotations into explicit **train**, **val**, and **test** subfolders.
+    - Used the original split definitions from the provided `.txt` files in `splits/` to ensure consistency.
+
+ 2. **YOLO Formatting**:
+    - Converted JSON annotations to YOLO `.txt` files in the `labels/` folder.
+    - Each line in a `.txt` file follows the format: `<class_id> <center_x> <center_y> <norm_width> <norm_height>` (normalized to [0, 1]).
+    - Class IDs map to actions as follows (0-8):
+      - 0: bending
+      - 1: jumping
+      - 2: lying
+      - 3: running
+      - 4: sitting
+      - 5: squatting
+      - 6: standing
+      - 7: stretching
+      - 8: walking
+    - Included a ready-to-use `dataset.yaml` for YOLOv8+ training.
+
+ These changes simplify setup while preserving the original data integrity.
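The conversion described under "YOLO Formatting" amounts to turning absolute pixel boxes into normalized center coordinates. A sketch of that step (the pixel-corner box convention is an assumption here, since the original JSON schema is not shown in the README):

```python
def to_yolo_line(class_id: int,
                 x_min: float, y_min: float, x_max: float, y_max: float,
                 img_w: int, img_h: int) -> str:
    """Convert an absolute-pixel corner box to one YOLO label line:
    <class_id> <center_x> <center_y> <norm_width> <norm_height>, all in [0, 1]."""
    cx = (x_min + x_max) / 2 / img_w   # box center, normalized by image width
    cy = (y_min + y_max) / 2 / img_h   # box center, normalized by image height
    w = (x_max - x_min) / img_w        # box width, normalized
    h = (y_max - y_min) / img_h        # box height, normalized
    return f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# e.g. a "sitting" (class 4) box spanning (160,120)-(480,360) in a 640x480 image
line = to_yolo_line(4, 160, 120, 480, 360, 640, 480)
# -> "4 0.500000 0.500000 0.500000 0.500000"
```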
+
+ ## Usage
+
+ ### Training with YOLO (Ultralytics)
+
+ 1. Clone or download this dataset to your working directory.
+ 2. Install Ultralytics: `pip install ultralytics`.
+ 3. Train a model (e.g., YOLOv8 nano):
+    ```
+    yolo detect train data=dataset.yaml model=yolov8n.pt epochs=100 imgsz=640
+    ```
+    - This assumes the YAML is in the root (`POLAR/`).
+    - Adjust `epochs`, `imgsz`, or other hyperparameters as needed.
+    - YOLO will automatically pair images with labels based on filenames.
+
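The image/label pairing in that last bullet is purely path-based: the label path is derived by swapping the `images/` directory for `labels/` and the image extension for `.txt`. A standalone sketch of that rule (a hypothetical helper mirroring the layout above, not Ultralytics' own code):

```python
from pathlib import PurePosixPath

def label_path_for(image_path: str) -> str:
    """Map an image path to its YOLO label path: swap the 'images'
    directory component for 'labels' and the file suffix for '.txt'."""
    p = PurePosixPath(image_path)
    parts = ["labels" if part == "images" else part for part in p.parts]
    return str(PurePosixPath(*parts).with_suffix(".txt"))

lbl = label_path_for("POLAR/images/train/p1_00001.jpg")
# -> "POLAR/labels/train/p1_00001.txt"
```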
+ For more details on YOLO integration, see the [Ultralytics documentation](https://docs.ultralytics.com/).
+
+ ## Citation
+
+ If you use this dataset in your research, please cite the original work:
+
+ > Ma, Wentao; Liang, Shuang (2021), “POLAR: Posture-level Action Recognition Dataset”, Mendeley Data, V1, doi: [10.17632/hvnsh7rwz7.1](https://doi.org/10.17632/hvnsh7rwz7.1).
+
+ ---
+
+ *Last updated: October 20, 2025*
cropped.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e234f867d6ea6c709effcb31896dcd5a98442f728a3647c3456b254eb73fce83
+ size 406904849