---
dataset_info:
features:
- name: id
dtype: string
- name: action
dtype:
class_label:
names:
'0': None
'1': Waving
'2': Pointing
'3': Clapping
'4': Follow
'5': Walking
'6': Stop
'7': Turn
'8': Jumping
'9': Come here
'10': Calm
- name: camera
dtype: int64
- name: subject
dtype: int64
- name: idx
dtype: int64
- name: label
dtype: string
- name: link
dtype: string
splits:
- name: train
- name: val
license: mit
tags:
- computer vision
- machine learning
- video understanding
- classification
- human-machine-interaction
- human-robot-interaction
- human-action-recognition
task_categories:
- video-classification
language:
- en
pretty_name: University of Technology Chemnitz - Human Robot Interaction Dataset
---
<img src="https://cdn-uploads.huggingface.co/production/uploads/6792a78d9967b7f195400ea2/JrWkycuOahvZ2AQqD9UbU.png" alt="drawing" width="200"/>
University of Technology Chemnitz, Germany<br>
Department Robotics and Human Machine Interaction<br>
Author: [Robert Schulz](mailto:robert.schulz@s2021.tu-chemnitz.de?subject=TUC-AR%20Dataset%20-%20HuggingFace)
# TUC-HRI Dataset Card
TUC-AR is an action recognition dataset containing 10 (+1) action categories for human-machine interaction. This version contains video sequences, stored frame by frame as images.
We introduce two validation types: random validation and cross-subject validation. This is the **random validation** dataset. For cross-subject validation, please use https://huggingface.co/datasets/SchulzR97/TUC-HRI-CS.
- In **random validation**, the train and validation splits are obtained by randomly splitting the sequences at an allocation rate of approximately 80% train / 20% validation. This ensures that each action, subject, and camera, as well as the overall number of sequences, is distributed in this ratio across the splits. This yields 17,263 train sequences and 4,220 validation sequences.
- For **cross-subject validation**, subject _0_ and _8_ were chosen as validation subjects. All other subjects were assigned to the train split.
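The cross-subject protocol above can be sketched as a simple filter over the sequence metadata (the `subject` field from the dataset features). The helper name below is ours, not part of the dataset tooling:

```python
# Hypothetical sketch of the cross-subject split: sequences whose subject id
# is 0 or 8 go to validation, all other subjects go to train.
VAL_SUBJECTS = {0, 8}

def cross_subject_split(sequences):
    """Split a list of sequence records (each with a 'subject' key)."""
    train = [s for s in sequences if s["subject"] not in VAL_SUBJECTS]
    val = [s for s in sequences if s["subject"] in VAL_SUBJECTS]
    return train, val
```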
## Dataset Details
- RGB and depth input recorded by Intel RealSense D435 depth camera
- 12 subjects
- 11,031 sequences (train 8,893/ val 2,138)
- 3 perspectives per scene
- 10(+1) action classes
| Action | Label |
|--------|-----------|
| A000 | None |
| A001 | Waving |
| A002 | Pointing |
| A003 | Clapping |
| A004 | Follow |
| A005 | Walking |
| A006 | Stop |
| A007 | Turn |
| A008 | Jumping |
| A009 | Come here |
| A010 | Calm |
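For convenience, the table above maps directly onto the integer class ids declared in the dataset features. A minimal lookup sketch (the dict and helper names are ours):

```python
# Action class ids -> labels, as listed in the table above.
ACTIONS = {
    0: "None", 1: "Waving", 2: "Pointing", 3: "Clapping", 4: "Follow",
    5: "Walking", 6: "Stop", 7: "Turn", 8: "Jumping", 9: "Come here",
    10: "Calm",
}

def action_label(class_id: int) -> str:
    """Return the human-readable label for an action class id."""
    return ACTIONS[class_id]
```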
## How to Use this Dataset
1. Install the RSProduction Machine Learning package ([PyPi](https://pypi.org/project/rsp-ml/), [GitHub](https://github.com/SchulzR97/rsp-ml))
```bash
pip install rsp-ml
```
2. Use the HF dataset with `rsp.ml.dataset.TUCHRI`
```python
from rsp.ml.dataset import TUCHRI
import rsp.ml.multi_transforms as multi_transforms
import torchvision
import torchvision.transforms as transforms
import numpy as np
import torch
from PIL import Image

USE_DEPTH_DATA = True

class ToNumpy:
    def __call__(self, x):
        if isinstance(x, Image.Image):
            return np.array(x)
        elif isinstance(x, torch.Tensor):
            return x.permute(1, 2, 0).numpy()  # Tensor (C, H, W) -> (H, W, C)
        else:
            raise TypeError("Input must be a PIL.Image or torch.Tensor")

# Transform pipeline for the background images
transform = transforms.Compose([
    transforms.Resize((600, 600)),
    transforms.ColorJitter(brightness=0.8, contrast=0.8, saturation=0.8, hue=0.5),
    transforms.RandomRotation(180, expand=True),
    transforms.CenterCrop((375, 500)),
    ToNumpy()
])

# DTD textures serve as replacement backgrounds for augmentation
dtd_dataset = torchvision.datasets.DTD(download=True, split='val', transform=transform)

transforms_train = multi_transforms.Compose([
    multi_transforms.ReplaceBackground(
        backgrounds=dtd_dataset,
        hsv_filter=[(69, 87, 139, 255, 52, 255)],
        p=0.8
    ),
    multi_transforms.Resize((400, 400), auto_crop=False),
    multi_transforms.Color(0.1, p=0.2),
    multi_transforms.Brightness(0.7, 1.3),
    multi_transforms.Satturation(0.7, 1.3),
    multi_transforms.RandomHorizontalFlip(),
    multi_transforms.GaussianNoise(0.002),
    multi_transforms.Rotate(max_angle=3),
    multi_transforms.Stack()
])

transforms_val = multi_transforms.Compose([
    multi_transforms.Resize((400, 400), auto_crop=False),
    multi_transforms.Stack()
])

ds_train = TUCHRI(
    phase='train',
    load_depth_data=USE_DEPTH_DATA,
    sequence_length=30,
    num_classes=11,
    transforms=transforms_train
)
ds_val = TUCHRI(
    phase='val',
    load_depth_data=USE_DEPTH_DATA,
    sequence_length=30,
    num_classes=11,
    transforms=transforms_val
)
```
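Assuming `TUCHRI` behaves like a standard PyTorch `Dataset` yielding `(sequence, label)` pairs (an assumption; see the rsp-ml documentation for the exact return types), the splits can then be batched with a regular `DataLoader`. The dummy tensors below stand in for `ds_train`, with a reduced frame resolution to keep the sketch light:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for ds_train: 4 dummy sequences of 30 RGB frames (real frames
# would be 400x400 after the resize transform above; 40x40 used here).
dummy_ds = TensorDataset(
    torch.zeros(4, 30, 3, 40, 40),
    torch.zeros(4, dtype=torch.long)
)
loader = DataLoader(dummy_ds, batch_size=2, shuffle=True)

for X, T in loader:
    # X: (batch, sequence_length, channels, height, width); T: action ids
    print(X.shape, T.shape)
```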
## Dataset Card Contact
For questions about dataset preprocessing and preparation, please contact [TUC RHMi](mailto:robert.schulz@s2021.tu-chemnitz.de?subject=TUC-AR%20Dataset%20-%20HuggingFace).