---
license: mit
language:
- en
tags:
- gesture-recognition
- sensor-data
- flex-sensors
- accelerometer
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: label
    dtype: string
  - name: batch
    list:
      list: int64
  splits:
  - name: train
    num_examples: 180
  - name: test
    num_examples: 48
---

# Gesture Recognition Dataset

A dataset for glove-based gesture recognition: 228 recordings (180 train, 48 test) across six gesture classes, captured with five flex sensors and three 3-axis accelerometers.

## Dataset Structure

- **Labels**: ['Good', 'Null', 'Thirsty', 'Bad', 'Me', 'Hungry']
- **Format**: each record contains a 'label' and a 'batch' field
- **Batch size**: 30 rows per batch (30 time steps)
- **Features**: 15 columns per row
- **Selection method**: cosine_similarity - files were selected based on their similarity to the majority pattern of each class

## Column Information

Each row in a batch contains 15 values, in this order:

1. Timestamp - sample timestamp
2. F1 - flex sensor 1
3. F2 - flex sensor 2
4. F3 - flex sensor 3
5. F4 - flex sensor 4
6. F5 - flex sensor 5
7. Acc_Fin_x - finger accelerometer, x-axis
8. Acc_Fin_y - finger accelerometer, y-axis
9. Acc_Fin_z - finger accelerometer, z-axis
10. Acc_Palm_x - palm accelerometer, x-axis
11. Acc_Palm_y - palm accelerometer, y-axis
12. Acc_Palm_z - palm accelerometer, z-axis
13. Acc_Arm_x - arm accelerometer, x-axis
14. Acc_Arm_y - arm accelerometer, y-axis
15. Acc_Arm_z - arm accelerometer, z-axis

## Data Format

```python
{
    'label': 'gesture_name',  # One of: ['Good', 'Null', 'Thirsty', 'Bad', 'Me', 'Hungry']
    'batch': [
        [Timestamp, F1, F2, F3, F4, F5,
         Acc_Fin_x, Acc_Fin_y, Acc_Fin_z,
         Acc_Palm_x, Acc_Palm_y, Acc_Palm_z,
         Acc_Arm_x, Acc_Arm_y, Acc_Arm_z],  # Row 1
        [Timestamp, F1, F2, F3, F4, F5,
         Acc_Fin_x, Acc_Fin_y, Acc_Fin_z,
         Acc_Palm_x, Acc_Palm_y, Acc_Palm_z,
         Acc_Arm_x, Acc_Arm_y, Acc_Arm_z],  # Row 2
        ...  # 30 rows total
    ]
}
```

## Sensors

- **F1-F5**: flex sensors measuring finger bend (5 sensors)
- **Acc_Fin**: accelerometer mounted on the finger (x, y, z axes)
- **Acc_Palm**: accelerometer mounted on the palm (x, y, z axes)
- **Acc_Arm**: accelerometer mounted on the arm (x, y, z axes)

## Data Quality

Files were selected by cosine similarity to the majority pattern of each gesture class, keeping the most representative samples per class.
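
## Example Usage

A minimal loading sketch using the 🤗 `datasets` library. The repo ID below is a placeholder; substitute this dataset's actual path on the Hub. Shapes follow the structure described above (30 time steps x 15 columns per batch).

```python
from datasets import load_dataset
import numpy as np

# Placeholder repo ID -- replace with this dataset's actual Hub path.
ds = load_dataset("your-namespace/gesture-recognition-dataset")

# Inspect one record.
sample = ds["train"][0]
print(sample["label"])                          # e.g. 'Good'

# Each batch is 30 rows x 15 columns.
batch = np.array(sample["batch"], dtype=np.int64)
print(batch.shape)                              # (30, 15)

# Column 0 is the timestamp, columns 1-5 are flex sensors F1-F5,
# and columns 6-14 are the finger/palm/arm accelerometer axes.
timestamps = batch[:, 0]
flex = batch[:, 1:6]
acc = batch[:, 6:15]

# Stack the full split into model-ready arrays.
X = np.stack([np.array(ex["batch"]) for ex in ds["train"]])  # (180, 30, 15)
y = np.array(ds["train"]["label"])                           # (180,)
```

For model training you will typically want to drop the timestamp column (`X[:, :, 1:]`) and keep only the 14 sensor channels.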