---
language:
- en
tags:
- sign language recognition
- emergency response
- computer vision
---
# CLARIS - Critical Emergency Sign Language Dataset

This dataset is a curated subset of the "Google - Isolated Sign Language Recognition" dataset, specifically filtered for the **CLARIS (Clear and Live Automated Response for Inclusive Safety)** project.
## Dataset Description

The primary goal of the CLARIS project is to develop a mobile application that provides a lifeline for the Deaf community during emergencies. This dataset was created to train a proof-of-concept AI model capable of recognizing a vocabulary of critical emergency-related signs.

The data consists of pre-extracted landmark coordinates from video clips of isolated signs. It originates from the [Google - Isolated Sign Language Recognition Kaggle Competition](https://www.kaggle.com/competitions/asl-signs).
## Dataset Structure

The dataset is provided in CSV format, with Parquet versions coming soon. Each row represents the coordinates of a single landmark in a single frame of a video sequence.

| Column           | Dtype   | Description                                                       |
| ---------------- | ------- | ----------------------------------------------------------------- |
| `frame`          | int16   | The frame number within the sequence.                             |
| `row_id`         | object  | A unique identifier for the landmark within the frame.            |
| `type`           | object  | The type of landmark (`face`, `left_hand`, `right_hand`, `pose`). |
| `landmark_index` | int16   | The index of the landmark within its type.                        |
| `x`              | float64 | The normalized x-coordinate of the landmark.                      |
| `y`              | float64 | The normalized y-coordinate of the landmark.                      |
| `z`              | float64 | The normalized z-coordinate of the landmark (depth).              |
| `path`           | object  | The path to the original source parquet file for the sequence.    |
| `participant_id` | int64   | A unique identifier for the participant (signer).                 |
| `sequence_id`    | int64   | A unique identifier for the sign sequence.                        |
| `sign`           | object  | The ground-truth label for the sign being performed.              |
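Because each row stores a single landmark coordinate, most models will want the data pivoted into one array per sequence. Below is a minimal sketch of that step, assuming the dataset has been loaded into a DataFrame `df` as in the Usage section; `sequence_to_array` is an illustrative helper, not part of the dataset. In the source competition data every frame carries the same fixed landmark set (missing landmarks are stored as `NaN`), which is what makes the fixed-size reshape possible.

```python
import numpy as np
import pandas as pd

def sequence_to_array(df: pd.DataFrame, sequence_id: int) -> np.ndarray:
    """Pivot one sign sequence into a (frames, landmarks, 3) array."""
    seq = df[df["sequence_id"] == sequence_id].sort_values(
        ["frame", "type", "landmark_index"]
    )
    n_frames = seq["frame"].nunique()
    # Every frame carries the same fixed set of landmarks, so the
    # row count divides evenly by the number of frames.
    n_landmarks = len(seq) // n_frames
    return seq[["x", "y", "z"]].to_numpy().reshape(n_frames, n_landmarks, 3)

# Example (requires a loaded `df`):
# first_seq = df["sequence_id"].iloc[0]
# arr = sequence_to_array(df, first_seq)  # shape: (frames, landmarks, 3)
```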
## Curation Process

To create a focused dataset for our specific use case, we performed a two-step curation process:

1. **Vocabulary Filtering:** We selected **62 signs** deemed most relevant for describing medical, fire, or intruder emergencies.
2. **Participant Filtering:** To keep the dataset manageable for rapid prototyping, we constrained the data to sequences from **two distinct participants** with a balanced distribution of the target signs.

This process resulted in a final dataset containing **1,719 unique sign sequences**, comprising over 37 million landmark rows.
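For reference, the same curation can be reproduced from the competition's `train.csv`, which maps each sequence to its participant, sign label, and landmark file path. The sketch below is illustrative only: `EMERGENCY_SIGNS` and `KEPT_PARTICIPANTS` are hypothetical placeholders for the actual 62-sign vocabulary and the two selected participant IDs, which are documented in the project notebook.

```python
import pandas as pd

# Hypothetical placeholders -- the real 62-sign vocabulary and the two
# participant IDs are listed in the project notebook.
EMERGENCY_SIGNS = {"doctor", "hurt", "callonphone"}  # ...62 signs in total
KEPT_PARTICIPANTS = {16069, 25571}                   # two selected signers

# train.csv ships with the Kaggle competition data.
train = pd.read_csv("train.csv")

curated = train[
    train["sign"].isin(EMERGENCY_SIGNS)
    & train["participant_id"].isin(KEPT_PARTICIPANTS)
]
print(curated["sequence_id"].nunique(), "sequences kept")
```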
## Usage

We recommend using the Parquet files for faster loading times.

```python
import pandas as pd

# Load the full curated dataset
df = pd.read_parquet('claris_curated_dataset.parquet')

# Or load the smaller, subsampled version
df_sample = pd.read_parquet('claris_subsample_dataset.parquet')

print(df.head())
```
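With over 37 million landmark rows, the full file can strain memory. Since Parquet is columnar, `pandas.read_parquet` can load only the columns you need. A small sketch, reusing the file name from the example above:

```python
import pandas as pd

# Load only the columns needed to index sequences; the landmark
# coordinates themselves are never read into memory.
meta = pd.read_parquet(
    "claris_curated_dataset.parquet",
    columns=["sequence_id", "participant_id", "sign"],
)

# One row per sequence: its signer and its ground-truth label.
sequences = meta.drop_duplicates("sequence_id")
print(sequences["sign"].value_counts().head())
```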
## Link to Project Notebook

The complete methodology, including data preprocessing, model training, and analysis, can be found in our Kaggle notebook:
https://www.kaggle.com/code/eveelyn/datathon2025-med