---
license: cc-by-4.0
pretty_name: ChatDEAF ISL/TID Gesture Dataset
language:
- en
task_categories:
- image-classification
tags:
- sign-language
- isl
- accessibility
- deaf
- gesture-recognition
- multimodal
- chatdeaf
size_categories:
- n<1K
---

# ChatDEAF ISL Gesture Dataset

The **ChatDEAF ISL Gesture Dataset** is a reference collection of International Sign Language (ISL) gestures. It includes images, gesture labels, and written descriptions designed to support AI models in accessibility, sign recognition, and gesture-text classification.

Sample entries (gesture label → description):

```json
{
  "belt": "Gesture for 'belt' in ISL.",
  "glasses": "Gesture for 'glasses' in ISL.",
  "tree": "Gesture for 'tree' in ISL.",
  "wow": "Gesture expressing surprise or excitement.",
  "bravo": "Clapping gesture for approval."
}
```

## Contents

- 18 sample ISL gestures
- English word labels
- Gesture descriptions
- Image file references

## Purpose

This dataset supports accessibility research, gesture-recognition training, and sign language understanding and translation models, and aims to help multimodal AI systems (like GPT-4) interpret visual sign languages.

## License

This dataset is released under the **CC BY 4.0** license and is open for academic, research, and responsible AI use with proper attribution.

*ChatDEAF: giving voice to the silence.*

This is the first public dataset of International Sign Language (ISL) gestures created as part of the **ChatDEAF Project**. It is intended to support multimodal AI models and improve accessibility in communication for the global deaf community.

## Dataset Details

- **Total Samples**: 18 labeled ISL gesture entries
- **Fields**:
  - `Word`: English meaning of the gesture
  - `ISL Sign Description`: Text description of the sign gesture
  - `Image`: Corresponding photo of the sign (JPEG; coverage to be expanded in future versions)
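The fields above correspond one-to-one with columns of the dataset CSV. Below is a minimal loading sketch; the `ChatDEAF_ISL_Dataset_Proposal.csv` filename comes from this repository's `data/` folder, while the two demo rows are invented for illustration:

```python
# Minimal sketch: read the gesture CSV into a list of records.
import csv
from pathlib import Path

def load_gestures(csv_path):
    """Return one dict per row, keyed by the CSV header names."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

# Self-contained demo with two invented rows written to a local file
# (in practice, point load_gestures at data/ChatDEAF_ISL_Dataset_Proposal.csv):
demo = Path("demo_isl.csv")
demo.write_text(
    "Word,ISL Sign Description,Image\n"
    "tree,Forearm upright with fingers spread like branches,tree-1.jpg\n"
    "wow,Open hands shaken at head height,wow-1.jpg\n",
    encoding="utf-8",
)
rows = load_gestures(demo)
print(rows[0]["Word"], "->", rows[0]["Image"])  # tree -> tree-1.jpg
```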
## Future Plans

- Expand to 100+ ISL gestures
- Add Turkish Sign Language (TİD) support
- Create a Hugging Face Space demo for live ISL gesture recognition

## Folder Structure

All files are located under the `data/` directory:
```
chatdeaf-isl-gestures/
├── README.md
├── data/
│   ├── ChatDEAF_ISL_Dataset_Proposal.csv
│   ├── police-1.jpg
│   ├── pray-1.jpg
│   └── ...
```
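The image files shown above appear to follow a `<word>-<index>.jpg` naming convention (e.g. `police-1.jpg`). Assuming that convention holds, a small hypothetical helper can recover the gesture label from a filename:

```python
# Hypothetical helper: derive the gesture label from an image filename,
# assuming the "<word>-<index>.jpg" pattern (e.g. police-1.jpg -> "police").
from pathlib import Path

def label_from_filename(path):
    stem = Path(path).stem         # e.g. "police-1"
    return stem.rsplit("-", 1)[0]  # strip the trailing index -> "police"

print(label_from_filename("data/police-1.jpg"))  # police
```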
This is the first version (v1) with 18 sample gestures. The dataset will be expanded in future releases.