---
license: apache-2.0
task_categories:
- tabular-classification
- feature-extraction
language:
- psl
tags:
- sign-language
- mediapipe
- landmarks
- pakistan
- gesture-recognition
pretty_name: Dynamic Word Level Pakistan Sign Language (PSL) Dataset
size_categories:
- 1K<n<10K
---

# Dynamic Word Level Pakistan Sign Language (PSL) Dataset

## Overview
This dataset contains **MediaPipe hand landmark sequences** for 60+ words in Pakistan Sign Language (PSL). It is designed to support research into dynamic, word-level gesture recognition. Unlike existing datasets that focus on static finger spelling or small vocabularies, this project provides a high-quality, research-ready landmark collection for complex sign language translation.

**Kaggle Link:** https://www.kaggle.com/datasets/mohib123456/dynamic-word-level-pakistan-sign-language-dataset/data

**GitHub Project Link:** https://github.com/MohibUllahKhanSherwani/SignSpeak_FYP

### Dataset Statistics
- **Total Signs:** 60+ unique word classes.
- **Samples per Sign:** 70 recordings per sign, combined across both subsets.
- **Sequence Length:** Each gesture is fixed at 60 frames for temporal consistency.
- **Format:** Landmark coordinate data (X, Y, Z) extracted via MediaPipe.
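
As a point of reference, here is a minimal sketch of how per-frame landmark vectors of this kind are commonly assembled. The 21-landmarks-per-hand layout comes from MediaPipe Hands; the two-hand concatenation (2 × 21 × 3 = 126 values per frame) and zero-padding of a missing hand are assumptions for illustration, not something this card specifies:

```python
import numpy as np

N_LANDMARKS = 21  # MediaPipe Hands produces 21 landmarks per detected hand


def flatten_hands(left, right):
    """Flatten per-hand (21, 3) landmark arrays into one frame vector.

    A missing hand is zero-padded so every frame has the same length:
    2 hands x 21 landmarks x 3 coords = 126 values (an assumed layout).
    """
    def as_vec(hand):
        if hand is None:
            return np.zeros(N_LANDMARKS * 3)
        return np.asarray(hand, dtype=float).reshape(-1)

    return np.concatenate([as_vec(left), as_vec(right)])
```

Stacking 60 such vectors then yields one `(60, 126)` gesture sequence.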

## Dataset Structure & Generalization
The data is organized into two primary subsets to ensure model robustness across different hardware and camera types:

1. **MP_Data:** contains 50 samples per sign, recorded using standard webcams and fixed desktop cameras.
2. **MP_Data_mobile:** contains 20 samples per sign, recorded using mobile phone cameras to introduce varied angles, motion blur, and lighting.

By training on both sets, models are less likely to overfit to a specific lens or environment, making them more suitable for real-world mobile applications like **SignSpeak**.
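
A loading sketch that merges both subsets might look like the following. The directory layout assumed here (`<root>/<sign>/<sample>/<frame>.npy`, one `.npy` landmark vector per frame) is a common MediaPipe-pipeline convention, not documented by this card, so adjust the paths and feature width to the actual files:

```python
from pathlib import Path

import numpy as np

SEQ_LEN = 60  # frames per gesture, per the dataset description


def load_sequence(sample_dir):
    """Stack per-frame .npy landmark vectors into a (60, n_features) array."""
    frames = sorted(Path(sample_dir).glob("*.npy"), key=lambda p: int(p.stem))
    seq = np.stack([np.load(f) for f in frames])
    assert seq.shape[0] == SEQ_LEN, f"expected {SEQ_LEN} frames, got {seq.shape[0]}"
    return seq


def load_sign(sign, roots=("MP_Data", "MP_Data_mobile")):
    """Gather every sample of one sign across both camera subsets."""
    samples = []
    for root in roots:
        sign_dir = Path(root) / sign
        if sign_dir.is_dir():
            samples += [load_sequence(d) for d in sorted(sign_dir.iterdir()) if d.is_dir()]
    return np.stack(samples)  # (n_samples, 60, n_features)
```

Training on the concatenation of both subsets is what gives the cross-camera robustness described above.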

## Motivation & Rationale
Most existing PSL datasets are limited to static signs or small vocabularies. We collected this word-level dataset specifically for the SignSpeak project.

- **Privacy:** No raw video recordings were saved, to protect participant identity. Only anonymized MediaPipe landmarks were kept.
- **Scale:** A large vocabulary (60+ words) moves beyond simple finger spelling.

## Usage
This data is ideal for training sequence-based architectures such as **LSTMs, GRUs, or Transformers** for temporal classification. The primary use case is building real-time sign language translation tools for the Deaf community in Pakistan.
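
As one of the architectures mentioned above, an LSTM classifier for these sequences could be sketched as follows. This is a minimal PyTorch illustration, not the SignSpeak training code; the feature width of 126 (2 hands × 21 landmarks × 3 coordinates) and the class count of 60 are assumptions chosen to match the figures in this card:

```python
import torch
import torch.nn as nn


class SignClassifier(nn.Module):
    """LSTM over (batch, 60, n_features) landmark sequences -> class logits."""

    def __init__(self, n_features=126, n_classes=60, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        out, _ = self.lstm(x)          # out: (batch, 60, hidden)
        return self.head(out[:, -1])   # classify from the last time step
```

Training would pair this with `nn.CrossEntropyLoss` over integer sign labels; GRU or Transformer variants follow the same input shape.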

### Loading the Data
For details on using the dataset, along with a sample loading and training script, please visit: https://www.kaggle.com/datasets/mohib123456/dynamic-word-level-pakistan-sign-language-dataset/data

## Authors
This dataset was curated by the **SignSpeak** team as part of our Final Year Project at **COMSATS University Islamabad, Abbottabad Campus**.

- **Main Author:** Mohib Ullah Khan Sherwani
- **Repository:** https://github.com/MohibUllahKhanSherwani/SignSpeak-FYP

## License
This dataset is released under the **Apache License 2.0**. You are free to use, modify, and distribute this data for both research and commercial purposes, provided that attribution is given to the authors.