Malaysian Sign Language (MSL) Dataset / Bahasa Isyarat Malaysia (BIM) Dataset

Refer to this code repository for instructions on how to use the dataset.

Disclaimer

This dataset was collected by students from the Faculty of Computer Science & Information Technology, Universiti Malaya, for a coursework project (WQF7006 Computer Vision & Image Processing). The raw data is available on Google Drive. I am not the owner/author of the raw dataset (video/), but I have processed and organized it for research purposes (features/, tensors/).

Contributions are welcome! Please feel free to open an issue or submit a pull request to contribute more extracted features or tensors.

Overview

This repository provides tools to:

  1. Extract features from raw video files, using MediaPipe Holistic to obtain pose and hand landmarks
  2. Build tensors from extracted features for machine learning model training

The extracted features are 258-dimensional vectors containing:

  • Pose landmarks: 33 points × 4 values (x, y, z, visibility) = 132 dimensions
  • Left hand landmarks: 21 points × 3 values (x, y, z) = 63 dimensions
  • Right hand landmarks: 21 points × 3 values (x, y, z) = 63 dimensions
  • Total: 258 dimensions
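
For illustration, a per-frame vector with this layout can be assembled from MediaPipe Holistic results roughly as follows (a minimal sketch, not the repository's actual code; `results` is the object returned by `Holistic.process()`):

import numpy as np

def frame_to_feature(results):
    """Flatten MediaPipe Holistic landmarks into one 258-dim vector.

    Missing landmark groups are zero-filled so the layout stays fixed:
    pose (33 x 4) + left hand (21 x 3) + right hand (21 x 3) = 258.
    """
    if results.pose_landmarks:
        pose = np.array([[lm.x, lm.y, lm.z, lm.visibility]
                         for lm in results.pose_landmarks.landmark]).flatten()
    else:
        pose = np.zeros(33 * 4)
    if results.left_hand_landmarks:
        lh = np.array([[lm.x, lm.y, lm.z]
                       for lm in results.left_hand_landmarks.landmark]).flatten()
    else:
        lh = np.zeros(21 * 3)
    if results.right_hand_landmarks:
        rh = np.array([[lm.x, lm.y, lm.z]
                       for lm in results.right_hand_landmarks.landmark]).flatten()
    else:
        rh = np.zeros(21 * 3)
    return np.concatenate([pose, lh, rh])  # shape: (258,)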

Dataset Structure

The dataset follows this structure:

BIM Dataset V3/
    ├── video/                          # Raw video files (organized by gloss/sign)
    │   ├── abang/
    │   │   ├── abang_1_1_1.mp4
    │   │   ├── ...
    │   │   └── abang_4_6_3.mp4
    │   ├── ada/
    │   ├── ...
    │   └── ambil/
    │
    ├── features/                       # Extracted MediaPipe features
    │   ├── first_30/                   # First 30 frames with hand landmarks
    │   │   ├── abang/
    │   │   │   ├── abang_1_1_1.npy    # Shape: (30, 258) - all frames in one file
    │   │   │   ├── abang_4_6_3.npy    # Shape: (22, 258) - may have fewer than 30 frames; no padding applied
    │   │   │   └── ...
    │   │   ├── ada/
    │   │   └── ...
    │   │
    │   └── uniform_30/                # Uniformly sampled 30 frames
    │       ├── abang/
    │       │   ├── abang_1_1_1.npy    # Shape: (30, 258) - all frames in one file
    │       │   ├── abang_4_6_3.npy    # Shape: (22, 258) - may have fewer than 30 frames; no padding applied
    │       │   └── ...
    │       └── ...
    │
    └── tensors/                        # Processed tensors for ML training
        ├── first_30/
        │   ├── X.npy                   # Full dataset features (N, T, D); T is the specified number of frames (30 by default); padding applied here
        │   ├── y.npy                   # Full dataset labels (N,)
        │   ├── X_train.npy             # Training features
        │   ├── y_train.npy             # Training labels
        │   ├── X_test.npy              # Test features
        │   ├── y_test.npy              # Test labels
        │   └── label_map.json          # Gloss to label index mapping
        │
        └── uniform_30/
            └── [same structure as first_30/]

Where:

  • N: Number of videos
  • T: Number of frames per video (30)
  • D: Feature dimension (258)

Feature Extraction

Overview

The msl-extract-feats script runs MediaPipe Holistic on video files, processing them frame by frame to extract:

  • Pose landmarks: 33 points with x, y, z coordinates and visibility
  • Hand landmarks: 21 points per hand (left and right) with x, y, z coordinates

Only frames with detected hand landmarks are saved, which keeps the extracted features meaningful for sign language recognition.
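
A rough sketch of how this filtering might look for the first sampling strategy, reusing the hypothetical `frame_to_feature` helper above (the actual script may differ, e.g. in how workers are handled):

import cv2
import mediapipe as mp
import numpy as np

def extract_first_n(video_path, num_frames=30):
    """Collect features from the first num_frames frames with a detected hand."""
    features = []
    cap = cv2.VideoCapture(video_path)
    with mp.solutions.holistic.Holistic() as holistic:
        while len(features) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break  # video ended before enough valid frames were found
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # Keep the frame only if at least one hand was detected
            if results.left_hand_landmarks or results.right_hand_landmarks:
                features.append(frame_to_feature(results))
    cap.release()
    return np.array(features)  # shape: (<=num_frames, 258); no padding here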

Usage

Extract First 30 Frames

Extracts the first 30 frames that contain hand landmarks from each video:

uv run msl-extract-feats \
    --video-root "BIM Dataset V3/video" \
    --output-root "BIM Dataset V3/features" \
    --sampling first \
    --num-frames 30 \
    --num-workers 4 \
    --gloss hi beli pukul nasi_lemak lemak kereta nasi marah anak_lelaki baik jangan apa_khabar main pinjam buat ribut pandai_2 emak_saudara jahat panas assalamualaikum lelaki bomba emak sejuk masalah beli_2 panas_2 perempuan bagaimana

# If you've built the package, you can run: `msl-extract-feats` without `uv run`

Extract Uniformly Sampled 30 Frames

Extracts 30 frames uniformly sampled across the entire video:

uv run msl-extract-feats \
    --video-root "BIM Dataset V3/video" \
    --output-root "BIM Dataset V3/features" \
    --sampling uniform \
    --num-frames 30 \
    --num-workers 4 \
    --gloss hi beli pukul nasi_lemak lemak kereta nasi marah anak_lelaki baik jangan apa_khabar main pinjam buat ribut pandai_2 emak_saudara jahat panas assalamualaikum lelaki bomba emak sejuk masalah beli_2 panas_2 perempuan bagaimana

# If you've built the package, you can run: `msl-extract-feats` without `uv run`
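
One plausible way to compute the uniformly spaced frame indices (an assumption about the implementation, not code taken from the script):

import numpy as np

total_frames = 90  # hypothetical frame count for one video
num_frames = 30

# Evenly spaced indices across the whole video, endpoints included
indices = np.linspace(0, total_frames - 1, num_frames).astype(int)
print(indices[:5])  # e.g. [0 3 6 9 12]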

Output

Features are saved as single .npy files (one per video) in the following structure:

features/{sampling}_{num_frames}/
    {gloss}/
        {video_name}.npy    # Shape: (num_frames, 258)

Each .npy file contains all frames for a video as a NumPy array with shape (num_frames, 258), where:

  • First dimension: number of frames (30 by default, or fewer if the video doesn't have enough frames with valid landmarks)
  • Second dimension: feature vector dimension (258)
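
For example, to inspect one extracted feature file (file names as in the structure above):

import numpy as np

feats = np.load("BIM Dataset V3/features/first_30/abang/abang_1_1_1.npy")
print(feats.shape)   # (30, 258), or fewer rows for short videos
print(feats[0, :4])  # x, y, z, visibility of the first pose landmark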

Building Tensors

Overview

The msl-build-tensors script converts extracted features into NumPy tensors suitable for machine learning model training. It:

  1. Loads all feature files from the specified directory
  2. Pads sequences to a fixed length (30 by default)
  3. Creates feature matrix X (N, T, D) and label vector y (N,)
  4. Optionally performs stratified train/test split
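
The padding and split steps could look roughly like this (a sketch using stand-in data and assuming scikit-learn's `train_test_split`, not the script's actual code):

import numpy as np
from sklearn.model_selection import train_test_split

def pad_sequence(seq, num_frames=30, dim=258):
    """Zero-pad (or truncate) a (t, dim) array to exactly (num_frames, dim)."""
    padded = np.zeros((num_frames, dim), dtype=seq.dtype)
    t = min(len(seq), num_frames)
    padded[:t] = seq[:t]
    return padded

# Hypothetical stand-in for features gathered from features/{sampling}_{num_frames}/
rng = np.random.default_rng(42)
sequences = [rng.random((int(rng.integers(20, 31)), 258)) for _ in range(20)]
y = np.repeat([0, 1], 10)  # two classes, ten videos each

X = np.stack([pad_sequence(s) for s in sequences])  # (N, T, D)

# Stratified split keeps class proportions equal in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42)
print(X_train.shape, X_test.shape)  # (18, 30, 258) (2, 30, 258)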

Usage

Build Tensors from First 30 Features

uv run msl-build-tensors \
    --features-root "BIM Dataset V3/features/first_30" \
    --output-root "BIM Dataset V3/tensors/first_30" \
    --num-frames 30 \
    --split \
    --test-size 0.1 \
    --seed 42

# If you've built the package, you can run: `msl-build-tensors` without `uv run`

Build Tensors from Uniform 30 Features

uv run msl-build-tensors \
    --features-root "BIM Dataset V3/features/uniform_30" \
    --output-root "BIM Dataset V3/tensors/uniform_30" \
    --num-frames 30 \
    --split \
    --test-size 0.1 \
    --seed 42

# If you've built the package, you can run: `msl-build-tensors` without `uv run`

Output

The script generates the following files:

  • X.npy: Full feature tensor of shape (N, T, D)
  • y.npy: Full label vector of shape (N,)
  • label_map.json: Mapping from gloss names to label indices
  • X_train.npy, y_train.npy: Training set (if --split is used)
  • X_test.npy, y_test.npy: Test set (if --split is used)

Example: Loading Tensors

import numpy as np
import json

# Load tensors
X = np.load("BIM Dataset V3/tensors/first_30/X.npy")
y = np.load("BIM Dataset V3/tensors/first_30/y.npy")

# Load label mapping
with open("BIM Dataset V3/tensors/first_30/label_map.json") as f:
    label_map = json.load(f)

# Reverse mapping: label index -> gloss name
idx_to_gloss = {v: k for k, v in label_map.items()}

print(f"Dataset shape: {X.shape}")  # (N, 30, 258)
print(f"Labels shape: {y.shape}")   # (N,)
print(f"Number of classes: {len(label_map)}")