---
task_categories:
- translation
language:
- en
---
## Information

- Language: English
- The dataset contains both RGB (frontal and side view) and keypoint (frontal view only) data. However, translation text is only available for the frontal-view RGB data, so this repo only supports that subset.
- Gloss annotations are not currently available.
- Storage
  - RGB
    - Train: 30.7 GB
    - Validation: 1.65 GB
    - Test: 2.24 GB

## Structure

Each sample has the following structure:
```
{
    "id": <id-of-sample>,
    "type": <rgb-or-keypoints-data>,
    "view": <frontal-or-side-view>,
    "text": <translation-of-sample-in-spoken-language>,
    "video": <video-in-bytes>,
}
```
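
As a quick sanity check, you can stream a single sample and inspect these fields directly. This is a minimal sketch; it assumes the fields come back as plain Python values, with `video` as raw bytes:

```python
from datasets import load_dataset

# Stream the test split so nothing is downloaded up front
dataset = load_dataset("VieSignLang/how2sign-clips", split="test", streaming=True)
sample = next(iter(dataset))

# Fields follow the structure above
print(sample["id"], sample["type"], sample["view"])
print(sample["text"])                   # spoken-language translation
print(f"{len(sample['video'])} bytes")  # raw encoded video
```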

## How To Use

Because the returned video is raw bytes, here is one way to extract the frames and the FPS:
```python
# pip install av
import io

import av
import numpy as np
from datasets import load_dataset


def extract_frames(video_bytes):
    # Open the video from an in-memory buffer
    container = av.open(io.BytesIO(video_bytes))
    # Find the video stream
    visual_stream = next(iter(container.streams.video), None)
    if visual_stream is None:
        raise ValueError("No video stream found")
    # Extract video properties
    video_fps = visual_stream.average_rate
    # Decode each frame into an H x W x 3 array
    frames = []
    for packet in container.demux([visual_stream]):
        for frame in packet.decode():
            frames.append(np.array(frame.to_image()))
    # Stack into a single (num_frames, H, W, 3) array
    return np.stack(frames), video_fps


dataset = load_dataset("VieSignLang/how2sign-clips", split="test", streaming=True)
sample = next(iter(dataset))["video"]
frames, video_fps = extract_frames(sample)
print(f"Number of frames: {frames.shape[0]}")
print(f"Video FPS: {video_fps}")
```