metadata
task_categories:
- translation
language:
- de
Information
- Language: German
- The dataset contains only frontal-view RGB data.
- Gloss annotations are available.
- Storage
  - RGB
    - Train: 632 MB
    - Development: 42.8 MB
    - Test: 49.8 MB
Structure
Each sample has the following structure:
{
'name': Value(dtype='string', id=None),
'video': Value(dtype='large_binary', id=None),
'start': Value(dtype='int8', id=None),
'end': Value(dtype='int8', id=None),
'speaker': Value(dtype='string', id=None),
'orth': Value(dtype='string', id=None),
'translation': Value(dtype='string', id=None)
}
An example:
{
'name': '06October_2012_Saturday_tagesschau-8730',
'video': <video-bytes>,
'start': -1,
'end': -1,
'speaker': 'Signer08',
'orth': 'MORGEN DEUTSCH LAND IX TIEF KOMMEN KUEHL KOMMEN',
'translation': 'ein tiefausläufer bringt morgen vor allem dem süden deutschlands noch regen',
}
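For a quick look at the annotation fields, here is a minimal sketch (using streaming so that no full split has to be downloaded) that prints everything except the raw video bytes of the first test sample; the field names match the schema above.

import io
from datasets import load_dataset

# Stream the test split so only the first sample is fetched
dataset = load_dataset("VieSignLang/phoenix14-t", split="test", streaming=True)
sample = next(iter(dataset))

# Print every field except the raw video bytes
for key, value in sample.items():
    if key != "video":
        print(f"{key}: {value}")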
How To Use
Because the video field is returned as raw bytes, here is one way to extract the frames and the FPS:
# pip install av
import av
import io
import numpy as np
from datasets import load_dataset

def extract_frames(video_bytes):
    # Wrap the raw bytes in an in-memory file object and open it with PyAV
    container = av.open(io.BytesIO(video_bytes))
    # Grab the video stream
    visual_stream = next(iter(container.streams.video), None)
    # Average frame rate of the stream
    video_fps = float(visual_stream.average_rate)
    # Collect the decoded frames
    frames = []
    for packet in container.demux([visual_stream]):
        for frame in packet.decode():
            frames.append(np.array(frame.to_image()))
    # Stack into a single (num_frames, height, width, 3) array
    return np.stack(frames), video_fps

dataset = load_dataset("VieSignLang/phoenix14-t", split="test", streaming=True)
sample = next(iter(dataset))["video"]
frames, video_fps = extract_frames(sample)
print(f"Number of frames: {frames.shape[0]}")
print(f"Video FPS: {video_fps}")