---
task_categories:
- translation
language:
- en
---
# Information
* Language: English
* The dataset contains both RGB (frontal and side views) and keypoint (frontal view only) data. However, translation text is only available for the frontal-view RGB data, so this repo only supports that subset.
* Gloss annotations are not currently available.
* Storage (see the streaming sketch below to avoid a full download)
  * RGB
    * Train: 30.7 GB
    * Validation: 1.65 GB
    * Test: 2.24 GB
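Given the sizes above, it may be more practical to stream a split than to download it entirely. This is standard `datasets` streaming behavior, not anything specific to this repo; a minimal sketch:
```python
from datasets import load_dataset

# Stream the validation split (~1.65 GB) without materializing it on disk
ds = load_dataset("VieSignLang/how2sign-clips", split="validation", streaming=True)

# Samples are fetched lazily, one at a time
for sample in ds:
    print(sample["SENTENCE"])
    break
```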
# Structure
Each sample has the following structure:
```
{
    'VIDEO_ID': Value(dtype='string', id=None),
    'VIDEO_NAME': Value(dtype='string', id=None),
    'SENTENCE_ID': Value(dtype='string', id=None),
    'SENTENCE_NAME': Value(dtype='string', id=None),
    'START_REALIGNED': Value(dtype='float64', id=None),
    'END_REALIGNED': Value(dtype='float64', id=None),
    'SENTENCE': Value(dtype='string', id=None),
    'VIDEO': Value(dtype='large_binary', id=None)
}
```
For example:
```
{
    'VIDEO_ID': '--7E2sU6zP4',
    'VIDEO_NAME': '--7E2sU6zP4-5-rgb_front',
    'SENTENCE_ID': '--7E2sU6zP4_10',
    'SENTENCE_NAME': '--7E2sU6zP4_10-5-rgb_front',
    'START_REALIGNED': 129.06,
    'END_REALIGNED': 142.48,
    'SENTENCE': "And I call them decorative elements because basically all they're meant to do is to enrich and color the page.",
    'VIDEO': <video-bytes>
}
```
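As a quick illustration of these fields (a minimal sketch; the `.mp4` extension below is an assumption about the clip container format):
```python
from datasets import load_dataset

ds = load_dataset("VieSignLang/how2sign-clips", split="test", streaming=True)
sample = next(iter(ds))

# START_REALIGNED / END_REALIGNED are clip boundaries in seconds
duration = sample["END_REALIGNED"] - sample["START_REALIGNED"]
print(f"{sample['SENTENCE_NAME']}: {duration:.2f} s")

# VIDEO holds the raw clip bytes, so it can be written straight to disk;
# the .mp4 extension here is an assumption, adjust it if the container differs
with open(f"{sample['SENTENCE_NAME']}.mp4", "wb") as f:
    f.write(sample["VIDEO"])
```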
# How To Use
Because the `VIDEO` field is returned as raw bytes, here is one way to decode it into frames and read the FPS with PyAV:
```python
# pip install av
import io

import av
import numpy as np
from datasets import load_dataset


def extract_frames(video_bytes):
    # Open the video from an in-memory buffer instead of a file on disk
    container = av.open(io.BytesIO(video_bytes))

    # Find the first video stream
    visual_stream = next(iter(container.streams.video), None)
    if visual_stream is None:
        raise ValueError("No video stream found in the given bytes")

    # Average frame rate of the stream (a Fraction, e.g. 30000/1001)
    video_fps = visual_stream.average_rate

    # Decode every frame into an RGB numpy array
    frames = []
    for packet in container.demux([visual_stream]):
        for frame in packet.decode():
            frames.append(np.array(frame.to_image()))

    # Stack into a single (num_frames, height, width, 3) array
    return np.stack(frames), video_fps


dataset = load_dataset("VieSignLang/how2sign-clips", split="test", streaming=True)
sample = next(iter(dataset))["VIDEO"]
frames, video_fps = extract_frames(sample)
print(f"Number of frames: {frames.shape[0]}")
print(f"Video FPS: {video_fps}")
```
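Continuing from the snippet above: `average_rate` is a `fractions.Fraction`, so cast it to `float` for arithmetic, and each decoded frame is a plain RGB array that Pillow can save (the filename below is just an example):
```python
from PIL import Image

# average_rate is a Fraction such as 30000/1001; cast it for arithmetic
fps = float(video_fps)
print(f"Clip length: {frames.shape[0] / fps:.2f} s")

# Save the first decoded frame as a PNG (the filename is illustrative)
Image.fromarray(frames[0]).save("first_frame.png")
```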