---
license: cc
pretty_name: Neural Evolution for eXtensible Universal Semantics Dataset
task_categories:
  - automatic-speech-recognition
  - image-to-text
  - image-text-to-text
  - audio-text-to-text
  - feature-extraction
  - video-classification
  - video-text-to-text
language:
  - en
size_categories:
  - 10M<n<100M
tags:
  - multimodal
  - audio
  - image
  - text
  - time-series
  - video
configs:
  - config_name: slices
    data_files:
      - split: train
        path: slices-*.parquet
  - config_name: moments
    data_files:
      - split: train
        path: moments-*.parquet
  - config_name: seconds
    data_files:
      - split: train
        path: seconds-*.parquet
  - config_name: experiences
    data_files:
      - split: train
        path: experiences-*.parquet
  - config_name: minutes
    data_files:
      - split: train
        path: minutes-*.parquet
  - config_name: frames
    data_files:
      - split: train
        path: frames-*.parquet
  - config_name: meta
    data_files:
      - split: train
        path: meta-*.parquet
dataset_info:
  dataset_size: 547608330240
---

NEXUS: Neural Evolution for eXtensible Universal Semantics Dataset

(Temporal Multimodal Slices)

This dataset is a multi-modal, hierarchical, temporal representation derived from HuggingFaceFV/finevideo. It is designed for streaming training where the primary unit is a 10 ms "slice" that aggregates upward into moments (100 ms), seconds (1 s), experiences (10 s), and minutes (60 s).

It is meant to represent an extensible stream of "experience": there are placeholders for many other modalities, as well as a catch-all "data" key.

Visual data is stored as per-frame JPEG bytes (which can be seen as image or video), and audio is stored as PCM16 bytes in 10 ms chunks.

The Montreal Forced Aligner was used to produce time-aligned orthographic transcriptions: phonemes at the moment level and words at the second level. The original text transcription was checked for errors and time-aligned into "statements" at the experience level.

Each data type is positioned at the appropriate temporal level (e.g., phonemes at moments, words at seconds, gestures at experiences, actions at minutes).
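
Because the levels nest by fixed factors (10 slices per moment, 10 moments per second, 10 seconds per experience, 6 experiences per minute), a slice's parent indices can be derived arithmetically. A minimal sketch for orientation (parent_indices is an illustrative helper, not part of the dataset; the exported tables also store these indices explicitly):

def parent_indices(slice_idx: int) -> dict:
    """Derive parent-level indices for a 10 ms slice, assuming fixed nesting."""
    return {
        "moment_idx": slice_idx // 10,        # 100 ms = 10 slices
        "second_idx": slice_idx // 100,       # 1 s = 100 slices
        "experience_idx": slice_idx // 1000,  # 10 s = 1,000 slices
        "minute_idx": slice_idx // 6000,      # 60 s = 6,000 slices
    }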

This is a working dataset and will keep changing and filling out as I adapt it to my needs and begin taking in real data from hardware currently in development. Currently only ~2k of the 10k source videos have been converted.

Quickstart

To stream by video with all modalities grouped together (slices, moments, seconds, experiences, minutes, frames, and meta), use a helper script like the one below:

from itertools import groupby
from datasets import load_dataset

DATASET = "Ardea/NEXUS-temporal_hierarchical_multi-modal"
TABLES = ["slices", "moments", "seconds", "experiences", "minutes", "frames"]

def group_by_video(rows):
    """Group consecutive rows sharing a video_id (tables are ordered by video)."""
    for video_id, group in groupby(rows, key=lambda r: r["video_id"]):
        yield video_id, list(group)

def stream_by_video():
    # One streaming iterator per table, each grouped into (video_id, rows) pairs.
    table_iters = {
        name: group_by_video(
            iter(load_dataset(DATASET, name, split="train", streaming=True))
        )
        for name in TABLES
    }
    table_heads = {name: next(it, None) for name, it in table_iters.items()}
    meta = load_dataset(DATASET, "meta", split="train", streaming=True)

    # meta drives iteration; this assumes every table lists videos in the
    # same order as meta.
    for meta_row in meta:
        video_id = meta_row["video_id"]
        payload = {"video_id": video_id, "meta": meta_row}
        for name, group_iter in table_iters.items():
            head = table_heads[name]
            # Skip groups until this table catches up to the current video.
            while head and head[0] != video_id:
                head = next(group_iter, None)
            if head and head[0] == video_id:
                payload[name] = head[1]
                table_heads[name] = next(group_iter, None)
            else:
                payload[name] = []
                table_heads[name] = head
        yield payload

example = next(stream_by_video())
print(example["video_id"], len(example["slices"]), len(example["frames"]))

Each list is sorted by its per-level index for temporal order.

Summary

  • Source: derived from HuggingFaceFV/finevideo (YouTube-origin content)
  • Modalities: audio (stereo PCM16), visual/video frames (JPEG bytes), phonemes (moments), text (seconds and experiences), metadata
  • Time base: all timestamps are in milliseconds
  • Primary streaming unit: 10 ms slices
  • Additional future modalities (see the TemporalSlice dataclass below):
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional


class TemporalLevel(Enum):
    # Reconstructed here so the snippet is self-contained;
    # the exact values in the source code may differ.
    SLICE = "slice"
    MOMENT = "moment"
    SECOND = "second"
    EXPERIENCE = "experience"
    MINUTE = "minute"


@dataclass
class TemporalPlanck:
    """
    A chunk of temporal multimodal data at some granularity.

    The granularity is implicit in the length/duration; for v1 we store it explicitly.
    """

    id: str  # timestamp in epoch ms plus `_<level>`
    level: TemporalLevel
    parent: Optional[str] = None
    slices: Optional[List[str]] = field(default_factory=list)

    meta: Dict[str, Any] = field(
        default_factory=dict
    )  # metadata / stats for evolution, not encoded


@dataclass
class TemporalSlice(TemporalPlanck):
    level: TemporalLevel = TemporalLevel.SLICE
    text: Optional[str] = None
    audio_l: Optional[int] = None  # Parquet row idx for 10ms PCM16 chunk
    audio_r: Optional[int] = None  # Parquet row idx for 10ms PCM16 chunk
    visual: Optional[int] = None  # Parquet row idx for frame reference
    imu: Optional[List[List[float]]] = None
    gps: Optional[tuple[float, float, float]] = None  # lat, lon, alt
    temp: Optional[float] = None
    humidity: Optional[float] = None
    baro: Optional[float] = None
    lidar: Optional[str] = None  # Raw lidar (type TBD; str is a placeholder)
    ranges: Optional[List[float]] = None  # X, Y, Z vector and range
    screen: Optional[str] = None  # Raw screen image (type TBD; str is a placeholder)
    data: Optional[Dict[str, Any]] = None  # For unknown extensibility

Stats (current export)

  • Videos: 1,999
  • Duration ms (min/mean/max): 19,000 / 281,259 / 658,000
  • Total size: 541,565,728,481 bytes (approx 541.6 GB)

Row counts:

  • slices: 57,246,000
  • moments: 5,724,600
  • seconds: 572,460
  • experiences: 57,246
  • minutes: 10,317
  • frames: 15,781,125
  • meta: 1,999

Dataset structure

All data is stored in Parquet shards:

slices-00000-of-000NN.parquet
moments-00000-of-000NN.parquet
seconds-00000-of-000NN.parquet
experiences-00000-of-000NN.parquet
minutes-00000-of-000NN.parquet
frames-00000-of-000NN.parquet
meta-00000-of-000NN.parquet

Each table uses video_id as the primary key to connect across tables. Index columns are 0-based within each video (e.g., slice_idx, moment_idx, frame_idx).
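
For example, a slice's frame can be looked up by joining on (video_id, frame_idx). A minimal streaming sketch, assuming every table lists videos in the same order:

from datasets import load_dataset

DATASET = "Ardea/NEXUS-temporal_hierarchical_multi-modal"

# Build an in-memory frame index for the first video only.
frames = load_dataset(DATASET, "frames", split="train", streaming=True)
frame_iter = iter(frames)
first = next(frame_iter)
video_id = first["video_id"]
frame_index = {first["frame_idx"]: first}
for f in frame_iter:
    if f["video_id"] != video_id:
        break
    frame_index[f["frame_idx"]] = f

# Attach the matching frame to each slice of that video.
slices = load_dataset(DATASET, "slices", split="train", streaming=True)
for s in slices:
    if s["video_id"] != video_id:
        break
    frame = frame_index.get(s["frame_idx"])  # None if the slice carries no frame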

slices (10 ms)

Core streaming unit. Use this table for training.

Key fields:

  • video_id, slice_idx, start_ms
  • audio_l_pcm16, audio_r_pcm16: 320-byte PCM16 chunks (16 kHz, 10 ms; see the stereo sketch after this list)
  • frame_idx: points to frames.frame_idx for the same video_id
  • moment_idx, second_idx, experience_idx, minute_idx
  • is_video_start, is_video_end
  • Optional sensors: imu, gps, temp, humidity, baro, lidar, ranges, screen, data
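
To reconstruct a contiguous stereo waveform from the per-slice chunks, concatenate the channel bytes and stack them. A minimal sketch, assuming the rows are consumed in slice_idx order:

import numpy as np

def slices_to_stereo(slice_rows):
    """Stack 10 ms PCM16 chunks into a (num_samples, 2) int16 array at 16 kHz."""
    left = np.frombuffer(b"".join(r["audio_l_pcm16"] for r in slice_rows), dtype="<i2")
    right = np.frombuffer(b"".join(r["audio_r_pcm16"] for r in slice_rows), dtype="<i2")
    return np.stack([left, right], axis=-1)  # 160 samples per slice per channel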

moments (100 ms)

  • video_id, moment_idx, start_ms, end_ms
  • slice_start_idx, slice_end_idx
  • phoneme (nullable)

seconds (1 s)

  • video_id, second_idx, start_ms, end_ms
  • moment_start_idx, moment_end_idx
  • words: list of word tokens aligned to the second

experiences (10 s)

  • video_id, experience_idx, start_ms, end_ms
  • second_start_idx, second_end_idx
  • statements: list of text segments for the 10 s window
  • gestures: list of gesture tokens (nullable)

minutes (60 s)

  • video_id, minute_idx, start_ms, end_ms
  • experience_start_idx, experience_end_idx
  • actions: list of action tokens (nullable)
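
The *_start_idx / *_end_idx columns let any row be expanded into its children. For example, gathering the words spoken during one experience; a sketch that assumes the end index is inclusive (drop the + 1 if it turns out to be exclusive) and that seconds_rows holds one video's seconds rows in second_idx order:

def words_in_experience(experience_row, seconds_rows):
    """Collect word tokens from the seconds covered by one experience."""
    start = experience_row["second_start_idx"]
    end = experience_row["second_end_idx"]  # assumed inclusive
    words = []
    for sec in seconds_rows[start:end + 1]:
        words.extend(sec["words"] or [])  # words may be null for silent seconds
    return words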

frames

  • video_id, frame_idx, frame_time_ms
  • image: struct with {bytes, path} where bytes are JPEG bytes and path is null

meta

Top-level metadata from the source dataset. Non-scalar values are stored as strings.

Key fields:

  • video_id, duration_ms, resolution
  • Content metadata: content_parent_category, content_fine_category, content_metadata
  • YouTube metadata: youtube_title, youtube_description, youtube_channel, youtube_categories, youtube_tags, youtube_upload_date, etc.

Streaming usage

Slices are ordered by (video_id, slice_idx) in each shard, so you can stream them in order. Use is_video_start / is_video_end or video_id changes to detect boundaries. For multi-modal by-video streaming, use the Quickstart snippet.

from datasets import load_dataset

ds = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "slices",
    split="train",
    streaming=True,
)

# Stream slices from the first video, stopping at the 10-minute mark
current_video = None
for row in ds:
    if current_video is None:
        current_video = row["video_id"]
    if row["video_id"] != current_video:
        break  # reached the next video
    if row["start_ms"] >= 10 * 60 * 1000:
        break  # reached 10 minutes
    # ... process row here ...

Decoding examples

Decode audio:

from datasets import load_dataset
import numpy as np

ds = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "slices",
    split="train",
    streaming=True,
)
row = next(iter(ds))

pcm = row["audio_l_pcm16"]  # bytes: 16 kHz PCM16, 320 bytes = 160 samples per slice
samples = np.frombuffer(pcm, dtype="<i2")  # little-endian int16
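
To save a stretch of audio for listening, the chunks can be written out with the standard-library wave module. A minimal sketch reusing ds from the snippet above (mono, left channel only; the 1,000-slice cutoff is arbitrary, roughly the first 10 seconds):

import wave
from itertools import islice

with wave.open("preview.wav", "wb") as wav:
    wav.setnchannels(1)      # left channel only
    wav.setsampwidth(2)      # PCM16
    wav.setframerate(16000)  # 16 kHz
    for r in islice(ds, 1000):
        wav.writeframes(r["audio_l_pcm16"])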

Decode frames as images:

from datasets import load_dataset

frames = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "frames",
    split="train",
    streaming=True,
)
frame = next(iter(frames))
image = frame["image"]  # PIL.Image.Image when the Image feature is active
# If the column arrives as a raw {bytes, path} struct instead, decode manually:
# import io; from PIL import Image
# image = Image.open(io.BytesIO(frame["image"]["bytes"]))

Intended use

  • Streaming temporal modeling
  • Multimodal alignment research
  • Hierarchical sequence modeling

Limitations

  • Derived from YouTube content; metadata and transcription quality depend on the source dataset and Montreal Forced Aligner
  • Audio and frames are stored independently; use video_id and indices to align.

License and attribution

This dataset is derived from HuggingFaceFV/finevideo. Please follow the original dataset license and YouTube content terms when using or redistributing this dataset.