---
license: cc
pretty_name: Neuro Evolution for eXtensible Universal Semantics Dataset
task_categories:
- automatic-speech-recognition
- image-to-text
- image-text-to-text
- audio-text-to-text
- feature-extraction
- video-classification
- video-text-to-text
language:
- en
size_categories:
- 10M<n<100M
---

# NEXUS: Neuro Evolution for eXtensible Universal Semantics

A temporal, hierarchical, multi-modal dataset: 10 ms slices aggregate into 100 ms moments, 1 s seconds, 10 s experiences, and 60 s minutes, with aligned audio, frames, text, and optional sensor channels.

## Temporal hierarchy

The hierarchy is modeled with the dataclasses below. (The imports, `TemporalLevel` values, and the `TemporalPlanck` header are inferred from the five levels documented under Dataset structure; they were lost in an earlier export of this card.)

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Dict, List, Optional


class TemporalLevel(Enum):
    SLICE = "slice"            # 10 ms
    MOMENT = "moment"          # 100 ms
    SECOND = "second"          # 1 s
    EXPERIENCE = "experience"  # 10 s
    MINUTE = "minute"          # 60 s


@dataclass
class TemporalPlanck:
    level: TemporalLevel
    parent: Optional[str] = None
    slices: Optional[List[str]] = field(default_factory=list)
    meta: Dict[str, Any] = field(default_factory=dict)  # metadata / stats for evolution, not encoded


@dataclass
class TemporalSlice(TemporalPlanck):
    level: TemporalLevel = TemporalLevel.SLICE
    text: Optional[str] = None
    audio_l: Optional[int] = None  # Parquet row idx for 10 ms PCM16 chunk
    audio_r: Optional[int] = None  # Parquet row idx for 10 ms PCM16 chunk
    visual: Optional[int] = None   # Parquet row idx for frame reference
    imu: Optional[List[List[float]]] = None
    gps: Optional[tuple[float, float, float]] = None  # lat, lon, alt
    temp: Optional[float] = None
    humidity: Optional[float] = None
    baro: Optional[float] = None
    lidar: Optional[str] = None    # Raw lidar (type TBD; str is a placeholder)
    ranges: Optional[List[float]] = None  # X, Y, Z vector and range
    screen: Optional[str] = None   # Raw screen image (type TBD; str is a placeholder)
    data: Optional[Dict[str, Any]] = None  # For unknown extensibility
```

## Stats (current export)

- Videos: 1,999
- Duration ms (min/mean/max): 19,000 / 281,259 / 658,000
- Total size: 541,565,728,481 bytes (approx. 541.6 GB)

Row counts:

- slices: 57,246,000
- moments: 5,724,600
- seconds: 572,460
- experiences: 57,246
- minutes: 10,317
- frames: 15,781,125
- meta: 1,999

## Dataset structure

All data is stored in Parquet shards:

```
slices-00000-of-000NN.parquet
moments-00000-of-000NN.parquet
seconds-00000-of-000NN.parquet
experiences-00000-of-000NN.parquet
minutes-00000-of-000NN.parquet
frames-00000-of-000NN.parquet
meta-00000-of-000NN.parquet
```

Each table uses `video_id` as the primary key to connect across tables. Index columns are 0-based within each video (e.g., `slice_idx`, `moment_idx`, `frame_idx`). A join example follows the table descriptions below.

### slices (10 ms)

Core streaming unit. Use this table for training. Key fields:

- `video_id`, `slice_idx`, `start_ms`
- `audio_l_pcm16`, `audio_r_pcm16`: 320-byte PCM16 chunks (16 kHz, 10 ms)
- `frame_idx`: points to `frames.frame_idx` for the same `video_id`
- `moment_idx`, `second_idx`, `experience_idx`, `minute_idx`
- `is_video_start`, `is_video_end`
- Optional sensors: `imu`, `gps`, `temp`, `humidity`, `baro`, `lidar`, `ranges`, `screen`, `data`

### moments (100 ms)

- `video_id`, `moment_idx`, `start_ms`, `end_ms`
- `slice_start_idx`, `slice_end_idx`
- `phoneme` (nullable)

### seconds (1 s)

- `video_id`, `second_idx`, `start_ms`, `end_ms`
- `moment_start_idx`, `moment_end_idx`
- `words`: list of word tokens aligned to the second

### experiences (10 s)

- `video_id`, `experience_idx`, `start_ms`, `end_ms`
- `second_start_idx`, `second_end_idx`
- `statements`: list of text segments for the 10 s window
- `gestures`: list of gesture tokens (nullable)

### minutes (60 s)

- `video_id`, `minute_idx`, `start_ms`, `end_ms`
- `experience_start_idx`, `experience_end_idx`
- `actions`: list of action tokens (nullable)

### frames

- `video_id`, `frame_idx`, `frame_time_ms`
- `image`: struct with `{bytes, path}` where `bytes` are JPEG bytes and `path` is null
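To make the cross-table linkage concrete, here is a minimal pandas sketch of the `(video_id, frame_idx)` join between slices and frames. It assumes a matching pair of shards has been downloaded locally (replace `NN` with the actual shard count); the join itself is an illustration of the documented keys, not part of the export.

```python
import pandas as pd

# Read one shard of each table (shard filenames as listed above).
slices = pd.read_parquet("slices-00000-of-000NN.parquet")
frames = pd.read_parquet("frames-00000-of-000NN.parquet")

# Each slice row carries the frame_idx of the frame visible during that
# 10 ms window, so (video_id, frame_idx) attaches the JPEG frame to it.
joined = slices.merge(
    frames[["video_id", "frame_idx", "frame_time_ms", "image"]],
    on=["video_id", "frame_idx"],
    how="left",
)
```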
### meta

Top-level metadata from the source dataset; non-scalar values are stored as strings. Key fields:

- `video_id`, `duration_ms`, `resolution`
- Content metadata: `content_parent_category`, `content_fine_category`, `content_metadata`
- YouTube metadata: `youtube_title`, `youtube_description`, `youtube_channel`, `youtube_categories`, `youtube_tags`, `youtube_upload_date`, etc.

## Streaming usage

Slices are ordered by `(video_id, slice_idx)` in each shard, so you can stream them in order. Use `is_video_start` / `is_video_end` or `video_id` changes to detect boundaries. For multi-modal by-video streaming, use the Quickstart snippet.

```python
from datasets import load_dataset

ds = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "slices",
    split="train",
    streaming=True,
)

# Stream the first 10 minutes of slices from the first video
current_video = None
for row in ds:
    if current_video is None:
        current_video = row["video_id"]
    if row["video_id"] != current_video:
        break  # reached the next video
    if row["start_ms"] >= 10 * 60 * 1000:
        break  # past the 10-minute mark
    # process `row` here (audio bytes, frame_idx, sensor fields, ...)
```

## Decoding examples

Decode audio:

```python
from datasets import load_dataset
import numpy as np

ds = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "slices",
    split="train",
    streaming=True,
)

row = next(iter(ds))
pcm = row["audio_l_pcm16"]  # bytes, 16 kHz PCM16
samples = np.frombuffer(pcm, dtype="int16")  # 160 samples (320 bytes / 2) per 10 ms slice
```
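Decode a frame. A minimal sketch: the `frames` config name is an assumption mirroring the `slices` config above, and it assumes Pillow is installed; if the `image` column is declared as a `datasets` Image feature, `row["image"]` may already arrive as a decoded PIL image.

```python
from io import BytesIO

from datasets import load_dataset
from PIL import Image

frames = load_dataset(
    "Ardea/NEXUS-temporal_hierarchical_multi-modal",
    "frames",  # assumed config name, mirroring "slices" above
    split="train",
    streaming=True,
)

row = next(iter(frames))
img = Image.open(BytesIO(row["image"]["bytes"]))  # `bytes` holds JPEG data; `path` is null
print(row["video_id"], row["frame_idx"], row["frame_time_ms"], img.size)
```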