EvenFlow Benchmark Dataset
EvenFlow is an evaluation suite for shared-space navigation, built from real-world human trajectory data.
Most benchmarks evaluate whether an agent can navigate around people.
EvenFlow evaluates whether an agent can navigate with them.
It converts real-world human trajectories into executable navigation tasks, enabling evaluation of coordination, timing, and interaction—not just collision avoidance.
Dataset Download
git clone https://huggingface.co/datasets/standard-cognition/EvenFlow
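Alternatively, the same files can be fetched with the Hugging Face Hub client. This is a minimal sketch assuming huggingface_hub is installed; the library chooses the local cache path unless you override it.

```python
# Download a local snapshot of the dataset repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="standard-cognition/EvenFlow",
    repo_type="dataset",
)
print(local_dir)  # path to the downloaded snapshot
```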
What This Dataset Provides
The dataset consists of:
- Layouts: static environment geometry
- Scenes: human trajectory data over time
- Tasks: executable navigation problems derived from real behavior
- Tracks: time-indexed human trajectories
These components are structured to support trajectory-level evaluation of navigation planners.
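As an illustration of the layout component, the sketch below reads one layout file and inspects its geometry. The field names (layout_id, units, boundary, obstacles, exits) follow the layout schema used in this dataset; the file path is a hypothetical placeholder.

```python
import json
from pathlib import Path

# Hypothetical path; substitute a real layout file from your local copy.
layout_path = Path("benchmark/aligned_flow/layouts/example_layout.json")
layout = json.loads(layout_path.read_text())

print(layout["layout_id"], layout["units"])
print("boundary vertices:", len(layout["boundary"]))          # outer boundary polygon
print("obstacles:", [o["id"] for o in layout["obstacles"]])   # static obstacle polygons
print("exits:", [e["id"] for e in layout["exits"]])           # exit regions
```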
Dataset Structure
benchmark/
  aligned_flow/
    tasks/
    scenes/
    layouts/
  cross_flow/
    tasks/
    scenes/
    layouts/
  interaction_constrained/
    tasks/
    scenes/
    layouts/
Each scenario family captures a different navigation regime:
- Aligned Flow: motion aligned with surrounding traffic
- Cross Flow: traversal across moving streams
- Interaction-Constrained: navigation shaped by local human interactions
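A short sketch of walking this directory tree and counting tasks per scenario family; the assumption that task files are JSON inside each tasks/ folder follows the usage notes below, so adjust the glob if the actual naming differs.

```python
from pathlib import Path

root = Path("benchmark")  # root of the cloned or downloaded dataset
families = ["aligned_flow", "cross_flow", "interaction_constrained"]

for family in families:
    # Each family keeps its executable tasks under tasks/.
    task_files = sorted((root / family / "tasks").glob("*.json"))
    print(f"{family}: {len(task_files)} task files")
```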
How to Use This Dataset
EvenFlow is designed for executable evaluation, not just analysis.
Typical workflow:
- Load a task (task.json)
- Resolve its scene and associated human trajectories
- Run a planner to generate a time-parameterized trajectory
- Evaluate the result using EvenFlow metrics
Code and evaluation tools:
👉 https://github.com/standard-ai/evenflow-benchmark
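The evaluation tooling lives in the repository above; the sketch below only illustrates the task-resolution half of the workflow. The fields read from the task (layout.path, tracking.path, window) follow the task schema shipped with this dataset, while the example filename and the way relative paths resolve are assumptions, and plan_trajectory / evaluate stand in for your planner and the EvenFlow metrics rather than APIs provided here.

```python
import json
from pathlib import Path

family_root = Path("benchmark/aligned_flow")

# 1. Load a task (hypothetical filename).
task = json.loads((family_root / "tasks" / "example_task.json").read_text())

# 2. Resolve its scene: the task references the layout and tracking data by path.
layout = json.loads((family_root / task["layout"]["path"]).read_text())  # path resolution relative to the family root is an assumption
tracking_path = family_root / task["tracking"]["path"]
window = task["window"]  # time window of interest: start, end, duration_s

# 3. Run a planner to produce a time-parameterized trajectory (your code).
# trajectory = plan_trajectory(layout, tracking_path, window)

# 4. Evaluate the trajectory with the EvenFlow metrics (from the companion repo).
# results = evaluate(task, trajectory)
```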
Responsible AI Considerations
Data Collection
Data was collected in real-world environments using overhead camera systems.
The dataset reflects naturally occurring human behavior in shared spaces.
Privacy
- No raw video is released
- No biometric identifiers are included
- No personally identifiable information (PII) is present
All released data consists of anonymized trajectory representations.
Intended Use
This dataset is intended for research in:
- robot navigation in human environments
- multi-agent coordination
- trajectory-based evaluation
- human-aware motion planning
Limitations
- Data is collected from a single physical environment (v1 release)
- No demographic or identity-related attributes are included
- Evaluation is performed offline (no closed-loop interaction)
We view this dataset as a foundation for broader multi-environment and interactive benchmarks.
License
This dataset is released under a custom research license.
- Free for research and academic use
- Commercial use requires a separate agreement
See the LICENSE file for full terms.