
SysCON3D

Portable release bundle for the SysCON3D benchmark and demo.

Contents

  • mipnerf360_calibration_splits.json: consistent-scene calibration splits.
  • mipnerf360_impossible_splits.json: precomputed SysCON3D benchmark samples.
  • archives/syscon3d_mipnerf360_*.tar: tar shards containing mipnerf360/... payload files.
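To eyeball a manifest's structure before wiring it into tooling, a quick pretty-print works. This is a minimal sketch assuming a python3 interpreter is on PATH; any JSON pretty-printer would do.

```shell
# Pretty-print the first lines of the calibration-splits manifest, if present.
if [ -f mipnerf360_calibration_splits.json ]; then
  python3 -m json.tool mipnerf360_calibration_splits.json | head -n 20
fi
```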

Release summary

  • scenes: bicycle, bonsai, counter, flowers, garden, kitchen, room, stump, treehill
  • copy mode: referenced-only
  • storage mode: tar-shards
  • copied files: 5059
  • copied size (GiB): 0.606
  • archive files: 1

Notes

  • The manifests in this folder use dataset_root: mipnerf360, so they are portable across local runs and Spaces.
  • If this release uses archives/, extract those tar files before running tools that expect the raw mipnerf360/ directory.
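As an illustration of that portability, a small helper can join the fixed dataset_root with a manifest-relative path. This is a hypothetical sketch: resolve_path and bundle_dir are names chosen here, not part of the release.

```shell
# Hypothetical helper: resolve a manifest-relative path to a local file path.
resolve_path() {
  bundle_dir="tmp/syscon3d_release"   # wherever you placed the bundle
  dataset_root="mipnerf360"           # dataset_root used by the manifests
  echo "${bundle_dir}/${dataset_root}/$1"
}

resolve_path "syscon3d_scene_types/noise_gaussian/k09/noise_gaussian_k09_000/images_4/view_000.png"
```

Tools that run elsewhere (e.g. on Spaces) only need to change bundle_dir, since the manifests never embed absolute paths.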

Extracting Files

Extract the full payload:

mkdir -p tmp/syscon3d_release
# Extract every tar shard into the release directory (assumes the bundle's
# archives/ folder is already under tmp/syscon3d_release).
for shard in tmp/syscon3d_release/archives/*.tar; do
  tar -xf "$shard" -C tmp/syscon3d_release
done

List files without extracting:

tar -tf tmp/syscon3d_release/archives/syscon3d_mipnerf360_000.tar | less

Extract one file:

tar -xf tmp/syscon3d_release/archives/syscon3d_mipnerf360_000.tar \
  -C tmp/syscon3d_release \
  mipnerf360/syscon3d_scene_types/noise_gaussian/k09/noise_gaussian_k09_000/images_4/view_000.png

Extract one sample directory:

tar -xf tmp/syscon3d_release/archives/syscon3d_mipnerf360_000.tar \
  -C tmp/syscon3d_release \
  mipnerf360/syscon3d_scene_types/patched_gaussian/k09/patched_gaussian_k09_000

The paths in mipnerf360_impossible_splits.json are relative to tmp/syscon3d_release/mipnerf360/ after extraction.
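After extracting, a quick sanity check is to count the payload files and compare against the release summary above (5059 copied files). The path below assumes the bundle was extracted to tmp/syscon3d_release.

```shell
# Count extracted payload files under the mipnerf360/ directory.
find tmp/syscon3d_release/mipnerf360 -type f 2>/dev/null | wc -l
```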

License and Source Data

SysCON3D is a derived benchmark bundle for research evaluation. It contains referenced Mip-NeRF 360 scene images plus deterministically materialized stress-test images. Users should follow the terms of the upstream source data and cite both the source dataset and the SysCON3D paper when using this benchmark.
