Dataset Viewer
The dataset viewer is not available for this split.
Cannot extract the features (columns) for the split 'train' of the config 'default' of the dataset.
Error code:   FeaturesError
Exception:    ArrowInvalid
Message:      Schema at index 1 was different: 
_id: struct<$oid: string>
name: string
slug: string
version: string
created_at: struct<$date: string>
last_modified_at: struct<$date: string>
last_deletion_at: null
last_loaded_at: struct<$date: string>
sample_collection_name: string
persistent: bool
media_type: string
group_media_types: struct<>
tags: list<item: null>
info: struct<>
app_config: struct<dynamic_groups_target_frame_rate: int64, grid_media_field: string, media_fallback: bool, media_fields: list<item: string>, modal_media_field: string, plugins: struct<>>
classes: struct<>
default_classes: list<item: null>
mask_targets: struct<>
default_mask_targets: struct<>
skeletons: struct<>
sample_fields: list<item: struct<name: string, ftype: string, embedded_doc_type: string, subfield: string, fields: list<item: struct<name: string, ftype: string, embedded_doc_type: null, subfield: string, fields: list<item: null>, db_field: string, description: null, info: null, read_only: bool, created_at: struct<$date: string>>>, db_field: string, description: null, info: null, read_only: bool, created_at: struct<$date: string>>>
frame_fields: list<item: null>
saved_views: list<item: null>
workspaces: list<item: null>
annotation_runs: struct<>
brain_methods: struct<>
evaluations: struct<>
runs: struct<>
vs
samples: list<item: struct<_id: struct<$oid: string>, filepath: string, tags: list<item: string>, metadata: struct<_cls: string, size_bytes: int64, mime_type: string, width: int64, height: int64, num_channels: int64>, _media_type: string, _rand: double, ground_truth: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string>, lenet_train_classification: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string, confidence: double, logits: struct<$binary: struct<base64: string, subType: string>>>, hardness: double, mistakenness: double, lenet_train_eval: bool, lenet_embeddings: struct<$binary: struct<base64: string, subType: string>>, uniqueness: double, representativeness: double, idk_ground_truth: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string>, idk_classification: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string, confidence: double, logits: struct<$binary: struct<base64: string, subType: string>>>, _dataset_id: struct<$oid: string>, created_at: struct<$date: string>, last_modified_at: struct<$date: string>, lenet_validation_classification: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string, confidence: double, logits: struct<$binary: struct<base64: string, subType: string>>>, lenet_validation_eval: bool, lenet_classification: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string, confidence: double, logits: struct<$binary: struct<base64: string, subType: string>>>, lenet_eval: bool, retrained_lenet_classification: struct<_id: struct<$oid: string>, _cls: string, tags: list<item: null>, label: string, confidence: double, logits: struct<$binary: struct<base64: string, subType: string>>>, retrained_lenet_eval: bool, idk_eval: bool>>
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 246, in compute_first_rows_from_streaming_response
                  iterable_dataset = iterable_dataset._resolve_features()
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 3496, in _resolve_features
                  features = _infer_features_from_batch(self.with_format(None)._head())
                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2257, in _head
                  return next(iter(self.iter(batch_size=n)))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 2461, in iter
                  for key, example in iterator:
                                      ^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 1952, in __iter__
                  for key, pa_table in self._iter_arrow():
                                       ^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 1974, in _iter_arrow
                  yield from self.ex_iterable._iter_arrow()
                File "/usr/local/lib/python3.12/site-packages/datasets/iterable_dataset.py", line 563, in _iter_arrow
                  yield new_key, pa.Table.from_batches(chunks_buffer)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/table.pxi", line 5039, in pyarrow.lib.Table.from_batches
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              (schema diff identical to the one shown above)

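One possible mitigation, assuming the Hub's README-based `configs` mechanism applies here, is to point the viewer only at the schema-consistent files. The paths below are hypothetical placeholders, not this repository's actual layout:

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: samples.json   # hypothetical: exclude metadata files with a different schema
```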

Dataset Card for Dataset Name

This dataset is a curated version of the MNIST dataset. It was created by following this tutorial: https://docs.google.com/document/d/1qJtncbmApAbKxfK9Qgl4EaZGm408Rn9Kz_CUX7RRq_g/edit?tab=t.0. The notebook used to build it is available in this public GitHub repository: https://github.com/CarloColumbo/Dataset-Curation-Lab-1---MNIST
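Until the viewer issue is fixed, the mixed export can be split by hand. The sketch below is a hedged workaround under assumptions (the filenames and document structure are invented for illustration): if a JSON-lines export mixes a dataset-metadata document with a samples document, keeping only the sample documents gives every row one consistent schema.

```python
import json
import os
import tempfile

# Hypothetical export mixing two document types with different schemas.
export = [
    {"name": "mnist-curated", "version": "1.0"},                   # metadata document
    {"samples": [{"filepath": "img0.png", "ground_truth": "7"}]},  # samples document
]

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "export.jsonl")
    with open(src, "w") as f:
        for doc in export:
            f.write(json.dumps(doc) + "\n")

    # Keep only the sample documents and flatten to one row per sample,
    # so downstream schema inference sees a single consistent schema.
    rows = []
    with open(src) as f:
        for line in f:
            doc = json.loads(line)
            if "samples" in doc:
                rows.extend(doc["samples"])
```

The flattened `rows` can then be written back out as a homogeneous JSON-lines file that streaming loaders can infer features from.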
