The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code: DatasetGenerationError
Exception: ArrowCapacityError
Message: array cannot contain more than 2147483646 bytes, have 12052107539
Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1594, in _prepare_split_single
    writer.write(example)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 682, in write
    self.write_examples_on_file()
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 655, in write_examples_on_file
    self.write_batch(batch_examples=batch_examples)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 743, in write_batch
    arrays.append(pa.array(typed_sequence))
                  ^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/array.pxi", line 256, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 118, in pyarrow.lib._handle_arrow_array_protocol
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 291, in __arrow_array__
    out = self._arrow_array(type=type)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 340, in _arrow_array
    out = pa.array(cast_to_python_objects(examples, only_1d_for_numpy=True))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 46, in pyarrow.lib._sequence_to_array
  File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 12052107539

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1607, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
                              ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 770, in finalize
    self.write_examples_on_file()
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 655, in write_examples_on_file
    self.write_batch(batch_examples=batch_examples)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 743, in write_batch
    arrays.append(pa.array(typed_sequence))
                  ^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/array.pxi", line 256, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 118, in pyarrow.lib._handle_arrow_array_protocol
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 291, in __arrow_array__
    out = self._arrow_array(type=type)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 340, in _arrow_array
    out = pa.array(cast_to_python_objects(examples, only_1d_for_numpy=True))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/array.pxi", line 375, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 46, in pyarrow.lib._sequence_to_array
  File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 12052107539

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1342, in compute_config_parquet_and_info_response
    parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                                                          ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 907, in stream_convert_to_parquet
    builder._prepare_split(split_generator=splits_generators[split], file_format="parquet")
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1438, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1616, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
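For context on the `ArrowCapacityError`: Arrow's plain `binary`/`string` column types store value offsets as signed 32-bit integers, so a single array's data buffer is capped just under 2 GiB, which is exactly the 2147483646-byte limit in the message. A batch of rows whose `gz` payloads total ~12 GB therefore cannot be packed into one such array; commonly suggested workarounds are writing with a smaller batch size or using a 64-bit-offset type such as `large_binary` (both are assumptions about this dataset's fix, not something the log confirms). A minimal sketch of the arithmetic, using the constants from the error message:

```python
# Arrow binary/string arrays use signed 32-bit offsets, so a single
# array's data buffer is limited to just under 2**31 bytes.
INT32_MAX = 2**31 - 1        # 2147483647
ARROW_CAP = INT32_MAX - 1    # 2147483646, the cap quoted in the error

have = 12_052_107_539        # bytes this batch tried to pack into one array

print(have > ARROW_CAP)                  # the batch exceeds the 32-bit cap
print(round(have / ARROW_CAP, 1))        # roughly how many times over it is
```

This suggests the split would need to be written in several smaller batches (or with 64-bit offsets) to fit under the per-array limit.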
| gz (unknown) | __key__ (string) | __url__ (string) |
|---|---|---|
"cHN0MAUKAwAAAAEEAgAAARUEERhCaW86OkVuc0VNQkw6OlRyYW5zY3JpcHQDAAAAHgiBAAAADGlzX2Nhbm9uaWNhbAoHRERYMTF(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/1-1000000 | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAwAAAAEEAgAAANAEEShCaW86OkVuc0VNQkw6OkZ1bmNnZW46OlJlZ3VsYXRvcnlGZWF0dXJlAwAAAA0IgAA(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/1-1000000_reg | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAgAAAJMEERhCaW86OkVuc0VNQkw6OlRyYW5zY3JpcHQDAAAAIwoFMTU4NDYAAAANX2dlbmVfaGduY19pZAi(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/100000001-101000000 | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAwAAAAEEAgAAAIYEEShCaW86OkVuc0VNQkw6OkZ1bmNnZW46OlJlZ3VsYXRvcnlGZWF0dXJlAwAAAA0IgAA(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/100000001-101000000_reg | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAgAAAJIEERhCaW86OkVuc0VNQkw6OlRyYW5zY3JpcHQDAAAAIgoMSzdFUzk1X0hVTUFOAAAAB190cmVtYmw(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/10000001-11000000 | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAwAAAAEEAgAAANsEEShCaW86OkVuc0VNQkw6OkZ1bmNnZW46OlJlZ3VsYXRvcnlGZWF0dXJlAwAAAA0EERN(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/10000001-11000000_reg | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAgAAAooEERhCaW86OkVuc0VNQkw6OlRyYW5zY3JpcHQDAAAAHAQCAAAAAQQRF0Jpbzo6RW5zRU1CTDo6QXR(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/1000001-2000000 | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAwAAAAEEAgAAAVAEEShCaW86OkVuc0VNQkw6OkZ1bmNnZW46OlJlZ3VsYXRvcnlGZWF0dXJlAwAAAA0EAwA(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/1000001-2000000_reg | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAgAAAGIEERhCaW86OkVuc0VNQkw6OlRyYW5zY3JpcHQDAAAAIwoENDUzOQAAAA1fZ2VuZV9oZ25jX2lkCIE(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/101000001-102000000 | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
"cHN0MAUKAwAAAAEEAwAAAAEEAgAAAJ0EEShCaW86OkVuc0VNQkw6OkZ1bmNnZW46OlJlZ3VsYXRvcnlGZWF0dXJlAwAAAA0EAgA(...TRUNCATED) | homo_sapiens_merged/115_GRCh37/1/101000001-102000000_reg | "hf://datasets/pzweuj/SchemaBio_Bundle@a73084bff06290ea61def202fa8f5f6b677681da/hg19/homo_sapiens_me(...TRUNCATED) |
End of preview.