The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 9 new columns ({'_indexes', '_output_all_columns', '_indices_data_files', '_split', '_format_columns', '_format_type', '_data_files', '_fingerprint', '_format_kwargs'}) and 17 missing columns ({'size_in_bytes', 'post_processing_size', 'download_checksums', 'builder_name', 'homepage', 'citation', 'supervised_keys', 'config_name', 'description', 'download_size', 'splits', 'features', 'post_processed', 'version', 'license', 'dataset_size', 'task_templates'}).
This happened while the json dataset builder was generating data using
hf://datasets/PopularPenguin/LCQuAD/train_state.json (at revision b3b921bef599c484b2e232b3ab62625e1222b63b)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
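One way to follow that advice on the Hub is to declare the data files explicitly in the dataset card's YAML header, so that `train_state.json` is never treated as a data file. A minimal sketch (the file name `train.json` is a hypothetical stand-in — use the repository's actual data files):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: "train.json"   # hypothetical data file; train_state.json is deliberately not listed
```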
Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
_data_files: list<item: struct<filename: string>>
  child 0, item: struct<filename: string>
      child 0, filename: string
_fingerprint: string
_format_columns: null
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_indices_data_files: null
_output_all_columns: bool
_split: null
to
{'builder_name': Value(dtype='null', id=None), 'citation': Value(dtype='string', id=None), 'config_name': Value(dtype='null', id=None), 'dataset_size': Value(dtype='null', id=None), 'description': Value(dtype='string', id=None), 'download_checksums': Value(dtype='null', id=None), 'download_size': Value(dtype='null', id=None), 'features': {'translation': {'en': {'_type': Value(dtype='string', id=None), 'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None)}, 'sparql': {'_type': Value(dtype='string', id=None), 'dtype': Value(dtype='string', id=None), 'id': Value(dtype='null', id=None)}}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'post_processed': Value(dtype='null', id=None), 'post_processing_size': Value(dtype='null', id=None), 'size_in_bytes': Value(dtype='null', id=None), 'splits': Value(dtype='null', id=None), 'supervised_keys': Value(dtype='null', id=None), 'task_templates': Value(dtype='null', id=None), 'version': Value(dtype='null', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 9 new columns ({'_indexes', '_output_all_columns', '_indices_data_files', '_split', '_format_columns', '_format_type', '_data_files', '_fingerprint', '_format_kwargs'}) and 17 missing columns ({'size_in_bytes', 'post_processing_size', 'download_checksums', 'builder_name', 'homepage', 'citation', 'supervised_keys', 'config_name', 'description', 'download_size', 'splits', 'features', 'post_processed', 'version', 'license', 'dataset_size', 'task_templates'}).

This happened while the json dataset builder was generating data using

hf://datasets/PopularPenguin/LCQuAD/train_state.json (at revision b3b921bef599c484b2e232b3ab62625e1222b63b)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
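The nine unexpected columns (`_data_files`, `_fingerprint`, `_format_*`, `_indexes`, …) look like the fields `datasets.Dataset.save_to_disk` writes into its `state.json`, which suggests `train_state.json` is library bookkeeping rather than a data file. A minimal sketch of the consistency check the builder is effectively performing, using hypothetical stand-ins for the two incompatible records and only the standard library:

```python
import json

# Hypothetical stand-ins for the two incompatible JSON files in the repo:
# a dataset-info-style record and a save_to_disk state-style record.
info_record = {"builder_name": None, "citation": "", "features": {}, "splits": None}
state_record = {
    "_data_files": [{"filename": "dataset.arrow"}],
    "_fingerprint": "676e0d43c9920873",
    "_split": None,
}

def column_diff(expected: dict, found: dict):
    """Mimic the builder's complaint: which columns are new, which are missing."""
    expected_cols, found_cols = set(expected), set(found)
    return sorted(found_cols - expected_cols), sorted(expected_cols - found_cols)

new_cols, missing_cols = column_diff(info_record, state_record)
print("new columns:", new_cols)        # columns present only in the second file
print("missing columns:", missing_cols)  # columns the second file fails to supply
```

With the real files, the "new" set is exactly the nine `_`-prefixed state fields and the "missing" set is the seventeen dataset-info fields quoted in the error message.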
Preview of the two conflicting rows, transposed for readability (row 1 appears to come from a dataset-info file, row 2 from `train_state.json`; empty cells were blank in the preview):

| column | dtype | row 1 | row 2 |
|---|---|---|---|
| builder_name | null | null | null |
| citation | string | | null |
| config_name | null | null | null |
| dataset_size | null | null | null |
| description | string | | null |
| download_checksums | null | null | null |
| download_size | null | null | null |
| features | dict | {"translation": {"en": {"_type": "Value", "dtype": "string", "id": null}, "sparql": {"_type": "Value", "dtype": "string", "id": null}}} | null |
| homepage | string | | null |
| license | string | | null |
| post_processed | null | null | null |
| post_processing_size | null | null | null |
| size_in_bytes | null | null | null |
| splits | null | null | null |
| supervised_keys | null | null | null |
| task_templates | null | null | null |
| version | null | null | null |
| _data_files | list | null | [{"filename": "dataset.arrow"}] |
| _fingerprint | string | null | 676e0d43c9920873 |
| _format_columns | null | null | null |
| _format_kwargs | dict | null | {} |
| _format_type | null | null | null |
| _indexes | dict | null | {} |
| _indices_data_files | null | null | null |
| _output_all_columns | bool | null | false |
| _split | null | null | null |
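When fixing the repository locally, the metadata files can be told apart from real data files by their top-level keys. A heuristic sketch, assuming the marker keys seen in the preview above (the helper name and key set are illustrative, not part of any library API):

```python
import json
import pathlib

# Marker keys taken from the preview above: the "_"-prefixed keys come from a
# save_to_disk state file, the others from a dataset-info file.
METADATA_KEYS = {"_fingerprint", "_data_files", "builder_name", "dataset_size"}

def looks_like_metadata(path: pathlib.Path) -> bool:
    """Heuristic: a JSON object whose top-level keys overlap METADATA_KEYS is
    library bookkeeping, not data, and should be excluded from data_files."""
    try:
        obj = json.loads(path.read_text(encoding="utf-8"))
    except (OSError, json.JSONDecodeError, UnicodeDecodeError):
        return False
    return isinstance(obj, dict) and bool(METADATA_KEYS & obj.keys())
```

Files flagged by this check can then be moved out of the data directory (or omitted from the YAML `data_files` list) so every remaining file shares one schema.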