Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'tokenizer_config'}) and 1 missing columns ({'concept1'}).
This happened while the json dataset builder was generating data using
hf://datasets/Xilixmeaty40/auto-datasets/tokenizer_config.json (at revision b2cab0b1116e8d09830570dec2a2090f38f29596)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback: Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
tokenizer_config: struct<vocab_size: int64, max_length: int64, token_type_ids: bool, pad_token_id: int64, bos_token_id: int64, eos_token_id: int64>
  child 0, vocab_size: int64
  child 1, max_length: int64
  child 2, token_type_ids: bool
  child 3, pad_token_id: int64
  child 4, bos_token_id: int64
  child 5, eos_token_id: int64
to
{'concept1': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'tokenizer_config'}) and 1 missing columns ({'concept1'}).
This happened while the json dataset builder was generating data using
hf://datasets/Xilixmeaty40/auto-datasets/tokenizer_config.json (at revision b2cab0b1116e8d09830570dec2a2090f38f29596)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
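Following the second suggestion, files with different schemas can be declared as separate configurations in the YAML front matter of the dataset's README.md, per the linked docs. A minimal sketch, assuming a layout where the `concept1` rows live in their own JSON files (the `concepts` config name and its glob pattern are illustrative, not taken from the repository):

```yaml
configs:
  - config_name: concepts
    data_files: "concepts*.json"          # hypothetical pattern for the 'concept1' files
  - config_name: tokenizer
    data_files: "tokenizer_config.json"   # the file that triggered the cast error
```

With this in place, the builder treats each config independently and no longer tries to cast `tokenizer_config` rows to the `concept1` schema.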
| concept1 (string) | tokenizer_config (dict) |
|---|---|
| `<enter your prompt here>` | null |
| null | {"vocab_size": 30000, "max_length": 51200, "token_type_ids": true, "pad_token_id": 0, "bos_token_id": 1, "eos_token_id": 2} |
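The preview above makes the mismatch concrete: one row defines only `concept1`, the other only `tokenizer_config`. A minimal, hypothetical sketch of the kind of column-consistency check that fails here (function and variable names are illustrative, not the actual `datasets` internals):

```python
def check_columns(files_rows):
    """Raise ValueError if any file's rows introduce new or missing columns.

    files_rows: mapping of filename -> list of row dicts.
    """
    expected = None
    for name, rows in files_rows.items():
        for row in rows:
            cols = set(row)
            if expected is None:
                expected = cols  # the first file seen fixes the schema
            elif cols != expected:
                new, missing = cols - expected, expected - cols
                raise ValueError(
                    f"{name}: {len(new)} new columns ({new}) "
                    f"and {len(missing)} missing columns ({missing})"
                )
    return expected

# Reproduces the shape of the error above:
files = {
    "concepts.json": [{"concept1": "<enter your prompt here>"}],
    "tokenizer_config.json": [{"tokenizer_config": {"vocab_size": 30000}}],
}
try:
    check_columns(files)
except ValueError as exc:
    print(exc)  # tokenizer_config.json: 1 new columns ({'tokenizer_config'}) ...
```

The real builder performs this comparison at the Arrow level (casting each file's table to the first schema it saw), but the failure mode is the same: column names from later files must match the established schema exactly.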
README.md exists but content is empty.
Downloads last month: 2