BraveOni/2ch-text-classification
Text Classification
Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1831, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 642, in write_table
    self._build_writer(inferred_schema=pa_table.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 457, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1847, in _prepare_split_single
    num_examples, num_bytes = writer.finalize()
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 661, in finalize
    self._build_writer(self.schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 457, in _build_writer
    self.pa_writer = self._WRITER_CLASS(self.stream, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
    self.writer = _parquet.ParquetWriter(
  File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
  File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1456, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1055, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 894, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 970, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1702, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1858, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
| Field | Type | Value |
|---|---|---|
| `_data_files` | list | `[{"filename": "dataset.arrow"}]` |
| `_fingerprint` | string | `29bb0a7886410abf` |
| `_format_columns` | list | `["target", "text"]` |
| `_format_kwargs` | dict | `{}` |
| `_format_type` | null | `null` |
| `_indexes` | dict | `{}` |
| `_output_all_columns` | bool | `false` |
| `_split` | null | `null` |
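The fields above match the `state.json` that `Dataset.save_to_disk` writes alongside `dataset.arrow`. As a rough sketch (field names taken from the table above, not from an official schema), that file corresponds to a JSON document like:

```python
import json

# Reconstruction of the state.json fields shown above (written by
# datasets' save_to_disk; metadata, not part of the data itself).
state = {
    "_data_files": [{"filename": "dataset.arrow"}],
    "_fingerprint": "29bb0a7886410abf",
    "_format_columns": ["target", "text"],
    "_format_kwargs": {},  # the empty dict the Parquet conversion rejects
    "_format_type": None,
    "_indexes": {},
    "_output_all_columns": False,
    "_split": None,
}

print(json.dumps(state, indent=2))
```

If the repository holds a `save_to_disk` dump, the viewer may try to convert this metadata file along with the data; re-uploading with `Dataset.push_to_hub`, which writes Parquet and no `state.json`, is the usual way to make the viewer work.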
This dataset has been automatically processed by AutoTrain for project 2ch-text-classification.
The BCP-47 code recorded for the dataset's language is en, although the samples below are Russian-language posts.
A sample from this dataset looks as follows:
```json
[
    {
        "text": "да не бизнес там продается, а ремонт в помещении оборудование, по большей части. бизнеса к продаже -[...]",
        "target": 0
    },
    {
        "text": "Ну уборка улиц от снега это уже неплохо . Не во всех городах даже это хорошо организовано. И предвыб[...]",
        "target": 0
    }
]
```
The dataset has the following fields (also called "features"):
```json
{
    "text": "Value(dtype='string', id=None)",
    "target": "ClassLabel(num_classes=2, names=['0.0', '1.0'], id=None)"
}
```
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
|---|---|
| train | 11528 |
| valid | 2884 |
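The sizes above imply roughly an 80/20 train/validation split; a quick check of the arithmetic:

```python
# Split sizes from the table above.
train, valid = 11_528, 2_884

total = train + valid
valid_share = valid / total

print(total)  # 14412
print(round(valid_share, 3))
```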