neil-code/autotrain-translation-en-zh-83960142491 · Translation · 77.5M · Updated · 7
Error code: DatasetGenerationError
Exception: ArrowNotImplementedError
Message: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback: Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 583, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
self.pa_writer = self._WRITER_CLASS(self.stream, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2027, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 602, in finalize
self._build_writer(self.schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 404, in _build_writer
self.pa_writer = self._WRITER_CLASS(self.stream, schema)
File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
self.writer = _parquet.ParquetWriter(
File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Cannot write struct type '_format_kwargs' with no child field to Parquet. Consider adding a dummy child field.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
builder.download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
| _data_files (list) | _fingerprint (string) | _format_columns (sequence) | _format_kwargs (dict) | _format_type (null) | _output_all_columns (bool) | _split (null) |
|---|---|---|---|---|---|---|
| `[{"filename": "data-00000-of-00001.arrow"}]` | `65ba51f13c6f030d` | `["source", "target"]` | `{}` | `null` | `false` | `null` |
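The row above is the flattened internal state that `datasets` writes alongside the Arrow file when a dataset is saved with `save_to_disk`. Reconstructed as a plain dict (field names and values taken from the table; this is a sketch of the state file, not its verbatim contents), the problematic empty `_format_kwargs` entry stands out:

```python
import json

# Reconstruction of the saved dataset state shown in the table above.
state = {
    "_data_files": [{"filename": "data-00000-of-00001.arrow"}],
    "_fingerprint": "65ba51f13c6f030d",
    "_format_columns": ["source", "target"],
    "_format_kwargs": {},  # empty dict -> empty struct -> Parquet error
    "_format_type": None,
    "_output_all_columns": False,
    "_split": None,
}

print(json.dumps(state, indent=2))
```

It is this state dict, not the translation rows themselves, that the viewer's Parquet conversion chokes on.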
This dataset has been automatically processed by AutoTrain for project translation-en-zh.
The language code recorded for the dataset is en2zh (English to Chinese); note that this is AutoTrain's pair notation rather than a valid BCP-47 tag.
A sample from this dataset looks as follows:
[
{
"source": "Huang Yau-tai had a tough childhood, one in which musical resources were in short supply. However, he strove to educate himself, motivated by his passion for music and his perseverence. Today, he is the composer of more than 2,000 songs and 20 books on music.",
"target": "黃友棣幼時生活困頓，音樂資源更是貧乏，憑著對音樂的熱愛與堅持，努力自學，至今已完成超過 2 千首樂曲及 20 本音樂專著。"
},
{
"source": "He also remarked what a shame it was that the West only discovered it so late! Even Bob Dylan, the American singer perhaps second only to Elvis in popularity, praised the book, saying, \"I'm not trying to push it, I don't want to talk about it, but it's the only thing that is amazingly true, period, not just for me. Anybody would know it.",
"target": "連鮑伯・狄倫，這位繼貓王之後最受景仰的美國民謠歌手，也強力向身邊所有人推薦：「你一定親身去體會《易經》，我不想講太多，只有一點：你一讀這本書，馬上會強烈感覺到，這一切都是真的！」"
}
]
The dataset has the following fields (also called "features"):
{
"source": "Value(dtype='string', id=None)",
"target": "Value(dtype='string', id=None)"
}
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
|---|---|
| train | 800 |
| valid | 200 |
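The sizes above work out to an 80/20 train/validation split, which matches the default ratio AutoTrain commonly applies (an assumption; the card does not state it). A quick check:

```python
# Split sizes from the table above.
sizes = {"train": 800, "valid": 200}

total = sum(sizes.values())
valid_fraction = sizes["valid"] / total

print(total, valid_fraction)  # 1000 0.2
```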