# Partitioned IRIS Datasets

This repository contains a script (`dataset.py`) to download the Iris dataset and split it into multiple partitions. Each partition is further divided into a public "mock" dataset and a "private" dataset.
## IRIS Dataset Overview
The Iris dataset is a classic dataset in machine learning, consisting of 150 samples of iris flowers. Each sample has four features (sepal length, sepal width, petal length, and petal width) and belongs to one of three species: Iris Setosa, Iris Versicolor, or Iris Virginica.
| Id | SepalLengthCm | SepalWidthCm | PetalLengthCm | PetalWidthCm | Species |
|---|---|---|---|---|---|
| 88 | 6.3 | 2.3 | 4.4 | 1.3 | Iris-versicolor |
| 1 | 5.1 | 3.5 | 1.4 | 0.2 | Iris-setosa |
| 127 | 6.2 | 2.8 | 4.8 | 1.8 | Iris-virginica |
| 121 | 6.9 | 3.2 | 5.7 | 2.3 | Iris-virginica |
| 144 | 6.8 | 3.2 | 5.9 | 2.3 | Iris-virginica |
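The dataset is small enough to load and inspect directly. Below is a minimal sketch using scikit-learn's bundled copy of Iris; the repository's `dataset.py` may fetch the data from a different source (e.g. UCI or Kaggle), so treat this loader as an illustrative assumption rather than the actual download code:

```python
# Illustrative only: load the Iris data from scikit-learn's bundled copy.
# dataset.py may use a different source; column names will differ slightly
# from the table above (e.g. "sepal length (cm)" instead of "SepalLengthCm").
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame                 # 150 rows: 4 feature columns plus a numeric target
print(df.shape)                 # (150, 5)
print(list(iris.target_names))  # ['setosa', 'versicolor', 'virginica']
```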
## Generating the Partitioned Datasets
To generate the partitioned datasets, run the `dataset.py` script from the root of this repository (provided that you already have `uv` installed):

```sh
uv venv && source .venv/bin/activate
uv sync
uv run dataset.py
```
By default, the script will create 5 partitions. You can modify the `num_partitions` variable in the `if __name__ == "__main__":` block at the end of `dataset.py` to change the number of partitions generated.
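The partitioning scheme described above (split the rows into N partitions, then split each partition into a small public mock part and a larger private part) can be sketched in plain Python. The function name `make_partitions`, the seed, and the 10% mock fraction are illustrative defaults, not the actual `dataset.py` API:

```python
import random

def make_partitions(rows, num_partitions=5, mock_fraction=0.10, seed=42):
    """Shuffle rows, split them into num_partitions equal chunks, then split
    each chunk into a public 'mock' part and a confidential 'private' part."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    size = len(shuffled) // num_partitions
    partitions = []
    for i in range(num_partitions):
        chunk = shuffled[i * size:(i + 1) * size]
        n_mock = max(1, int(len(chunk) * mock_fraction))
        partitions.append({"mock": chunk[:n_mock], "private": chunk[n_mock:]})
    return partitions

# With 150 Iris rows and 5 partitions: 30 rows each, split 3 mock / 27 private.
parts = make_partitions(list(range(150)))
print(len(parts))                # 5
print(len(parts[0]["mock"]))     # 3
print(len(parts[0]["private"]))  # 27
```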
## Directory Structure
After running the script, you will find a directory for each partition, named `iris-1`, `iris-2`, and so on. Each of these partition directories will have the following structure:

```
iris-1/
├── mock/
│   ├── dataset.arrow
│   ├── dataset_info.json
│   └── state.json
├── private/
│   ├── dataset.arrow
│   ├── dataset_info.json
│   └── state.json
└── README.md
```
- `mock/`: This directory contains a small, public subset of the data for that partition (10% of the partition's data). It can be used for development and testing of models without accessing the private data.
- `private/`: This directory contains the larger, private subset of the data (90% of the partition's data). This data should be kept confidential.
- `README.md`: Each partition directory also contains its own `README.md` file, which provides a brief description of the Iris dataset and the mock/private split.
The data within the mock and private directories is saved in Apache Arrow format.
```yaml
dataset_info:
  name: Iris Dataset
  description: A classic dataset in machine learning consisting of 150 samples of iris flowers.
  features:
    - SepalLengthCm: float
    - SepalWidthCm: float
    - PetalLengthCm: float
    - PetalWidthCm: float
  targets:
    - Species: string (Iris-setosa, Iris-versicolor, Iris-virginica)
  size: 150 samples
  source: https://archive.ics.uci.edu/ml/datasets/iris
partitioned: true
partitions:
  count: 5 (default, configurable in dataset.py)
  structure:
    mock:
      description: 10% of the partition's data, public.
      format: Apache Arrow
    private:
      description: 90% of the partition's data, confidential.
      format: Apache Arrow
```