Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error.
Error code: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 4 new columns ({'recall', 'support', 'precision', 'f1-score'}) and 4 missing columns ({'split', 'label', 'path', 'folder'}).
This happened while the csv dataset builder was generating data using
hf://datasets/Arushhh/deeplense-test5-densepolar-artifacts/val_per_class.csv (at revision fedc4f51c0c7c015a97462859e859f5d0431605f), [/tmp/hf-datasets-cache/medium/datasets/11222676799002-config-parquet-and-info-Arushhh-deeplense-test5-d-9093d4e4/hub/datasets--Arushhh--deeplense-test5-densepolar-artifacts/snapshots/fedc4f51c0c7c015a97462859e859f5d0431605f/val_df.csv (origin=hf://datasets/Arushhh/deeplense-test5-densepolar-artifacts@fedc4f51c0c7c015a97462859e859f5d0431605f/val_df.csv), /tmp/hf-datasets-cache/medium/datasets/11222676799002-config-parquet-and-info-Arushhh-deeplense-test5-d-9093d4e4/hub/datasets--Arushhh--deeplense-test5-densepolar-artifacts/snapshots/fedc4f51c0c7c015a97462859e859f5d0431605f/val_per_class.csv (origin=hf://datasets/Arushhh/deeplense-test5-densepolar-artifacts@fedc4f51c0c7c015a97462859e859f5d0431605f/val_per_class.csv), /tmp/hf-datasets-cache/medium/datasets/11222676799002-config-parquet-and-info-Arushhh-deeplense-test5-d-9093d4e4/hub/datasets--Arushhh--deeplense-test5-densepolar-artifacts/snapshots/fedc4f51c0c7c015a97462859e859f5d0431605f/val_summary.csv (origin=hf://datasets/Arushhh/deeplense-test5-densepolar-artifacts@fedc4f51c0c7c015a97462859e859f5d0431605f/val_summary.csv)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
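The second option the message suggests, separating the files into configurations, can be sketched in the dataset card's YAML front matter. The config names below are illustrative; the file names are the ones listed in the error message:

```yaml
configs:
  - config_name: split_files
    data_files: "val_df.csv"
  - config_name: per_class_metrics
    data_files: "val_per_class.csv"
  - config_name: summary
    data_files: "val_summary.csv"
```

With one config per schema, the viewer builds each CSV independently instead of trying to cast them all to a single set of columns.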
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1890, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 760, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
precision: double
recall: double
f1-score: double
support: double
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 723
to
{'path': Value('string'), 'label': Value('int64'), 'split': Value('string'), 'folder': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1739, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
| path (string) | label (int64) | split (string) | folder (string) |
|---|---|---|---|
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/10.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/100.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1000.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1001.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1002.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1003.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1004.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1005.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1006.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1007.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1008.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1009.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/101.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1010.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1011.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1012.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1013.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1014.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1015.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1016.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1017.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1018.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1019.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/102.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1020.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1021.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1022.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1023.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1024.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1025.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1026.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1027.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1028.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1029.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/103.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1030.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1031.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1032.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1033.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1034.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1035.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1036.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1037.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1038.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1039.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/104.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1040.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1041.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1042.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1043.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1044.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1045.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1046.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1047.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1048.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1049.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/105.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1050.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1051.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1052.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1053.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1054.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1055.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1056.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1057.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1058.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1059.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/106.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1060.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1061.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1062.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1063.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1064.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1065.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1066.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1067.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1068.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1069.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/107.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1070.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1071.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1072.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1073.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1074.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1075.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1076.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1077.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1078.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1079.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/108.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1080.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1081.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1082.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1083.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1084.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1085.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1086.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1087.npy | 1 | train | train_lenses |
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1088.npy | 1 | train | train_lenses |
End of preview.
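The split-CSV schema shown in the preview can be sanity-checked with a minimal stdlib sketch; the two rows below are copied from the preview above, not re-downloaded:

```python
import csv
import io

# Two rows copied from the preview table above; the split CSVs use the
# schema: path, label, split, folder.
csv_text = """path,label,split,folder
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/1.npy,1,train,train_lenses
/kaggle/input/datasets/arushhh/dataset-task-v/train_lenses/10.npy,1,train,train_lenses
"""

# DictReader yields one dict per row, keyed by the header columns.
rows = list(csv.DictReader(io.StringIO(csv_text)))
columns = list(rows[0].keys())
```

The same column check against the per-class metrics CSV (precision, recall, f1-score, support) is what fails during viewer generation, since the two schemas share no columns.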
YAML Metadata Warning: empty or missing YAML metadata in repo card.
Check out the documentation for more information.
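The warning can be silenced by starting the repo card with a YAML front-matter block. The values below are illustrative assumptions, not the repo's actual metadata:

```yaml
---
license: mit
task_categories:
  - image-classification
tags:
  - deeplense
  - gravitational-lensing
---
```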
DeepLense Test V Artifacts
This dataset repo contains artifacts from the DensePolarNet-Robust experiments.
Main summary
- HF repo: Arushhh/deeplense-test5-densepolar-artifacts
- Threshold: 0.78
- Validation ROC-AUC: 0.9936
- Test ROC-AUC: 0.9903
- Test PR-AUC: 0.7805
- Seeds: [42]
Contents
- model checkpoints (.pt)
- split CSVs
- config JSON
- prediction arrays (.npy)
- metrics tables (.csv, .json)
- ROC / PR / confusion-matrix plots
- inference manifest
Reproducibility
To reproduce inference exactly, use:
- the same notebook architecture definitions,
- the same preprocessing pipeline,
- the saved checkpoints,
- the saved threshold if thresholded metrics are needed.
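As a sketch of the last point, thresholded predictions can be recovered from score arrays like this. The use of NumPy and the sample scores are assumptions for illustration; the threshold is the saved value reported in the summary above (≈0.78):

```python
import numpy as np

# Saved decision threshold from the summary above (rounded float repr).
THRESHOLD = 0.78

def apply_threshold(scores, threshold=THRESHOLD):
    """Binarize probability scores with the saved decision threshold."""
    return (np.asarray(scores) >= threshold).astype(int)

# Illustrative scores; in practice these would be loaded from the saved
# prediction arrays (.npy) in this repo.
scores = np.array([0.10, 0.85, 0.78, 0.50])
preds = apply_threshold(scores)
```

Reusing the stored threshold rather than a default 0.5 is what makes thresholded metrics (precision, recall, f1-score) reproduce exactly.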