Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'benchmark_equity', 'final_equity', 'ann_return', 'calmar', 'z_reentry', 'n_test_days', 'hit_ratio', 'tsl_pct', 'allocations', 'evaluated_at', 'benchmark_sharpe', 'benchmark_ann', 'max_drawdown', 'allocation_pct', 'sharpe', 'test_dates', 'equity_curve'}) and 12 missing columns ({'test_sharpe', 'history', 'val_days', 'n_episodes', 'train_days', 'test_equity', 'trained_at', 'test_days', 'best_val_sharpe', 'lookback', 'n_features', 'state_size'}).
This happened while the json dataset builder was generating data using
hf://datasets/P2SAMAPA/P2-ETF-DQN-ENGINE-DATASET/results/evaluation_results.json (at revision df65343dc3186b156600c312fa9c393a9c886c48), [/tmp/hf-datasets-cache/medium/datasets/73914331800540-config-parquet-and-info-P2SAMAPA-P2-ETF-DQN-ENGIN-a2e76d5a/hub/datasets--P2SAMAPA--P2-ETF-DQN-ENGINE-DATASET/snapshots/df65343dc3186b156600c312fa9c393a9c886c48/results/evaluation_results.json (origin=hf://datasets/P2SAMAPA/P2-ETF-DQN-ENGINE-DATASET@df65343dc3186b156600c312fa9c393a9c886c48/results/evaluation_results.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
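Before editing anything, it helps to find out which files actually disagree. A minimal sketch (stdlib only; the file names in the commented call are assumptions, not paths confirmed by the repo) that compares the top-level keys of each JSON results file against the first one:

```python
import json
from pathlib import Path


def top_level_keys(path):
    """Return the set of top-level keys in a JSON results file.

    Handles both a single JSON object and a list of records
    (only the first record is inspected for a list).
    """
    data = json.loads(Path(path).read_text())
    record = data[0] if isinstance(data, list) else data
    return set(record)


def diff_against_reference(paths):
    """Report each file's extra (+) and missing (-) columns
    relative to the first file, mirroring the builder's error."""
    reference = top_level_keys(paths[0])
    for path in paths[1:]:
        keys = top_level_keys(path)
        new, missing = keys - reference, reference - keys
        if new or missing:
            print(f"{path}: +{sorted(new)} -{sorted(missing)}")


# Hypothetical file names; substitute the actual files under results/:
# diff_against_reference(["results/training_results.json",
#                         "results/evaluation_results.json"])
```

Any file that prints a non-empty diff needs either its columns aligned or its own configuration.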
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
raise CastError(
datasets.table.CastError: Couldn't cast
start_year: int64
evaluated_at: string
n_test_days: int64
ann_return: double
sharpe: double
max_drawdown: double
calmar: double
hit_ratio: double
final_equity: double
benchmark_sharpe: struct<SPY: double, AGG: double>
child 0, SPY: double
child 1, AGG: double
benchmark_ann: struct<SPY: double, AGG: double>
child 0, SPY: double
child 1, AGG: double
benchmark_equity: struct<SPY: list<item: double>, AGG: list<item: double>>
child 0, SPY: list<item: double>
child 0, item: double
child 1, AGG: list<item: double>
child 0, item: double
test_dates: list<item: timestamp[s]>
child 0, item: timestamp[s]
allocation_pct: struct<SLV: double, GLD: double, TLT: double, VNQ: double, CASH: double, HYG: double, LQD: double>
child 0, SLV: double
child 1, GLD: double
child 2, TLT: double
child 3, VNQ: double
child 4, CASH: double
child 5, HYG: double
child 6, LQD: double
equity_curve: list<item: double>
child 0, item: double
allocations: list<item: string>
child 0, item: string
fee_bps: int64
tsl_pct: double
z_reentry: double
to
{'start_year': Value('int64'), 'n_episodes': Value('int64'), 'fee_bps': Value('int64'), 'lookback': Value('int64'), 'state_size': Value('int64'), 'n_features': Value('int64'), 'trained_at': Value('string'), 'best_val_sharpe': Value('float64'), 'test_sharpe': Value('float64'), 'test_equity': Value('float64'), 'train_days': Value('int64'), 'val_days': Value('int64'), 'test_days': Value('int64'), 'history': List({'episode': Value('int64'), 'train_sharpe': Value('float64'), 'train_equity': Value('float64'), 'val_sharpe': Value('float64'), 'val_equity': Value('float64'), 'avg_loss': Value('float64'), 'epsilon': Value('float64')})}
because column names don't match
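The "17 new columns" and "12 missing columns" figures are plain set arithmetic over the two schemas' column names. A quick check, with both column lists copied from the cast error above:

```python
# Columns found in the file that failed to cast (evaluation_results.json).
found = {
    "start_year", "evaluated_at", "n_test_days", "ann_return", "sharpe",
    "max_drawdown", "calmar", "hit_ratio", "final_equity", "benchmark_sharpe",
    "benchmark_ann", "benchmark_equity", "test_dates", "allocation_pct",
    "equity_curve", "allocations", "fee_bps", "tsl_pct", "z_reentry",
}
# Columns of the schema inferred from the first data file (training results).
expected = {
    "start_year", "n_episodes", "fee_bps", "lookback", "state_size",
    "n_features", "trained_at", "best_val_sharpe", "test_sharpe",
    "test_equity", "train_days", "val_days", "test_days", "history",
}

new_columns = found - expected      # columns the inferred schema lacks
missing_columns = expected - found  # columns this file lacks
print(len(new_columns), len(missing_columns))  # 17 12
```

Only `start_year` and `fee_bps` are shared, so the two files clearly describe different stages (training vs. evaluation) and belong in separate configurations.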
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
parquet_operations = convert_to_parquet(builder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
builder.download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
for job_id, done, content in self._prepare_split_single(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 17 new columns ({'benchmark_equity', 'final_equity', 'ann_return', 'calmar', 'z_reentry', 'n_test_days', 'hit_ratio', 'tsl_pct', 'allocations', 'evaluated_at', 'benchmark_sharpe', 'benchmark_ann', 'max_drawdown', 'allocation_pct', 'sharpe', 'test_dates', 'equity_curve'}) and 12 missing columns ({'test_sharpe', 'history', 'val_days', 'n_episodes', 'train_days', 'test_equity', 'trained_at', 'test_days', 'best_val_sharpe', 'lookback', 'n_features', 'state_size'}).
This happened while the json dataset builder was generating data using
hf://datasets/P2SAMAPA/P2-ETF-DQN-ENGINE-DATASET/results/evaluation_results.json (at revision df65343dc3186b156600c312fa9c393a9c886c48), [/tmp/hf-datasets-cache/medium/datasets/73914331800540-config-parquet-and-info-P2SAMAPA-P2-ETF-DQN-ENGIN-a2e76d5a/hub/datasets--P2SAMAPA--P2-ETF-DQN-ENGINE-DATASET/snapshots/df65343dc3186b156600c312fa9c393a9c886c48/results/evaluation_results.json (origin=hf://datasets/P2SAMAPA/P2-ETF-DQN-ENGINE-DATASET@df65343dc3186b156600c312fa9c393a9c886c48/results/evaluation_results.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
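Since the training and evaluation results files have deliberately different schemas, the manual-configuration route is the natural fix: declare one config per file in the README front matter. A sketch (the training file name is an assumption; point each config at the files that actually share a schema):

```yaml
# README.md front matter of the dataset repo
configs:
  - config_name: training
    data_files: "results/training_results.json"   # hypothetical path
  - config_name: evaluation
    data_files: "results/evaluation_results.json"
```

Each config then gets its own inferred schema, and the viewer builds them independently instead of trying to cast both files to one schema.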
| start_year int64 | n_episodes int64 | fee_bps int64 | lookback int64 | state_size int64 | n_features int64 | trained_at string | best_val_sharpe float64 | test_sharpe float64 | test_equity float64 | train_days int64 | val_days int64 | test_days int64 | history list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2019 | 300 | 10 | 20 | 3048 | 152 | 2026-03-04T02:02:06.337040 | 3.472 | 1.7572 | 1.5677 | 1420 | 160 | 161 | `[{"episode": 281, "train_sharpe": 11.185, "train_equity": 24227.5159, "val_sharpe": -0.226, "val_equity": 0.9883, "avg_loss": 0.000102, "epsilon": 0.217}, {"episode": 282, "train_sharpe": 11.498, "train_equity": 65489.7305, "val_sharpe": -0.358, "val_equity": 0....` |
No dataset card yet. Downloads last month: 59.