Fidel-TS Collection
Toward a real-world, high-fidelity multimodal time series forecasting benchmark
The dataset is organized into the following structure:
|-- subdataset1
| |-- raw_data # Original data files
| |-- time_series # Rule-based imputed data files
| | |-- id_1.parquet # Per-subject time series; may be multivariate; stored as CSV, Parquet, etc.
| | |-- id_2.parquet
| | |-- ...
| | |-- id_info.json # Metadata for each subject
| |-- hetero
| | |-- room
| | | |-- room report / formal_report # room report (includes control channels)
| | | | |-- room 1
| | | | | |-- all.json
| | | | | |-- flow.json
| | | | | |-- temperature.json
| | | | | |-- occupy.json
| | | | |-- room 2
| | | | | |-- all.json
| | | | | |-- flow.json
| | | | | |-- temperature.json
| | | | | |-- occupy.json
| | | | |-- ...
| | | |-- report embedding / formal_report # embedding for the room report
| | | | |-- room 1
| | | | | |-- all.pkl
| | | | | |-- flow.pkl
| | | | | |-- temperature.pkl
| | | | | |-- occupy.pkl
| | | | |-- room 2
| | | | | |-- all.pkl
| | | | | |-- flow.pkl
| | | | | |-- temperature.pkl
| | | | | |-- occupy.pkl
| | | | |-- ...
| | |-- weather
| | | |-- raw_data
| | | | |-- daily_weather_raw_????.json
| | | | |-- ...
| | | | |-- daily_weather_????.csv
| | | | |-- ...
| | | | |-- hourly_weather_????.csv
| | | | |-- ...
| | | |-- weather_report / formal_report # (can be flattened; use a regex to extract the version)
| | | | |-- version_1.json
| | | | |-- version_2.json
| | | | |-- ...
| | | |-- report_embedding / formal_report # embedding for the weather report
| | | | |-- version_1.pkl
| | | | |-- version_2.pkl
| | | | |-- ...
| |-- scripts # Scripts for data processing, model training, and evaluation
| |-- id_info.json # Metadata for whole dataset without preprocessing
| |-- static_info.json # Static information for this dataset, including the dataset description, channel information, and downtime reasons
| |-- static_info_embeddings.pkl
|-- subdataset2
|-- ......
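Given the layout above, the per-subject series under `time_series/` can be enumerated with a small helper. This is a sketch under our own naming (`list_subject_series` is not part of the dataset's scripts), assuming the `id_*.parquet` naming convention shown in the tree:

```python
# Hypothetical helper: enumerate per-subject series files in one subdataset.
# Assumes the `time_series/id_*.parquet` layout shown above.
from pathlib import Path

def list_subject_series(subdataset_root):
    """Return sorted paths of per-subject Parquet files under time_series/."""
    ts_dir = Path(subdataset_root) / "time_series"
    # id_info.json lives alongside the series files, so match by pattern.
    return sorted(ts_dir.glob("id_*.parquet"))
```

The returned paths can then be read with `pandas.read_parquet` or `pyarrow.parquet` as appropriate for the subdataset's format.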
The id_info.json file contains metadata for each subject, extracted from the raw dataset. The structure is as follows:
{
  "id_1": {
    "len": 1000,                  # Length of the time series data
    "sensor_downtime": {
      "1": {
        "time": ["yyyy-mm-dd hh:mm:ss", "yyyy-mm-dd hh:mm:ss"],
        "index": [start_index, end_index]
      },
      "2": {
        "time": ["yyyy-mm-dd hh:mm:ss", "yyyy-mm-dd hh:mm:ss"],
        "index": [start_index, end_index]
      },
      ...
    },
    "other_info_1": "value_1",    # Other information about the subject (customizable entries)
    "other_info_2": "value_2",
    ...
  },
  "id_2": ...
}
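The inclusive `index` bounds in `sensor_downtime` can be turned into a flat set of sample positions to mask before training. A minimal sketch, assuming the schema above; the function name `downtime_indices` is ours:

```python
# Sketch: collect every sample index covered by a subject's downtime
# windows, using the inclusive [start_index, end_index] bounds above.
def downtime_indices(subject_meta):
    """Return the set of indices falling inside any downtime window."""
    masked = set()
    for window in subject_meta.get("sensor_downtime", {}).values():
        start, end = window["index"]
        masked.update(range(start, end + 1))  # bounds are inclusive
    return masked
```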
The static_info.json file contains static information for the whole dataset. The structure is as follows:
{
  "general_info": "description of the dataset",
  "downtime_prompt": "",
  "channel_info": {
    "id_1": {
      "channel_1": "id_1 is xxx located in xxx; channel 1 is xxx",
      "channel_2": "id_1 is xxx located in xxx; channel 2 is xxx",
      ...
    },
    "id_2": {
      "channel_1": "id_2 is xxx located in xxx; channel 1 is xxx",
      "channel_2": "id_2 is xxx located in xxx; channel 2 is xxx",
      ...
    },
    ...
  },
}
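One natural use of these fields is to assemble a per-channel text prompt for a multimodal model. The sketch below composes `general_info`, the channel description, and `downtime_prompt` into one string; the key names follow the schema above, but the composition itself and the name `channel_prompt` are our assumptions:

```python
# Illustrative only: build a text prompt for one channel from static_info.
# Key names match the static_info.json schema; the composition is ours.
def channel_prompt(static_info, subject_id, channel):
    """Join the dataset description, channel description, and downtime prompt."""
    parts = [
        static_info["general_info"],
        static_info["channel_info"][subject_id][channel],
    ]
    if static_info.get("downtime_prompt"):  # skip when empty
        parts.append(static_info["downtime_prompt"])
    return " ".join(parts)
```

The resulting string could then be embedded and stored alongside `static_info_embeddings.pkl`.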