Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
Error code: `DatasetGenerationCastError`

The dataset generation failed because of a cast error: all the data files must have the same columns, but at some point there is 1 new column (`__index_level_0__`). This happened while the csv dataset builder was generating data from `hf://datasets/ClarusC64/F1-decoherence-trigger-localization-v0.1/data/test.csv` (at revision `27a743faa7fc2b50a1d6d0f9d4884ae29e7c924e`).

Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
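The stray `__index_level_0__` column is what pandas writes when a DataFrame is exported to CSV with its index included. A minimal sketch of the usual fix, assuming the split was produced with pandas (the columns and values below are illustrative, not the real file):

```python
import io

import pandas as pd

# A frame saved with its index sneaks an extra unnamed column into the CSV,
# which the Hub's csv builder then surfaces as __index_level_0__.
df = pd.DataFrame({"id": ["DTL-001"], "correlation_drop": [0.45]})

with_index = df.to_csv()                # index written -> extra leading column
without_index = df.to_csv(index=False)  # clean: columns match the schema

print(pd.read_csv(io.StringIO(without_index)).columns.tolist())
# -> ['id', 'correlation_drop']
```

Re-exporting each split with `to_csv(..., index=False)` (or dropping the column with `df.drop(columns=["__index_level_0__"], errors="ignore")` before saving) makes all files share the same columns.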
Traceback:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
id: string
component_pair: string
telemetry_window: double
baseline_correlation: double
window_correlation: double
trigger_event: double
trigger_lap: double
initiating_component: double
correlation_drop: double
propagation_direction: double
immediate_failure_risk: double
notes: string
constraints: string
gold_checklist: string
__index_level_0__: string
-- schema metadata --
pandas: '{"index_columns": ["__index_level_0__"], "column_indexes": [{"na' + 2147
to
{'id': Value('string'), 'component_pair': Value('string'), 'telemetry_window': Value('string'), 'baseline_correlation': Value('float64'), 'window_correlation': Value('float64'), 'trigger_event': Value('string'), 'trigger_lap': Value('int64'), 'initiating_component': Value('string'), 'correlation_drop': Value('float64'), 'propagation_direction': Value('string'), 'immediate_failure_risk': Value('float64'), 'notes': Value('string'), 'constraints': Value('string'), 'gold_checklist': Value('string')}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 1 new columns ({'__index_level_0__'})
This happened while the csv dataset builder was generating data using
hf://datasets/ClarusC64/F1-decoherence-trigger-localization-v0.1/data/test.csv (at revision 27a743faa7fc2b50a1d6d0f9d4884ae29e7c924e)
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
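The second remedy the message suggests, separate configurations, is declared in the dataset card's YAML front matter on the Hub. A sketch for this repository's layout (the config name is an assumption; the path matches the file named in the error):

```yaml
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test.csv
```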
Preview rows. Column types: `string` for id, component_pair, telemetry_window, trigger_event, initiating_component, propagation_direction, notes, constraints, and gold_checklist; `float64` for baseline_correlation, window_correlation, correlation_drop, and immediate_failure_risk; `int64` for trigger_lap.

| id | component_pair | telemetry_window | baseline_correlation | window_correlation | trigger_event | trigger_lap | initiating_component | correlation_drop | propagation_direction | immediate_failure_risk | notes | constraints | gold_checklist |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| DTL-001 | brake_temp\|pad_wear | kerb strike -> vibration spike -> wear accel | 0.89 | 0.44 | kerb_strike | 28 | brake_assembly | 0.45 | rear_to_front | 0.62 | Kerb strike initiated vibration | <900 words | trigger+lap+component+drop+risk |
| DTL-002 | oil_pressure\|rpm | steady rpm -> pressure dip | 0.91 | 0.51 | seal_leak | 17 | oil_system | 0.4 | engine_wide | 0.58 | Pressure loss begins cascade | <900 words | trigger+lap+component+drop+risk |
| DTL-003 | battery_temp\|ers_deploy | temp rise -> deploy fluctuation | 0.84 | 0.49 | thermal_spike | 35 | ers_battery | 0.35 | ers_loop | 0.55 | Battery heat disrupts deploy | <900 words | trigger+lap+component+drop+risk |
| DTL-004 | gearbox_temp\|shift_latency | latency jump -> temp drift | 0.87 | 0.42 | shift_shock | 41 | gearbox | 0.45 | drivetrain | 0.67 | Shift shock destabilized gearbox | <900 words | trigger+lap+component+drop+risk |
| DTL-005 | coolant_temp\|engine_load | load stable -> temp rise | 0.9 | 0.63 | cooling_block | 22 | radiator | 0.27 | engine_block | 0.44 | Cooling obstruction emerging | <900 words | trigger+lap+component+drop+risk |
| DTL-006 | tire_temp_fl\|slip_angle_fl | slip oscillation -> temp lag | 0.82 | 0.47 | surface_debris | 13 | front_left_tire | 0.35 | front_axle | 0.52 | Debris altered grip balance | <900 words | trigger+lap+component+drop+risk |
What this dataset tests
Whether an intelligence system can identify the initiating disturbance that caused telemetry coherence to collapse across components. The goal is not to detect the failure; the goal is to identify the trigger.
Required outputs
- trigger_event
- trigger_lap
- initiating_component
- correlation_drop
- propagation_direction
- immediate_failure_risk
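The six required outputs map one-to-one onto dataset columns of the same names. A hypothetical prediction record for case DTL-001 (values taken from the preview rows; the serialization format is an assumption, not specified by the card) could look like:

```python
import json

# Hypothetical prediction for case DTL-001; the keys are exactly the
# six required output fields, values copied from the preview table.
prediction = {
    "trigger_event": "kerb_strike",
    "trigger_lap": 28,
    "initiating_component": "brake_assembly",
    "correlation_drop": 0.45,
    "propagation_direction": "rear_to_front",
    "immediate_failure_risk": 0.62,
}
print(json.dumps(prediction, indent=2))
```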
Use case
Layer two of the Failure Cascade Graph trinity.
Used for:
- predictive failure analysis
- telemetry root-cause detection
- race engineering diagnostics
Evaluation
The scorer checks:
- numeric validity
- presence of all required fields
- structural correctness
- trigger identification completeness
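A minimal sketch of such a structural check (field names come from the required-outputs list; the typing and the [0, 1] ranges are assumptions, not the official scorer):

```python
# Required fields and their assumed Python types.
REQUIRED = {
    "trigger_event": str,
    "trigger_lap": int,
    "initiating_component": str,
    "correlation_drop": float,
    "propagation_direction": str,
    "immediate_failure_risk": float,
}

def structurally_valid(pred: dict) -> bool:
    """Check presence, typing, and numeric validity of one prediction."""
    for field, typ in REQUIRED.items():
        if field not in pred or not isinstance(pred[field], typ):
            return False
    # Assumption: correlations and risk scores live in [0, 1].
    return (0.0 <= pred["correlation_drop"] <= 1.0
            and 0.0 <= pred["immediate_failure_risk"] <= 1.0)

good = {"trigger_event": "kerb_strike", "trigger_lap": 28,
        "initiating_component": "brake_assembly", "correlation_drop": 0.45,
        "propagation_direction": "rear_to_front", "immediate_failure_risk": 0.62}
print(structurally_valid(good))  # -> True
print(structurally_valid({}))    # -> False
```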
Downloads last month: 17