Preview of the first rows of data/train.csv:

| scenario_id | pressure | buffer | lag | coupling | content_score | regime_hint | label_stable |
|---|---|---|---|---|---|---|---|
| tr_0001 | 0.3 | 0.72 | 0.1 | 0.35 | 0.82 | baseline | 1 |
| tr_0002 | 0.44 | 0.68 | 0.16 | 0.42 | 0.86 | baseline | 1 |
| tr_0003 | 0.58 | 0.63 | 0.18 | 0.55 | 0.89 | baseline | 1 |
| tr_0004 | 0.62 | 0.59 | 0.22 | 0.61 | 0.91 | coupling | 0 |
| tr_0005 | 0.36 | 0.7 | 0.11 | 0.38 | 0.84 | baseline | 1 |
| tr_0006 | 0.52 | 0.61 | 0.24 | 0.49 | 0.88 | lag | 1 |
| tr_0007 | 0.65 | 0.57 | 0.28 | 0.64 | 0.93 | coupling | 0 |
| tr_0008 | 0.41 | 0.66 | 0.2 | 0.46 | 0.87 | lag | 1 |
| tr_0009 | 0.55 | 0.6 | 0.26 | 0.58 | 0.9 | lag | 1 |
| tr_0010 | 0.33 | 0.74 | 0.09 | 0.31 | 0.81 | baseline | 1 |
| tr_0011 | 0.48 | 0.64 | 0.17 | 0.44 | 0.86 | baseline | 1 |
| tr_0012 | 0.71 | 0.56 | 0.23 | 0.69 | 0.95 | coupling | 0 |
# Eval Trap Stability Manifold Benchmark v0.2

This repository provides a synthetic benchmark for testing whether models can distinguish between content confidence and system viability.

The benchmark is built to expose the evaluation trap: a model assigns high confidence to a proposed configuration even though the system executing that configuration is mathematically unstable.
## Core idea

Most predictive systems optimize for content accuracy. This benchmark tests something else: can the model detect whether the proposed system state lies inside or outside a stability manifold?
## Stability manifold

The benchmark defines three competing instability surfaces:

- Baseline surface: `S1 = buffer - (pressure * coupling) - (k * lag)`
- Coupling surface: `S2 = buffer - (pressure * coupling^2) - (k * lag)`
- Lag surface: `S3 = buffer - (pressure * coupling) - (k * lag^2)`

The collapse margin is `min(S1, S2, S3)`.

Label rule:

- `label_stable = 1` if `min(S1, S2, S3) >= 0`
- `label_stable = 0` otherwise
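As a worked example, here is a minimal Python sketch of this label rule. The lag weight `k` appears in the surface definitions but its value is not stated in this README, so the default below is only a placeholder:

```python
def label_stable(pressure, buffer, lag, coupling, k=0.5):
    """Return 1 if the state lies inside the stability manifold, else 0.

    k is the lag weight from the surface definitions; 0.5 is an assumed
    placeholder, not the benchmark's actual value.
    """
    s1 = buffer - (pressure * coupling) - (k * lag)       # baseline surface
    s2 = buffer - (pressure * coupling ** 2) - (k * lag)  # coupling surface
    s3 = buffer - (pressure * coupling) - (k * lag ** 2)  # lag surface
    margin = min(s1, s2, s3)                              # collapse margin
    return 1 if margin >= 0 else 0
```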
## Why multiple surfaces

A single surface is easy to memorize; a manifold is harder. This benchmark forces the model to reason about:
- interacting variables
- non-linear collapse geometry
- regime switching
- boundary sensitivity
## Dataset splits

- `train`: examples spanning all three instability regimes.
- `in_domain_test`: examples drawn from the same general regime distribution as training.
- `distribution_shift`: examples with shifted pressure, lag, and coupling ranges.
- `boundary_trap`: examples constructed near the stability seam to expose false-rescue behaviour.
## Metrics

Primary metric: `false_rescue_rate`. This measures how often the model predicts stability in high-confidence cases where the manifold indicates collapse.

Secondary metric: `boundary_error_rate`. This measures performance near the seam, where `|min(S1, S2, S3)| <= boundary_eps`.

Additional metrics: `accuracy`, `surface_distance_mean`.

Diagnostics: `collapse_margin_distribution`, `confusion_matrix`, `active_surface_counts`.
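A rough sketch of how the two headline metrics could be computed from aligned arrays of predictions, true labels, and collapse margins. The function names and the `boundary_eps` default are illustrative, not the actual `scorer.py` API, and the sketch does not apply the high-confidence filtering mentioned above:

```python
import numpy as np

def false_rescue_rate(pred, label):
    """Share of truly unstable cases (label 0) that were predicted stable (pred 1)."""
    pred, label = np.asarray(pred), np.asarray(label)
    unstable = label == 0
    return float((pred[unstable] == 1).mean()) if unstable.any() else 0.0

def boundary_error_rate(pred, label, margin, boundary_eps=0.05):
    """Error rate restricted to cases near the seam, i.e. |margin| <= boundary_eps."""
    pred, label, margin = map(np.asarray, (pred, label, margin))
    near = np.abs(margin) <= boundary_eps
    return float((pred[near] != label[near]).mean()) if near.any() else 0.0
```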
## Files

- data/train.csv
- data/in_domain_test.csv
- data/distribution_shift.csv
- data/boundary_trap.csv
- prediction_templates/train_predictions_template.csv
- prediction_templates/in_domain_test_predictions_template.csv
- prediction_templates/distribution_shift_predictions_template.csv
- prediction_templates/boundary_trap_predictions_template.csv
- baseline/generate_baseline_predictions.py
- scorer.py
- stability_visualizer.py
- dataset_schema.json
- benchmark_spec.json
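For example, the splits can be read directly with pandas from a local clone of the repository (the column list in the comment matches the preview at the top of this page):

```python
import pandas as pd

# Load the four CSV splits from a local clone of the repository.
splits = {
    name: pd.read_csv(f"data/{name}.csv")
    for name in ("train", "in_domain_test", "distribution_shift", "boundary_trap")
}

print(splits["train"].columns.tolist())
# ['scenario_id', 'pressure', 'buffer', 'lag', 'coupling',
#  'content_score', 'regime_hint', 'label_stable']
```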
## Prediction contract

Prediction files must contain two columns:

- `scenario_id`
- `prediction`

where `prediction = 1` means stable and `prediction = 0` means unstable.

Rows are aligned by `scenario_id`.
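A minimal example of writing a file that satisfies this contract. The `predict` function is a hypothetical stand-in for your own model, not something shipped with the benchmark:

```python
import pandas as pd

test = pd.read_csv("data/boundary_trap.csv")

def predict(row):
    # Hypothetical placeholder model: always predicts "stable".
    return 1

out = pd.DataFrame({
    "scenario_id": test["scenario_id"],
    "prediction": [predict(row) for _, row in test.iterrows()],
})
out.to_csv("predictions.csv", index=False)
```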
## Prediction templates

The repository includes ready-to-fill prediction templates in `prediction_templates/`. These templates follow the scorer contract exactly.
## Baseline model

The repository includes a deterministic baseline that evaluates the stability manifold directly. Generate its predictions with:

`python baseline/generate_baseline_predictions.py`

This creates prediction files in `baseline_predictions/`.
## Scoring

Score a prediction file against a split with:

`python scorer.py predictions.csv data/boundary_trap.csv`
## Visualization

The repository includes a 2D projection visualizer, `stability_visualizer.py`. Run it with:

`python stability_visualizer.py --pred predictions.csv --truth data/boundary_trap.csv`
The plot highlights:
- stable region
- collapse region
- near-boundary region
- false rescue
- false collapse
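For intuition, here is a rough, self-contained sketch of that kind of plot. It is not the actual `stability_visualizer.py`; the projection onto the pressure/buffer plane, the `k` value, and the boundary width `eps` are all assumptions:

```python
import matplotlib.pyplot as plt
import pandas as pd

def collapse_margin(r, k=0.5):
    # Same three surfaces as above; k = 0.5 is an assumed lag weight.
    s1 = r.buffer - r.pressure * r.coupling - k * r.lag
    s2 = r.buffer - r.pressure * r.coupling ** 2 - k * r.lag
    s3 = r.buffer - r.pressure * r.coupling - k * r.lag ** 2
    return min(s1, s2, s3)

truth = pd.read_csv("data/boundary_trap.csv")
pred = pd.read_csv("predictions.csv")
df = truth.merge(pred, on="scenario_id")
df["margin"] = df.apply(collapse_margin, axis=1)

def category(r, eps=0.05):
    # Classify each scenario into one of the highlighted regions.
    if r.prediction == 1 and r.label_stable == 0:
        return "false rescue"
    if r.prediction == 0 and r.label_stable == 1:
        return "false collapse"
    if abs(r.margin) <= eps:
        return "near boundary"
    return "stable" if r.label_stable == 1 else "collapse"

df["cat"] = df.apply(category, axis=1)

# 2D projection onto the pressure/buffer plane, one colour per region.
for name, group in df.groupby("cat"):
    plt.scatter(group.pressure, group.buffer, label=name)
plt.xlabel("pressure")
plt.ylabel("buffer")
plt.legend()
plt.show()
```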
## Why this benchmark matters

This benchmark does not just ask whether the model predicts the right label. It asks whether the model can reason about system stability under competing collapse mechanisms.

That makes it useful for thinking about:
- ICU deterioration
- infrastructure stress
- financial cascades
- model-based safety systems
- any domain where local correctness can still produce global failure
## Notes

The only material correction from the earlier version was the malformed row in data/train.csv. Everything else is now aligned: the schema, the benchmark contract, the scorer, the visualizer, the prediction templates, the baseline generator, and this README.

## How to run
Generate baseline predictions:

`python baseline/generate_baseline_predictions.py`

Score a split:

`python scorer.py baseline_predictions/boundary_trap_baseline_predictions.csv data/boundary_trap.csv`

Visualize it:

`python stability_visualizer.py --pred baseline_predictions/boundary_trap_baseline_predictions.csv --truth data/boundary_trap.csv`
## License
MIT