Commit 1f25da6 by ClarusC64 (verified), parent f09199a: Update README.md (README.md, +201 −3)
---
language: en
license: mit
task_categories:
- text-classification
tags:
- stability-analysis
- adversarial-benchmark
- system-dynamics
- collapse-detection
size_categories:
- 1K<n<10K
pretty_name: CASSES State-Space Collapse Benchmark
---
# CASSES — Collapse Analysis in State-Space Evaluation Suite

## Overview

CASSES is a diagnostic benchmark that tests whether machine learning systems can detect instability and collapse in dynamic systems.

Most AI benchmarks evaluate models on tasks such as classification, language generation, or reasoning over static data. CASSES evaluates a different capability: **state-space stability understanding**.

The benchmark tests whether a model can identify when a system is approaching a collapse boundary, using signals derived from system dynamics.

The dataset is intentionally adversarial: several trap structures are included to prevent simple heuristics from solving the task.
## The System Model

Each row represents a simulated dynamic system.

The system is defined by four interacting structural variables:

- pressure
- buffer capacity
- intervention lag
- system coupling

These variables interact non-linearly to determine the stability margin of the system.

From these dynamics the dataset derives observable signals, including:

- `boundary_distance`
- `drift_gradient`
- `drift_acceleration`
- `recovery_feasibility`
- `regime_competition_ratio`

These signals describe where the system sits in stability space and how it is moving through that space.
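The drift signals can be illustrated with a toy trajectory. A minimal sketch, assuming `drift_gradient` and `drift_acceleration` behave like first and second finite differences of `boundary_distance` over time (the actual derivation in `generator.py` may differ):

```python
import numpy as np

# Hypothetical boundary_distance trajectory for one system
# (illustrative values, not taken from the dataset).
boundary_distance = np.array([0.90, 0.82, 0.70, 0.54, 0.34])

# One plausible reading of the derived signals: first and second
# finite differences of the distance to the collapse boundary.
drift_gradient = np.diff(boundary_distance)       # rate of approach
drift_acceleration = np.diff(drift_gradient)      # change in that rate

print(drift_gradient[-1])      # negative: moving toward the boundary
print(drift_acceleration[-1])  # negative: the approach is speeding up
```

A shrinking, accelerating `boundary_distance` like this is the geometric signature of a system heading for collapse.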
## What Models Must Predict

The core prediction task is binary classification of `true_label`, where:

- `0` = the system remains stable
- `1` = the system collapses

Models must infer collapse risk from observed trajectories and derived geometry signals.
## Dataset Structure

The dataset contains two splits:

- `train.csv`
- `tester.csv`

The train split provides examples for model development. The tester split is used for evaluation and includes adversarial trap families designed to test robustness.

Each row includes:

- system observations across time
- derived stability geometry
- intervention counterfactuals
- difficulty labels
- trap annotations
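The splits are plain CSV and can be read with any CSV library. A minimal standard-library sketch (the inline sample stands in for `train.csv`; real rows carry many more columns than shown here):

```python
import csv
import io

# Inline stand-in for train.csv; in practice: open("train.csv", newline="").
sample = io.StringIO(
    "boundary_distance,drift_gradient,true_label\n"
    "0.82,-0.01,0\n"
    "0.21,-0.09,1\n"
)

rows = list(csv.DictReader(sample))
labels = [int(r["true_label"]) for r in rows]
print(labels)  # [0, 1]
```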
## Trap Families

The dataset contains several adversarial trap types. These prevent simple threshold heuristics from solving the task.

### False Stability

Observed signals appear stable while the underlying system state is unstable. Models must detect hidden instability.

### Boundary Masking

Collapse occurs even though the system appears distant from the instability boundary. This tests robustness to misleading boundary signals.

### Trajectory Aliasing

Different trajectories produce similar short-term observations but diverge later. Models must infer the correct long-term trajectory.

### Temporal Alias

Temporal patterns appear stable over short windows but hide acceleration toward collapse.

### Intervention Decoy

Counterfactual interventions appear stabilizing but actually increase collapse risk.
## Counterfactual Intervention Evaluation

Some rows include simulated interventions.

Fields include:

- `intervention_action`
- `intervention_magnitude`
- `boundary_distance_before`
- `boundary_distance_after`
- `intervention_effect_direction`

These rows test whether models understand how interventions change system stability.
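To illustrate how these fields relate, here is a sketch of deriving an effect direction from the before/after boundary distances. The string encoding is an assumption for illustration, not the dataset's definition:

```python
def intervention_effect_direction(before: float, after: float,
                                  tol: float = 1e-9) -> str:
    """Hypothetical reading of the field: an intervention is stabilizing
    when it moves the system farther from the collapse boundary."""
    if after > before + tol:
        return "stabilizing"
    if after < before - tol:
        return "destabilizing"
    return "neutral"

print(intervention_effect_direction(0.30, 0.55))  # stabilizing
print(intervention_effect_direction(0.30, 0.12))  # destabilizing
```

The Intervention Decoy trap exists precisely because this surface-level comparison can mislead: an apparently stabilizing move may still increase collapse risk.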
## Pair Evaluation

Certain rows appear in paired form. Each pair contains:

- `safe_pair`
- `unstable_pair`

The trajectories appear similar but diverge in stability outcome. Models must identify which trajectory leads to collapse.
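A toy pair scorer, purely illustrative (the trajectories and the end-slope rule below are assumptions, not the official pairing logic):

```python
import numpy as np

def pick_unstable(traj_a, traj_b):
    """Flag the trajectory whose boundary distance is shrinking
    faster at the end of the observation window (toy rule)."""
    def end_slope(t):
        return np.diff(np.asarray(t, dtype=float))[-1]
    return "a" if end_slope(traj_a) < end_slope(traj_b) else "b"

safe_pair = [0.80, 0.78, 0.77, 0.77]      # drift flattens out
unstable_pair = [0.80, 0.76, 0.68, 0.52]  # accelerating decline

print(pick_unstable(safe_pair, unstable_pair))  # b
```

Trajectory Aliasing is designed to defeat exactly this kind of short-window rule: the pairs can match over the observed window and diverge only later.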
## Difficulty Levels

Each scenario is assigned a difficulty level:

- `easy`
- `medium`
- `hard`

Difficulty reflects the clarity of the collapse signal and the degree of adversarial masking present.
## Evaluation

Evaluation is performed using the provided scorer.

Metrics include:

- accuracy
- precision
- recall
- F1 score

Additional diagnostic metrics measure performance on specific trap families and system-dynamics features.

The primary composite metric is the **CASSES score**, which summarizes model performance across collapse detection, adversarial traps, and counterfactual reasoning.
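The four headline metrics can be computed from predictions directly. A self-contained sketch (`scorer.py` is the authoritative implementation and may weight trap families differently):

```python
def binary_metrics(y_true, y_pred):
    # Counts over the positive (collapse) class.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# One missed collapse (third row): perfect precision, imperfect recall.
m = binary_metrics([1, 0, 1, 1], [1, 0, 0, 1])
print(m)
```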
## Baseline Results

A simple heuristic baseline achieves approximately:

CASSES score ≈ 0.64

This indicates that the benchmark cannot be solved with trivial rules and requires meaningful reasoning about system dynamics.
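In that spirit, a hypothetical threshold rule (the field names come from the dataset, but the thresholds and the actual logic of `prediction_baseline.py` are assumptions):

```python
def heuristic_baseline(row):
    """Predict collapse when the system is already close to the boundary
    and still drifting toward it (illustrative thresholds)."""
    close = row["boundary_distance"] < 0.4
    drifting_in = row["drift_gradient"] < 0
    return int(close and drifting_in)

print(heuristic_baseline({"boundary_distance": 0.25, "drift_gradient": -0.05}))  # 1
print(heuristic_baseline({"boundary_distance": 0.80, "drift_gradient": -0.05}))  # 0
```

Rules of this shape are exactly what the trap families (False Stability, Boundary Masking) are built to defeat, which is why such heuristics plateau well below a perfect score.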
## Files

- `train.csv` — training scenarios
- `tester.csv` — evaluation scenarios
- `generator.py` — dataset generator
- `prediction_baseline.py` — reference baseline
- `scorer.py` — official evaluation script
## Intended Use

CASSES is intended for:

- evaluation of machine learning models
- research on stability reasoning
- development of system-dynamics-aware AI

The benchmark focuses on instability detection in dynamic systems rather than traditional static classification tasks.

## License

MIT License