---
dataset_info:
  features:
    - name: formula
      dtype: string
    - name: perturbation_type
      dtype: string
    - name: equivalent
      dtype: float64
    - name: original_formula
      dtype: string
    - name: depth
      dtype: int64
  splits:
    - name: train
      num_bytes: 792081714
      num_examples: 3231354
    - name: test
      num_bytes: 13498535
      num_examples: 74129
  download_size: 154751195
  dataset_size: 805580249
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# Dataset Card for stl_updated

## Dataset Description

The stl_updated dataset is a large-scale collection of 3.3 million Signal Temporal Logic (STL) formulae, designed to stress-test and train models (such as Transformer encoders) on recursive understanding, semantic similarity, and syntactic complexity.

The training and test sets are derived from the seed formulae introduced in Candussio (2025), originally hosted in the base dataset saracandu/stl_formulae. These seeds have been aggressively expanded through an object-based augmentation strategy that operates directly on the STL Abstract Syntax Tree (AST).

## Dataset Composition

The dataset is stratified to ensure that the model learns to capture semantic similarity across varying syntactic and quantitative modifications. It contains 3.3M total formulae, broken down into three main categories:

- 10.4% Lexically Complex (Equivalent Semantics): formulae that have been structurally complicated but retain exactly the same truth values and semantics as the seeds.
- 43.4% Parametric Perturbations: formulae whose overarching structure is kept intact, but whose numerical thresholds and temporal bounds have been altered.
- 45.7% Hybrid Items: formulae combining both structural complexity changes and semantic numerical shifts.
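The stratification above can be audited directly from the `perturbation_type` column. A minimal sketch with toy rows (the label strings used here are hypothetical; the dataset's actual values may differ):

```python
from collections import Counter

# Toy rows mimicking the dataset schema; 'lexical', 'parametric', and
# 'hybrid' are assumed label names, not confirmed values.
rows = [
    {"formula": "always_[0,5] (x1 <= 0.5)", "perturbation_type": "lexical",    "equivalent": 1.0},
    {"formula": "always_[0,9] (x1 <= 0.7)", "perturbation_type": "parametric", "equivalent": 0.0},
    {"formula": "not (not (x1 <= 0.5))",    "perturbation_type": "hybrid",     "equivalent": 0.0},
    {"formula": "always_[0,5] (x1 <= 1.1)", "perturbation_type": "parametric", "equivalent": 0.0},
]

counts = Counter(r["perturbation_type"] for r in rows)
fractions = {name: n / len(rows) for name, n in counts.items()}
print(fractions)
```

On the full train split, the same aggregation should recover roughly the 10.4 / 43.4 / 45.7 breakdown.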

## Dataset Construction

To ensure the formulas effectively test recursive understanding, variants are generated using a semantic-preserving recursive augmentation pipeline, followed by semantic-altering perturbations and strict post-serialization checks.

### 1. Semantic-Preserving Recursive Augmentation

The core of the pipeline is a stochastic priority cascade operating on AST nodes. To guarantee structural complexity, the algorithm implements a forcing loop: if a generated variant does not reach a minimum target depth of d ≥ 5, the augmentation is re-attempted until the threshold is met.
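A minimal sketch of the forcing loop, assuming ASTs represented as nested tuples with string leaves; `augment` is a toy stand-in for one pass of the priority cascade, not the pipeline's actual transformation code:

```python
import random

def depth(node):
    """Depth of an AST given as nested tuples ('op', *children), string leaves."""
    if isinstance(node, str):
        return 0
    return 1 + max(depth(child) for child in node[1:])

def augment(node):
    """Toy augmentation pass: wrap a random number of temporal operators
    around the formula (stand-in for the stochastic priority cascade)."""
    for _ in range(random.randint(1, 4)):
        node = (random.choice(["always", "eventually"]), node)
    return node

def force_depth(seed, min_depth=5):
    """Forcing loop: re-attempt augmentation until the variant reaches
    the minimum target depth d >= 5."""
    variant = augment(seed)
    while depth(variant) < min_depth:
        variant = augment(variant)
    return variant

variant = force_depth("x1 <= 0.5")
assert depth(variant) >= 5
```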

Transformations are selected based on the following probability distribution:

| Rule | Probability | Mathematical Transformation |
|---|---|---|
| Time partitioning | 35.0% | op_I φ → op_{I_1} (op_{I_2} φ), where op ∈ {□, ♢} |
| Until nesting | 25.0% | φ → (φ U ψ) or (ψ U φ) |
| Distributivity | 15.0% | □_I (A ∧ B) → □_I A ∧ □_I B (or dual) |
| De Morgan rewriting | 9.9% | (A ∧ B) → ¬(¬A ∨ ¬B) (and dual) |
| Predicate inversion | 8.0% | x_i ≤ θ → ¬(x_i > θ) |
| Temporal identity | 5.0% | φ → op_[0,0] φ, where op ∈ {□, ♢} |
| No change | 2.0% | φ → φ |
| Not-injection | 0.1% | φ → ¬¬φ (if parent ≠ ¬) |

(Note: □ = always, ♢ = eventually, U = until)
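Sampling from this distribution amounts to a weighted choice over the rule set. A minimal sketch using the table's probabilities (the rule identifiers are illustrative names, not the pipeline's actual ones):

```python
import random

# Rules and probabilities from the table above (they sum to 1.0).
RULES = [
    ("time_partitioning",   0.350),
    ("until_nesting",       0.250),
    ("distributivity",      0.150),
    ("de_morgan",           0.099),
    ("predicate_inversion", 0.080),
    ("temporal_identity",   0.050),
    ("no_change",           0.020),
    ("not_injection",       0.001),
]

def pick_rule(rng=random):
    """Draw one transformation rule according to the cascade's weights."""
    names, weights = zip(*RULES)
    return rng.choices(names, weights=weights, k=1)[0]

print(pick_rule())
```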

### 2. Semantic-Altering Perturbations

To create distinct satisfaction boundaries, 65% of the dataset is subjected to semantic shifts. These are applied orthogonally to the structural transformations (e.g., a formula is first logically complicated, then numerically perturbed). The perturbations are split evenly (50/50) between:

- Vibration: multiplicative noise affecting thresholds (±10%) and time bounds (W' ∈ [0.6 · W, 1.8 · W]).
- Shift: additive offsets with Δ ∈ [-6, 6] for thresholds and Δ ∈ [-15, 40] for time bounds.
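The two perturbation families can be sketched as follows. The ranges come from the bullets above; sampling uniformly within them is an assumption, since the card does not state the noise distribution:

```python
import random

def vibrate(threshold, width, rng=random):
    """Vibration sketch: multiplicative noise on a threshold (±10%)
    and on a time-bound width W (factor in [0.6, 1.8])."""
    new_threshold = threshold * rng.uniform(0.9, 1.1)
    new_width = width * rng.uniform(0.6, 1.8)
    return new_threshold, new_width

def shift(threshold, bound, rng=random):
    """Shift sketch: additive offsets, delta in [-6, 6] for thresholds
    and delta in [-15, 40] for time bounds."""
    new_threshold = threshold + rng.uniform(-6, 6)
    new_bound = bound + rng.uniform(-15, 40)
    return new_threshold, new_bound
```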

### 3. Post-Serialization Refinement

After the formulas are serialized, three final operations ensure syntax/logic de-correlation and data integrity:

- Duality Shift (40% probability): temporal operators are replaced with their negated duals via regex-based substitution (e.g., □_I A ⇔ ¬♢_I ¬A).
- Interval Consistency: the algorithm strictly enforces the temporal constraint R > L. Following any perturbation, the upper bound is reset as R' = L' + W', where the new width W' is sampled stochastically.
- Empirical Validation: every formula variant is evaluated against a stochastic dummy signal. This ensures compatibility with the signal's temporal horizon and eliminates undefined robustness values or empty traces caused by excessive nesting or severe parameter shifts.
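The first two refinement steps can be sketched as follows. The textual serialization convention (`always_[a,b] (...)`) and the width-sampling range are assumptions for illustration, not the dataset's confirmed format:

```python
import re
import random

def duality_shift(formula):
    """Duality-shift sketch: rewrite 'always_[a,b] (phi)' as
    'not (eventually_[a,b] (not (phi)))' via regex substitution,
    using the dual law: always phi == not (eventually (not phi))."""
    return re.sub(
        r"always_\[(\d+),(\d+)\]\s*\((.*)\)",
        r"not (eventually_[\1,\2] (not (\3)))",
        formula,
    )

def repair_interval(left, base_width, rng=random):
    """Interval-consistency sketch: enforce R > L by resetting the upper
    bound as R' = L' + W', with W' sampled stochastically (range assumed)."""
    new_width = base_width * rng.uniform(0.6, 1.8)
    return left, left + new_width

print(duality_shift("always_[0,10] (x1 <= 0.5)"))
# -> not (eventually_[0,10] (not (x1 <= 0.5)))
```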