---
license: apache-2.0
language:
  - en
tags:
  - physics-informed-neural-networks
  - pinn
  - pde
  - benchmark
  - evaluation
  - routing
  - causal-loss
  - fourier-neural-operator
  - deeponet
  - scientific-machine-learning
pretty_name: PINNBench
size_categories:
  - 1K<n<10K
task_categories:
  - other
configs:
  - config_name: routing_evaluation
    data_files: results_paper_combined/routing_evaluation/*.json
  - config_name: stage_ablation
    data_files: results_paper_combined/stage_ablation/*.json
  - config_name: meta_router
    data_files: results_paper_combined/meta_router/*.json
  - config_name: regret_bound_validation
    data_files: results_paper_combined/routing_evaluation/regret_bound_validation.json
  - config_name: adaptive_epsilon
    data_files: results_paper_combined/adaptive_epsilon/*.json
  - config_name: external_probes
    data_files: results_paper_combined/{hypino,pinnacle,wang}*/*.json
---

PINNBench

A controlled benchmark for evaluating training-policy selection in hybrid Physics-Informed Neural Network (PINN) and neural-operator solvers. PINNBench accompanies the paper "PINNBench: A Benchmark and Evaluation Study of Training Policy Selection in Hybrid PINN-Operator Solvers" (anonymous submission, NeurIPS 2026 Evaluations and Datasets Track).

What is in this repository

This Hugging Face dataset hosts the benchmark protocol artifacts together with the logged results of 1,539 controlled training runs on 13 PDE configurations spanning 8 equation families. The dataset is not raw simulation data (the PDE reference solutions are analytical and regenerated at runtime); it is the decision-relevant evaluation record that supports the paper's headline claims.

results_paper_combined/
├── routing_evaluation/
│   ├── results.json              # 13-PDE leave-one-out CV (450 routing decisions × 4 selectors)
│   ├── holdout.json              # 3-PDE held-out generalization
│   ├── regret_bound_validation.json   # 33/52 PDE × probe-window pairs
│   └── scaling_ablation.json     # routing accuracy vs. PDE count
├── meta_router/
│   └── results.json              # PDE-aware router (84.6% with 5+7 features)
├── stage_ablation/
│   └── *_ablation.json           # 9 PDEs × 2×2 (Stage1, Stage3) factorial × 10 seeds
├── adaptive_epsilon/
│   └── results.json              # 4 PDEs × 4 conditions × 5 seeds (causal-weight collapse)
├── wang_comparison/
│   └── *_results.json            # Wang-style single-stage causal probe (4 PDEs)
├── hypino_*/                     # HyPINO native, adapter, target-PINN adaptation probes
├── pinnacle_subset_5task/
│   └── results.json              # PINNacle 5-task executable subset
└── statistical_tests.json        # BH-FDR corrected, 27 tests
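To enumerate the logged artifacts before downloading anything, the standard Hub listing API works; a minimal sketch (the repo id matches the loading example below):

from huggingface_hub import list_repo_files

# List every file in the dataset repo, then keep the logged-result JSONs.
files = list_repo_files("PINNBench/pinnbench", repo_type="dataset")
result_files = [
    f for f in files
    if f.startswith("results_paper_combined/") and f.endswith(".json")
]
print(f"{len(result_files)} result files")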

How to load

The recommended access pattern is a direct JSON read; result schemas are documented in MANIFEST.md.

from huggingface_hub import hf_hub_download
import json

# Fetch a single result file into the local HF cache and parse it.
routing_path = hf_hub_download(
    "PINNBench/pinnbench",
    "results_paper_combined/routing_evaluation/results.json",
    repo_type="dataset",
)
with open(routing_path) as f:
    routing = json.load(f)
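
To mirror an entire result family at once instead of fetching files one by one, snapshot_download with a pattern filter is a convenient alternative (a sketch; the pattern below targets the routing-evaluation subtree):

from huggingface_hub import snapshot_download

# Download only the routing_evaluation JSONs into the local HF cache
# and return the root directory of the downloaded snapshot.
local_root = snapshot_download(
    "PINNBench/pinnbench",
    repo_type="dataset",
    allow_patterns="results_paper_combined/routing_evaluation/*.json",
)
print(local_root)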

The datasets-library config aliases (routing_evaluation, stage_ablation, meta_router, regret_bound_validation, adaptive_epsilon, external_probes) are declared in the YAML header above for convenience, as sketched below.
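
For example, assuming the datasets library can flatten these JSON schemas (deeply nested records may be easier to handle via the direct read above):

from datasets import load_dataset

# Resolves the data_files declared for the routing_evaluation config
# in the YAML header of this card.
ds = load_dataset("PINNBench/pinnbench", "routing_evaluation")
print(ds)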

Verification

Every headline claim in the paper is recomputed from these JSONs by the script scripts/verify_claims.py in the source repository. All of the checks below reproduce (PASS):

| Claim | Source artifact | Computed | Paper |
|---|---|---|---|
| Total runs | union | 1,510 enumerable + 29 probes | 1,539 |
| Nested PDE-aware routing accuracy | meta_router/results.json | 84.62% (11/13) | 84.6% |
| Full RF+PDE routing | meta_router/results.json | 61.54% (8/13) | 61.5% |
| Stage-2 compute savings | analytical | 60.0% (K=3, Eₚ=50, E₂=500) | 60% |
| Diagnostic regret bound | regret_bound_validation.json | 33/52 overall, 33/39 if k∈{10,50,100} | 33/52, 33/39 |
| Accuracy–regret dissociation | routing_evaluation/results.json | 77.27× (0.0850 / 0.0011) | 77× |
| Family-macro accuracy (LOPO-F) | routing_evaluation/results.json | 81.2% physics-loss-final (6/8) | 81.2% |
| KdV1D causal effect (PDE residual) | rq1e_kdv_10seed_causal/results.json | 0.0096 / 0.0105 | 0.010 / 0.011 |
| Adaptive-ε mitigation (AdvDiff1D) | adaptive_epsilon/results.json | 0.78 → 0.98 | 0.78 → 0.97 |
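
The analytical Stage-2 savings row is plain arithmetic. Under one consistent reading of the cost model (assuming the selected policy's probe epochs are resumed rather than discarded; the paper defines the exact accounting), the 60% figure falls out as follows:

# Stage-2 compute savings: K = 3 candidate policies, probe budget
# E_p = 50 epochs, full Stage-2 budget E_2 = 500 epochs per policy.
K, E_p, E_2 = 3, 50, 500

full_sweep = K * E_2               # train every candidate to completion: 1500 epochs
routed = K * E_p + (E_2 - E_p)     # probe all three, resume only the winner: 600 epochs
savings = 1 - routed / full_sweep  # 0.60, i.e. the 60% reported above
print(f"{savings:.1%}")            # -> 60.0%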

Reproducibility caveats

  • All training results were computed at the final epoch of each stage (no best-checkpoint selection, no validation set).
  • Evaluation points were regenerated per seed via torch.rand rather than loaded from a fixed grid; per-seed metric variance therefore includes evaluation-point sampling variance.
  • The rq1e_kdv_10seed_causal/results.json summary aggregate has a NaN-mean bug for some fields; per-seed means are recomputed by the verification script (see the sketch below).
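
A minimal sketch of the NaN-aware recomputation the verification script performs; the artifact path (assumed to sit under results_paper_combined/ like the other probes) and the field name per_seed_l2 are illustrative, not the actual schema (see MANIFEST.md):

import json
import numpy as np

with open("results_paper_combined/rq1e_kdv_10seed_causal/results.json") as f:
    results = json.load(f)

# Recompute the summary mean with NaN-aware aggregation instead of trusting
# the stored aggregate; "per_seed_l2" is a hypothetical field name.
per_seed = np.asarray(results["per_seed_l2"], dtype=float)
print(np.nanmean(per_seed))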

Croissant metadata

A validated Croissant 1.0 metadata file is included as croissant.json (RAI fields complete: data collection, preprocessing, annotation protocol, personal/sensitive information, safety, deidentification, fairness, biases, release/maintenance). Validate with:

https://huggingface.co/spaces/JoaquinVanschoren/croissant-checker
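
Programmatic validation is also possible with the mlcroissant reference library (a sketch; parsing the file runs schema validation, so an invalid file raises an error):

import mlcroissant as mlc

# Parsing croissant.json validates it against the Croissant schema.
dataset = mlc.Dataset(jsonld="croissant.json")
print(dataset.metadata.name)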

Citation

The paper is currently in submission to NeurIPS 2026 Evaluations and Datasets Track (double-blind). A BibTeX entry will be added to this dataset card after the review period.

License

Apache-2.0 for code and benchmark artifacts. PDE reference solutions are analytical and original to this work; no third-party data is redistributed.

Contact

During the anonymous review period, all communication is routed through the OpenReview discussion thread of the submission. After camera-ready, the maintainer details will be added here.