# PINNBench
A controlled benchmark for evaluating training-policy selection in hybrid Physics-Informed Neural Network (PINN) and neural-operator solvers. PINNBench accompanies the paper "PINNBench: A Benchmark and Evaluation Study of Training Policy Selection in Hybrid PINN-Operator Solvers" (anonymous submission, NeurIPS 2026 Evaluations and Datasets Track).
## What is in this repository
This Hugging Face dataset hosts the benchmark-protocol artifacts and the logged results of 1,539 controlled training runs on 13 PDE configurations spanning 8 equation families. The dataset is not raw simulation data (PDE reference solutions are analytical and regenerated at runtime); it is the decision-relevant evaluation record that supports the paper's headline claims.
```
results_paper_combined/
├── routing_evaluation/
│   ├── results.json                  # 13-PDE leave-one-out CV (450 routing decisions × 4 selectors)
│   ├── holdout.json                  # 3-PDE held-out generalization
│   ├── regret_bound_validation.json  # 33/52 PDE × probe-window pairs
│   └── scaling_ablation.json         # routing accuracy vs. PDE count
├── meta_router/
│   └── results.json                  # PDE-aware router (84.6% with 5+7 features)
├── stage_ablation/
│   └── *_ablation.json               # 9 PDEs × 2×2 (Stage1, Stage3) factorial × 10 seeds
├── adaptive_epsilon/
│   └── results.json                  # 4 PDEs × 4 conditions × 5 seeds (causal-weight collapse)
├── wang_comparison/
│   └── *_results.json                # Wang-style single-stage causal probe (4 PDEs)
├── hypino_*/                         # HyPINO native, adapter, target-PINN adaptation probes
├── pinnacle_subset_5task/
│   └── results.json                  # PINNacle 5-task executable subset
└── statistical_tests.json            # BH-FDR corrected, 27 tests
```
## How to load
The recommended access pattern is direct JSON read; result schemas are documented in MANIFEST.md.
```python
from huggingface_hub import hf_hub_download
import json

# Download a single result artifact from the dataset repo and parse it.
routing_path = hf_hub_download(
    "PINNBench/pinnbench",
    "results_paper_combined/routing_evaluation/results.json",
    repo_type="dataset",
)
with open(routing_path) as f:
    routing = json.load(f)
```
`datasets`-library config aliases (`routing_evaluation`, `stage_ablation`, `meta_router`, `regret_bound_validation`, `adaptive_epsilon`, `external_probes`) are declared in the YAML header above for convenience.
## Verification
Every headline claim in the paper is recomputed from these JSONs by the script `scripts/verify_claims.py` in the source repository. Reproduced PASS results:
| Claim | Source artifact | Computed | Paper |
|---|---|---|---|
| Total runs | union | 1,510 enumerable + 29 probes | 1,539 |
| Nested PDE-aware routing accuracy | `meta_router/results.json` | 84.62% (11/13) | 84.6% |
| Full RF+PDE routing | `meta_router/results.json` | 61.54% (8/13) | 61.5% |
| Stage-2 compute savings | analytical | 60.0% (K=3, Eₚ=50, E₂=500) | 60% |
| Diagnostic regret bound | `regret_bound_validation.json` | 33/52 overall, 33/39 if k∈{10,50,100} | 33/52, 33/39 |
| Accuracy–regret dissociation | `routing_evaluation/results.json` | 77.27× (0.0850 / 0.0011) | 77× |
| Family-macro accuracy (LOPO-F) | `routing_evaluation/results.json` | 81.2% physics-loss-final (6/8) | 81.2% |
| KdV1D causal effect (PDE residual) | `rq1e_kdv_10seed_causal/results.json` | 0.0096 / 0.0105 | 0.010 / 0.011 |
| Adaptive-ε mitigation (AdvDiff1D) | `adaptive_epsilon/results.json` | 0.78 → 0.98 | 0.78 → 0.97 |
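Two of the analytical rows in the table can be reproduced with a few lines of arithmetic. The sketch below assumes one plausible accounting for the compute-savings row, namely that the winning candidate warm-starts from its probe checkpoint so probe epochs count toward its E₂ budget; that assumption is ours, not stated in the table.

```python
def stage2_savings(k, e_probe, e_full):
    """Fraction of Stage-2 epochs saved by probe-then-commit selection,
    assuming the winner warm-starts from its probe checkpoint."""
    baseline = k * e_full                      # train all K candidates fully
    probed = k * e_probe + (e_full - e_probe)  # K probes + finish the winner
    return 1 - probed / baseline

# Stage-2 compute savings row: K=3, E_probe=50, E_full=500
print(f"{stage2_savings(3, 50, 500):.1%}")  # 60.0%

# Accuracy–regret dissociation row: regret ratio 0.0850 / 0.0011
print(f"{0.0850 / 0.0011:.2f}x")  # 77.27x
```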
## Reproducibility caveats
- All training results were computed at the final epoch of each stage (no best-checkpoint selection, no validation set).
- Evaluation points were regenerated per seed via `torch.rand` rather than loaded from a fixed grid; per-seed metric variance therefore includes evaluation-point sampling variance.
- The `rq1e_kdv_10seed_causal/results.json` summary aggregate has a `nan`-mean bug for some fields; per-seed means are recomputed by the verification script.
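The per-seed recomputation mentioned in the last caveat amounts to averaging over finite entries only. A minimal illustration of that idea (not the verification script itself; the sample values below are hypothetical):

```python
import math

def nan_safe_mean(values):
    """Mean over finite entries only, skipping NaN-corrupted fields."""
    finite = [v for v in values if not math.isnan(v)]
    if not finite:
        raise ValueError("no finite values to average")
    return sum(finite) / len(finite)

# Hypothetical per-seed errors with one corrupted (NaN) entry:
per_seed = [0.0096, float("nan"), 0.0105, 0.0099]
print(round(nan_safe_mean(per_seed), 4))  # 0.01
```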
## Croissant metadata
A validated Croissant 1.0 metadata file is included as `croissant.json` (RAI fields complete: data collection, preprocessing, annotation protocol, personal/sensitive information, safety, deidentification, fairness, biases, release/maintenance). Validate with:
https://huggingface.co/spaces/JoaquinVanschoren/croissant-checker
## Citation
The paper is currently in submission to NeurIPS 2026 Evaluations and Datasets Track (double-blind). A BibTeX entry will be added to this dataset card after the review period.
## License
Apache-2.0 for code and benchmark artifacts. PDE reference solutions are analytical and original to this work; no third-party data is redistributed.
## Contact
During the anonymous review period, all communication is routed through the OpenReview discussion thread of the submission. After camera-ready, the maintainer details will be added here.