Improve dataset card: add task categories, paper link, and sample usage
Hi! I'm Niels from the community science team at Hugging Face. I've updated the dataset card to improve its discoverability and utility:
- Added `task_categories: other` to the YAML metadata.
- Linked the dataset to the associated research paper on the Hugging Face Hub.
- Added a **Sample Usage** section featuring the `vpnls` library (as referenced in the paper) to show how to fit these scaling laws.
- Organized the project and repository links for better visibility.
README.md CHANGED

Three hunks:

- `@@ -1,16 +1,43 @@` — adds `task_categories` to the YAML metadata and inserts the paper link, the project/repository links, and the **Sample Usage** section after the intro.
- `@@ -62,10 +89,6 @@` — removes the standalone `## License` section (`Apache 2.0`), which duplicates the `license: apache-2.0` YAML field.
- `@@ -75,4 +98,4 @@` — touches only the closing fence of the BibTeX block.

The updated file:
---
license: apache-2.0
task_categories:
- other
---

# IsoFLOP Scaling Law Experiments

Curated collection of IsoFLOP curve data from 6 experiments, standardized to a common schema.

This dataset is associated with the paper [Problems with Chinchilla Approach 2: Systematic Biases in IsoFLOP Parabola Fits](https://huggingface.co/papers/2603.22339).

- **Project Page:** [https://openathena.ai/scaling-law-analysis/](https://openathena.ai/scaling-law-analysis/)
- **Data Extraction Code:** [Open-Athena/scaling-law-analysis](https://github.com/Open-Athena/scaling-law-analysis)
- **Fitting & Analysis Library (vpnls):** [Open-Athena/vpnls](https://github.com/Open-Athena/vpnls)

## Sample Usage

The researchers provide the `vpnls` package to fit compute-optimal scaling laws using Variable Projection. Below is an example of how to use it with the scaling data:

```python
import numpy as np
from vpnls.api import fit_vpnls, simulate_isoflop_data

# Generate synthetic data (8 budgets x 16 points = 128 samples)
N, D, L = simulate_isoflop_data(
    alpha=0.34, beta=0.28, A=406.4, B=410.7, E=1.69,  # Chinchilla / Hoffmann et al. 2022
    compute_budgets=np.geomspace(1e17, 1e22, 8), n_points_per_budget=16, noise_std=0,
)

# 2-digit exponent (alpha/beta) precision (~25 ms)
result = fit_vpnls(N, D, L, method="grid", resolution=0.01)
# -> alpha=0.34, beta=0.28, E=1.6900, A=406.40, B=410.70 (recovery is already exact)

# 3-digit precision, 10 processes (~250 ms on an M4 Pro; 4-digit takes ~16 s)
result = fit_vpnls(N, D, L, method="grid", resolution=0.001, num_workers=10)

# L-BFGS-B refinement of the dense grid search above
result = fit_vpnls(N, D, L, method="jax")  # or "scipy"
```
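For context, `simulate_isoflop_data` above presumably draws points from the standard Chinchilla parametric loss of Hoffmann et al. (2022); a minimal sketch of that functional form (the helper name is illustrative, not part of `vpnls`):

```python
def chinchilla_loss(N, D, alpha=0.34, beta=0.28, A=406.4, B=410.7, E=1.69):
    """Chinchilla parametric loss: L(N, D) = E + A / N**alpha + B / D**beta.

    Defaults match the constants used in the simulation above; E is the
    irreducible loss, and the two power-law terms shrink as parameters N
    and training tokens D grow.
    """
    return E + A / N**alpha + B / D**beta
```

With `noise_std=0`, a fit against such data can recover the generating constants exactly, which is why the grid search above reports them back verbatim.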

## Schema

[...]
Marin community scaling ladder experiments: Llama 2 models trained on three datasets (Comma, DCLM, Nemotron). Raw data: vendored CSVs exported from the [Marin W&B project](https://wandb.ai/marin-community/marin/reports/Scaling-Ladders--VmlldzoxNTc0MjM1NQ). Budget is parsed from run names and multiplied by 3 to convert from forward-pass FLOPs (`≈2ND`) to total FLOPs (`≈6ND`); this factor was validated empirically across all runs. "Validation-optimal" runs (which use a different FLOPs convention) are excluded. Loss is `eval/paloma/macro_loss`.
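The budget conversion described above can be sketched as follows; the run-name pattern here is hypothetical (the actual parsing lives in the data-extraction repository):

```python
import re

def total_flops(run_name: str) -> float:
    """Parse a forward-pass FLOPs budget from a run name and convert to total FLOPs.

    Assumes the budget appears as scientific notation in the name, e.g.
    "llama2-dclm-1e19" (hypothetical format).
    """
    m = re.search(r"(\d+(?:\.\d+)?e\d+)", run_name)
    forward_pass = float(m.group(1))  # budget under the ~2ND (forward-only) convention
    return 3.0 * forward_pass         # ~6ND total: forward + backward pass
```

For example, a run budgeted at 1e19 forward-pass FLOPs corresponds to 3e19 total FLOPs.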
## Citation

```bibtex
[...]
  journal={arXiv preprint arXiv:2603.22339},
  year={2026}
}
```