---
license: mit
dataset_info:
  features:
  - name: kappa
    dtype:
      array3_d:
        shape:
        - 101
        - 1424
        - 176
        dtype: float16
  - name: theta
    dtype:
      array2_d:
        shape:
        - 101
        - 5
        dtype: float32
  splits:
  - name: train
    num_bytes: 13108270080
    num_examples: 256
  download_size: 6934979115
  dataset_size: 13108270080
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
|
|
# FAIR Universe - NeurIPS 2025 Weak Lensing Uncertainty Challenge
|
|
This dataset is a Hugging Face mirror of the official training data for the challenge:
|
|
|
|
|
https://www.codabench.org/competitions/8934/
|
|
|
|
|
To ease splitting along the nuisance-parameter axis, the challenge dataset has been reordered as

```
ncosmo, np, ... -> np, ncosmo, ...
```
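In NumPy terms, this reordering is just a swap of the two leading axes. A minimal sketch with placeholder sizes (the trailing axes stand in for the map dimensions; the actual array sizes here are illustrative, not the real data):

```python
import numpy as np

# Illustrative sizes: ncosmo cosmologies, n_nuis nuisance realizations.
ncosmo, n_nuis = 101, 256
arr = np.zeros((ncosmo, n_nuis, 8, 8))

# The reordering swaps the two leading axes so the nuisance
# realization index comes first.
reordered = np.swapaxes(arr, 0, 1)
print(reordered.shape)  # (256, 101, 8, 8)
```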
|
|
|
|
|
To get started:
|
|
|
|
|
```python
import datasets

# Download the training split and enable PyTorch-tensor access
dset = datasets.load_dataset("cosmostat/neurips-wl-challenge")
dset = dset.with_format("torch")

# Each example holds a 'kappa' map stack and a 'theta' parameter table
example = dset["train"][0]
```
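Note that the kappa maps are stored as float16. When computing summary statistics, it can help to upcast first, since accumulating many float16 values loses precision; a minimal sketch with a synthetic map (not the real data):

```python
import numpy as np

# Synthetic float16 map standing in for a single kappa slice.
rng = np.random.default_rng(0)
kappa = rng.normal(size=(64, 64)).astype(np.float16)

# Upcast before reductions to avoid float16 accumulation error.
mean32 = kappa.astype(np.float32).mean()
print(kappa.dtype, mean32.dtype)
```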