---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: kappa
    dtype:
      array2_d:
        shape:
        - 1424
        - 176
        dtype: float16
  - name: theta
    list: float32
    length: 5
  splits:
  - name: train
    num_bytes: 10445568672
    num_examples: 20604
  - name: validation
    num_bytes: 2662595936
    num_examples: 5252
  download_size: 6938697249
  dataset_size: 13108164608
---
# FAIR Universe - NeurIPS 2025 Weak Lensing Uncertainty Challenge
This dataset is an HF mirror of the official training data for the challenge:
https://www.codabench.org/competitions/8934/
This repo contains a preprocessed train/validation split.
The original data has shape `(ncosmo, np, ...)`. This dataset is split along the `np` dimension with a train fraction of 0.8, and the axes are reordered from `(ncosmo, np, ...)` to `(np, ncosmo, ...)`; a sketch of this preprocessing is shown below.
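A minimal sketch of how such a split and axis reordering could be done with NumPy, assuming the original data is a single array of shape `(ncosmo, np, ...)`; the file path is hypothetical and the 0.8 train fraction follows the description above:

```python
import numpy as np

# Hypothetical path: load the original challenge array of shape (ncosmo, np, ...)
kappa = np.load("kappa_original.npy")

# Reorder the leading axes: (ncosmo, np, ...) -> (np, ncosmo, ...)
kappa = np.swapaxes(kappa, 0, 1)

# Split along the (now leading) np dimension with a 0.8 train fraction
n_p = kappa.shape[0]
n_train = int(0.8 * n_p)
kappa_train, kappa_val = kappa[:n_train], kappa[n_train:]

print(kappa_train.shape, kappa_val.shape)
```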
To get started:

```python
import datasets

# Load the train and validation splits in streaming mode
dset_train = datasets.load_dataset("b-remy/neurips-wl-challenge-split", split="train", streaming=True)
dset_val = datasets.load_dataset("b-remy/neurips-wl-challenge-split", split="validation", streaming=True)

# Return PyTorch tensors instead of Python objects
dset_train = dset_train.with_format("torch")
dset_val = dset_val.with_format("torch")

# dset_train is already the train split and is iterable (streaming),
# so take the first example with an iterator
example = next(iter(dset_train))
```
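Each example is a dict containing the `kappa` map and the `theta` values described in the metadata above. A quick shape check, assuming the torch formatting from the snippet above:

```python
# kappa: a (1424, 176) float16 convergence map; theta: a length-5 float32 vector
print(example["kappa"].shape, example["kappa"].dtype)
print(example["theta"].shape, example["theta"].dtype)
```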