
NEO dataset

Preprocessing and loading scripts for the NEO datasets, in two variants: 1 week (TODO) and 1 month. Work inside either the 1 week or the 1 month directory.

Steps:

1. Download raw data

Run ./download_raw_data.py -o raw_data.

For weekly data only, additionally run preprocess_raw_1week_data.py raw_data -o raw_data_fixed. This script does two things:

  • NDVI is only sampled every 16 days; we average two consecutive samples to create an 8-day average.

  • We rename the data from YYYY-MM-DD to YYYY-MM-E (E = 1, 2, 3, 4, depending on the week bucket). This is needed because different sensors sample data in different weeks.
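The two fixes above can be sketched as follows. These are hypothetical helpers for illustration, not the actual preprocess_raw_1week_data.py implementation; the exact week-bucket boundaries used by the real script are an assumption here.

```python
from datetime import date

import numpy as np


def week_bucket_name(d: date) -> str:
    """Map a YYYY-MM-DD date to a YYYY-MM-E name with E in 1..4.
    Assumption: buckets are ~8-day slices of the month, with the
    tail of the month folded into bucket 4."""
    bucket = min((d.day - 1) // 8 + 1, 4)
    return f"{d.year:04d}-{d.month:02d}-{bucket}"


def average_consecutive(samples: list) -> list:
    """Average each pair of consecutive 16-day NDVI arrays to
    approximate an 8-day cadence (hypothetical helper)."""
    return [(a + b) / 2.0 for a, b in zip(samples, samples[1:])]
```

For example, `week_bucket_name(date(2020, 1, 25))` yields `"2020-01-4"`, so all sensors that sampled late January land in the same bucket regardless of the exact day.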

2. Convert from PNG to NPY files

./convert_png_to_npy.py raw_data -o npy_data [--resolution 540 1080]

Converts the raw PNGs to [0, 1]-normalized, 540x1080 .npy files, and inserts NaNs in the invalid regions of the NEO data.

If --resolution is not set, it defaults to 540x1080.
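The core of the conversion can be sketched like this. The sentinel value for invalid pixels (255 here) is an assumption for illustration; convert_png_to_npy.py may detect invalid regions differently.

```python
import numpy as np


def png_to_normalized_npy(img: np.ndarray, invalid_value: int = 255) -> np.ndarray:
    """Normalize an 8-bit grayscale image to [0, 1] and mark invalid
    pixels as NaN. Assumes missing NEO data is encoded with a reserved
    pixel value (invalid_value)."""
    out = img.astype(np.float32) / 255.0
    out[img == invalid_value] = np.nan
    return out
```

Storing invalid regions as NaN (rather than a magic number) lets downstream code detect them with `np.isnan` regardless of the task's value range.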

3. Create train and test set

Run the following commands:

./scripts/symlink_train_val_split.py neo_1month/data -o neo_1month/data_split --mode train_set
./scripts/symlink_train_val_split.py neo_1month/data -o neo_1month/data_split --mode test_set
./scripts/symlink_train_val_split.py neo_1month/data -o neo_1month/data_split --mode train_set_plus_missing # optional
# copy stats files too :)
cp neo_1month/data/.task_statistics.npz neo_1month/data_split/train_set
cp neo_1month/data/.task_statistics.npz neo_1month/data_split/test_set
cp neo_1month/data/.task_statistics.npz neo_1month/data_split/train_set_plus_missing
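The symlink-based split above avoids duplicating the data on disk. A minimal sketch of the idea (hypothetical helper, not the actual symlink_train_val_split.py; the real script's selection logic is not shown here):

```python
from pathlib import Path
from typing import Callable


def symlink_split(src_dir: str, dst_dir: str, is_test: Callable[[str], bool]) -> None:
    """Link every .npy file under src_dir into dst_dir/train_set or
    dst_dir/test_set according to the is_test predicate, so the split
    shares storage with the original data."""
    src = Path(src_dir)
    for f in sorted(src.rglob("*.npy")):
        subset = "test_set" if is_test(f.name) else "train_set"
        link = Path(dst_dir) / subset / f.relative_to(src)
        link.parent.mkdir(parents=True, exist_ok=True)
        link.symlink_to(f.resolve())
```

Note that the hidden .task_statistics.npz files are not matched by the *.npy glob, which is why the cp commands above copy them into each split explicitly.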

4. Run the provided viewer

Run the neo_viewer and data analysis notebook:

  • Set the path to neo_1week/npy_data or neo_1month/npy_data.

5. Use in ML python scripts

Use the NEO reader as in the viewer above:

from pprint import pprint
from neo_reader import MultiTaskDataset, neo_task_types

reader = MultiTaskDataset(dataset_path, handle_missing_data="fill_none", task_types=neo_task_types)
train_data = reader[0:5][0]  # a batch of the first 5 items; [0] is the raw torch data as a dict
pprint(train_data)
>> {
 'AOD': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.223 σ=0.237 NaN!,
 'CHLORA': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'CLD_FR': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'CLD_RD': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.024, 0.996] μ=0.546 σ=0.192 NaN!,
 'CLD_WP': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'COT': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.375 σ=0.220 NaN!,
 'CO_M': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.063, 0.996] μ=0.449 σ=0.168 NaN!,
 'FIRE': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.157, 0.996] μ=0.846 σ=0.148 NaN!,
 'LAI': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.314 σ=0.298 NaN!,
 'LSTD': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.632 σ=0.275 NaN!,
 'LSTD_AN': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.545 σ=0.175 NaN!,
 'LSTN': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.493 σ=0.248 NaN!,
 'LSTN_AN': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.531 σ=0.167 NaN!,
 'NDVI': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.481 σ=0.297 NaN!,
 'NO2': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'OZONE': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'SNOWC': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.775 σ=0.336 NaN!,
 'SST': tensor[5, 540, 1080, 1] n=2916000 (11Mb) NaN!,
 'WV': tensor[5, 540, 1080, 1] n=2916000 (11Mb) x∈[0.004, 0.996] μ=0.408 σ=0.310 NaN!
}
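Every task tensor above contains NaNs (the NaN! markers), so training code must mask them out. A minimal sketch of a NaN-aware loss, shown in NumPy for brevity; the same pattern works on torch tensors with torch.isnan:

```python
import numpy as np


def masked_mse(pred: np.ndarray, target: np.ndarray) -> float:
    """Mean squared error over pixels where the target is valid.
    nanmean skips entries where (pred - target) ** 2 is NaN, i.e.
    the invalid NEO regions."""
    return float(np.nanmean((pred - target) ** 2))
```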