Calculation of MRE

#99
by JobPetrovcic - opened

Could you clarify the MRE evaluation metric? I've observed that the target data (LUTdata) contains zero values, which results in an undefined relative error (|pred-true| / |true|).
To ensure my local validation aligns with the official leaderboard, could you please specify how the evaluation script handles cases where the true value is zero? Thank you.
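To illustrate, here is the naive per-element MRE I was computing locally (a sketch; the official definition may differ):

```python
import numpy as np

# Naive relative error; a sketch, not the official evaluation script.
true = np.array([0.8, 0.5, 0.0])   # LUTdata targets can contain zeros
pred = np.array([0.7, 0.6, 0.1])

with np.errstate(divide='ignore'):
    rel_err = np.abs(pred - true) / np.abs(true)
print(rel_err)  # the last entry is inf because true == 0
```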

Image and Signal Processing • ISP org

Hi JobPetrovcic,
Would you mind telling me which file has zero values in the target data (LUTdata)?
I'm not sure you can run a validation locally, unless you have access to the reference output data, which we are not providing, or unless you generate it yourself. Normally, the procedure to participate in this challenge is as follows:

  1. Take the training data (e.g. ScenarioA/train/train2000.h5) to train your model.
  2. Use your trained emulator to predict the data on the reference input values (e.g. ScenarioA/reference/refInterp.csv for track A1)
  3. Submit your results in the "Results" folder in the requested format (e.g. MyModelName_A1.h5, as described on the main page). Don't forget to add the .json file.
  4. The submitted results are evaluated once a day (12:00 CET), after which you can see the updated leaderboard.
    You can upload more than one model (with different names), and update your models (if the name is the same, the previous version will be replaced by the new one).

To give you more information about how the validation is done:
For scenario A, we take the predicted spectral data (atmospheric transmittances/reflectances) and perform an atmospheric correction on a reference test dataset that none of the participants have access to. We then compare the retrieved surface reflectance against the reference spectrum and calculate the MRE over the entire reference dataset (10,000 samples). Then, we calculate the spectral average of the MRE, avoiding spectral channels where H2O absorption saturates the signal. I can provide you the spectral range values if you wish. As you can see, you probably cannot replicate the validation locally, as you don't have access to the top-of-atmosphere radiance simulations or the reference surface reflectance.
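A rough sketch of that two-stage averaging in Python (my own reconstruction for illustration; the official evaluation also applies the atmospheric correction step first):

```python
import numpy as np

def spectral_mre(pred, true, valid):
    """Mean relative error per channel, then averaged over valid channels.

    A reconstruction for illustration only, not the official script.
    pred, true : (n_samples, n_channels) arrays
    valid      : (n_channels,) boolean mask excluding saturated H2O bands
    """
    with np.errstate(divide='ignore', invalid='ignore'):
        rel_err = np.abs(pred - true) / np.abs(true)  # per sample and channel
    mre_per_channel = rel_err.mean(axis=0)            # average over samples
    return mre_per_channel[valid].mean()              # spectral average

# Tiny synthetic example: uniform 10% error, one all-zero (saturated) channel.
true = np.array([[1.0, 0.0, 2.0],
                 [2.0, 0.0, 4.0]])
pred = 1.1 * true
valid = np.array([True, False, True])   # mask out the zero channel
print(spectral_mre(pred, true, valid))  # ~0.1
```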

Please let us know if you have further questions and we'll try to help

The file train2000.h5 contains zero values:

import h5py

trainFile = 'rtm_emulation/scenarioA/train/train2000.h5'

with h5py.File(trainFile, 'r') as h5_file:
    Ytrain = h5_file['LUTdata'][:]    # target spectral data
    Xtrain = h5_file['LUTheader'][:]  # input variables
    wvl = h5_file['wvl'][:]           # wavelength grid

print((Ytrain == 0).sum())  # 389656 for train2000.h5; 97524 for train500.h5

I wanted to create a local validation setup by splitting Xtrain and Ytrain into a train set and a (local) validation set, and to test my models using the MRE metric on the validation split of Ytrain. The MRE metric is ill-defined in this case because of the zero values.
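Concretely, the split I had in mind looks like this (synthetic stand-in arrays here; the real ones come from train2000.h5, and LUTdata has 25230 columns rather than the 16 used in this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
Xtrain = rng.uniform(size=(2000, 8))   # stand-in for 'LUTheader'
Ytrain = rng.uniform(size=(2000, 16))  # stand-in for 'LUTdata' (really 25230 wide)

# Hold out 20% of the samples as a local validation set.
idx = rng.permutation(len(Xtrain))
n_val = len(Xtrain) // 5
X_val, Y_val = Xtrain[idx[:n_val]], Ytrain[idx[:n_val]]
X_tr, Y_tr = Xtrain[idx[n_val:]], Ytrain[idx[n_val:]]
print(X_tr.shape, X_val.shape)  # (1600, 8) (400, 8)
```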

However, if I understand correctly, in the actual evaluation the MRE is not calculated directly between the true values Y and the predictions resulting from the data in refInterp.csv; a transformation -- atmospheric correction -- is applied before the MRE is calculated. Is my thinking correct?

Image and Signal Processing • ISP org

I've just downloaded the train2000.h5 file from the corresponding folder (https://huggingface.co/datasets/isp-uv-es/rtm_emulation/blob/main/scenarioA/train/train2000.h5) and read the data in Matlab.
Indeed, out of the 50,460,000 elements in 'LUTdata', only 389,656 (i.e. 0.77%) are zeros. These correspond to strictly zero transmittances/reflectances at the bottom of the deep H2O absorption bands. I made a simple plot so that you can see what the spectra look like (see figure below).
Indeed, when calculating relative errors, these bands make the error undefined (Inf). That's why, in our validation script, we filter out certain very deep absorption bands.
For your information, these bands correspond to the following wavelength intervals (Matlab syntax):

idx = ~((wvl > 931 & wvl < 945) |...
        (wvl > 1100 & wvl < 1160) |...
        (wvl > 1300 & wvl < 1500) |...
        (wvl > 1750 & wvl < 1980) | (wvl > 2420));

Here, 'wvl' is the wavelength grid at which the spectral data are provided. Note that the spectral data contain 6 atmospheric radiative transfer functions (transmittances/reflectances), so 4205 wavelengths * 6 = 25230, which is the length of one of the dimensions in LUTdata. You can read the wavelength grid from the training data file. In Python: wvl = h5_file['wvl'][:]
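In Python, the same mask could be built as follows (the wavelength grid here is a synthetic stand-in; read the real one from the training file. I'm also assuming the 6 functions are stacked block-wise when tiling the mask to the full LUTdata width):

```python
import numpy as np

# Stand-in grid; in practice: wvl = h5_file['wvl'][:]  (4205 values)
wvl = np.linspace(350.0, 2500.0, 4205)

# Python translation of the Matlab mask: True = channel kept for the MRE.
valid = ~(((wvl > 931)  & (wvl < 945))  |
          ((wvl > 1100) & (wvl < 1160)) |
          ((wvl > 1300) & (wvl < 1500)) |
          ((wvl > 1750) & (wvl < 1980)) |
          (wvl > 2420))

# LUTdata stacks 6 transfer functions along one axis: 6 * 4205 = 25230.
# Assuming block-wise stacking, tile the per-wavelength mask across them:
valid_full = np.tile(valid, 6)      # length 25230
print(valid.size, valid_full.size)  # 4205 25230
```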

As you correctly understood, you cannot replicate the evaluation done in the challenge, as it involves taking your predictions (obtained by running your emulator on the refInterp.csv data) and performing a transformation (atmospheric correction) before comparison against a reference surface reflectance. However, you could do the following:

  • use train2000.h5 to train your emulator (perhaps you want to split it into train+validation subsets)
  • then use the train500.h5 as testing dataset
  • although you cannot compute the same error metrics as I do, you can still compare (validate) your predictions against the spectral data from train500 and get an idea of whether you're doing things right
  • you could potentially use the baseline_XX.h5 files for comparison, although these are a simple polynomial interpolation, provided mostly so that participants have an example of how the file should look

Please let me know if you have any other questions.
[Attached figure: Sin título.png — example spectra from train2000.h5]
