JulioContrerasH committed
Commit 85fa926 · verified · 1 Parent(s): 7efe90f

first commit

Files changed (1): README.md (+19 −17)
README.md CHANGED
@@ -16,7 +16,7 @@ sdk: static
  sdk_version: "latest"
  pinned: false
  ---
- Last update: 25-04-2025

  <img src="https://elias-ai.eu/wp-content/uploads/2023/09/elias_logo_big-1.png" alt="elias_logo" style="width:15%; display: inline-block; margin-right: 150px;">
  <img src="https://elias-ai.eu/wp-content/uploads/2024/01/EN_FundedbytheEU_RGB_WHITE-Outline-1.png" alt="eu_logo" style="width:20%; display: inline-block;">
@@ -44,11 +44,12 @@ Last update: 25-04-2025

  ## **Benchmark Results**

- | **Model** | **MRE A1 (%)** | **MRE A2 (%)** | **MRE B1 (%)** | **MRE B2 (%)** | **Score** | **Rank** |
- |-----------|---------------|---------------|---------------|---------------|----------|--------|
- | test2 | 0.203 | 4.579 | 0.657 | 3.965 | 1.000 | 1° |
- | test1 | 0.545 | 7.693 | 0.816 | 7.877 | 2.175 | 2° |
- | baseline | 0.998 | 12.604 | 1.065 | 7.072 | 2.825 | 3° |

  ## **Introduction**

@@ -74,7 +75,7 @@ broader exploration of emission pathways aligned with sustainability goals.

  Participants in this challenge will develop emulators trained on provided datasets to predict spectral magnitudes (atmospheric transmittances and reflectances)
  based on input atmospheric and geometric conditions. The challenge is structured around three main tasks: (1) training ML models
- using predefined datasets, (2) predicting outputs for given test conditions, and (3) evaluating emulator performance based on accuracy and runtime.

  ### **Proposed Experiments**

 
@@ -179,8 +180,8 @@ wavelengths <- h5read(file_path, "wvl")
  Xtest <- as.matrix(read.table(file_path, sep = ",", header = TRUE))
  ```

- All data will be shared through a this [huggingface]. After the challenge finishes, participants will also have access to the evaluation scripts on
- [this GitLab](http://to_be_prepared) to ensure transparency and reproducibility.


  ## **Evaluation methodology**
@@ -214,9 +215,10 @@ for 1st place, they will both receive rank 1, and the next participant will be r
  information that would allow them to fine-tune their models. The final results and ranking evaluated with all the validation data will be provided at the end date of the challenge.

  ### **Computational efficiency**
- Participants must report the runtime required to generate predictions across different emulator configurations. To facilitate fair comparisons, they should also provide
- a report with hardware specifications, including: CPU, Parallelization settings (e.g., multi-threading, GPU acceleration), RAM availability.
- Additionally, participants should report key model characteristics, such as the number of operations required for a single prediction and the number of trainable
  parameters in their ML models.

  All evaluation scripts will be publicly available on GitLab and Huggingface to ensure fairness, trustworthiness, and transparency.
@@ -233,9 +235,8 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  Note that only the **`LUTdata`** matrix (i.e., the predictions) is needed. A baseline example of this file is available for participants (`baseline_Sn.h5`).
  We encourage participants to compress their HDF5 files using the deflate option.

- - Each prediction file must be stored in a `predictions` subfolder within the corresponding
- scenario folder (e.g. `/scenarioA/predictions`). The prediction files should be named using the emulator/model name followed by the scenario-track ID
- (e.g. `/scenarioA/predictions/mymodel_A1.h5`). A global attribute named `runtime` must be included to report the
  computational efficiency of your model (value expressed in seconds).
  Note that all predictions for different scenario-tracks should be stored in separate files.

@@ -248,7 +249,8 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  "affiliations": ["affiliation1", "affiliation2"],
  "description": "A brief description of the emulator",
  "url": "[OPTIONAL] URL to the model repository if it is open-source",
- "doi": "DOI to the model publication (if available)"
  }
  ```

@@ -257,7 +259,7 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  - After the deadline, teams will be contacted with their evaluation results. If any issues are identified, teams will have up to two
  weeks to provide the necessary corrections.

- - Questions and discussions will be held in the discussion section of this [repository](https://huggingface.co/isp-uv-es/rtm_emulation/discussions).

  - After all the participants have provided the necessary corrections, the results will be published in the discussion section of this repository.

 
@@ -16,7 +16,7 @@ sdk: static
  sdk_version: "latest"
  pinned: false
  ---
+ Last update: 22-05-2025

  <img src="https://elias-ai.eu/wp-content/uploads/2023/09/elias_logo_big-1.png" alt="elias_logo" style="width:15%; display: inline-block; margin-right: 150px;">
  <img src="https://elias-ai.eu/wp-content/uploads/2024/01/EN_FundedbytheEU_RGB_WHITE-Outline-1.png" alt="eu_logo" style="width:20%; display: inline-block;">
 
@@ -44,11 +44,12 @@ Last update: 25-04-2025

  ## **Benchmark Results**

+ | **Model** | **MRE A1 (%)** | **MRE A2 (%)** | **MRE B1 (%)** | **MRE B2 (%)** | **Score** | **Runtime (s)** | **Rank** |
+ |-----------|---------------|---------------|---------------|---------------|----------|----------|--------|
+ | Hugo | 0.203 | 4.579 | 0.657 | 3.965 | 1.000 | 1.065 | 1° |
+ | Krtek | 0.545 | 7.693 | 0.816 | 7.877 | 2.175 | 0.526 | 2° |
+ | Jasdeep | 0.827 | 28.894 | 1.011 | 35.625 | 3.350 | 1.252 | 3° |
+ | baseline | 0.998 | 12.604 | 1.065 | 7.072 | 3.475 | 2.400 | 4° |
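MRE in the table above presumably stands for mean relative error between predicted and reference spectra, in percent. A minimal Python sketch under that assumption (MRE = 100 · mean(|ŷ − y| / |y|)); the authoritative metric is the one in the organizers' evaluation scripts:

```python
def mre_percent(y_pred, y_ref):
    """Mean relative error in percent between predicted and reference values.

    Assumed formula: 100 * mean(|pred - ref| / |ref|); the official
    evaluation scripts define the authoritative metric.
    """
    if len(y_pred) != len(y_ref):
        raise ValueError("prediction and reference lengths differ")
    rel_errors = [abs(p - r) / abs(r) for p, r in zip(y_pred, y_ref)]
    return 100.0 * sum(rel_errors) / len(rel_errors)

# Toy spectra (hypothetical values, not challenge data)
reference = [0.80, 0.65, 0.42]
predicted = [0.81, 0.64, 0.43]
print(round(mre_percent(predicted, reference), 3))  # 1.723
```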
 
  ## **Introduction**

 
@@ -74,7 +75,7 @@ broader exploration of emission pathways aligned with sustainability goals.

  Participants in this challenge will develop emulators trained on provided datasets to predict spectral magnitudes (atmospheric transmittances and reflectances)
  based on input atmospheric and geometric conditions. The challenge is structured around three main tasks: (1) training ML models
+ using predefined datasets, (2) predicting outputs for given test conditions, and (3) evaluating emulator performance based on accuracy.

  ### **Proposed Experiments**

 
@@ -179,8 +180,8 @@ wavelengths <- h5read(file_path, "wvl")
  Xtest <- as.matrix(read.table(file_path, sep = ",", header = TRUE))
  ```
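For participants working in Python rather than R, the same kind of test-condition table can be read with the standard library; a minimal sketch using an inline sample in place of the distributed CSV file, with hypothetical column names:

```python
import csv
import io

# Hypothetical CSV of test conditions; in practice, open the distributed
# file (e.g. open("test_conditions.csv")) instead of this inline sample.
sample = "SZA,VZA,AOT\n30.0,10.0,0.2\n45.0,0.0,0.35\n"

reader = csv.reader(io.StringIO(sample))
header = next(reader)                          # column names, like header=TRUE in R
Xtest = [[float(v) for v in row] for row in reader]

print(header)   # ['SZA', 'VZA', 'AOT']
print(Xtest)    # [[30.0, 10.0, 0.2], [45.0, 0.0, 0.35]]
```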

+ All data will be shared through this [repository](https://huggingface.co/datasets/isp-uv-es/rtm_emulation/tree/main). After the challenge finishes, participants
+ will also have access to the evaluation scripts on [this GitLab](http://to_be_prepared) to ensure transparency and reproducibility.


  ## **Evaluation methodology**
 
@@ -214,9 +215,10 @@ for 1st place, they will both receive rank 1, and the next participant will be r
  information that would allow them to fine-tune their models. The final results and ranking evaluated with all the validation data will be provided at the end date of the challenge.

  ### **Computational efficiency**
+ Participants must report the runtime required to generate predictions across different emulator configurations. The average runtime of all scenario-track combinations
+ will be calculated and reported in the table. **Runtime won't be taken into account for the final ranking**. After the competition ends, and to facilitate fair
+ comparisons, participants will be requested to provide a report with hardware specifications, including CPU, parallelization settings (e.g., multi-threading, GPU
+ acceleration), and RAM availability. Additionally, participants should report key model characteristics, such as the number of operations required for a single prediction and the number of trainable
  parameters in their ML models.

  All evaluation scripts will be publicly available on GitLab and Huggingface to ensure fairness, trustworthiness, and transparency.
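One simple way to obtain the runtime figure described above is to time the prediction call directly; a sketch using Python's `time.perf_counter`, with a placeholder `predict` function standing in for the actual emulator:

```python
import time

def predict(conditions):
    # Placeholder for the emulator's prediction step (hypothetical).
    return [c * 0.5 for c in conditions]

conditions = list(range(1000))

start = time.perf_counter()
predictions = predict(conditions)
runtime = time.perf_counter() - start  # seconds, suitable for the `runtime` attribute

print(f"runtime: {runtime:.6f} s")
```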
 
@@ -233,9 +235,8 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  Note that only the **`LUTdata`** matrix (i.e., the predictions) is needed. A baseline example of this file is available for participants (`baseline_Sn.h5`).
  We encourage participants to compress their HDF5 files using the deflate option.

+ - Each prediction file must be stored in the `results` folder in this repository. The prediction files should be named using the emulator/model name followed by
+ the scenario-track ID (e.g. `/results/mymodel_A1.h5`). A global attribute named `runtime` must be included to report the
  computational efficiency of your model (value expressed in seconds).
  Note that all predictions for different scenario-tracks should be stored in separate files.
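A prediction file matching the layout above can be written with, for example, the `h5py` package in Python; a sketch with dummy data in place of real predictions (the `LUTdata` dataset name, deflate/gzip compression, and the global `runtime` attribute follow the instructions; the array shape is purely illustrative):

```python
import numpy as np
import h5py

# Dummy predictions standing in for the emulator output
# (shape is illustrative, not the challenge's actual dimensions).
LUTdata = np.random.rand(100, 2101)

with h5py.File("mymodel_A1.h5", "w") as f:
    # Deflate (gzip) compression, as encouraged by the organizers
    f.create_dataset("LUTdata", data=LUTdata, compression="gzip")
    # Global attribute reporting the prediction runtime in seconds
    f.attrs["runtime"] = 12.34
```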
 
 
@@ -248,7 +249,8 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  "affiliations": ["affiliation1", "affiliation2"],
  "description": "A brief description of the emulator",
  "url": "[OPTIONAL] URL to the model repository if it is open-source",
+ "doi": "DOI to the model publication (if available)",
+ "email": "<main_contact_email>"
  }
  ```
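Because the metadata file must parse as valid JSON, it can be generated and checked programmatically; a stdlib sketch restricted to the fields shown above, with placeholder values (the file name is hypothetical):

```python
import json

# Placeholder values; replace with your team's actual details.
metadata = {
    "affiliations": ["affiliation1", "affiliation2"],
    "description": "A brief description of the emulator",
    "url": "https://example.org/mymodel",   # optional
    "doi": "10.0000/placeholder",           # if available
    "email": "contact@example.org",         # must be a quoted JSON string
}

# Write the metadata file for submission
with open("mymodel_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```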
 
 
@@ -257,7 +259,7 @@ All evaluation scripts will be publicly available on GitLab and Huggingface to e
  - After the deadline, teams will be contacted with their evaluation results. If any issues are identified, teams will have up to two
  weeks to provide the necessary corrections.

+ - In case of **problems with the pull request** or invalid submitted files, all discussions will be held in the [discussion board](https://huggingface.co/isp-uv-es/rtm_emulation/discussions).

  - After all the participants have provided the necessary corrections, the results will be published in the discussion section of this repository.