Refactor description in app.py to improve readability
app.py CHANGED

@@ -251,29 +251,72 @@ iface = gr.Interface(
         datatype=["number"] * len(example_result),
     ),
     description="""
-
-
-`y2`,
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+## Objectives
+
+**Minimize `y1`, `y2`, `y3`, and `y4`**
+
+### Correlations
+
+- `y1` and `y2` are correlated
+- `y1` is anticorrelated with `y3`
+- `y2` is anticorrelated with `y3`
+
+### Noise
+
+`y1`, `y2`, and `y3` are stochastic with heteroskedastic, parameter-free
+noise, whereas `y4` is deterministic, but still considered 'black-box'. In
+other words, repeat calls with the same input arguments will result in
+different values for `y1`, `y2`, and `y3`, but the same value for `y4`.
+
+### Objective thresholds
+
+If `y1` is greater than 0.2, the result is considered "bad" no matter how
+good the other values are. If `y2` is greater than 0.7, the result is
+considered "bad" no matter how good the other values are. If `y3` is greater
+than 1800, the result is considered "bad" no matter how good the other
+values are. If `y4` is greater than 40e6, the result is considered "bad" no
+matter how good the other values are.
+
+## Search Space
+
+### Fidelity
+
+`fidelity1` is a fidelity parameter. The lowest fidelity is 0, and the
+highest fidelity is 1. The higher the fidelity, the more expensive the
+evaluation, and the higher the quality.
+
+NOTE: `fidelity1` and `y3` are correlated.
+
+### Constraints
+
+- x<sub>19</sub> < x<sub>20</sub>
+- x<sub>6</sub> + x<sub>15</sub> ≤ 1.0
+
+### Parameter bounds
+
+- 0 ≤ x<sub>i</sub> ≤ 1 for i ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
+14, 15, 16, 17, 18, 19, 20}
+- c<sub>1</sub> ∈ {c1_0, c1_1}
+- c<sub>2</sub> ∈ {c2_0, c2_1}
+- c<sub>3</sub> ∈ {c3_0, c3_1, c3_2}
+- 0 ≤ fidelity1 ≤ 1
+
+## Notion of best
+
+Thresholded Pareto front hypervolume vs. running cost for three different
+budgets, and averaged over 10 search campaigns.
+
+References:
+
+(1) Baird, S. G.; Liu, M.; Sparks, T. D. High-Dimensional Bayesian
+Optimization of 23 Hyperparameters over 100 Iterations for an
+Attention-Based Network to Predict Materials Property: A Case Study on
+CrabNet Using Ax Platform and SAASBO. Computational Materials Science
+2022, 211, 111505. https://doi.org/10.1016/j.commatsci.2022.111505.
+(2) Baird, S. G.; Parikh, J. N.; Sparks, T. D. Materials Science
+Optimization Benchmark Dataset for High-Dimensional, Multi-Objective,
+Multi-Fidelity Optimization of CrabNet Hyperparameters. ChemRxiv March
+7, 2023. https://doi.org/10.26434/chemrxiv-2023-9s6r7.
     """,
 )
 iface.launch()
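The objective thresholds and input constraints in the new description translate directly into simple checks. A minimal sketch, assuming plain-dict inputs and outputs; the names `THRESHOLDS`, `is_bad`, and `satisfies_constraints` are illustrative, not part of `app.py`:

```python
# Hedged sketch of the thresholds and constraints from the description
# above; names are illustrative, not taken from app.py itself.

# A result is "bad" if any single objective exceeds its threshold,
# regardless of how good the other objectives are.
THRESHOLDS = {"y1": 0.2, "y2": 0.7, "y3": 1800.0, "y4": 40e6}


def is_bad(result):
    """True if any objective value exceeds its 'bad' threshold."""
    return any(result[name] > limit for name, limit in THRESHOLDS.items())


def satisfies_constraints(x):
    """Check the two search-space constraints: x19 < x20 and x6 + x15 <= 1.0."""
    return x["x19"] < x["x20"] and x["x6"] + x["x15"] <= 1.0
```

A candidate that satisfies both constraints can still yield a "bad" result; because `y1`, `y2`, and `y3` are stochastic, `is_bad` may return different answers for repeat evaluations of the same input.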
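The "Notion of best" metric is a thresholded Pareto front hypervolume: only points better than the thresholds in every objective contribute, with the thresholds acting as the reference point. As a hedged illustration of the idea only, reduced to two minimized objectives for brevity (the benchmark has four, and exact higher-dimensional hypervolumes are typically computed by an optimization library rather than by hand):

```python
def pareto_front_2d(points):
    """Non-dominated subset of 2-D points under minimization."""
    front, best_y = [], float("inf")
    for x, y in sorted(points):  # ascending in f1, ties broken by f2
        if y < best_y:  # strictly better in f2 than every point to the left
            front.append((x, y))
            best_y = y
    return front


def thresholded_hypervolume_2d(points, ref):
    """Area dominated by `points` and bounded above by reference point `ref`.

    Points that do not beat `ref` in both objectives ("bad" results, when
    `ref` is the threshold pair) contribute nothing.
    """
    pts = pareto_front_2d(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv = 0.0
    for i, (x, y) in enumerate(pts):
        # Width of the strip owned by this front point, times its height.
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        hv += (next_x - x) * (ref[1] - y)
    return hv
```

With the thresholds above as the reference for the `(y1, y2)` pair, for example, `thresholded_hypervolume_2d(observed, ref=(0.2, 0.7))`, where `observed` is a hypothetical list of evaluated `(y1, y2)` pairs.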