Zeel committed
Commit fd4877b · Parent: 2cf6d3f

Update app.py

Files changed (1)
  1. app.py +2 -1
app.py CHANGED
@@ -78,5 +78,6 @@ st.markdown(r"""
  Here are some observations to note while experimenting with the hyperparameters:
  * Lengthscale $\ell$ controls the smoothness of the fit. Smoothness in fit increases with an increase in $\ell$.
  * Variance $\sigma_f^2$ controls the uncertainty in the model (aka epistemic uncertainty). Sometimes it is also called lengthscale in the vertical direction [[Slide 154](http://cbl.eng.cam.ac.uk/pub/Public/Turner/News/imperial-gp-tutorial.pdf)]).
- * Noise variance $\sigma_n^2$ is a measure of observation noise or irreducible noise (aka aleatoric uncertainty) present in the dataset. Increasing noise variance to a certain limit reduces overfitting. One can fix it if known from the data generation process or can be learned during the hyperparameter optimization process.
+ * Noise variance $\sigma_n^2$ is a measure of observation noise or irreducible noise (aka aleatoric uncertainty) present in the dataset. Increasing noise variance to a certain limit reduces overfitting. One can fix it if known from the data generation process or it can be learned during the hyperparameter optimization process.
+ * Negative Log Marginal Likelihood works as a loss function for GP hyperparameter tuning. Though there are advanced tools available for hyperparameter tuning, you can manually optimize them with the sliders above to test your understanding.
  """)