Christina Theodoris
committed on
Commit · 98ce6d7
1 Parent(s): 77eb432
Add note to recommend tuning hyperparameters for downstream applications
examples/cell_classification.ipynb
CHANGED
@@ -176,6 +176,14 @@
    " }"
   ]
  },
+ {
+  "cell_type": "markdown",
+  "id": "beaab7a4-cc13-4e8f-b137-ed18ff7b633c",
+  "metadata": {},
+  "source": [
+   "### Please note that, as usual with deep learning models, we **highly** recommend tuning learning hyperparameters for all fine-tuning applications as this can significantly improve model performance. Example hyperparameters are defined below, but please see the \"hyperparam_optimiz_for_disease_classifier\" script for an example of how to tune hyperparameters for downstream applications."
+  ]
+ },
 {
  "cell_type": "code",
  "execution_count": 19,
@@ -187,7 +195,7 @@
  "# max input size\n",
  "max_input_size = 2 ** 11 # 2048\n",
  "\n",
- "# set training
+ "# set training hyperparameters\n",
  "# max learning rate\n",
  "max_lr = 5e-5\n",
  "# how many pretrained layers to freeze\n",
examples/gene_classification.ipynb
CHANGED
@@ -444,6 +444,13 @@
   "## Fine-Tune With Gene Classification Learning Objective and Quantify Predictive Performance"
  ]
 },
+{
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+  "### Please note that, as usual with deep learning models, we **highly** recommend tuning learning hyperparameters for all fine-tuning applications as this can significantly improve model performance. Example hyperparameters are defined below, but please see the \"hyperparam_optimiz_for_disease_classifier\" script for an example of how to tune hyperparameters for downstream applications."
+ ]
+},
 {
  "cell_type": "code",
  "execution_count": null,
@@ -454,7 +461,7 @@
  "# max input size\n",
  "max_input_size = 2 ** 11 # 2048\n",
  "\n",
- "# set training
+ "# set training hyperparameters\n",
  "# max learning rate\n",
  "max_lr = 5e-5\n",
  "# how many pretrained layers to freeze\n",
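The added note recommends tuning the fine-tuning hyperparameters rather than using the example values verbatim. A minimal, library-free sketch of what such a sweep might look like is shown below; the starting values (`max_input_size`, `max_lr`) come from the notebook cells in this commit, while the candidate lists and the `freeze_layers` default are illustrative assumptions, not values from the repository's "hyperparam_optimiz_for_disease_classifier" script.

```python
import itertools

# Example fine-tuning hyperparameters from the notebook cells in this commit
max_input_size = 2 ** 11  # 2048
max_lr = 5e-5             # max learning rate
freeze_layers = 0         # how many pretrained layers to freeze (assumed default)

# A tiny grid of candidate values one might sweep when tuning.
# These candidate lists are illustrative, not taken from the repository.
grid = {
    "learning_rate": [1e-5, 5e-5, 1e-4],
    "freeze_layers": [0, 2, 4],
}

# Enumerate every combination; each dict is one trial's hyperparameter setting.
candidates = [dict(zip(grid, values)) for values in itertools.product(*grid.values())]
print(len(candidates))  # 9 candidate settings
```

In practice each candidate setting would be passed to one fine-tuning run and scored on a validation split; dedicated tools (e.g. a ray tune-style search, as the referenced script's name suggests) automate this loop.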