avantikalal committed on
Commit
e410f4c
·
1 Parent(s): 2878d99

Upload folder using huggingface_hub (#1)

Browse files

- Upload folder using huggingface_hub (ae1d3d64cb38197827fff3b93c5036f46960246e)

README.md CHANGED
@@ -1,3 +1,71 @@
1
- ---
2
- license: mit
3
- ---
1
+ ---
2
+ license: mit
3
+ library_name: pytorch-lightning
4
+ pipeline_tag: tabular-regression
5
+ tags:
6
+ - biology
7
+ - genomics
8
+ datasets:
9
+ - Genentech/borzoi-data
10
+ ---
11
+
12
+ # Borzoi Model (Linder et al. 2025)
13
+
14
+ ## Model Description
15
+ This repository contains the weights for the Borzoi model, a deep learning model that predicts functional genomic tracks from genomic DNA sequence.
16
+
17
+ - **Architecture:** A convolutional tower followed by transformer layers and a U-net-style upsampling path.
18
+ - **Input:** 524,288 bp of genomic DNA sequence.
19
+ - **Output Resolution:** 32 bp bins.
20
+ - **Source:** [Linder, J. et al. Nature Genetics (2025)](https://www.nature.com/articles/s41588-024-02053-6)
21
+
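The output geometry follows from these numbers. A quick sanity check (the crop of 5,120 bins per side is taken from the `crop_len` setting in the bundled checkpoint-creation notebook):

```python
# Derive the Borzoi output geometry from the figures in the model card.
# crop_bins comes from the bundled notebook's model_params ("crop_len": 5120)
# and refers to bins trimmed from each end of the prediction.
seq_len = 524_288          # input sequence length in bp
bin_size = 32              # output resolution in bp per bin
crop_bins = 5_120          # bins cropped from each end of the prediction

total_bins = seq_len // bin_size             # bins before cropping
output_bins = total_bins - 2 * crop_bins     # bins actually returned
output_span_bp = output_bins * bin_size      # genomic span covered by outputs
print(total_bins, output_bins, output_span_bp)  # → 16384 6144 196608
```

So each forward pass returns predictions for the central 196,608 bp of the 524,288 bp input window.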
22
+
23
+ ## Repository Content
24
+ The repository includes both full PyTorch Lightning checkpoints and weights-only state dictionaries for the human and mouse versions of the model, each with 4 independently trained replicates. The weights are derived from the publication, but the model has been converted into the PyTorch Lightning format used by [gReLU](https://github.com/Genentech/gReLU).
25
+
26
+ | File | Type | Description |
27
+ | :--- | :--- | :--- |
28
+ | `human_rep0.ckpt` | PyTorch Lightning | Full checkpoint including base model and human head, replicate 0. |
29
+ | `human_rep1.ckpt` | PyTorch Lightning | Full checkpoint including base model and human head, replicate 1. |
30
+ | `human_rep2.ckpt` | PyTorch Lightning | Full checkpoint including base model and human head, replicate 2. |
31
+ | `human_rep3.ckpt` | PyTorch Lightning | Full checkpoint including base model and human head, replicate 3. |
32
+ | `mouse_rep0.ckpt` | PyTorch Lightning | Full checkpoint including base model and mouse head, replicate 0. |
33
+ | `mouse_rep1.ckpt` | PyTorch Lightning | Full checkpoint including base model and mouse head, replicate 1. |
34
+ | `mouse_rep2.ckpt` | PyTorch Lightning | Full checkpoint including base model and mouse head, replicate 2. |
35
+ | `mouse_rep3.ckpt` | PyTorch Lightning | Full checkpoint including base model and mouse head, replicate 3. |
36
+ | `human_state_dict_rep0.h5` | HDF5 | Weights-only state dictionary for the human model, replicate 0. |
37
+ | `human_state_dict_rep1.h5` | HDF5 | Weights-only state dictionary for the human model, replicate 1. |
38
+ | `human_state_dict_rep2.h5` | HDF5 | Weights-only state dictionary for the human model, replicate 2. |
39
+ | `human_state_dict_rep3.h5` | HDF5 | Weights-only state dictionary for the human model, replicate 3. |
40
+ | `mouse_state_dict_rep0.h5` | HDF5 | Weights-only state dictionary for the mouse model, replicate 0. |
41
+ | `mouse_state_dict_rep1.h5` | HDF5 | Weights-only state dictionary for the mouse model, replicate 1. |
42
+ | `mouse_state_dict_rep2.h5` | HDF5 | Weights-only state dictionary for the mouse model, replicate 2. |
43
+ | `mouse_state_dict_rep3.h5` | HDF5 | Weights-only state dictionary for the mouse model, replicate 3. |
44
+ | `save_wandb_borzoi_human.ipynb` | Jupyter Notebook | Code used to create human checkpoints |
45
+ | `save_wandb_borzoi_mouse.ipynb` | Jupyter Notebook | Code used to create mouse checkpoints |
46
+
47
+ ## Model Heads & Output Tracks
48
+ The human and mouse `.ckpt` files for the same replicate share the same core transformer trunk but differ in their species-specific output heads.
49
+
50
+ ### Outputs
51
+
52
+ - Human head: 7,611 total tracks
53
+ - Mouse head: 2,608 total tracks
54
+
55
+ ## Usage
56
+ The models are intended for use with the `grelu` library.
57
+
58
+ ```python
59
+ from grelu.lightning import LightningModel
60
+ from huggingface_hub import hf_hub_download
61
+
62
+ # Download the desired checkpoint
63
+ ckpt_path = hf_hub_download(
64
+ repo_id="Genentech/borzoi-model",
65
+ filename="human_rep0.ckpt"
66
+ )
67
+
68
+ # Load the model
69
+ model = LightningModel.load_from_checkpoint(ckpt_path)
70
+ model.eval()
71
+ ```
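For the weights-only `.h5` files, a lighter-weight loading path is possible. The sketch below assumes, based on the bundled checkpoint-creation notebook (which reads these files with `torch.load`), that the state dictionaries are torch-serialized despite the `.h5` extension; the round trip is demonstrated with a stand-in module rather than the real Borzoi architecture:

```python
import torch
import torch.nn as nn

# Stand-in module; with the real files you would instantiate the Borzoi
# architecture via grelu and load e.g. "human_state_dict_rep0.h5" instead.
model = nn.Linear(4, 2)
torch.save(model.state_dict(), "demo_state_dict.h5")

# weights_only=True restricts unpickling to tensors and plain containers,
# avoiding arbitrary code execution from untrusted checkpoint files.
state_dict = torch.load("demo_state_dict.h5", weights_only=True)
model.load_state_dict(state_dict)  # strict key matching by default
print(sorted(state_dict.keys()))   # → ['bias', 'weight']
```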
human_rep0.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:dd294dd588f0d9b481a10bc25dd56ff4cc1c3ba58b7f9f92aacd072fe6b5b2ae
3
+ size 746371522
human_rep1.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0282dc7be3a5ed92245918aef186ab51fda21b51ef067f0690f3479c5943f1b5
3
+ size 746371522
human_rep2.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:24020ec9b946e514fa5c5a64021a1a4996849c1cab56f2d1be08dd32ae5ca777
3
+ size 746371522
human_rep3.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:41921e517e5c0cbb19b0ad8a9715dd629eca0bd2554ae82baf0dd91a5174996d
3
+ size 746371522
human_state_dict_rep0.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:de8c55686d8b2be8cf19c471910cc07b7d362a45fdcfefe974251975605ed0e7
3
+ size 743755470
human_state_dict_rep1.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cbd34d350f8151e990176a31c1aff4df8d9aef5c0ed5a230e7ae772fb3bd0caa
3
+ size 743755470
human_state_dict_rep2.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2fb9b3f487028d3d211503e5695fd5207516760ada8e4a9d5489b4175ebf096b
3
+ size 743755470
human_state_dict_rep3.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:271dc415cafbd9839c46bc86a81db21676c331c6542deaa8b4a5885eda3eb66d
3
+ size 743755470
mouse_rep0.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9c8aeaa8633d1405913afe95077dd78c5a269100ee2e61c47518f1fa4b102b57
3
+ size 706678018
mouse_rep1.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cff046c696ef28bc3763607e6deb6bdd5d0a17e9c0a1d5ef9bccb119285e506f
3
+ size 706678018
mouse_rep2.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:586c376264f599d5b3b780fb9c10b9fe4cfc4655aee2d4f8c0d14155a21318c6
3
+ size 706678018
mouse_rep3.ckpt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:ec01f16f0aa80d99f31c00191c7028c6e79c03fd7a22e83f368881ad60004199
3
+ size 706678018
mouse_state_dict_rep0.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b5b7ec58e3496126188fa267a362d57bb9f5dde766d30407eb7426d0cb486bcd
3
+ size 705326768
mouse_state_dict_rep1.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8756f3231fa0ce3ea947101c5315b818c82ab33738abbc83c810439cf934a6b2
3
+ size 705326768
mouse_state_dict_rep2.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7e957c9cb7ec0cdad719bfe6a2c2684659d5219a75b2173d2c4a3e872185aad8
3
+ size 705326768
mouse_state_dict_rep3.h5 ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:efdbbeb932f06982c5af3e07b4b1fdfbda709f90dbdde483000faf1a06220964
3
+ size 705326768
save_wandb_borzoi_ckpt_human.ipynb ADDED
@@ -0,0 +1,684 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "id": "c31446f3-a976-42a0-9dfb-bc873049464b",
7
+ "metadata": {},
8
+ "outputs": [
9
+ {
10
+ "name": "stderr",
11
+ "output_type": "stream",
12
+ "text": [
13
+ "/opt/conda/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
14
+ " from .autonotebook import tqdm as notebook_tqdm\n"
15
+ ]
16
+ }
17
+ ],
18
+ "source": [
19
+ "import wandb\n",
20
+ "import torch\n",
21
+ "import os\n",
22
+ "import pandas as pd\n",
23
+ "from grelu.lightning import LightningModel\n",
24
+ "import pytorch_lightning as pl\n",
25
+ "from grelu.sequence.utils import get_unique_length, resize"
26
+ ]
27
+ },
28
+ {
29
+ "cell_type": "markdown",
30
+ "id": "45e5daae-fe31-4cc7-b876-683768acd3cf",
31
+ "metadata": {},
32
+ "source": [
33
+ "## set up wandb"
34
+ ]
35
+ },
36
+ {
37
+ "cell_type": "code",
38
+ "execution_count": 2,
39
+ "id": "18c561ab-ba82-4718-abe9-96c44133b314",
40
+ "metadata": {},
41
+ "outputs": [
42
+ {
43
+ "name": "stderr",
44
+ "output_type": "stream",
45
+ "text": [
46
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.\n",
47
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mavantikalal\u001b[0m (\u001b[33mgrelu\u001b[0m) to \u001b[32mhttps://api.wandb.ai\u001b[0m. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n"
48
+ ]
49
+ },
50
+ {
51
+ "data": {
52
+ "text/plain": [
53
+ "True"
54
+ ]
55
+ },
56
+ "execution_count": 2,
57
+ "metadata": {},
58
+ "output_type": "execute_result"
59
+ }
60
+ ],
61
+ "source": [
62
+ "wandb.login(host=\"https://api.wandb.ai\")"
63
+ ]
64
+ },
65
+ {
66
+ "cell_type": "code",
67
+ "execution_count": 3,
68
+ "id": "48cf7ec6-1b32-4f96-a0f1-c7ffb1cb5fcf",
69
+ "metadata": {},
70
+ "outputs": [
71
+ {
72
+ "data": {
73
+ "text/html": [
74
+ "Tracking run with wandb version 0.19.7"
75
+ ],
76
+ "text/plain": [
77
+ "<IPython.core.display.HTML object>"
78
+ ]
79
+ },
80
+ "metadata": {},
81
+ "output_type": "display_data"
82
+ },
83
+ {
84
+ "data": {
85
+ "text/html": [
86
+ "Run data is saved locally in <code>/code/github/gReLU-applications/borzoi/wandb/run-20250306_055353-qtz95621</code>"
87
+ ],
88
+ "text/plain": [
89
+ "<IPython.core.display.HTML object>"
90
+ ]
91
+ },
92
+ "metadata": {},
93
+ "output_type": "display_data"
94
+ },
95
+ {
96
+ "data": {
97
+ "text/html": [
98
+ "Syncing run <strong><a href='https://wandb.ai/grelu/borzoi/runs/qtz95621' target=\"_blank\">copy-human</a></strong> to <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/developer-guide' target=\"_blank\">docs</a>)<br>"
99
+ ],
100
+ "text/plain": [
101
+ "<IPython.core.display.HTML object>"
102
+ ]
103
+ },
104
+ "metadata": {},
105
+ "output_type": "display_data"
106
+ },
107
+ {
108
+ "data": {
109
+ "text/html": [
110
+ " View project at <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">https://wandb.ai/grelu/borzoi</a>"
111
+ ],
112
+ "text/plain": [
113
+ "<IPython.core.display.HTML object>"
114
+ ]
115
+ },
116
+ "metadata": {},
117
+ "output_type": "display_data"
118
+ },
119
+ {
120
+ "data": {
121
+ "text/html": [
122
+ " View run at <a href='https://wandb.ai/grelu/borzoi/runs/qtz95621' target=\"_blank\">https://wandb.ai/grelu/borzoi/runs/qtz95621</a>"
123
+ ],
124
+ "text/plain": [
125
+ "<IPython.core.display.HTML object>"
126
+ ]
127
+ },
128
+ "metadata": {},
129
+ "output_type": "display_data"
130
+ }
131
+ ],
132
+ "source": [
133
+ "run = wandb.init(\n",
134
+ " entity='grelu', project='borzoi', job_type='copy', name='copy-human',\n",
135
+ " settings=wandb.Settings(\n",
136
+ " program_relpath='save_wandb_borzoi_ckpt_human.ipynb',\n",
137
+ " program_abspath='/code/github/gReLU-applications/borzoi/save_wandb_borzoi_ckpt_human.ipynb'\n",
138
+ " ))"
139
+ ]
140
+ },
141
+ {
142
+ "cell_type": "markdown",
143
+ "id": "de1c87dc-cacf-48f0-a081-5260e325cb7b",
144
+ "metadata": {},
145
+ "source": [
146
+ "## Load tasks and intervals"
147
+ ]
148
+ },
149
+ {
150
+ "cell_type": "code",
151
+ "execution_count": 4,
152
+ "id": "556b9768-f3e4-466a-b097-0b3226e788dd",
153
+ "metadata": {},
154
+ "outputs": [],
155
+ "source": [
156
+ "tasks_path = '/code/borzoi/data/targets_human.txt.gz'"
157
+ ]
158
+ },
159
+ {
160
+ "cell_type": "code",
161
+ "execution_count": 5,
162
+ "id": "f8616dde-2567-4a3c-8fb9-de412d342078",
163
+ "metadata": {},
164
+ "outputs": [
165
+ {
166
+ "name": "stdout",
167
+ "output_type": "stream",
168
+ "text": [
169
+ "7611\n"
170
+ ]
171
+ }
172
+ ],
173
+ "source": [
174
+ "tasks = pd.read_table(tasks_path, index_col=0)\n",
175
+ "print(len(tasks))"
176
+ ]
177
+ },
178
+ {
179
+ "cell_type": "code",
180
+ "execution_count": 6,
181
+ "id": "0bf621ee-bd42-4299-98b1-d4f0e5ebb6ec",
182
+ "metadata": {},
183
+ "outputs": [
184
+ {
185
+ "name": "stderr",
186
+ "output_type": "stream",
187
+ "text": [
188
+ "\u001b[34m\u001b[1mwandb\u001b[0m: 1 of 1 files downloaded. \n"
189
+ ]
190
+ }
191
+ ],
192
+ "source": [
193
+ "artifact = run.use_artifact('human_intervals:latest')\n",
194
+ "dir = artifact.download()\n",
195
+ "intervals = pd.read_table(os.path.join(dir, \"data.tsv\"))"
196
+ ]
197
+ },
198
+ {
199
+ "cell_type": "markdown",
200
+ "id": "882d78ab-832c-4166-b130-91e37cda7327",
201
+ "metadata": {},
202
+ "source": [
203
+ "## Process tasks"
204
+ ]
205
+ },
206
+ {
207
+ "cell_type": "code",
208
+ "execution_count": 7,
209
+ "id": "66f24ff2-74bc-41ee-9cc9-5cf9e75c2ebf",
210
+ "metadata": {},
211
+ "outputs": [
212
+ {
213
+ "data": {
214
+ "text/html": [
215
+ "<div>\n",
216
+ "<style scoped>\n",
217
+ " .dataframe tbody tr th:only-of-type {\n",
218
+ " vertical-align: middle;\n",
219
+ " }\n",
220
+ "\n",
221
+ " .dataframe tbody tr th {\n",
222
+ " vertical-align: top;\n",
223
+ " }\n",
224
+ "\n",
225
+ " .dataframe thead th {\n",
226
+ " text-align: right;\n",
227
+ " }\n",
228
+ "</style>\n",
229
+ "<table border=\"1\" class=\"dataframe\">\n",
230
+ " <thead>\n",
231
+ " <tr style=\"text-align: right;\">\n",
232
+ " <th></th>\n",
233
+ " <th>name</th>\n",
234
+ " <th>file</th>\n",
235
+ " <th>clip</th>\n",
236
+ " <th>clip_soft</th>\n",
237
+ " <th>scale</th>\n",
238
+ " <th>sum_stat</th>\n",
239
+ " <th>strand_pair</th>\n",
240
+ " <th>description</th>\n",
241
+ " <th>assay</th>\n",
242
+ " <th>sample</th>\n",
243
+ " </tr>\n",
244
+ " </thead>\n",
245
+ " <tbody>\n",
246
+ " <tr>\n",
247
+ " <th>0</th>\n",
248
+ " <td>CNhs10608+</td>\n",
249
+ " <td>/home/drk/tillage/datasets/human/cage/fantom/C...</td>\n",
250
+ " <td>768</td>\n",
251
+ " <td>384</td>\n",
252
+ " <td>1.0</td>\n",
253
+ " <td>sum</td>\n",
254
+ " <td>1</td>\n",
255
+ " <td>CAGE:Clontech Human Universal Reference Total ...</td>\n",
256
+ " <td>CAGE</td>\n",
257
+ " <td>Clontech Human Universal Reference Total RNA, ...</td>\n",
258
+ " </tr>\n",
259
+ " <tr>\n",
260
+ " <th>1</th>\n",
261
+ " <td>CNhs10608-</td>\n",
262
+ " <td>/home/drk/tillage/datasets/human/cage/fantom/C...</td>\n",
263
+ " <td>768</td>\n",
264
+ " <td>384</td>\n",
265
+ " <td>1.0</td>\n",
266
+ " <td>sum</td>\n",
267
+ " <td>0</td>\n",
268
+ " <td>CAGE:Clontech Human Universal Reference Total ...</td>\n",
269
+ " <td>CAGE</td>\n",
270
+ " <td>Clontech Human Universal Reference Total RNA, ...</td>\n",
271
+ " </tr>\n",
272
+ " </tbody>\n",
273
+ "</table>\n",
274
+ "</div>"
275
+ ],
276
+ "text/plain": [
277
+ " name file clip \\\n",
278
+ "0 CNhs10608+ /home/drk/tillage/datasets/human/cage/fantom/C... 768 \n",
279
+ "1 CNhs10608- /home/drk/tillage/datasets/human/cage/fantom/C... 768 \n",
280
+ "\n",
281
+ " clip_soft scale sum_stat strand_pair \\\n",
282
+ "0 384 1.0 sum 1 \n",
283
+ "1 384 1.0 sum 0 \n",
284
+ "\n",
285
+ " description assay \\\n",
286
+ "0 CAGE:Clontech Human Universal Reference Total ... CAGE \n",
287
+ "1 CAGE:Clontech Human Universal Reference Total ... CAGE \n",
288
+ "\n",
289
+ " sample \n",
290
+ "0 Clontech Human Universal Reference Total RNA, ... \n",
291
+ "1 Clontech Human Universal Reference Total RNA, ... "
292
+ ]
293
+ },
294
+ "execution_count": 7,
295
+ "metadata": {},
296
+ "output_type": "execute_result"
297
+ }
298
+ ],
299
+ "source": [
300
+ "tasks=tasks.reset_index(drop=True)\n",
301
+ "tasks[\"assay\"] = tasks[\"description\"].apply(lambda x: x.split(\":\")[0])\n",
302
+ "tasks[\"assay\"].value_counts()\n",
303
+ "tasks[\"sample\"] = tasks[\"description\"].apply(lambda x: \":\".join(x.split(\":\")[1:]))\n",
304
+ "tasks = tasks.rename(columns={\"identifier\":\"name\"})\n",
305
+ "tasks.head(2)"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": 8,
311
+ "id": "4f96ecc7-238e-4707-a3af-64b0d1828bd4",
312
+ "metadata": {},
313
+ "outputs": [],
314
+ "source": [
315
+ "tasks = tasks.to_dict(orient=\"list\")"
316
+ ]
317
+ },
318
+ {
319
+ "cell_type": "markdown",
320
+ "id": "3056ecfa-1b75-46b7-9e36-4e72c4cb3618",
321
+ "metadata": {},
322
+ "source": [
323
+ "## Process intervals"
324
+ ]
325
+ },
326
+ {
327
+ "cell_type": "code",
328
+ "execution_count": 9,
329
+ "id": "86aae057-b106-4884-81b3-d1aa563ad4f7",
330
+ "metadata": {},
331
+ "outputs": [],
332
+ "source": [
333
+ "train_intervals = intervals[intervals.split=='train'].iloc[:, :3]\n",
334
+ "val_intervals = intervals[intervals.split=='val'].iloc[:, :3]\n",
335
+ "test_intervals = intervals[intervals.split=='test'].iloc[:, :3]\n",
336
+ "del intervals"
337
+ ]
338
+ },
339
+ {
340
+ "cell_type": "markdown",
341
+ "id": "e4bd10a9-40c2-4299-aa74-6474ea5a71ef",
342
+ "metadata": {},
343
+ "source": [
344
+ "## Initialize model"
345
+ ]
346
+ },
347
+ {
348
+ "cell_type": "code",
349
+ "execution_count": 10,
350
+ "id": "06dba49d-11ba-471b-83d7-05fe45a9bd5a",
351
+ "metadata": {},
352
+ "outputs": [],
353
+ "source": [
354
+ "model_params = {\n",
355
+ " \"model_type\":\"BorzoiModel\",\n",
356
+ " \"n_tasks\":7611,\n",
357
+ " \"final_act_func\": 'softplus',\n",
358
+ " \"final_pool_func\":None,\n",
359
+ " \"crop_len\":5120,\n",
360
+ "}\n",
361
+ "train_params={\n",
362
+ " \"task\":\"regression\",\n",
363
+ " \"loss\":\"mse\",\n",
364
+ "}\n",
365
+ "lm = LightningModel(model_params, train_params)"
366
+ ]
367
+ },
368
+ {
369
+ "cell_type": "markdown",
370
+ "id": "ab6a3055-d338-40f0-9fd3-f8f3134df643",
371
+ "metadata": {},
372
+ "source": [
373
+ "## Save checkpoints"
374
+ ]
375
+ },
376
+ {
377
+ "cell_type": "code",
378
+ "execution_count": 11,
379
+ "id": "bbccda9a-7a62-4472-9c0b-34abf98a7e1c",
380
+ "metadata": {},
381
+ "outputs": [
382
+ {
383
+ "name": "stdout",
384
+ "output_type": "stream",
385
+ "text": [
386
+ "0\n"
387
+ ]
388
+ },
389
+ {
390
+ "name": "stderr",
391
+ "output_type": "stream",
392
+ "text": [
393
+ "/tmp/ipykernel_3881069/2231986157.py:16: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
394
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/fold{rep}.h5\")\n",
395
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
396
+ "GPU available: True (cuda), used: True\n",
397
+ "TPU available: False, using: 0 TPU cores\n",
398
+ "HPU available: False, using: 0 HPUs\n",
399
+ "You are using a CUDA device ('NVIDIA A100-SXM4-80GB') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision\n",
400
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n"
401
+ ]
402
+ },
403
+ {
404
+ "name": "stdout",
405
+ "output_type": "stream",
406
+ "text": [
407
+ "1\n"
408
+ ]
409
+ },
410
+ {
411
+ "name": "stderr",
412
+ "output_type": "stream",
413
+ "text": [
414
+ "/tmp/ipykernel_3881069/2231986157.py:16: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
415
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/fold{rep}.h5\")\n",
416
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
417
+ "GPU available: True (cuda), used: True\n",
418
+ "TPU available: False, using: 0 TPU cores\n",
419
+ "HPU available: False, using: 0 HPUs\n",
420
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n"
421
+ ]
422
+ },
423
+ {
424
+ "name": "stdout",
425
+ "output_type": "stream",
426
+ "text": [
427
+ "2\n"
428
+ ]
429
+ },
430
+ {
431
+ "name": "stderr",
432
+ "output_type": "stream",
433
+ "text": [
434
+ "/tmp/ipykernel_3881069/2231986157.py:16: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
435
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/fold{rep}.h5\")\n",
436
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
437
+ "GPU available: True (cuda), used: True\n",
438
+ "TPU available: False, using: 0 TPU cores\n",
439
+ "HPU available: False, using: 0 HPUs\n",
440
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n"
441
+ ]
442
+ },
443
+ {
444
+ "name": "stdout",
445
+ "output_type": "stream",
446
+ "text": [
447
+ "3\n"
448
+ ]
449
+ },
450
+ {
451
+ "name": "stderr",
452
+ "output_type": "stream",
453
+ "text": [
454
+ "/tmp/ipykernel_3881069/2231986157.py:16: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
455
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/fold{rep}.h5\")\n",
456
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
457
+ "GPU available: True (cuda), used: True\n",
458
+ "TPU available: False, using: 0 TPU cores\n",
459
+ "HPU available: False, using: 0 HPUs\n",
460
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n"
461
+ ]
462
+ }
463
+ ],
464
+ "source": [
465
+ "for rep in range(4):\n",
466
+ " print(rep)\n",
467
+ " \n",
468
+ " lm.data_params[\"tasks\"] = tasks\n",
469
+ " lm.data_params[\"train\"] = dict()\n",
470
+ " lm.data_params[\"val\"] = dict()\n",
471
+ " lm.data_params[\"test\"] = dict()\n",
472
+ " lm.data_params[\"train\"][\"seq_len\"] = 524288\n",
473
+ " lm.data_params[\"train\"][\"label_len\"] = 6144*32\n",
474
+ " lm.data_params[\"train\"][\"genome\"] = \"hg38\"\n",
475
+ " lm.data_params[\"train\"][\"bin_size\"] = 32\n",
476
+ " lm.data_params[\"train\"][\"intervals\"] = train_intervals.to_dict(orient='list')\n",
477
+ " lm.data_params[\"val\"][\"intervals\"] = val_intervals.to_dict(orient='list')\n",
478
+ " lm.data_params[\"test\"][\"intervals\"] = test_intervals.to_dict(orient='list')\n",
479
+ "\n",
480
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/fold{rep}.h5\")\n",
481
+ " lm.model.load_state_dict(state_dict)\n",
482
+ " \n",
483
+ " trainer = pl.Trainer()\n",
484
+ " try:\n",
485
+ " trainer.predict(lm)\n",
486
+ " except:\n",
487
+ " pass\n",
488
+ "\n",
489
+ " trainer.save_checkpoint(f'/data/borzoi/torch_weights/human_rep{rep}.ckpt')"
490
+ ]
491
+ },
492
+ {
493
+ "cell_type": "markdown",
494
+ "id": "4b5fbc09-abca-4cb7-a8b7-7eeba7b39cf8",
495
+ "metadata": {},
496
+ "source": [
497
+ "## Upload to wandb"
498
+ ]
499
+ },
500
+ {
501
+ "cell_type": "code",
502
+ "execution_count": 12,
503
+ "id": "c53a69b4-9092-44b9-b8a9-20b72aa8b511",
504
+ "metadata": {},
505
+ "outputs": [],
506
+ "source": [
507
+ "metadata={\n",
508
+ " 'model_params':lm.model_params, \n",
509
+ " 'train_params':lm.train_params, \n",
510
+ " 'data_params':lm.data_params\n",
511
+ " }"
512
+ ]
513
+ },
514
+ {
515
+ "cell_type": "code",
516
+ "execution_count": 13,
517
+ "id": "2a25b878-ca1f-4872-9549-ab502975ca2e",
518
+ "metadata": {},
519
+ "outputs": [
520
+ {
521
+ "name": "stdout",
522
+ "output_type": "stream",
523
+ "text": [
524
+ "0\n"
525
+ ]
526
+ },
527
+ {
528
+ "name": "stderr",
529
+ "output_type": "stream",
530
+ "text": [
531
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
532
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
533
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
534
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
535
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
536
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n"
537
+ ]
538
+ },
539
+ {
540
+ "name": "stdout",
541
+ "output_type": "stream",
542
+ "text": [
543
+ "1\n"
544
+ ]
545
+ },
546
+ {
547
+ "name": "stderr",
548
+ "output_type": "stream",
549
+ "text": [
550
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
551
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
552
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
553
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
554
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
555
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n"
556
+ ]
557
+ },
558
+ {
559
+ "name": "stdout",
560
+ "output_type": "stream",
561
+ "text": [
562
+ "2\n"
563
+ ]
564
+ },
565
+ {
566
+ "name": "stderr",
567
+ "output_type": "stream",
568
+ "text": [
569
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
570
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
571
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
572
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
573
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
574
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n"
575
+ ]
576
+ },
577
+ {
578
+ "name": "stdout",
579
+ "output_type": "stream",
580
+ "text": [
581
+ "3\n"
582
+ ]
583
+ },
584
+ {
585
+ "name": "stderr",
586
+ "output_type": "stream",
587
+ "text": [
588
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
589
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
590
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
591
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
592
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n",
593
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 351064 bytes\n"
594
+ ]
595
+ }
596
+ ],
597
+ "source": [
598
+ "for rep in range(4):\n",
599
+ " print(rep)\n",
600
+ " artifact = wandb.Artifact(\n",
601
+ " f'human_rep{rep}', \n",
602
+ " type='model',\n",
603
+ " metadata=metadata,\n",
604
+ " )\n",
605
+ " artifact.add_file(local_path=f'/data/borzoi/torch_weights/human_rep{rep}.ckpt', name='model.ckpt')\n",
606
+ " run.log_artifact(artifact)"
607
+ ]
608
+ },
609
+ {
610
+ "cell_type": "code",
611
+ "execution_count": 14,
612
+ "id": "98dd40ec-f276-4099-a4a7-39ccf1600a58",
613
+ "metadata": {},
614
+ "outputs": [
615
+ {
616
+ "data": {
617
+ "text/html": [],
618
+ "text/plain": [
619
+ "<IPython.core.display.HTML object>"
620
+ ]
621
+ },
622
+ "metadata": {},
623
+ "output_type": "display_data"
624
+ },
625
+ {
626
+ "data": {
627
+ "text/html": [
628
+ " View run <strong style=\"color:#cdcd00\">copy-human</strong> at: <a href='https://wandb.ai/grelu/borzoi/runs/qtz95621' target=\"_blank\">https://wandb.ai/grelu/borzoi/runs/qtz95621</a><br> View project at: <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">https://wandb.ai/grelu/borzoi</a><br>Synced 6 W&B file(s), 0 media file(s), 8 artifact file(s) and 0 other file(s)"
629
+ ],
630
+ "text/plain": [
631
+ "<IPython.core.display.HTML object>"
632
+ ]
633
+ },
634
+ "metadata": {},
635
+ "output_type": "display_data"
636
+ },
637
+ {
638
+ "data": {
639
+ "text/html": [
640
+ "Find logs at: <code>./wandb/run-20250306_055353-qtz95621/logs</code>"
641
+ ],
642
+ "text/plain": [
643
+ "<IPython.core.display.HTML object>"
644
+ ]
645
+ },
646
+ "metadata": {},
647
+ "output_type": "display_data"
648
+ }
649
+ ],
650
+ "source": [
651
+ "run.finish()"
652
+ ]
653
+ },
654
+ {
655
+ "cell_type": "code",
656
+ "execution_count": null,
657
+ "id": "1c5d6a57-a2a1-495d-90ff-16508e4e236c",
658
+ "metadata": {},
659
+ "outputs": [],
660
+ "source": []
661
+ }
662
+ ],
663
+ "metadata": {
664
+ "kernelspec": {
665
+ "display_name": "Python 3 (ipykernel)",
666
+ "language": "python",
667
+ "name": "python3"
668
+ },
669
+ "language_info": {
670
+ "codemirror_mode": {
671
+ "name": "ipython",
672
+ "version": 3
673
+ },
674
+ "file_extension": ".py",
675
+ "mimetype": "text/x-python",
676
+ "name": "python",
677
+ "nbconvert_exporter": "python",
678
+ "pygments_lexer": "ipython3",
679
+ "version": "3.11.9"
680
+ }
681
+ },
682
+ "nbformat": 4,
683
+ "nbformat_minor": 5
684
+ }
save_wandb_borzoi_ckpt_mouse.ipynb ADDED
@@ -0,0 +1,717 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "id": "c31446f3-a976-42a0-9dfb-bc873049464b",
7
+ "metadata": {},
8
+ "outputs": [
9
+ {
10
+ "name": "stderr",
11
+ "output_type": "stream",
12
+ "text": [
13
+ "/opt/conda/lib/python3.11/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
14
+ " from .autonotebook import tqdm as notebook_tqdm\n"
15
+ ]
16
+ }
17
+ ],
18
+ "source": [
19
+ "import wandb\n",
20
+ "import torch\n",
21
+ "import pandas as pd\n",
22
+ "from grelu.lightning import LightningModel\n",
23
+ "import pytorch_lightning as pl\n",
24
+ "from grelu.sequence.utils import get_unique_length, resize"
25
+ ]
26
+ },
27
+ {
28
+ "cell_type": "markdown",
29
+ "id": "45e5daae-fe31-4cc7-b876-683768acd3cf",
30
+ "metadata": {},
31
+ "source": [
32
+ "## set up wandb"
33
+ ]
34
+ },
35
+ {
36
+ "cell_type": "code",
37
+ "execution_count": 2,
38
+ "id": "18c561ab-ba82-4718-abe9-96c44133b314",
39
+ "metadata": {},
40
+ "outputs": [
41
+ {
42
+ "name": "stderr",
43
+ "output_type": "stream",
44
+ "text": [
45
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.\n",
46
+ "\u001b[34m\u001b[1mwandb\u001b[0m: Currently logged in as: \u001b[33mavantikalal\u001b[0m (\u001b[33mgrelu\u001b[0m) to \u001b[32mhttps://api.wandb.ai\u001b[0m. Use \u001b[1m`wandb login --relogin`\u001b[0m to force relogin\n"
47
+ ]
48
+ },
49
+ {
50
+ "data": {
51
+ "text/plain": [
52
+ "True"
53
+ ]
54
+ },
55
+ "execution_count": 2,
56
+ "metadata": {},
57
+ "output_type": "execute_result"
58
+ }
59
+ ],
60
+ "source": [
61
+ "wandb.login(host=\"https://api.wandb.ai\")"
62
+ ]
63
+ },
64
+ {
65
+ "cell_type": "code",
66
+ "execution_count": 3,
67
+ "id": "48cf7ec6-1b32-4f96-a0f1-c7ffb1cb5fcf",
68
+ "metadata": {},
69
+ "outputs": [
70
+ {
71
+ "data": {
72
+ "text/html": [
73
+ "Tracking run with wandb version 0.19.7"
74
+ ],
75
+ "text/plain": [
76
+ "<IPython.core.display.HTML object>"
77
+ ]
78
+ },
79
+ "metadata": {},
80
+ "output_type": "display_data"
81
+ },
82
+ {
83
+ "data": {
84
+ "text/html": [
85
+ "Run data is saved locally in <code>/code/github/gReLU-applications/borzoi/wandb/run-20250304_221737-jxwlrbzp</code>"
86
+ ],
87
+ "text/plain": [
88
+ "<IPython.core.display.HTML object>"
89
+ ]
90
+ },
91
+ "metadata": {},
92
+ "output_type": "display_data"
93
+ },
94
+ {
95
+ "data": {
96
+ "text/html": [
97
+ "Syncing run <strong><a href='https://wandb.ai/grelu/borzoi/runs/jxwlrbzp' target=\"_blank\">copy-mouse</a></strong> to <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">Weights & Biases</a> (<a href='https://wandb.me/developer-guide' target=\"_blank\">docs</a>)<br>"
98
+ ],
99
+ "text/plain": [
100
+ "<IPython.core.display.HTML object>"
101
+ ]
102
+ },
103
+ "metadata": {},
104
+ "output_type": "display_data"
105
+ },
106
+ {
107
+ "data": {
108
+ "text/html": [
109
+ " View project at <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">https://wandb.ai/grelu/borzoi</a>"
110
+ ],
111
+ "text/plain": [
112
+ "<IPython.core.display.HTML object>"
113
+ ]
114
+ },
115
+ "metadata": {},
116
+ "output_type": "display_data"
117
+ },
118
+ {
119
+ "data": {
120
+ "text/html": [
121
+ " View run at <a href='https://wandb.ai/grelu/borzoi/runs/jxwlrbzp' target=\"_blank\">https://wandb.ai/grelu/borzoi/runs/jxwlrbzp</a>"
122
+ ],
123
+ "text/plain": [
124
+ "<IPython.core.display.HTML object>"
125
+ ]
126
+ },
127
+ "metadata": {},
128
+ "output_type": "display_data"
129
+ }
130
+ ],
131
+ "source": [
132
+ "run = wandb.init(entity='grelu', project='borzoi', job_type='copy', name='copy-mouse') # Initialize a W&B Run"
133
+ ]
134
+ },
135
+ {
136
+ "cell_type": "markdown",
137
+ "id": "7fdb23c5-371e-4473-835c-cf21f82e610f",
138
+ "metadata": {},
139
+ "source": [
140
+ "## Paths"
141
+ ]
142
+ },
143
+ {
144
+ "cell_type": "code",
145
+ "execution_count": 4,
146
+ "id": "556b9768-f3e4-466a-b097-0b3226e788dd",
147
+ "metadata": {},
148
+ "outputs": [],
149
+ "source": [
150
+ "tasks_path = '/code/borzoi/data/targets_mouse.txt.gz'\n",
151
+ "intervals_path = '/gstore/data/resbioai/grelu/borzoi-data/mm10/sequences.bed'"
152
+ ]
153
+ },
154
+ {
155
+ "cell_type": "markdown",
156
+ "id": "de1c87dc-cacf-48f0-a081-5260e325cb7b",
157
+ "metadata": {},
158
+ "source": [
159
+ "## Process tasks"
160
+ ]
161
+ },
162
+ {
163
+ "cell_type": "code",
164
+ "execution_count": 5,
165
+ "id": "f8616dde-2567-4a3c-8fb9-de412d342078",
166
+ "metadata": {},
167
+ "outputs": [
168
+ {
169
+ "name": "stdout",
170
+ "output_type": "stream",
171
+ "text": [
172
+ "2608\n"
173
+ ]
174
+ }
175
+ ],
176
+ "source": [
177
+ "tasks = pd.read_table(tasks_path, index_col=0)\n",
178
+ "print(len(tasks))"
179
+ ]
180
+ },
181
+ {
182
+ "cell_type": "code",
183
+ "execution_count": 6,
184
+ "id": "66f24ff2-74bc-41ee-9cc9-5cf9e75c2ebf",
185
+ "metadata": {},
186
+ "outputs": [
187
+ {
188
+ "data": {
189
+ "text/html": [
190
+ "<div>\n",
191
+ "<style scoped>\n",
192
+ " .dataframe tbody tr th:only-of-type {\n",
193
+ " vertical-align: middle;\n",
194
+ " }\n",
195
+ "\n",
196
+ " .dataframe tbody tr th {\n",
197
+ " vertical-align: top;\n",
198
+ " }\n",
199
+ "\n",
200
+ " .dataframe thead th {\n",
201
+ " text-align: right;\n",
202
+ " }\n",
203
+ "</style>\n",
204
+ "<table border=\"1\" class=\"dataframe\">\n",
205
+ " <thead>\n",
206
+ " <tr style=\"text-align: right;\">\n",
207
+ " <th></th>\n",
208
+ " <th>name</th>\n",
209
+ " <th>file</th>\n",
210
+ " <th>clip</th>\n",
211
+ " <th>clip_soft</th>\n",
212
+ " <th>scale</th>\n",
213
+ " <th>sum_stat</th>\n",
214
+ " <th>strand_pair</th>\n",
215
+ " <th>description</th>\n",
216
+ " <th>assay</th>\n",
217
+ " <th>sample</th>\n",
218
+ " </tr>\n",
219
+ " </thead>\n",
220
+ " <tbody>\n",
221
+ " <tr>\n",
222
+ " <th>0</th>\n",
223
+ " <td>CNhs10464+</td>\n",
224
+ " <td>/home/drk/tillage/datasets/mouse/cage/fantom/C...</td>\n",
225
+ " <td>768</td>\n",
226
+ " <td>384</td>\n",
227
+ " <td>1.0</td>\n",
228
+ " <td>sum</td>\n",
229
+ " <td>1</td>\n",
230
+ " <td>CAGE:placenta, adult pregnant day17</td>\n",
231
+ " <td>CAGE</td>\n",
232
+ " <td>placenta, adult pregnant day17</td>\n",
233
+ " </tr>\n",
234
+ " <tr>\n",
235
+ " <th>1</th>\n",
236
+ " <td>CNhs10464-</td>\n",
237
+ " <td>/home/drk/tillage/datasets/mouse/cage/fantom/C...</td>\n",
238
+ " <td>768</td>\n",
239
+ " <td>384</td>\n",
240
+ " <td>1.0</td>\n",
241
+ " <td>sum</td>\n",
242
+ " <td>0</td>\n",
243
+ " <td>CAGE:placenta, adult pregnant day17</td>\n",
244
+ " <td>CAGE</td>\n",
245
+ " <td>placenta, adult pregnant day17</td>\n",
246
+ " </tr>\n",
247
+ " </tbody>\n",
248
+ "</table>\n",
249
+ "</div>"
250
+ ],
251
+ "text/plain": [
252
+ " name file clip \\\n",
253
+ "0 CNhs10464+ /home/drk/tillage/datasets/mouse/cage/fantom/C... 768 \n",
254
+ "1 CNhs10464- /home/drk/tillage/datasets/mouse/cage/fantom/C... 768 \n",
255
+ "\n",
256
+ " clip_soft scale sum_stat strand_pair \\\n",
257
+ "0 384 1.0 sum 1 \n",
258
+ "1 384 1.0 sum 0 \n",
259
+ "\n",
260
+ " description assay sample \n",
261
+ "0 CAGE:placenta, adult pregnant day17 CAGE placenta, adult pregnant day17 \n",
262
+ "1 CAGE:placenta, adult pregnant day17 CAGE placenta, adult pregnant day17 "
263
+ ]
264
+ },
265
+ "execution_count": 6,
266
+ "metadata": {},
267
+ "output_type": "execute_result"
268
+ }
269
+ ],
270
+ "source": [
271
+ "tasks=tasks.reset_index(drop=True)\n",
272
+ "tasks[\"assay\"] = tasks[\"description\"].apply(lambda x: x.split(\":\")[0])\n",
273
+ "tasks[\"assay\"].value_counts()\n",
274
+ "tasks[\"sample\"] = tasks[\"description\"].apply(lambda x: \":\".join(x.split(\":\")[1:]))\n",
275
+ "tasks = tasks.rename(columns={\"identifier\":\"name\"})\n",
276
+ "tasks.head(2)"
277
+ ]
278
+ },
279
+ {
280
+ "cell_type": "code",
281
+ "execution_count": 7,
282
+ "id": "4f96ecc7-238e-4707-a3af-64b0d1828bd4",
283
+ "metadata": {},
284
+ "outputs": [],
285
+ "source": [
286
+ "tasks = tasks.to_dict(orient=\"list\")"
287
+ ]
288
+ },
289
+ {
290
+ "cell_type": "markdown",
291
+ "id": "3056ecfa-1b75-46b7-9e36-4e72c4cb3618",
292
+ "metadata": {},
293
+ "source": [
294
+ "## Process intervals"
295
+ ]
296
+ },
297
+ {
298
+ "cell_type": "code",
299
+ "execution_count": 8,
300
+ "id": "e768d909-c699-47b6-86fd-c3ff54357c74",
301
+ "metadata": {},
302
+ "outputs": [
303
+ {
304
+ "data": {
305
+ "text/html": [
306
+ "<div>\n",
307
+ "<style scoped>\n",
308
+ " .dataframe tbody tr th:only-of-type {\n",
309
+ " vertical-align: middle;\n",
310
+ " }\n",
311
+ "\n",
312
+ " .dataframe tbody tr th {\n",
313
+ " vertical-align: top;\n",
314
+ " }\n",
315
+ "\n",
316
+ " .dataframe thead th {\n",
317
+ " text-align: right;\n",
318
+ " }\n",
319
+ "</style>\n",
320
+ "<table border=\"1\" class=\"dataframe\">\n",
321
+ " <thead>\n",
322
+ " <tr style=\"text-align: right;\">\n",
323
+ " <th></th>\n",
324
+ " <th>chrom</th>\n",
325
+ " <th>start</th>\n",
326
+ " <th>end</th>\n",
327
+ " <th>fold</th>\n",
328
+ " </tr>\n",
329
+ " </thead>\n",
330
+ " <tbody>\n",
331
+ " <tr>\n",
332
+ " <th>0</th>\n",
333
+ " <td>chr1</td>\n",
334
+ " <td>46257174</td>\n",
335
+ " <td>46453782</td>\n",
336
+ " <td>fold0</td>\n",
337
+ " </tr>\n",
338
+ " <tr>\n",
339
+ " <th>1</th>\n",
340
+ " <td>chr2</td>\n",
341
+ " <td>83512641</td>\n",
342
+ " <td>83709249</td>\n",
343
+ " <td>fold0</td>\n",
344
+ " </tr>\n",
345
+ " <tr>\n",
346
+ " <th>2</th>\n",
347
+ " <td>chr7</td>\n",
348
+ " <td>16218353</td>\n",
349
+ " <td>16414961</td>\n",
350
+ " <td>fold0</td>\n",
351
+ " </tr>\n",
352
+ " <tr>\n",
353
+ " <th>3</th>\n",
354
+ " <td>chr3</td>\n",
355
+ " <td>113724419</td>\n",
356
+ " <td>113921027</td>\n",
357
+ " <td>fold0</td>\n",
358
+ " </tr>\n",
359
+ " <tr>\n",
360
+ " <th>4</th>\n",
361
+ " <td>chr3</td>\n",
362
+ " <td>107470140</td>\n",
363
+ " <td>107666748</td>\n",
364
+ " <td>fold0</td>\n",
365
+ " </tr>\n",
366
+ " </tbody>\n",
367
+ "</table>\n",
368
+ "</div>"
369
+ ],
370
+ "text/plain": [
371
+ " chrom start end fold\n",
372
+ "0 chr1 46257174 46453782 fold0\n",
373
+ "1 chr2 83512641 83709249 fold0\n",
374
+ "2 chr7 16218353 16414961 fold0\n",
375
+ "3 chr3 113724419 113921027 fold0\n",
376
+ "4 chr3 107470140 107666748 fold0"
377
+ ]
378
+ },
379
+ "execution_count": 8,
380
+ "metadata": {},
381
+ "output_type": "execute_result"
382
+ }
383
+ ],
384
+ "source": [
385
+ "intervals = pd.read_table(intervals_path, header=None)\n",
386
+ "intervals.columns = ['chrom', 'start', 'end', 'fold']\n",
387
+ "intervals.head()"
388
+ ]
389
+ },
390
+ {
391
+ "cell_type": "code",
392
+ "execution_count": 9,
393
+ "id": "fc4f093c-9d9d-4b0d-9883-050099492944",
394
+ "metadata": {},
395
+ "outputs": [
396
+ {
397
+ "data": {
398
+ "text/plain": [
399
+ "split\n",
400
+ "train 36950\n",
401
+ "val 6318\n",
402
+ "test 6101\n",
403
+ "Name: count, dtype: int64"
404
+ ]
405
+ },
406
+ "execution_count": 9,
407
+ "metadata": {},
408
+ "output_type": "execute_result"
409
+ }
410
+ ],
411
+ "source": [
412
+ "intervals['split'] = 'train'\n",
413
+ "intervals.loc[intervals.fold=='fold3', 'split'] = 'test'\n",
414
+ "intervals.loc[intervals.fold=='fold4', 'split'] = 'val'\n",
415
+ "intervals.split.value_counts()"
416
+ ]
417
+ },
418
+ {
419
+ "cell_type": "code",
420
+ "execution_count": 10,
421
+ "id": "7aafdb2e-93b3-406a-97fb-aab39d33aa33",
422
+ "metadata": {},
423
+ "outputs": [
424
+ {
425
+ "data": {
426
+ "text/plain": [
427
+ "196608"
428
+ ]
429
+ },
430
+ "execution_count": 10,
431
+ "metadata": {},
432
+ "output_type": "execute_result"
433
+ }
434
+ ],
435
+ "source": [
436
+ "get_unique_length(intervals)"
437
+ ]
438
+ },
439
+ {
440
+ "cell_type": "code",
441
+ "execution_count": 11,
442
+ "id": "c45008d0-9dda-4660-8e1c-574570eef373",
443
+ "metadata": {},
444
+ "outputs": [],
445
+ "source": [
446
+ "intervals = resize(intervals, 524288)"
447
+ ]
448
+ },
449
+ {
450
+ "cell_type": "code",
451
+ "execution_count": 12,
452
+ "id": "86aae057-b106-4884-81b3-d1aa563ad4f7",
453
+ "metadata": {},
454
+ "outputs": [],
455
+ "source": [
456
+ "train_intervals = intervals[intervals.split=='train'].iloc[:, :3]\n",
457
+ "val_intervals = intervals[intervals.split=='val'].iloc[:, :3]\n",
458
+ "test_intervals = intervals[intervals.split=='test'].iloc[:, :3]\n",
459
+ "del intervals"
460
+ ]
461
+ },
462
+ {
463
+ "cell_type": "markdown",
464
+ "id": "e4bd10a9-40c2-4299-aa74-6474ea5a71ef",
465
+ "metadata": {},
466
+ "source": [
467
+ "## Initialize model"
468
+ ]
469
+ },
470
+ {
471
+ "cell_type": "code",
472
+ "execution_count": 13,
473
+ "id": "06dba49d-11ba-471b-83d7-05fe45a9bd5a",
474
+ "metadata": {},
475
+ "outputs": [],
476
+ "source": [
477
+ "model_params = {\n",
478
+ " \"model_type\":\"BorzoiModel\",\n",
479
+ " \"n_tasks\":2608,\n",
480
+ " \"final_act_func\": 'softplus',\n",
481
+ " \"final_pool_func\":None,\n",
482
+ " \"crop_len\":5120,\n",
483
+ "}\n",
484
+ "train_params={\n",
485
+ " \"task\":\"regression\",\n",
486
+ " \"loss\":\"mse\",\n",
487
+ "}\n",
488
+ "lm = LightningModel(model_params, train_params)"
489
+ ]
490
+ },
491
+ {
492
+ "cell_type": "markdown",
493
+ "id": "ab6a3055-d338-40f0-9fd3-f8f3134df643",
494
+ "metadata": {},
495
+ "source": [
496
+ "## Save checkpoints"
497
+ ]
498
+ },
499
+ {
500
+ "cell_type": "code",
501
+ "execution_count": 14,
502
+ "id": "bbccda9a-7a62-4472-9c0b-34abf98a7e1c",
503
+ "metadata": {},
504
+ "outputs": [
505
+ {
506
+ "name": "stderr",
507
+ "output_type": "stream",
508
+ "text": [
509
+ "/tmp/ipykernel_3294550/1480313343.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
510
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/mouse_fold{rep}.h5\")\n",
511
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
512
+ "GPU available: True (cuda), used: True\n",
513
+ "TPU available: False, using: 0 TPU cores\n",
514
+ "HPU available: False, using: 0 HPUs\n",
515
+ "You are using a CUDA device ('NVIDIA A100-SXM4-80GB') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision\n",
516
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n",
517
+ "/tmp/ipykernel_3294550/1480313343.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
518
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/mouse_fold{rep}.h5\")\n",
519
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
520
+ "GPU available: True (cuda), used: True\n",
521
+ "TPU available: False, using: 0 TPU cores\n",
522
+ "HPU available: False, using: 0 HPUs\n",
523
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n",
524
+ "/tmp/ipykernel_3294550/1480313343.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
525
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/mouse_fold{rep}.h5\")\n",
526
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
527
+ "GPU available: True (cuda), used: True\n",
528
+ "TPU available: False, using: 0 TPU cores\n",
529
+ "HPU available: False, using: 0 HPUs\n",
530
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n",
531
+ "/tmp/ipykernel_3294550/1480313343.py:18: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.\n",
532
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/mouse_fold{rep}.h5\")\n",
533
+ "Trainer will use only 1 of 8 GPUs because it is running inside an interactive / notebook environment. You may try to set `Trainer(devices=8)` but please note that multi-GPU inside interactive / notebook environments is considered experimental and unstable. Your mileage may vary.\n",
534
+ "GPU available: True (cuda), used: True\n",
535
+ "TPU available: False, using: 0 TPU cores\n",
536
+ "HPU available: False, using: 0 HPUs\n",
537
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]\n"
538
+ ]
539
+ }
540
+ ],
541
+ "source": [
542
+ "for rep in range(4):\n",
543
+ " \n",
544
+ " lm.data_params[\"tasks\"] = tasks\n",
545
+ "\n",
546
+ " lm.data_params[\"train\"] = dict()\n",
547
+ " lm.data_params[\"val\"] = dict()\n",
548
+ " lm.data_params[\"test\"] = dict()\n",
549
+ " \n",
550
+ " lm.data_params[\"train\"][\"seq_len\"] = 524288\n",
551
+ " lm.data_params[\"train\"][\"label_len\"] = 6144*32\n",
552
+ "    lm.data_params[\"train\"][\"genome\"] = \"mm10\"\n",
553
+ " lm.data_params[\"train\"][\"bin_size\"] = 32\n",
554
+ " \n",
555
+ " lm.data_params[\"train\"][\"intervals\"] = train_intervals.to_dict(orient='list')\n",
556
+ " lm.data_params[\"val\"][\"intervals\"] = val_intervals.to_dict(orient='list')\n",
557
+ " lm.data_params[\"test\"][\"intervals\"] = test_intervals.to_dict(orient='list')\n",
558
+ "\n",
559
+ " state_dict = torch.load(f\"/data/borzoi/torch_weights/mouse_fold{rep}.h5\")\n",
560
+ " lm.model.load_state_dict(state_dict)\n",
561
+ " \n",
562
+ " trainer = pl.Trainer()\n",
563
+ "    try:\n",
563
+ "        trainer.predict(lm)\n",
564
+ "    except Exception:\n",
565
+ "        pass  # prediction fails without a dataset; run only to attach the Trainer state\n",
567
+ " \n",
568
+ " trainer.save_checkpoint(f'/data/borzoi/torch_weights/mouse_rep{rep}.ckpt')"
569
+ ]
570
+ },
571
+ {
572
+ "cell_type": "markdown",
573
+ "id": "4b5fbc09-abca-4cb7-a8b7-7eeba7b39cf8",
574
+ "metadata": {},
575
+ "source": [
576
+ "## Upload to wandb"
577
+ ]
578
+ },
579
+ {
580
+ "cell_type": "code",
581
+ "execution_count": 15,
582
+ "id": "c53a69b4-9092-44b9-b8a9-20b72aa8b511",
583
+ "metadata": {},
584
+ "outputs": [],
585
+ "source": [
586
+ "metadata={\n",
587
+ " 'model_params':lm.model_params, \n",
588
+ " 'train_params':lm.train_params, \n",
589
+ " 'data_params':lm.data_params\n",
590
+ " }"
591
+ ]
592
+ },
593
+ {
594
+ "cell_type": "code",
595
+ "execution_count": 16,
596
+ "id": "2a25b878-ca1f-4872-9549-ab502975ca2e",
597
+ "metadata": {},
598
+ "outputs": [
599
+ {
600
+ "name": "stderr",
601
+ "output_type": "stream",
602
+ "text": [
603
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
604
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
605
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
606
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
607
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
608
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
609
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
610
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
611
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
612
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
613
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
614
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
615
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
616
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
617
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
618
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
619
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
620
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
621
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
622
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
623
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
624
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
625
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n",
626
+ "\u001b[34m\u001b[1mwandb\u001b[0m: \u001b[33mWARNING\u001b[0m Serializing object of type list that is 312024 bytes\n"
627
+ ]
628
+ }
629
+ ],
630
+ "source": [
631
+ "for rep in range(4):\n",
632
+ " ckpt = f'/data/borzoi/torch_weights/mouse_rep{rep}.ckpt'\n",
633
+ " artifact = wandb.Artifact(\n",
634
+ " f'mouse_rep{rep}', \n",
635
+ " type='model',\n",
636
+ " metadata=metadata,\n",
637
+ " )\n",
638
+ " artifact.add_file(local_path=ckpt, name='model.ckpt')\n",
639
+ " run.log_artifact(artifact)"
640
+ ]
641
+ },
642
+ {
643
+ "cell_type": "code",
644
+ "execution_count": 17,
645
+ "id": "98dd40ec-f276-4099-a4a7-39ccf1600a58",
646
+ "metadata": {},
647
+ "outputs": [
648
+ {
649
+ "data": {
650
+ "text/html": [],
651
+ "text/plain": [
652
+ "<IPython.core.display.HTML object>"
653
+ ]
654
+ },
655
+ "metadata": {},
656
+ "output_type": "display_data"
657
+ },
658
+ {
659
+ "data": {
660
+ "text/html": [
661
+ " View run <strong style=\"color:#cdcd00\">copy-mouse</strong> at: <a href='https://wandb.ai/grelu/borzoi/runs/jxwlrbzp' target=\"_blank\">https://wandb.ai/grelu/borzoi/runs/jxwlrbzp</a><br> View project at: <a href='https://wandb.ai/grelu/borzoi' target=\"_blank\">https://wandb.ai/grelu/borzoi</a><br>Synced 5 W&B file(s), 0 media file(s), 8 artifact file(s) and 0 other file(s)"
662
+ ],
663
+ "text/plain": [
664
+ "<IPython.core.display.HTML object>"
665
+ ]
666
+ },
667
+ "metadata": {},
668
+ "output_type": "display_data"
669
+ },
670
+ {
671
+ "data": {
672
+ "text/html": [
673
+ "Find logs at: <code>./wandb/run-20250304_221737-jxwlrbzp/logs</code>"
674
+ ],
675
+ "text/plain": [
676
+ "<IPython.core.display.HTML object>"
677
+ ]
678
+ },
679
+ "metadata": {},
680
+ "output_type": "display_data"
681
+ }
682
+ ],
683
+ "source": [
684
+ "run.finish()"
685
+ ]
686
+ },
687
+ {
688
+ "cell_type": "code",
689
+ "execution_count": null,
690
+ "id": "1c5d6a57-a2a1-495d-90ff-16508e4e236c",
691
+ "metadata": {},
692
+ "outputs": [],
693
+ "source": []
694
+ }
695
+ ],
696
+ "metadata": {
697
+ "kernelspec": {
698
+ "display_name": "Python 3 (ipykernel)",
699
+ "language": "python",
700
+ "name": "python3"
701
+ },
702
+ "language_info": {
703
+ "codemirror_mode": {
704
+ "name": "ipython",
705
+ "version": 3
706
+ },
707
+ "file_extension": ".py",
708
+ "mimetype": "text/x-python",
709
+ "name": "python",
710
+ "nbconvert_exporter": "python",
711
+ "pygments_lexer": "ipython3",
712
+ "version": "3.11.9"
713
+ }
714
+ },
715
+ "nbformat": 4,
716
+ "nbformat_minor": 5
717
+ }