samos123 committed on
Commit dc51402 · 1 Parent(s): c58710a

Upload train.ipynb with huggingface_hub

Files changed (1)
  1. train.ipynb +1735 -0
train.ipynb ADDED
@@ -0,0 +1,1735 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "9c3e4532",
6
+ "metadata": {
7
+ "papermill": {
8
+ "duration": 1.064429,
9
+ "end_time": "2023-10-23T04:10:32.617552",
10
+ "exception": false,
11
+ "start_time": "2023-10-23T04:10:31.553123",
12
+ "status": "completed"
13
+ },
14
+ "tags": []
15
+ },
16
+ "source": [
17
+ "# Train models using HuggingFace libraries\n",
18
+ "\n",
19
+ "This notebook takes parameters from a params.json file which is automatically\n",
20
+ "created by Substratus K8s operator.\n",
21
+ "\n",
22
+ "The following parameters influence what happens in this notebook:\n",
23
+ "- `dataset_urls`: A comma separated list of URLs. The URLs should point to\n",
24
+ " json files that contain your training dataset. If unset a json or jsonl\n",
25
+ " file should be present under the `/content/data/` directory.\n",
26
+ "- `prompt_template`: The prompt template to use for training\n",
27
+ "- `push_to_hub`: if this variable is set a repo id, then the trained\n",
28
+ " model will get pushed to HuggingFace hub. For example,\n",
29
+ " set it to \"substratusai/my-model\" to publish to substratusai HF org."
30
+ ]
31
+ },
32
+ {
33
+ "cell_type": "code",
34
+ "execution_count": 1,
35
+ "id": "86ccd646",
36
+ "metadata": {
37
+ "execution": {
38
+ "iopub.execute_input": "2023-10-23T04:10:34.494504Z",
39
+ "iopub.status.busy": "2023-10-23T04:10:34.493261Z",
40
+ "iopub.status.idle": "2023-10-23T04:10:34.506648Z",
41
+ "shell.execute_reply": "2023-10-23T04:10:34.506011Z"
42
+ },
43
+ "papermill": {
44
+ "duration": 0.898669,
45
+ "end_time": "2023-10-23T04:10:34.508149",
46
+ "exception": false,
47
+ "start_time": "2023-10-23T04:10:33.609480",
48
+ "status": "completed"
49
+ },
50
+ "tags": []
51
+ },
52
+ "outputs": [
53
+ {
54
+ "data": {
55
+ "text/plain": [
56
+ "{'dataset_urls': 'https://huggingface.co/datasets/weaviate/WithoutRetrieval-SchemaSplit-Train-80/resolve/main/WithoutRetrieval-SchemaSplit-Train-80.json',\n",
57
+ " 'logging_steps': 50,\n",
58
+ " 'modules_to_save': 'embed_tokens, lm_head',\n",
59
+ " 'num_train_epochs': 3,\n",
60
+ " 'per_device_eval_batch_size': 1,\n",
61
+ " 'per_device_train_batch_size': 1,\n",
62
+ " 'prompt_template': '## Instruction\\nYour task is to write GraphQL for the Natural Language Query provided. Use the provided Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.\\n\\n## Natural Language Query\\n{nlcommand}\\n\\n## Schema\\n{schema}\\n\\n## Answer\\n{output}\\n',\n",
63
+ " 'push_to_hub': 'substratusai/wgql-WithoutRetrieval-SchemaSplit-Train-80',\n",
64
+ " 'save_steps': 50,\n",
65
+ " 'target_modules': 'q_proj, up_proj, o_proj, k_proj, down_proj, gate_proj, v_proj',\n",
66
+ " 'warmup_steps': 100}"
67
+ ]
68
+ },
69
+ "execution_count": 1,
70
+ "metadata": {},
71
+ "output_type": "execute_result"
72
+ }
73
+ ],
74
+ "source": [
75
+ "import json\n",
76
+ "from pathlib import Path\n",
77
+ "\n",
78
+ "params = {}\n",
79
+ "params_path = Path(\"/content/params.json\")\n",
80
+ "if params_path.is_file():\n",
81
+ " with params_path.open(\"r\", encoding=\"UTF-8\") as params_file:\n",
82
+ " params = json.load(params_file)\n",
83
+ "\n",
84
+ "\n",
85
+ "params"
86
+ ]
87
+ },
88
+ {
89
+ "cell_type": "code",
90
+ "execution_count": 2,
91
+ "id": "9fafd16b-d8c9-47bf-9116-c27b1d43a019",
92
+ "metadata": {
93
+ "execution": {
94
+ "iopub.execute_input": "2023-10-23T04:10:36.304465Z",
95
+ "iopub.status.busy": "2023-10-23T04:10:36.303766Z",
96
+ "iopub.status.idle": "2023-10-23T04:10:39.687535Z",
97
+ "shell.execute_reply": "2023-10-23T04:10:39.686882Z"
98
+ },
99
+ "papermill": {
100
+ "duration": 4.284256,
101
+ "end_time": "2023-10-23T04:10:39.689024",
102
+ "exception": false,
103
+ "start_time": "2023-10-23T04:10:35.404768",
104
+ "status": "completed"
105
+ },
106
+ "tags": []
107
+ },
108
+ "outputs": [
109
+ {
110
+ "name": "stdout",
111
+ "output_type": "stream",
112
+ "text": [
113
+ "Using the following URLs for the dataset: ['https://huggingface.co/datasets/weaviate/WithoutRetrieval-SchemaSplit-Train-80/resolve/main/WithoutRetrieval-SchemaSplit-Train-80.json']\n"
114
+ ]
115
+ },
116
+ {
117
+ "data": {
118
+ "application/vnd.jupyter.widget-view+json": {
119
+ "model_id": "e9a6ea0ca5c047b1a8ad11457dcaa2e8",
120
+ "version_major": 2,
121
+ "version_minor": 0
122
+ },
123
+ "text/plain": [
124
+ "Downloading data files: 0%| | 0/1 [00:00<?, ?it/s]"
125
+ ]
126
+ },
127
+ "metadata": {},
128
+ "output_type": "display_data"
129
+ },
130
+ {
131
+ "data": {
132
+ "application/vnd.jupyter.widget-view+json": {
133
+ "model_id": "d801389f43d146089d5f9cbccd6e4ea5",
134
+ "version_major": 2,
135
+ "version_minor": 0
136
+ },
137
+ "text/plain": [
138
+ "Downloading data: 0%| | 0.00/15.1M [00:00<?, ?B/s]"
139
+ ]
140
+ },
141
+ "metadata": {},
142
+ "output_type": "display_data"
143
+ },
144
+ {
145
+ "data": {
146
+ "application/vnd.jupyter.widget-view+json": {
147
+ "model_id": "1994f27dcf1f4979bcc52e555764c189",
148
+ "version_major": 2,
149
+ "version_minor": 0
150
+ },
151
+ "text/plain": [
152
+ "Extracting data files: 0%| | 0/1 [00:00<?, ?it/s]"
153
+ ]
154
+ },
155
+ "metadata": {},
156
+ "output_type": "display_data"
157
+ },
158
+ {
159
+ "data": {
160
+ "application/vnd.jupyter.widget-view+json": {
161
+ "model_id": "0bc5d1c4936443e5bbc72a48420ec9a9",
162
+ "version_major": 2,
163
+ "version_minor": 0
164
+ },
165
+ "text/plain": [
166
+ "Generating train split: 0 examples [00:00, ? examples/s]"
167
+ ]
168
+ },
169
+ "metadata": {},
170
+ "output_type": "display_data"
171
+ },
172
+ {
173
+ "data": {
174
+ "text/plain": [
175
+ "DatasetDict({\n",
176
+ " train: Dataset({\n",
177
+ " features: ['input', 'output', 'nlcommand', 'apiRef', 'apiRefPath', 'schema', 'schemaPath'],\n",
178
+ " num_rows: 3163\n",
179
+ " })\n",
180
+ "})"
181
+ ]
182
+ },
183
+ "execution_count": 2,
184
+ "metadata": {},
185
+ "output_type": "execute_result"
186
+ }
187
+ ],
188
+ "source": [
189
+ "import os \n",
190
+ "from datasets import load_dataset\n",
191
+ "\n",
192
+ "dataset_urls = params.get(\"dataset_urls\")\n",
193
+ "if dataset_urls:\n",
194
+ " urls = [u.strip() for u in dataset_urls.split(\",\")]\n",
195
+ " print(f\"Using the following URLs for the dataset: {urls}\")\n",
196
+ " data = load_dataset(\"json\", data_files=urls)\n",
197
+ "else:\n",
198
+ " data = load_dataset(\"json\", data_files=\"/content/data/*.json*\")\n",
199
+ "data"
200
+ ]
201
+ },
202
+ {
203
+ "cell_type": "code",
204
+ "execution_count": 3,
205
+ "id": "08e478fa-d095-4145-9bd1-b4feec7bc4f0",
206
+ "metadata": {
207
+ "execution": {
208
+ "iopub.execute_input": "2023-10-23T04:10:41.636772Z",
209
+ "iopub.status.busy": "2023-10-23T04:10:41.635928Z",
210
+ "iopub.status.idle": "2023-10-23T04:14:25.471602Z",
211
+ "shell.execute_reply": "2023-10-23T04:14:25.470933Z"
212
+ },
213
+ "papermill": {
214
+ "duration": 225.662956,
215
+ "end_time": "2023-10-23T04:14:26.348905",
216
+ "exception": false,
217
+ "start_time": "2023-10-23T04:10:40.685949",
218
+ "status": "completed"
219
+ },
220
+ "tags": []
221
+ },
222
+ "outputs": [
223
+ {
224
+ "data": {
225
+ "application/vnd.jupyter.widget-view+json": {
226
+ "model_id": "70b0e87d53314d8492421aa174ac2290",
227
+ "version_major": 2,
228
+ "version_minor": 0
229
+ },
230
+ "text/plain": [
231
+ "Loading checkpoint shards: 0%| | 0/2 [00:00<?, ?it/s]"
232
+ ]
233
+ },
234
+ "metadata": {},
235
+ "output_type": "display_data"
236
+ },
237
+ {
238
+ "data": {
239
+ "text/plain": [
240
+ "LlamaForCausalLM(\n",
241
+ " (model): LlamaModel(\n",
242
+ " (embed_tokens): Embedding(32000, 4096)\n",
243
+ " (layers): ModuleList(\n",
244
+ " (0-31): 32 x LlamaDecoderLayer(\n",
245
+ " (self_attn): LlamaAttention(\n",
246
+ " (q_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
247
+ " (k_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
248
+ " (v_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
249
+ " (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
250
+ " (rotary_emb): LlamaRotaryEmbedding()\n",
251
+ " )\n",
252
+ " (mlp): LlamaMLP(\n",
253
+ " (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
254
+ " (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
255
+ " (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\n",
256
+ " (act_fn): SiLUActivation()\n",
257
+ " )\n",
258
+ " (input_layernorm): LlamaRMSNorm()\n",
259
+ " (post_attention_layernorm): LlamaRMSNorm()\n",
260
+ " )\n",
261
+ " )\n",
262
+ " (norm): LlamaRMSNorm()\n",
263
+ " )\n",
264
+ " (lm_head): Linear(in_features=4096, out_features=32000, bias=False)\n",
265
+ ")"
266
+ ]
267
+ },
268
+ "execution_count": 3,
269
+ "metadata": {},
270
+ "output_type": "execute_result"
271
+ }
272
+ ],
273
+ "source": [
274
+ "import transformers\n",
275
+ "import torch\n",
276
+ "import sys\n",
277
+ "from transformers import AutoTokenizer, AutoModelForCausalLM\n",
278
+ "\n",
279
+ "model_path = \"/content/model/\"\n",
280
+ "trained_model_path = \"/content/artifacts\"\n",
281
+ "trained_model_path_lora = \"/content/artifacts/lora\"\n",
282
+ "\n",
283
+ "tokenizer = AutoTokenizer.from_pretrained(model_path,\n",
284
+ " local_files_only=True)\n",
285
+ "model = AutoModelForCausalLM.from_pretrained(\n",
286
+ " model_path, torch_dtype=torch.float16, device_map=\"auto\", trust_remote_code=True)\n",
287
+ "model"
288
+ ]
289
+ },
290
+ {
291
+ "cell_type": "code",
292
+ "execution_count": 4,
293
+ "id": "88908150-1585-4781-9542-d68193d808bc",
294
+ "metadata": {
295
+ "execution": {
296
+ "iopub.execute_input": "2023-10-23T04:14:28.270914Z",
297
+ "iopub.status.busy": "2023-10-23T04:14:28.270009Z",
298
+ "iopub.status.idle": "2023-10-23T04:14:28.276035Z",
299
+ "shell.execute_reply": "2023-10-23T04:14:28.275329Z"
300
+ },
301
+ "papermill": {
302
+ "duration": 0.958941,
303
+ "end_time": "2023-10-23T04:14:28.277543",
304
+ "exception": false,
305
+ "start_time": "2023-10-23T04:14:27.318602",
306
+ "status": "completed"
307
+ },
308
+ "tags": []
309
+ },
310
+ "outputs": [
311
+ {
312
+ "data": {
313
+ "text/plain": [
314
+ "LlamaConfig {\n",
315
+ " \"_name_or_path\": \"/content/model/\",\n",
316
+ " \"architectures\": [\n",
317
+ " \"LlamaForCausalLM\"\n",
318
+ " ],\n",
319
+ " \"attention_bias\": false,\n",
320
+ " \"bos_token_id\": 1,\n",
321
+ " \"eos_token_id\": 2,\n",
322
+ " \"hidden_act\": \"silu\",\n",
323
+ " \"hidden_size\": 4096,\n",
324
+ " \"initializer_range\": 0.02,\n",
325
+ " \"intermediate_size\": 11008,\n",
326
+ " \"max_position_embeddings\": 4096,\n",
327
+ " \"model_type\": \"llama\",\n",
328
+ " \"num_attention_heads\": 32,\n",
329
+ " \"num_hidden_layers\": 32,\n",
330
+ " \"num_key_value_heads\": 32,\n",
331
+ " \"pretraining_tp\": 1,\n",
332
+ " \"rms_norm_eps\": 1e-05,\n",
333
+ " \"rope_scaling\": null,\n",
334
+ " \"rope_theta\": 10000.0,\n",
335
+ " \"tie_word_embeddings\": false,\n",
336
+ " \"torch_dtype\": \"float16\",\n",
337
+ " \"transformers_version\": \"4.34.1\",\n",
338
+ " \"use_cache\": true,\n",
339
+ " \"vocab_size\": 32000\n",
340
+ "}"
341
+ ]
342
+ },
343
+ "execution_count": 4,
344
+ "metadata": {},
345
+ "output_type": "execute_result"
346
+ }
347
+ ],
348
+ "source": [
349
+ "model.config"
350
+ ]
351
+ },
352
+ {
353
+ "cell_type": "code",
354
+ "execution_count": 5,
355
+ "id": "ec8a1a9f-fe60-49c7-ab20-04034323df8a",
356
+ "metadata": {
357
+ "execution": {
358
+ "iopub.execute_input": "2023-10-23T04:14:30.263645Z",
359
+ "iopub.status.busy": "2023-10-23T04:14:30.262915Z",
360
+ "iopub.status.idle": "2023-10-23T04:14:30.268348Z",
361
+ "shell.execute_reply": "2023-10-23T04:14:30.267625Z"
362
+ },
363
+ "papermill": {
364
+ "duration": 1.019477,
365
+ "end_time": "2023-10-23T04:14:30.269891",
366
+ "exception": false,
367
+ "start_time": "2023-10-23T04:14:29.250414",
368
+ "status": "completed"
369
+ },
370
+ "tags": []
371
+ },
372
+ "outputs": [
373
+ {
374
+ "name": "stdout",
375
+ "output_type": "stream",
376
+ "text": [
377
+ "## Instruction\n",
378
+ "Your task is to write GraphQL for the Natural Language Query provided. Use the provided Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.\n",
379
+ "\n",
380
+ "## Natural Language Query\n",
381
+ "{nlcommand}\n",
382
+ "\n",
383
+ "## Schema\n",
384
+ "{schema}\n",
385
+ "\n",
386
+ "## Answer\n",
387
+ "{output}\n",
388
+ "</s>\n"
389
+ ]
390
+ }
391
+ ],
392
+ "source": [
393
+ "default_prompt = \"\"\"\n",
394
+ "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n",
395
+ "### Instruction:\n",
396
+ "{prompt}\n",
397
+ "### Response:\n",
398
+ "{completion}\n",
399
+ "\"\"\"\n",
400
+ "\n",
401
+ "prompt = params.get(\"prompt_template\", default_prompt)\n",
402
+ "\n",
403
+ "eos_token = tokenizer.convert_ids_to_tokens(model.config.eos_token_id)\n",
404
+ "if prompt[-len(eos_token):] != eos_token:\n",
405
+ " prompt = prompt + eos_token\n",
406
+ "\n",
407
+ "print(prompt)\n"
408
+ ]
409
+ },
410
+ {
411
+ "cell_type": "code",
412
+ "execution_count": 6,
413
+ "id": "0abf96e1-3bc1-4ae7-80ac-c2e585e9c7c1",
414
+ "metadata": {
415
+ "execution": {
416
+ "iopub.execute_input": "2023-10-23T04:14:32.183851Z",
417
+ "iopub.status.busy": "2023-10-23T04:14:32.183550Z",
418
+ "iopub.status.idle": "2023-10-23T04:14:33.043374Z",
419
+ "shell.execute_reply": "2023-10-23T04:14:33.042525Z"
420
+ },
421
+ "papermill": {
422
+ "duration": 1.829206,
423
+ "end_time": "2023-10-23T04:14:33.045328",
424
+ "exception": false,
425
+ "start_time": "2023-10-23T04:14:31.216122",
426
+ "status": "completed"
427
+ },
428
+ "tags": []
429
+ },
430
+ "outputs": [
431
+ {
432
+ "name": "stdout",
433
+ "output_type": "stream",
434
+ "text": [
435
+ "Mon Oct 23 04:14:32 2023 \r\n",
436
+ "+-----------------------------------------------------------------------------+\r\n",
437
+ "| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |\r\n",
438
+ "|-------------------------------+----------------------+----------------------+\r\n",
439
+ "| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n",
440
+ "| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\r\n",
441
+ "| | | MIG M. |\r\n",
442
+ "|===============================+======================+======================|\r\n",
443
+ "| 0 NVIDIA L4 Off | 00000000:00:04.0 Off | 0 |\r\n",
444
+ "| N/A 59C P0 31W / 72W | 3570MiB / 23034MiB | 0% Default |\r\n",
445
+ "| | | N/A |\r\n",
446
+ "+-------------------------------+----------------------+----------------------+\r\n"
447
+ ]
448
+ },
449
+ {
450
+ "name": "stdout",
451
+ "output_type": "stream",
452
+ "text": [
453
+ "| 1 NVIDIA L4 Off | 00000000:00:05.0 Off | 0 |\r\n",
454
+ "| N/A 64C P0 32W / 72W | 4096MiB / 23034MiB | 0% Default |\r\n",
455
+ "| | | N/A |\r\n",
456
+ "+-------------------------------+----------------------+----------------------+\r\n",
457
+ "| 2 NVIDIA L4 Off | 00000000:00:06.0 Off | 0 |\r\n",
458
+ "| N/A 65C P0 32W / 72W | 4096MiB / 23034MiB | 0% Default |\r\n",
459
+ "| | | N/A |\r\n",
460
+ "+-------------------------------+----------------------+----------------------+\r\n",
461
+ "| 3 NVIDIA L4 Off | 00000000:00:07.0 Off | 0 |\r\n",
462
+ "| N/A 62C P0 29W / 72W | 3570MiB / 23034MiB | 0% Default |\r\n",
463
+ "| | | N/A |\r\n",
464
+ "+-------------------------------+----------------------+----------------------+\r\n",
465
+ " \r\n",
466
+ "+-----------------------------------------------------------------------------+\r\n",
467
+ "| Processes: |\r\n",
468
+ "| GPU GI CI PID Type Process name GPU Memory |\r\n",
469
+ "| ID ID Usage |\r\n",
470
+ "|=============================================================================|\r\n",
471
+ "+-----------------------------------------------------------------------------+\r\n"
472
+ ]
473
+ }
474
+ ],
475
+ "source": [
476
+ "! nvidia-smi"
477
+ ]
478
+ },
479
+ {
480
+ "attachments": {},
481
+ "cell_type": "markdown",
482
+ "id": "4d1e1795-c783-4ddf-999e-f1de19258928",
483
+ "metadata": {
484
+ "papermill": {
485
+ "duration": 1.031477,
486
+ "end_time": "2023-10-23T04:14:35.109265",
487
+ "exception": false,
488
+ "start_time": "2023-10-23T04:14:34.077788",
489
+ "status": "completed"
490
+ },
491
+ "tags": []
492
+ },
493
+ "source": [
494
+ "Prompt before fine tuning"
495
+ ]
496
+ },
497
+ {
498
+ "cell_type": "code",
499
+ "execution_count": 7,
500
+ "id": "f5dd944b-e2bd-4bfd-a5fa-55bc90239926",
501
+ "metadata": {
502
+ "execution": {
503
+ "iopub.execute_input": "2023-10-23T04:14:42.797437Z",
504
+ "iopub.status.busy": "2023-10-23T04:14:42.796639Z",
505
+ "iopub.status.idle": "2023-10-23T04:14:42.819008Z",
506
+ "shell.execute_reply": "2023-10-23T04:14:42.818263Z"
507
+ },
508
+ "papermill": {
509
+ "duration": 6.737466,
510
+ "end_time": "2023-10-23T04:14:42.820457",
511
+ "exception": false,
512
+ "start_time": "2023-10-23T04:14:36.082991",
513
+ "status": "completed"
514
+ },
515
+ "tags": []
516
+ },
517
+ "outputs": [
518
+ {
519
+ "data": {
520
+ "text/plain": [
521
+ "LlamaTokenizerFast(name_or_path='/content/model/', vocab_size=32000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'pad_token': '[PAD]'}, clean_up_tokenization_spaces=False), added_tokens_decoder={\n",
522
+ "\t0: AddedToken(\"<unk>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n",
523
+ "\t1: AddedToken(\"<s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n",
524
+ "\t2: AddedToken(\"</s>\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n",
525
+ "\t32000: AddedToken(\"[PAD]\", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),\n",
526
+ "}"
527
+ ]
528
+ },
529
+ "execution_count": 7,
530
+ "metadata": {},
531
+ "output_type": "execute_result"
532
+ }
533
+ ],
534
+ "source": [
535
+ "from typing import Dict\n",
536
+ "# source: https://github.com/artidoro/qlora\n",
537
+ "DEFAULT_PAD_TOKEN = params.get(\"pad_token\", \"[PAD]\")\n",
538
+ "\n",
539
+ "def smart_tokenizer_and_embedding_resize(\n",
540
+ " special_tokens_dict: Dict,\n",
541
+ " tokenizer: transformers.PreTrainedTokenizer,\n",
542
+ " model: transformers.PreTrainedModel,\n",
543
+ "):\n",
544
+ " \"\"\"Resize tokenizer and embedding.\n",
545
+ "\n",
546
+ " Note: This is the unoptimized version that may make your embedding size not be divisible by 64.\n",
547
+ " \"\"\"\n",
548
+ " num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)\n",
549
+ " model.resize_token_embeddings(len(tokenizer))\n",
550
+ " if num_new_tokens > 0:\n",
551
+ " input_embeddings_data = model.get_input_embeddings().weight.data\n",
552
+ " output_embeddings_data = model.get_output_embeddings().weight.data\n",
553
+ "\n",
554
+ " input_embeddings_avg = input_embeddings_data[:-num_new_tokens].mean(dim=0, keepdim=True)\n",
555
+ " output_embeddings_avg = output_embeddings_data[:-num_new_tokens].mean(dim=0, keepdim=True)\n",
556
+ "\n",
557
+ " input_embeddings_data[-num_new_tokens:] = input_embeddings_avg\n",
558
+ " output_embeddings_data[-num_new_tokens:] = output_embeddings_avg\n",
559
+ "\n",
560
+ "if tokenizer._pad_token is None:\n",
561
+ " smart_tokenizer_and_embedding_resize(\n",
562
+ " special_tokens_dict=dict(pad_token=DEFAULT_PAD_TOKEN),\n",
563
+ " tokenizer=tokenizer,\n",
564
+ " model=model,\n",
565
+ " )\n",
566
+ "\n",
567
+ "if isinstance(tokenizer, transformers.LlamaTokenizer):\n",
568
+ " # LLaMA tokenizer may not have correct special tokens set.\n",
569
+ " # Check and add them if missing to prevent them from being parsed into different tokens.\n",
570
+ " # Note that these are present in the vocabulary.\n",
571
+ " # Note also that `model.config.pad_token_id` is 0 which corresponds to `<unk>` token.\n",
572
+ " print('Adding special tokens.')\n",
573
+ " tokenizer.add_special_tokens({\n",
574
+ " \"eos_token\": tokenizer.convert_ids_to_tokens(model.config.eos_token_id),\n",
575
+ " \"bos_token\": tokenizer.convert_ids_to_tokens(model.config.bos_token_id),\n",
576
+ " \"unk_token\": tokenizer.convert_ids_to_tokens(\n",
577
+ " model.config.pad_token_id if model.config.pad_token_id != -1 else tokenizer.pad_token_id\n",
578
+ " ),\n",
579
+ " })\n",
580
+ "\n",
581
+ "tokenizer"
582
+ ]
583
+ },
584
+ {
585
+ "cell_type": "code",
586
+ "execution_count": 8,
587
+ "id": "e78b510d",
588
+ "metadata": {
589
+ "execution": {
590
+ "iopub.execute_input": "2023-10-23T04:14:44.839307Z",
591
+ "iopub.status.busy": "2023-10-23T04:14:44.838706Z",
592
+ "iopub.status.idle": "2023-10-23T04:14:49.825169Z",
593
+ "shell.execute_reply": "2023-10-23T04:14:49.824517Z"
594
+ },
595
+ "papermill": {
596
+ "duration": 6.202467,
597
+ "end_time": "2023-10-23T04:14:50.025232",
598
+ "exception": false,
599
+ "start_time": "2023-10-23T04:14:43.822765",
600
+ "status": "completed"
601
+ },
602
+ "tags": []
603
+ },
604
+ "outputs": [
605
+ {
606
+ "data": {
607
+ "application/vnd.jupyter.widget-view+json": {
608
+ "model_id": "9aae387275d64be9a5a634561fb9c78c",
609
+ "version_major": 2,
610
+ "version_minor": 0
611
+ },
612
+ "text/plain": [
613
+ "Map: 0%| | 0/3163 [00:00<?, ? examples/s]"
614
+ ]
615
+ },
616
+ "metadata": {},
617
+ "output_type": "display_data"
618
+ },
619
+ {
620
+ "name": "stdout",
621
+ "output_type": "stream",
622
+ "text": [
623
+ "After tokenizing: DatasetDict({\n",
624
+ " train: Dataset({\n",
625
+ " features: ['input', 'output', 'nlcommand', 'apiRef', 'apiRefPath', 'schema', 'schemaPath', 'input_ids', 'attention_mask'],\n",
626
+ " num_rows: 3163\n",
627
+ " })\n",
628
+ "})\n"
629
+ ]
630
+ }
631
+ ],
632
+ "source": [
633
+ "from typing import Dict\n",
634
+ "\n",
635
+ "data = data.map(lambda x: tokenizer(prompt.format_map(x)))\n",
636
+ "\n",
637
+ "print(\"After tokenizing:\", data)"
638
+ ]
639
+ },
640
+ {
641
+ "cell_type": "code",
642
+ "execution_count": 9,
643
+ "id": "5dae6c6f-3ae1-4697-852e-fce24a82b9e8",
644
+ "metadata": {
645
+ "execution": {
646
+ "iopub.execute_input": "2023-10-23T04:14:52.022833Z",
647
+ "iopub.status.busy": "2023-10-23T04:14:52.022093Z",
648
+ "iopub.status.idle": "2023-10-23T04:16:25.579956Z",
649
+ "shell.execute_reply": "2023-10-23T04:16:25.579258Z"
650
+ },
651
+ "papermill": {
652
+ "duration": 95.759145,
653
+ "end_time": "2023-10-23T04:16:26.774525",
654
+ "exception": false,
655
+ "start_time": "2023-10-23T04:14:51.015380",
656
+ "status": "completed"
657
+ },
658
+ "tags": []
659
+ },
660
+ "outputs": [
661
+ {
662
+ "name": "stdout",
663
+ "output_type": "stream",
664
+ "text": [
665
+ "LoraConfig(peft_type=<PeftType.LORA: 'LORA'>, auto_mapping=None, base_model_name_or_path=None, revision=None, task_type='CAUSAL_LM', inference_mode=False, r=16, target_modules=['q_proj', 'up_proj', 'o_proj', 'k_proj', 'down_proj', 'gate_proj', 'v_proj'], lora_alpha=16, lora_dropout=0.05, fan_in_fan_out=False, bias='none', modules_to_save=['embed_tokens', 'lm_head'], init_lora_weights=True, layers_to_transform=None, layers_pattern=None)\n"
666
+ ]
667
+ },
668
+ {
669
+ "name": "stdout",
670
+ "output_type": "stream",
671
+ "text": [
672
+ "trainable params: 564,281,344 || all params: 7,040,552,960 || trainable%: 8.01473047935144\n"
673
+ ]
674
+ }
675
+ ],
676
+ "source": [
677
+ "from peft import get_peft_model, LoraConfig, prepare_model_for_kbit_training\n",
678
+ "\n",
679
+ "target_modules = params.get(\"target_modules\")\n",
680
+ "if target_modules:\n",
681
+ " target_modules = [mod.strip() for mod in target_modules.split(\",\")]\n",
682
+ "\n",
683
+ "modules_to_save = params.get(\"modules_to_save\")\n",
684
+ "if modules_to_save:\n",
685
+ " modules_to_save = [mod.strip() for mod in modules_to_save.split(\",\")]\n",
686
+ "\n",
687
+ "lora_config2 = LoraConfig(\n",
688
+ " r=16,\n",
689
+ " lora_alpha=16,\n",
690
+ " lora_dropout=0.05,\n",
691
+ " bias=\"none\",\n",
692
+ " task_type=\"CAUSAL_LM\",\n",
693
+ " target_modules=target_modules,\n",
694
+ " modules_to_save = modules_to_save\n",
695
+ ")\n",
696
+ "print(lora_config2)\n",
697
+ "\n",
698
+ "model = prepare_model_for_kbit_training(model)\n",
699
+ "\n",
700
+ "# add LoRA adaptor\n",
701
+ "model = get_peft_model(model, lora_config2)\n",
702
+ "model.print_trainable_parameters()"
703
+ ]
704
+ },
705
+ {
706
+ "cell_type": "code",
707
+ "execution_count": 10,
708
+ "id": "70a3e36c-62cf-45aa-8f37-0db0e40857dc",
709
+ "metadata": {
710
+ "execution": {
711
+ "iopub.execute_input": "2023-10-23T04:16:28.759999Z",
712
+ "iopub.status.busy": "2023-10-23T04:16:28.759043Z",
713
+ "iopub.status.idle": "2023-10-23T04:16:28.778449Z",
714
+ "shell.execute_reply": "2023-10-23T04:16:28.777816Z"
715
+ },
716
+ "papermill": {
717
+ "duration": 1.003134,
718
+ "end_time": "2023-10-23T04:16:28.780332",
719
+ "exception": false,
720
+ "start_time": "2023-10-23T04:16:27.777198",
721
+ "status": "completed"
722
+ },
723
+ "tags": []
724
+ },
725
+ "outputs": [
726
+ {
727
+ "data": {
728
+ "text/plain": [
729
+ "TrainingArguments(\n",
730
+ "_n_gpu=4,\n",
731
+ "adafactor=False,\n",
732
+ "adam_beta1=0.9,\n",
733
+ "adam_beta2=0.999,\n",
734
+ "adam_epsilon=1e-08,\n",
735
+ "auto_find_batch_size=False,\n",
736
+ "bf16=False,\n",
737
+ "bf16_full_eval=False,\n",
738
+ "data_seed=None,\n",
739
+ "dataloader_drop_last=False,\n",
740
+ "dataloader_num_workers=0,\n",
741
+ "dataloader_pin_memory=True,\n",
742
+ "ddp_backend=None,\n",
743
+ "ddp_broadcast_buffers=None,\n",
744
+ "ddp_bucket_cap_mb=None,\n",
745
+ "ddp_find_unused_parameters=None,\n",
746
+ "ddp_timeout=1800,\n",
747
+ "debug=[],\n",
748
+ "deepspeed=None,\n",
749
+ "disable_tqdm=False,\n",
750
+ "dispatch_batches=None,\n",
751
+ "do_eval=False,\n",
752
+ "do_predict=False,\n",
753
+ "do_train=False,\n",
754
+ "eval_accumulation_steps=None,\n",
755
+ "eval_delay=0,\n",
756
+ "eval_steps=None,\n",
757
+ "evaluation_strategy=no,\n",
758
+ "fp16=True,\n",
759
+ "fp16_backend=auto,\n",
760
+ "fp16_full_eval=False,\n",
761
+ "fp16_opt_level=O1,\n",
762
+ "fsdp=[],\n",
763
+ "fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},\n",
764
+ "fsdp_min_num_params=0,\n",
765
+ "fsdp_transformer_layer_cls_to_wrap=None,\n",
766
+ "full_determinism=False,\n",
767
+ "gradient_accumulation_steps=4,\n",
768
+ "gradient_checkpointing=False,\n",
769
+ "greater_is_better=None,\n",
770
+ "group_by_length=False,\n",
771
+ "half_precision_backend=auto,\n",
772
+ "hub_always_push=False,\n",
773
+ "hub_model_id=None,\n",
774
+ "hub_private_repo=False,\n",
775
+ "hub_strategy=every_save,\n",
776
+ "hub_token=<HUB_TOKEN>,\n",
777
+ "ignore_data_skip=False,\n",
778
+ "include_inputs_for_metrics=False,\n",
779
+ "include_tokens_per_second=False,\n",
780
+ "jit_mode_eval=False,\n",
781
+ "label_names=None,\n",
782
+ "label_smoothing_factor=0.0,\n",
783
+ "learning_rate=3e-05,\n",
784
+ "length_column_name=length,\n",
785
+ "load_best_model_at_end=False,\n",
786
+ "local_rank=0,\n",
787
+ "log_level=passive,\n",
788
+ "log_level_replica=warning,\n",
789
+ "log_on_each_node=True,\n",
790
+ "logging_dir=/content/artifacts/checkpoints/runs/Oct23_04-16-28_wgqlg-withoutretrieval-schemasplit-train-80-v2-modeller-wk9wh,\n",
791
+ "logging_first_step=False,\n",
792
+ "logging_nan_inf_filter=True,\n",
793
+ "logging_steps=50,\n",
794
+ "logging_strategy=steps,\n",
795
+ "lr_scheduler_type=cosine,\n",
796
+ "max_grad_norm=1.0,\n",
797
+ "max_steps=-1,\n",
798
+ "metric_for_best_model=None,\n",
799
+ "mp_parameters=,\n",
800
+ "no_cuda=False,\n",
801
+ "num_train_epochs=3.0,\n",
802
+ "optim=paged_adamw_32bit,\n",
803
+ "optim_args=None,\n",
804
+ "output_dir=/content/artifacts/checkpoints,\n",
805
+ "overwrite_output_dir=False,\n",
806
+ "past_index=-1,\n",
807
+ "per_device_eval_batch_size=1,\n",
808
+ "per_device_train_batch_size=1,\n",
809
+ "prediction_loss_only=False,\n",
810
+ "push_to_hub=False,\n",
811
+ "push_to_hub_model_id=None,\n",
812
+ "push_to_hub_organization=None,\n",
813
+ "push_to_hub_token=<PUSH_TO_HUB_TOKEN>,\n",
814
+ "ray_scope=last,\n",
815
+ "remove_unused_columns=True,\n",
816
+ "report_to=[],\n",
817
+ "resume_from_checkpoint=None,\n",
818
+ "run_name=/content/artifacts/checkpoints,\n",
819
+ "save_on_each_node=False,\n",
820
+ "save_safetensors=False,\n",
821
+ "save_steps=50,\n",
822
+ "save_strategy=steps,\n",
823
+ "save_total_limit=None,\n",
824
+ "seed=42,\n",
825
+ "sharded_ddp=[],\n",
826
+ "skip_memory_metrics=True,\n",
827
+ "tf32=None,\n",
828
+ "torch_compile=False,\n",
829
+ "torch_compile_backend=None,\n",
830
+ "torch_compile_mode=None,\n",
831
+ "torchdynamo=None,\n",
832
+ "tpu_metrics_debug=False,\n",
833
+ "tpu_num_cores=None,\n",
834
+ "use_cpu=False,\n",
835
+ "use_ipex=False,\n",
836
+ "use_legacy_prediction_loop=False,\n",
837
+ "use_mps_device=False,\n",
838
+ "warmup_ratio=0.02,\n",
839
+ "warmup_steps=100,\n",
840
+ "weight_decay=0.0,\n",
841
+ ")"
842
+ ]
843
+ },
844
+ "execution_count": 10,
845
+ "metadata": {},
846
+ "output_type": "execute_result"
847
+ }
848
+ ],
849
+ "source": [
850
+ "from utils import parse_training_args\n",
851
+ "\n",
852
+ "training_args = parse_training_args(params)\n",
853
+ "training_args"
854
+ ]
855
+ },
856
+ {
857
+ "cell_type": "code",
858
+ "execution_count": 11,
859
+ "id": "2ae3e5f9-e28e-457b-b6bf-a62a472241bf",
860
+ "metadata": {
861
+ "execution": {
862
+ "iopub.execute_input": "2023-10-23T04:16:30.856558Z",
863
+ "iopub.status.busy": "2023-10-23T04:16:30.845583Z",
864
+ "iopub.status.idle": "2023-10-23T04:16:30.859550Z",
865
+ "shell.execute_reply": "2023-10-23T04:16:30.858918Z"
866
+ },
867
+ "papermill": {
868
+ "duration": 1.039895,
869
+ "end_time": "2023-10-23T04:16:30.861071",
870
+ "exception": false,
871
+ "start_time": "2023-10-23T04:16:29.821176",
872
+ "status": "completed"
873
+ },
874
+ "tags": []
875
+ },
876
+ "outputs": [],
877
+ "source": [
878
+ "# data = data[\"train\"].train_test_split(test_size=0.1)\n",
879
+ "# data\n"
880
+ ]
881
+ },
882
+ {
883
+ "cell_type": "code",
884
+ "execution_count": 12,
885
+ "id": "5bc91439-6108-445c-8f85-e6558c9f0677",
886
+ "metadata": {
887
+ "execution": {
888
+ "iopub.execute_input": "2023-10-23T04:16:32.873189Z",
889
+ "iopub.status.busy": "2023-10-23T04:16:32.872448Z",
890
+ "iopub.status.idle": "2023-10-23T04:16:33.145627Z",
891
+ "shell.execute_reply": "2023-10-23T04:16:33.144802Z"
892
+ },
893
+ "papermill": {
894
+ "duration": 1.290055,
895
+ "end_time": "2023-10-23T04:16:33.147320",
896
+ "exception": false,
897
+ "start_time": "2023-10-23T04:16:31.857265",
898
+ "status": "completed"
899
+ },
900
+ "tags": []
901
+ },
902
+ "outputs": [
903
+ {
904
+ "name": "stderr",
905
+ "output_type": "stream",
906
+ "text": [
907
+ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
908
+ "To disable this warning, you can either:\n",
909
+ "\t- Avoid using `tokenizers` before the fork if possible\n",
910
+ "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
911
+ ]
912
+ }
913
+ ],
914
+ "source": [
915
+ "! mkdir -p {trained_model_path_lora}"
916
+ ]
917
+ },
918
+ {
919
+ "cell_type": "code",
920
+ "execution_count": 13,
921
+ "id": "b33e407a-9d4f-49f6-a74b-b80db8cc3a8a",
922
+ "metadata": {
923
+ "execution": {
924
+ "iopub.execute_input": "2023-10-23T04:16:36.127583Z",
925
+ "iopub.status.busy": "2023-10-23T04:16:36.126817Z",
926
+ "iopub.status.idle": "2023-10-23T07:07:47.130996Z",
927
+ "shell.execute_reply": "2023-10-23T07:07:47.130335Z"
928
+ },
929
+ "papermill": {
930
+ "duration": 10272.969761,
931
+ "end_time": "2023-10-23T07:07:47.132555",
932
+ "exception": false,
933
+ "start_time": "2023-10-23T04:16:34.162794",
934
+ "status": "completed"
935
+ },
936
+ "tags": []
937
+ },
938
+ "outputs": [
939
+ {
940
+ "name": "stderr",
941
+ "output_type": "stream",
942
+ "text": [
943
+ "You're using a LlamaTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.\n"
944
+ ]
945
+ },
946
+ {
947
+ "data": {
948
+ "text/html": [
949
+ "\n",
950
+ " <div>\n",
951
+ " \n",
952
+ " <progress value='2370' max='2370' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
953
+ " [2370/2370 2:51:03, Epoch 2/3]\n",
954
+ " </div>\n",
955
+ " <table border=\"1\" class=\"dataframe\">\n",
956
+ " <thead>\n",
957
+ " <tr style=\"text-align: left;\">\n",
958
+ " <th>Step</th>\n",
959
+ " <th>Training Loss</th>\n",
960
+ " </tr>\n",
961
+ " </thead>\n",
962
+ " <tbody>\n",
963
+ " <tr>\n",
964
+ " <td>50</td>\n",
965
+ " <td>0.881200</td>\n",
966
+ " </tr>\n",
967
+ " <tr>\n",
968
+ " <td>100</td>\n",
969
+ " <td>0.341200</td>\n",
970
+ " </tr>\n",
971
+ " <tr>\n",
972
+ " <td>150</td>\n",
973
+ " <td>0.178000</td>\n",
974
+ " </tr>\n",
975
+ " <tr>\n",
976
+ " <td>200</td>\n",
977
+ " <td>0.138400</td>\n",
978
+ " </tr>\n",
979
+ " <tr>\n",
980
+ " <td>250</td>\n",
981
+ " <td>0.104300</td>\n",
982
+ " </tr>\n",
983
+ " <tr>\n",
984
+ " <td>300</td>\n",
985
+ " <td>0.085100</td>\n",
986
+ " </tr>\n",
987
+ " <tr>\n",
988
+ " <td>350</td>\n",
989
+ " <td>0.070900</td>\n",
990
+ " </tr>\n",
991
+ " <tr>\n",
992
+ " <td>400</td>\n",
993
+ " <td>0.059100</td>\n",
994
+ " </tr>\n",
995
+ " <tr>\n",
996
+ " <td>450</td>\n",
997
+ " <td>0.054200</td>\n",
998
+ " </tr>\n",
999
+ " <tr>\n",
1000
+ " <td>500</td>\n",
1001
+ " <td>0.052800</td>\n",
1002
+ " </tr>\n",
1003
+ " <tr>\n",
1004
+ " <td>550</td>\n",
1005
+ " <td>0.049400</td>\n",
1006
+ " </tr>\n",
1007
+ " <tr>\n",
1008
+ " <td>600</td>\n",
1009
+ " <td>0.046500</td>\n",
1010
+ " </tr>\n",
1011
+ " <tr>\n",
1012
+ " <td>650</td>\n",
1013
+ " <td>0.041700</td>\n",
1014
+ " </tr>\n",
1015
+ " <tr>\n",
1016
+ " <td>700</td>\n",
1017
+ " <td>0.044300</td>\n",
1018
+ " </tr>\n",
1019
+ " <tr>\n",
1020
+ " <td>750</td>\n",
1021
+ " <td>0.043600</td>\n",
1022
+ " </tr>\n",
1023
+ " <tr>\n",
1024
+ " <td>800</td>\n",
1025
+ " <td>0.042000</td>\n",
1026
+ " </tr>\n",
1027
+ " <tr>\n",
1028
+ " <td>850</td>\n",
1029
+ " <td>0.035900</td>\n",
1030
+ " </tr>\n",
1031
+ " <tr>\n",
1032
+ " <td>900</td>\n",
1033
+ " <td>0.038100</td>\n",
1034
+ " </tr>\n",
1035
+ " <tr>\n",
1036
+ " <td>950</td>\n",
1037
+ " <td>0.033700</td>\n",
1038
+ " </tr>\n",
1039
+ " <tr>\n",
1040
+ " <td>1000</td>\n",
1041
+ " <td>0.033300</td>\n",
1042
+ " </tr>\n",
1043
+ " <tr>\n",
1044
+ " <td>1050</td>\n",
1045
+ " <td>0.033800</td>\n",
1046
+ " </tr>\n",
1047
+ " <tr>\n",
1048
+ " <td>1100</td>\n",
1049
+ " <td>0.033500</td>\n",
1050
+ " </tr>\n",
1051
+ " <tr>\n",
1052
+ " <td>1150</td>\n",
1053
+ " <td>0.032800</td>\n",
1054
+ " </tr>\n",
1055
+ " <tr>\n",
1056
+ " <td>1200</td>\n",
1057
+ " <td>0.033500</td>\n",
1058
+ " </tr>\n",
1059
+ " <tr>\n",
1060
+ " <td>1250</td>\n",
1061
+ " <td>0.031600</td>\n",
1062
+ " </tr>\n",
1063
+ " <tr>\n",
1064
+ " <td>1300</td>\n",
1065
+ " <td>0.033600</td>\n",
1066
+ " </tr>\n",
1067
+ " <tr>\n",
1068
+ " <td>1350</td>\n",
1069
+ " <td>0.032900</td>\n",
1070
+ " </tr>\n",
1071
+ " <tr>\n",
1072
+ " <td>1400</td>\n",
1073
+ " <td>0.029600</td>\n",
1074
+ " </tr>\n",
1075
+ " <tr>\n",
1076
+ " <td>1450</td>\n",
1077
+ " <td>0.033000</td>\n",
1078
+ " </tr>\n",
1079
+ " <tr>\n",
1080
+ " <td>1500</td>\n",
1081
+ " <td>0.032800</td>\n",
1082
+ " </tr>\n",
1083
+ " <tr>\n",
1084
+ " <td>1550</td>\n",
1085
+ " <td>0.032300</td>\n",
1086
+ " </tr>\n",
1087
+ " <tr>\n",
1088
+ " <td>1600</td>\n",
1089
+ " <td>0.030600</td>\n",
1090
+ " </tr>\n",
1091
+ " <tr>\n",
1092
+ " <td>1650</td>\n",
1093
+ " <td>0.025900</td>\n",
1094
+ " </tr>\n",
1095
+ " <tr>\n",
1096
+ " <td>1700</td>\n",
1097
+ " <td>0.027000</td>\n",
1098
+ " </tr>\n",
1099
+ " <tr>\n",
1100
+ " <td>1750</td>\n",
1101
+ " <td>0.027400</td>\n",
1102
+ " </tr>\n",
1103
+ " <tr>\n",
1104
+ " <td>1800</td>\n",
1105
+ " <td>0.025700</td>\n",
1106
+ " </tr>\n",
1107
+ " <tr>\n",
1108
+ " <td>1850</td>\n",
1109
+ " <td>0.025400</td>\n",
1110
+ " </tr>\n",
1111
+ " <tr>\n",
1112
+ " <td>1900</td>\n",
1113
+ " <td>0.026400</td>\n",
1114
+ " </tr>\n",
1115
+ " <tr>\n",
1116
+ " <td>1950</td>\n",
1117
+ " <td>0.025500</td>\n",
1118
+ " </tr>\n",
1119
+ " <tr>\n",
1120
+ " <td>2000</td>\n",
1121
+ " <td>0.026300</td>\n",
1122
+ " </tr>\n",
1123
+ " <tr>\n",
1124
+ " <td>2050</td>\n",
1125
+ " <td>0.025600</td>\n",
1126
+ " </tr>\n",
1127
+ " <tr>\n",
1128
+ " <td>2100</td>\n",
1129
+ " <td>0.026500</td>\n",
1130
+ " </tr>\n",
1131
+ " <tr>\n",
1132
+ " <td>2150</td>\n",
1133
+ " <td>0.025600</td>\n",
1134
+ " </tr>\n",
1135
+ " <tr>\n",
1136
+ " <td>2200</td>\n",
1137
+ " <td>0.026000</td>\n",
1138
+ " </tr>\n",
1139
+ " <tr>\n",
1140
+ " <td>2250</td>\n",
1141
+ " <td>0.026500</td>\n",
1142
+ " </tr>\n",
1143
+ " <tr>\n",
1144
+ " <td>2300</td>\n",
1145
+ " <td>0.025700</td>\n",
1146
+ " </tr>\n",
1147
+ " <tr>\n",
1148
+ " <td>2350</td>\n",
1149
+ " <td>0.025800</td>\n",
1150
+ " </tr>\n",
1151
+ " </tbody>\n",
1152
+ "</table><p>"
1153
+ ],
1154
+ "text/plain": [
1155
+ "<IPython.core.display.HTML object>"
1156
+ ]
1157
+ },
1158
+ "metadata": {},
1159
+ "output_type": "display_data"
1160
+ },
1161
+ {
1162
+ "data": {
1163
+ "text/plain": [
1164
+ "TrainOutput(global_step=2370, training_loss=0.06678998734377607, metrics={'train_runtime': 10270.6027, 'train_samples_per_second': 0.924, 'train_steps_per_second': 0.231, 'total_flos': 2.160162583196713e+17, 'train_loss': 0.06678998734377607, 'epoch': 3.0})"
1165
+ ]
1166
+ },
1167
+ "execution_count": 13,
1168
+ "metadata": {},
1169
+ "output_type": "execute_result"
1170
+ }
1171
+ ],
1172
+ "source": [
1173
+ "trainer = transformers.Trainer(\n",
1174
+ " model=model,\n",
1175
+ " train_dataset=data[\"train\"],\n",
1176
+ "# eval_dataset=data[\"test\"],\n",
1177
+ " args=training_args,\n",
1178
+ " data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),\n",
1179
+ ")\n",
1180
+ "model.config.use_cache = False # silence the warnings. Please re-enable for inference!\n",
1181
+ "\n",
1182
+ "checkpoint_path = Path(\"/content/artifacts/checkpoints\")\n",
1183
+ "\n",
1184
+ "# Only set resume_from_checkpoint True when directory exists and contains files\n",
1185
+ "resume_from_checkpoint = checkpoint_path.is_dir() and any(checkpoint_path.iterdir())\n",
1186
+ "if resume_from_checkpoint:\n",
1187
+ " print(\"Resuming from checkpoint:\", list(checkpoint_path.rglob(\"\")))\n",
1188
+ "trainer.train(resume_from_checkpoint=resume_from_checkpoint)"
1189
+ ]
1190
+ },
1191
+ {
1192
+ "cell_type": "code",
1193
+ "execution_count": 14,
1194
+ "id": "172e47a7-400e-4f82-a5e3-38135ecf532f",
1195
+ "metadata": {
1196
+ "execution": {
1197
+ "iopub.execute_input": "2023-10-23T07:07:49.427665Z",
1198
+ "iopub.status.busy": "2023-10-23T07:07:49.427050Z",
1199
+ "iopub.status.idle": "2023-10-23T07:08:07.740366Z",
1200
+ "shell.execute_reply": "2023-10-23T07:08:07.739680Z"
1201
+ },
1202
+ "papermill": {
1203
+ "duration": 19.377847,
1204
+ "end_time": "2023-10-23T07:08:07.742055",
1205
+ "exception": false,
1206
+ "start_time": "2023-10-23T07:07:48.364208",
1207
+ "status": "completed"
1208
+ },
1209
+ "tags": []
1210
+ },
1211
+ "outputs": [
1212
+ {
1213
+ "data": {
1214
+ "text/plain": [
1215
+ "PeftModelForCausalLM(\n",
1216
+ " (base_model): LoraModel(\n",
1217
+ " (model): LlamaForCausalLM(\n",
1218
+ " (model): LlamaModel(\n",
1219
+ " (embed_tokens): ModulesToSaveWrapper(\n",
1220
+ " (original_module): Embedding(32001, 4096)\n",
1221
+ " (modules_to_save): ModuleDict(\n",
1222
+ " (default): Embedding(32001, 4096)\n",
1223
+ " )\n",
1224
+ " )\n",
1225
+ " (layers): ModuleList(\n",
1226
+ " (0-31): 32 x LlamaDecoderLayer(\n",
1227
+ " (self_attn): LlamaAttention(\n",
1228
+ " (q_proj): Linear(\n",
1229
+ " in_features=4096, out_features=4096, bias=False\n",
1230
+ " (lora_dropout): ModuleDict(\n",
1231
+ " (default): Dropout(p=0.05, inplace=False)\n",
1232
+ " )\n",
1233
+ " (lora_A): ModuleDict(\n",
1234
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1235
+ " )\n",
1236
+ " (lora_B): ModuleDict(\n",
1237
+ " (default): Linear(in_features=16, out_features=4096, bias=False)\n",
1238
+ " )\n",
1239
+ " (lora_embedding_A): ParameterDict()\n",
1240
+ " (lora_embedding_B): ParameterDict()\n",
1241
+ " )\n",
1242
+ " (k_proj): Linear(\n",
1243
+ " in_features=4096, out_features=4096, bias=False\n",
1244
+ " (lora_dropout): ModuleDict(\n",
1245
+ " (default): Dropout(p=0.05, inplace=False)\n",
1246
+ " )\n",
1247
+ " (lora_A): ModuleDict(\n",
1248
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1249
+ " )\n",
1250
+ " (lora_B): ModuleDict(\n",
1251
+ " (default): Linear(in_features=16, out_features=4096, bias=False)\n",
1252
+ " )\n",
1253
+ " (lora_embedding_A): ParameterDict()\n",
1254
+ " (lora_embedding_B): ParameterDict()\n",
1255
+ " )\n",
1256
+ " (v_proj): Linear(\n",
1257
+ " in_features=4096, out_features=4096, bias=False\n",
1258
+ " (lora_dropout): ModuleDict(\n",
1259
+ " (default): Dropout(p=0.05, inplace=False)\n",
1260
+ " )\n",
1261
+ " (lora_A): ModuleDict(\n",
1262
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1263
+ " )\n",
1264
+ " (lora_B): ModuleDict(\n",
1265
+ " (default): Linear(in_features=16, out_features=4096, bias=False)\n",
1266
+ " )\n",
1267
+ " (lora_embedding_A): ParameterDict()\n",
1268
+ " (lora_embedding_B): ParameterDict()\n",
1269
+ " )\n",
1270
+ " (o_proj): Linear(\n",
1271
+ " in_features=4096, out_features=4096, bias=False\n",
1272
+ " (lora_dropout): ModuleDict(\n",
1273
+ " (default): Dropout(p=0.05, inplace=False)\n",
1274
+ " )\n",
1275
+ " (lora_A): ModuleDict(\n",
1276
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1277
+ " )\n",
1278
+ " (lora_B): ModuleDict(\n",
1279
+ " (default): Linear(in_features=16, out_features=4096, bias=False)\n",
1280
+ " )\n",
1281
+ " (lora_embedding_A): ParameterDict()\n",
1282
+ " (lora_embedding_B): ParameterDict()\n",
1283
+ " )\n",
1284
+ " (rotary_emb): LlamaRotaryEmbedding()\n",
1285
+ " )\n",
1286
+ " (mlp): LlamaMLP(\n",
1287
+ " (gate_proj): Linear(\n",
1288
+ " in_features=4096, out_features=11008, bias=False\n",
1289
+ " (lora_dropout): ModuleDict(\n",
1290
+ " (default): Dropout(p=0.05, inplace=False)\n",
1291
+ " )\n",
1292
+ " (lora_A): ModuleDict(\n",
1293
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1294
+ " )\n",
1295
+ " (lora_B): ModuleDict(\n",
1296
+ " (default): Linear(in_features=16, out_features=11008, bias=False)\n",
1297
+ " )\n",
1298
+ " (lora_embedding_A): ParameterDict()\n",
1299
+ " (lora_embedding_B): ParameterDict()\n",
1300
+ " )\n",
1301
+ " (up_proj): Linear(\n",
1302
+ " in_features=4096, out_features=11008, bias=False\n",
1303
+ " (lora_dropout): ModuleDict(\n",
1304
+ " (default): Dropout(p=0.05, inplace=False)\n",
1305
+ " )\n",
1306
+ " (lora_A): ModuleDict(\n",
1307
+ " (default): Linear(in_features=4096, out_features=16, bias=False)\n",
1308
+ " )\n",
1309
+ " (lora_B): ModuleDict(\n",
1310
+ " (default): Linear(in_features=16, out_features=11008, bias=False)\n",
1311
+ " )\n",
1312
+ " (lora_embedding_A): ParameterDict()\n",
1313
+ " (lora_embedding_B): ParameterDict()\n",
1314
+ " )\n",
1315
+ " (down_proj): Linear(\n",
1316
+ " in_features=11008, out_features=4096, bias=False\n",
1317
+ " (lora_dropout): ModuleDict(\n",
1318
+ " (default): Dropout(p=0.05, inplace=False)\n",
1319
+ " )\n",
1320
+ " (lora_A): ModuleDict(\n",
1321
+ " (default): Linear(in_features=11008, out_features=16, bias=False)\n",
1322
+ " )\n",
1323
+ " (lora_B): ModuleDict(\n",
1324
+ " (default): Linear(in_features=16, out_features=4096, bias=False)\n",
1325
+ " )\n",
1326
+ " (lora_embedding_A): ParameterDict()\n",
1327
+ " (lora_embedding_B): ParameterDict()\n",
1328
+ " )\n",
1329
+ " (act_fn): SiLUActivation()\n",
1330
+ " )\n",
1331
+ " (input_layernorm): LlamaRMSNorm()\n",
1332
+ " (post_attention_layernorm): LlamaRMSNorm()\n",
1333
+ " )\n",
1334
+ " )\n",
1335
+ " (norm): LlamaRMSNorm()\n",
1336
+ " )\n",
1337
+ " (lm_head): ModulesToSaveWrapper(\n",
1338
+ " (original_module): Linear(in_features=4096, out_features=32001, bias=False)\n",
1339
+ " (modules_to_save): ModuleDict(\n",
1340
+ " (default): Linear(in_features=4096, out_features=32001, bias=False)\n",
1341
+ " )\n",
1342
+ " )\n",
1343
+ " )\n",
1344
+ " )\n",
1345
+ ")"
1346
+ ]
1347
+ },
1348
+ "execution_count": 14,
1349
+ "metadata": {},
1350
+ "output_type": "execute_result"
1351
+ }
1352
+ ],
1353
+ "source": [
1354
+ "model.save_pretrained(trained_model_path_lora)\n",
1355
+ "model"
1356
+ ]
1357
+ },
1358
+ {
1359
+ "cell_type": "code",
1360
+ "execution_count": 15,
1361
+ "id": "dea4e68e-57a7-48bd-bad9-f03dfe3f8a06",
1362
+ "metadata": {
1363
+ "execution": {
1364
+ "iopub.execute_input": "2023-10-23T07:08:09.719819Z",
1365
+ "iopub.status.busy": "2023-10-23T07:08:09.719055Z",
1366
+ "iopub.status.idle": "2023-10-23T07:08:09.968284Z",
1367
+ "shell.execute_reply": "2023-10-23T07:08:09.967347Z"
1368
+ },
1369
+ "papermill": {
1370
+ "duration": 1.229019,
1371
+ "end_time": "2023-10-23T07:08:09.969828",
1372
+ "exception": false,
1373
+ "start_time": "2023-10-23T07:08:08.740809",
1374
+ "status": "completed"
1375
+ },
1376
+ "tags": []
1377
+ },
1378
+ "outputs": [
1379
+ {
1380
+ "name": "stdout",
1381
+ "output_type": "stream",
1382
+ "text": [
1383
+ "total 1.2G\r\n",
1384
+ " 512 -rw-r--r-- 1 root 3003 88 Oct 23 07:07 README.md\r\n",
1385
+ "1.0K -rw-r--r-- 1 root 3003 550 Oct 23 07:08 adapter_config.json\r\n",
1386
+ "1.2G -rw-r--r-- 1 root 3003 1.2G Oct 23 07:07 adapter_model.bin\r\n"
1387
+ ]
1388
+ },
1389
+ {
1390
+ "name": "stderr",
1391
+ "output_type": "stream",
1392
+ "text": [
1393
+ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
1394
+ "To disable this warning, you can either:\n",
1395
+ "\t- Avoid using `tokenizers` before the fork if possible\n",
1396
+ "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
1397
+ ]
1398
+ }
1399
+ ],
1400
+ "source": [
1401
+ "! ls -lash {trained_model_path_lora}"
1402
+ ]
1403
+ },
1404
+ {
1405
+ "cell_type": "code",
1406
+ "execution_count": 16,
1407
+ "id": "09db36b7-ead6-4368-9bfb-13ba1ba800a5",
1408
+ "metadata": {
1409
+ "execution": {
1410
+ "iopub.execute_input": "2023-10-23T07:08:11.940246Z",
1411
+ "iopub.status.busy": "2023-10-23T07:08:11.939444Z",
1412
+ "iopub.status.idle": "2023-10-23T07:09:04.484842Z",
1413
+ "shell.execute_reply": "2023-10-23T07:09:04.484162Z"
1414
+ },
1415
+ "papermill": {
1416
+ "duration": 54.728628,
1417
+ "end_time": "2023-10-23T07:09:05.635793",
1418
+ "exception": false,
1419
+ "start_time": "2023-10-23T07:08:10.907165",
1420
+ "status": "completed"
1421
+ },
1422
+ "tags": []
1423
+ },
1424
+ "outputs": [
1425
+ {
1426
+ "data": {
1427
+ "text/plain": [
1428
+ "LlamaForCausalLM(\n",
1429
+ " (model): LlamaModel(\n",
1430
+ " (embed_tokens): Embedding(32001, 4096)\n",
1431
+ " (layers): ModuleList(\n",
1432
+ " (0-31): 32 x LlamaDecoderLayer(\n",
1433
+ " (self_attn): LlamaAttention(\n",
1434
+ " (q_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
1435
+ " (k_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
1436
+ " (v_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
1437
+ " (o_proj): Linear(in_features=4096, out_features=4096, bias=False)\n",
1438
+ " (rotary_emb): LlamaRotaryEmbedding()\n",
1439
+ " )\n",
1440
+ " (mlp): LlamaMLP(\n",
1441
+ " (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
1442
+ " (up_proj): Linear(in_features=4096, out_features=11008, bias=False)\n",
1443
+ " (down_proj): Linear(in_features=11008, out_features=4096, bias=False)\n",
1444
+ " (act_fn): SiLUActivation()\n",
1445
+ " )\n",
1446
+ " (input_layernorm): LlamaRMSNorm()\n",
1447
+ " (post_attention_layernorm): LlamaRMSNorm()\n",
1448
+ " )\n",
1449
+ " )\n",
1450
+ " (norm): LlamaRMSNorm()\n",
1451
+ " )\n",
1452
+ " (lm_head): Linear(in_features=4096, out_features=32001, bias=False)\n",
1453
+ ")"
1454
+ ]
1455
+ },
1456
+ "execution_count": 16,
1457
+ "metadata": {},
1458
+ "output_type": "execute_result"
1459
+ }
1460
+ ],
1461
+ "source": [
1462
+ "model = model.merge_and_unload().half()\n",
1463
+ "model"
1464
+ ]
1465
+ },
1466
+ {
1467
+ "cell_type": "code",
1468
+ "execution_count": 17,
1469
+ "id": "270a9a72-3a12-4d83-aa7d-2d167cb28cb4",
1470
+ "metadata": {
1471
+ "execution": {
1472
+ "iopub.execute_input": "2023-10-23T07:09:07.731540Z",
1473
+ "iopub.status.busy": "2023-10-23T07:09:07.730902Z",
1474
+ "iopub.status.idle": "2023-10-23T07:09:07.975280Z",
1475
+ "shell.execute_reply": "2023-10-23T07:09:07.974458Z"
1476
+ },
1477
+ "papermill": {
1478
+ "duration": 1.355032,
1479
+ "end_time": "2023-10-23T07:09:07.976846",
1480
+ "exception": false,
1481
+ "start_time": "2023-10-23T07:09:06.621814",
1482
+ "status": "completed"
1483
+ },
1484
+ "tags": []
1485
+ },
1486
+ "outputs": [
1487
+ {
1488
+ "name": "stdout",
1489
+ "output_type": "stream",
1490
+ "text": [
1491
+ "total 0\r\n",
1492
+ "drwxr-xr-x 1 root 3003 0 Oct 23 04:16 checkpoints\r\n",
1493
+ "drwxr-xr-x 1 root 3003 0 Oct 23 04:16 lora\r\n",
1494
+ "drwxr-xr-x 1 root 3003 0 Oct 23 04:10 src\r\n"
1495
+ ]
1496
+ },
1497
+ {
1498
+ "name": "stderr",
1499
+ "output_type": "stream",
1500
+ "text": [
1501
+ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
1502
+ "To disable this warning, you can either:\n",
1503
+ "\t- Avoid using `tokenizers` before the fork if possible\n",
1504
+ "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
1505
+ ]
1506
+ }
1507
+ ],
1508
+ "source": [
1509
+ "! ls -l {trained_model_path}"
1510
+ ]
1511
+ },
1512
+ {
1513
+ "cell_type": "code",
1514
+ "execution_count": 18,
1515
+ "id": "260e9d79-6eb8-4516-bf8f-825a25606391",
1516
+ "metadata": {
1517
+ "execution": {
1518
+ "iopub.execute_input": "2023-10-23T07:09:09.990340Z",
1519
+ "iopub.status.busy": "2023-10-23T07:09:09.989655Z",
1520
+ "iopub.status.idle": "2023-10-23T07:11:33.903117Z",
1521
+ "shell.execute_reply": "2023-10-23T07:11:33.902350Z"
1522
+ },
1523
+ "papermill": {
1524
+ "duration": 145.986999,
1525
+ "end_time": "2023-10-23T07:11:34.968252",
1526
+ "exception": false,
1527
+ "start_time": "2023-10-23T07:09:08.981253",
1528
+ "status": "completed"
1529
+ },
1530
+ "tags": []
1531
+ },
1532
+ "outputs": [
1533
+ {
1534
+ "data": {
1535
+ "text/plain": [
1536
+ "('/content/artifacts/tokenizer_config.json',\n",
1537
+ " '/content/artifacts/special_tokens_map.json',\n",
1538
+ " '/content/artifacts/tokenizer.model',\n",
1539
+ " '/content/artifacts/added_tokens.json',\n",
1540
+ " '/content/artifacts/tokenizer.json')"
1541
+ ]
1542
+ },
1543
+ "execution_count": 18,
1544
+ "metadata": {},
1545
+ "output_type": "execute_result"
1546
+ }
1547
+ ],
1548
+ "source": [
1549
+ "model.save_pretrained(trained_model_path)\n",
1550
+ "tokenizer.save_pretrained(trained_model_path)"
1551
+ ]
1552
+ },
1553
+ {
1554
+ "cell_type": "code",
1555
+ "execution_count": 19,
1556
+ "id": "6d90a920-fb22-4291-8466-411ff41e31be",
1557
+ "metadata": {
1558
+ "execution": {
1559
+ "iopub.execute_input": "2023-10-23T07:11:36.839690Z",
1560
+ "iopub.status.busy": "2023-10-23T07:11:36.838894Z",
1561
+ "iopub.status.idle": "2023-10-23T07:11:37.088096Z",
1562
+ "shell.execute_reply": "2023-10-23T07:11:37.087230Z"
1563
+ },
1564
+ "papermill": {
1565
+ "duration": 1.198205,
1566
+ "end_time": "2023-10-23T07:11:37.089762",
1567
+ "exception": false,
1568
+ "start_time": "2023-10-23T07:11:35.891557",
1569
+ "status": "completed"
1570
+ },
1571
+ "tags": []
1572
+ },
1573
+ "outputs": [
1574
+ {
1575
+ "name": "stdout",
1576
+ "output_type": "stream",
1577
+ "text": [
1578
+ "total 13G\r\n",
1579
+ " 512 -rw-r--r-- 1 root 3003 21 Oct 23 07:11 added_tokens.json\r\n",
1580
+ " 0 drwxr-xr-x 1 root 3003 0 Oct 23 04:16 checkpoints\r\n",
1581
+ "1.0K -rw-r--r-- 1 root 3003 648 Oct 23 07:09 config.json\r\n",
1582
+ " 512 -rw-r--r-- 1 root 3003 183 Oct 23 07:09 generation_config.json\r\n",
1583
+ " 0 drwxr-xr-x 1 root 3003 0 Oct 23 04:16 lora\r\n",
1584
+ "9.3G -rw-r--r-- 1 root 3003 9.3G Oct 23 07:09 pytorch_model-00001-of-00002.bin\r\n",
1585
+ "3.3G -rw-r--r-- 1 root 3003 3.3G Oct 23 07:11 pytorch_model-00002-of-00002.bin\r\n",
1586
+ " 24K -rw-r--r-- 1 root 3003 24K Oct 23 07:11 pytorch_model.bin.index.json\r\n",
1587
+ "1.0K -rw-r--r-- 1 root 3003 552 Oct 23 07:11 special_tokens_map.json\r\n",
1588
+ " 0 drwxr-xr-x 1 root 3003 0 Oct 23 04:10 src\r\n",
1589
+ "1.8M -rw-r--r-- 1 root 3003 1.8M Oct 23 07:11 tokenizer.json\r\n",
1590
+ "489K -rw-r--r-- 1 root 3003 489K Oct 23 07:11 tokenizer.model\r\n",
1591
+ "1.5K -rw-r--r-- 1 root 3003 1.1K Oct 23 07:11 tokenizer_config.json\r\n"
1592
+ ]
1593
+ },
1594
+ {
1595
+ "name": "stderr",
1596
+ "output_type": "stream",
1597
+ "text": [
1598
+ "huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...\n",
1599
+ "To disable this warning, you can either:\n",
1600
+ "\t- Avoid using `tokenizers` before the fork if possible\n",
1601
+ "\t- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)\n"
1602
+ ]
1603
+ }
1604
+ ],
1605
+ "source": [
1606
+ "! ls -lash {trained_model_path}"
1607
+ ]
1608
+ },
1609
+ {
1610
+ "cell_type": "code",
1611
+ "execution_count": 20,
1612
+ "id": "202a694a",
1613
+ "metadata": {
1614
+ "execution": {
1615
+ "iopub.execute_input": "2023-10-23T07:11:39.015703Z",
1616
+ "iopub.status.busy": "2023-10-23T07:11:39.014885Z"
1617
+ },
1618
+ "papermill": {
1619
+ "duration": null,
1620
+ "end_time": null,
1621
+ "exception": false,
1622
+ "start_time": "2023-10-23T07:11:38.011529",
1623
+ "status": "running"
1624
+ },
1625
+ "tags": []
1626
+ },
1627
+ "outputs": [
1628
+ {
1629
+ "data": {
1630
+ "application/vnd.jupyter.widget-view+json": {
1631
+ "model_id": "06408c12de9a45139bdafb067bc717dd",
1632
+ "version_major": 2,
1633
+ "version_minor": 0
1634
+ },
1635
+ "text/plain": [
1636
+ "pytorch_model-00002-of-00002.bin: 0%| | 0.00/3.50G [00:00<?, ?B/s]"
1637
+ ]
1638
+ },
1639
+ "metadata": {},
1640
+ "output_type": "display_data"
1641
+ },
1642
+ {
1643
+ "data": {
1644
+ "application/vnd.jupyter.widget-view+json": {
1645
+ "model_id": "10ecb48e9cad4d3fa8ac2f9964ce90fe",
1646
+ "version_major": 2,
1647
+ "version_minor": 0
1648
+ },
1649
+ "text/plain": [
1650
+ "Upload 2 LFS files: 0%| | 0/2 [00:00<?, ?it/s]"
1651
+ ]
1652
+ },
1653
+ "metadata": {},
1654
+ "output_type": "display_data"
1655
+ },
1656
+ {
1657
+ "data": {
1658
+ "application/vnd.jupyter.widget-view+json": {
1659
+ "model_id": "8e122f8d4b30478cb39b5502cd6323de",
1660
+ "version_major": 2,
1661
+ "version_minor": 0
1662
+ },
1663
+ "text/plain": [
1664
+ "pytorch_model-00001-of-00002.bin: 0%| | 0.00/9.98G [00:00<?, ?B/s]"
1665
+ ]
1666
+ },
1667
+ "metadata": {},
1668
+ "output_type": "display_data"
1669
+ }
1670
+ ],
1671
+ "source": [
1672
+ "from huggingface_hub import HfApi\n",
1673
+ "import shutil\n",
1674
+ "\n",
1675
+ "tokenizer_model_path_base = Path(model_path) / \"tokenizer.model\"\n",
1676
+ "tokenizer_model_path_trained = Path(trained_model_path) / \"tokenizer.model\"\n",
1677
+ "if tokenizer_model_path_base.exists() and not tokenizer_model_path_trained.exists():\n",
1678
+ " shutil.copy(tokenizer_model_path_base, tokenizer_model_path_trained)\n",
1679
+ "\n",
1680
+ "repo_id = params.get(\"push_to_hub\")\n",
1681
+ "if repo_id:\n",
1682
+ " model.push_to_hub(repo_id)\n",
1683
+ " tokenizer.push_to_hub(repo_id)\n",
1684
+ " hf_api = HfApi()\n",
1685
+ " # Upload tokenizer.model if it was in base model\n",
1686
+ " if tokenizer_model_path_base.exists():\n",
1687
+ " hf_api.upload_file(\n",
1688
+ " path_or_fileobj=tokenizer_model_path_base,\n",
1689
+ " path_in_repo=tokenizer_model_path_base.name,\n",
1690
+ " repo_id=repo_id,\n",
1691
+ " )\n",
1692
+ " logs_path = Path(\"/content/artifacts/src/train.ipynb\")\n",
1693
+ " if logs_path.exists():\n",
1694
+ " hf_api.upload_file(\n",
1695
+ " path_or_fileobj=logs_path,\n",
1696
+ " path_in_repo=logs_path.name,\n",
1697
+ " repo_id=repo_id,\n",
1698
+ " )\n"
1699
+ ]
1700
+ }
1701
+ ],
1702
+ "metadata": {
1703
+ "kernelspec": {
1704
+ "display_name": "Python 3 (ipykernel)",
1705
+ "language": "python",
1706
+ "name": "python3"
1707
+ },
1708
+ "language_info": {
1709
+ "codemirror_mode": {
1710
+ "name": "ipython",
1711
+ "version": 3
1712
+ },
1713
+ "file_extension": ".py",
1714
+ "mimetype": "text/x-python",
1715
+ "name": "python",
1716
+ "nbconvert_exporter": "python",
1717
+ "pygments_lexer": "ipython3",
1718
+ "version": "3.10.12"
1719
+ },
1720
+ "papermill": {
1721
+ "default_parameters": {},
1722
+ "duration": null,
1723
+ "end_time": null,
1724
+ "environment_variables": {},
1725
+ "exception": null,
1726
+ "input_path": "/content/src/train.ipynb",
1727
+ "output_path": "/content/artifacts/src/train.ipynb",
1728
+ "parameters": {},
1729
+ "start_time": "2023-10-23T04:10:29.401501",
1730
+ "version": "2.4.0"
1731
+ }
1732
+ },
1733
+ "nbformat": 4,
1734
+ "nbformat_minor": 5
1735
+ }
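
For reference, the notebook above reads its parameters from /content/params.json. Below is a minimal sketch of producing that file by hand; the keys and values are copied from the parameters printed by the notebook's first cell, while the standalone script itself is an assumption for local experimentation (normally the Substratus K8s operator writes this file).

import json

# Illustrative parameters, mirroring the values shown in the notebook's first cell output.
# Writing /content/params.json manually is only a sketch for local experimentation
# (assumption); in the Substratus workflow the operator creates this file.
params = {
    "dataset_urls": "https://huggingface.co/datasets/weaviate/WithoutRetrieval-SchemaSplit-Train-80/resolve/main/WithoutRetrieval-SchemaSplit-Train-80.json",
    "prompt_template": (
        "## Instruction\nYour task is to write GraphQL for the Natural Language Query provided. "
        "Use the provided Schema to generate the GraphQL. The GraphQL should be valid for Weaviate.\n\n"
        "## Natural Language Query\n{nlcommand}\n\n"
        "## Schema\n{schema}\n\n"
        "## Answer\n{output}\n"
    ),
    "push_to_hub": "substratusai/wgql-WithoutRetrieval-SchemaSplit-Train-80",
    "num_train_epochs": 3,
    "per_device_train_batch_size": 1,
    "per_device_eval_batch_size": 1,
    "logging_steps": 50,
    "save_steps": 50,
    "warmup_steps": 100,
    "target_modules": "q_proj, up_proj, o_proj, k_proj, down_proj, gate_proj, v_proj",
    "modules_to_save": "embed_tokens, lm_head",
}

# The notebook only checks whether the file exists, so any subset of these keys works;
# missing keys fall back to the notebook's defaults.
with open("/content/params.json", "w", encoding="UTF-8") as f:
    json.dump(params, f, indent=2)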