Ankit15nov committed
Commit 6bb0190 · 1 Parent(s): 2ca546e

assignment 3 uploaded

Eleuther_AI_Evaluation_Harness_Notebook.ipynb ADDED
@@ -0,0 +1,408 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yxqqCH-FqfL4"
+ },
+ "source": [
+ "### Eleuther AI Evaluation Harness\n",
+ "\n",
+ "It's easiest to let Eleuther AI explain what they were going for:\n",
+ "\n",
+ "\n",
+ ">\"...the LM Evaluation Harness, [is] a unifying framework that allows any causal language model to be tested on the same exact inputs and codebase. This provides a ground-truth location to evaluate new LLMs and saves practitioners time implementing few-shot evaluations repeatedly while ensuring that their results can be compared against previous work. The LM Eval Harness currently supports several different NLP tasks and model frameworks, all with a unified interface and task versioning for reproducibility.\"\n",
+ "\n",
+ "Let's get started with a simple task called `hellaswag`!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "xfSF5WA3qfqF"
+ },
+ "outputs": [],
+ "source": [
+ "import locale\n",
+ "locale.getpreferredencoding = lambda: \"UTF-8\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XMeOdr0lqmoO"
+ },
+ "source": [
+ "First, we'll want to clone the Eleuther AI repository so we can use their evaluation scripts."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "jZEkK1VoqnII",
+ "outputId": "23fc393a-e2b7-4e3b-cac3-868dae464bdf"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Cloning into 'lm-evaluation-harness'...\n",
+ "remote: Enumerating objects: 19181, done.\u001b[K\n",
+ "remote: Counting objects: 100% (5038/5038), done.\u001b[K\n",
+ "remote: Compressing objects: 100% (1385/1385), done.\u001b[K\n",
+ "remote: Total 19181 (delta 3934), reused 4486 (delta 3599), pack-reused 14143\u001b[K\n",
+ "Receiving objects: 100% (19181/19181), 20.07 MiB | 25.81 MiB/s, done.\n",
+ "Resolving deltas: 100% (12760/12760), done.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!git clone https://github.com/EleutherAI/lm-evaluation-harness"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "lDiK6G23qrHV"
+ },
+ "source": [
+ "Next, let's install the required dependencies."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "29_UWp69qrcS",
+ "outputId": "4888e8cb-d584-41a7-ba3d-6b4edea8615c"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness\n"
+ ]
+ }
+ ],
+ "source": [
+ "%cd lm-evaluation-harness/"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "osdz9eocqtNQ",
+ "outputId": "0ab53535-05fc-45a9-c689-e6c5e1ddc1ce"
+ },
+ "outputs": [],
+ "source": [
+ "!pip install -q -e ."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "B85uF28Bt07B"
+ },
+ "source": [
+ "These tests can and will take a long time!\n",
+ "\n",
+ "The script is provided to show how you can run some tests, but you shouldn't run this cell yourself unless you have a lot of time!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "tMerNGv1qvai",
+ "outputId": "9453324a-867f-43a3-cdf6-e7202b9a73dd"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Selected Tasks: ['hellaswag']\n",
+ "Using device 'cuda:0'\n",
+ "Downloading builder script: 100%|██████████| 4.36k/4.36k [00:00<00:00, 7.09MB/s]\n",
+ "Downloading metadata: 100%|████████████████| 2.53k/2.53k [00:00<00:00, 4.32MB/s]\n",
+ "Downloading readme: 100%|██████████████████| 6.85k/6.85k [00:00<00:00, 6.65MB/s]\n",
+ "Downloading data files: 0%| | 0/3 [00:00<?, ?it/s]\n",
+ "Downloading data: 0%| | 0.00/12.1M [00:00<?, ?B/s]\u001b[A\n",
+ "Downloading data: 12.8MB [00:00, 128MB/s] \u001b[A\n",
+ "Downloading data: 27.1MB [00:00, 137MB/s]\u001b[A\n",
+ "Downloading data: 47.5MB [00:00, 138MB/s]\u001b[A\n",
+ "Downloading data files: 33%|███████ | 1/3 [00:03<00:07, 3.67s/it]\n",
+ "Downloading data: 11.8MB [00:00, 138MB/s] \u001b[A\n",
+ "Downloading data files: 67%|██████████████ | 2/3 [00:04<00:02, 2.13s/it]\n",
+ "Downloading data: 12.2MB [00:00, 135MB/s] \u001b[A\n",
+ "Downloading data files: 100%|█████████████████████| 3/3 [00:05<00:00, 1.92s/it]\n",
+ "Extracting data files: 100%|████████████████████| 3/3 [00:00<00:00, 3337.64it/s]\n",
+ "Generating train split: 100%|███| 39905/39905 [00:03<00:00, 12629.23 examples/s]\n",
+ "Generating test split: 100%|████| 10003/10003 [00:00<00:00, 12595.93 examples/s]\n",
+ "Generating validation split: 100%|█| 10042/10042 [00:00<00:00, 12512.37 examples\n",
+ "Task: hellaswag; number of docs: 10042\n",
+ "Task: hellaswag; document 0; context prompt (starting on next line):\n",
+ "Personal Care and Style: How to increase breast size with a bra. Check your bra size. Wearing a bra that is too big will not make your breasts look larger. That is why it is important to wear the right size bra for you.\n",
+ "(end of prompt on previous line)\n",
+ "Requests: [Req_loglikelihood('Personal Care and Style: How to increase breast size with a bra. Check your bra size. Wearing a bra that is too big will not make your breasts look larger. That is why it is important to wear the right size bra for you.', ' You can visit a lingerie shop and have them measure you to help you fit a bra to your size, or measure yourself before you shop for a new bra to ensure that you get a good fit. Use a flexible tape measure, like one found in a sewing kit.')[0]\n",
+ ", Req_loglikelihood('Personal Care and Style: How to increase breast size with a bra. Check your bra size. Wearing a bra that is too big will not make your breasts look larger. That is why it is important to wear the right size bra for you.', ' This is why it is important to keep your breasts under protection when in the shower and only wear bras that are larger than your breast size. If you are not wearing a bra, try wearing something that is a little bigger.')[0]\n",
+ ", Req_loglikelihood('Personal Care and Style: How to increase breast size with a bra. Check your bra size. Wearing a bra that is too big will not make your breasts look larger. That is why it is important to wear the right size bra for you.', ' For a girl, a bra with a support strap will be easier for her, because most women are unable to pull through bra straps and bras that are too small will not be able to support breasts from side-to-side. Many bras have even been created that cover the breast side, and can be sent to other women in the world to make them look bigger.')[0]\n",
+ ", Req_loglikelihood('Personal Care and Style: How to increase breast size with a bra. Check your bra size. Wearing a bra that is too big will not make your breasts look larger. That is why it is important to wear the right size bra for you.', ' Choose a color that is flattering to your breast type and specific event, in addition to those that make you uncomfortable. Look for sports bras made from natural material, such as spandex or lycra, as this is a more breathable bra.')[0]\n",
+ "]\n",
+ "Running loglikelihood requests\n",
+ "100%|█████████████████████████████████████| 40145/40145 [33:25<00:00, 20.02it/s]\n",
+ "{\n",
+ " \"results\": {\n",
+ " \"hellaswag\": {\n",
+ " \"acc\": 0.3444532961561442,\n",
+ " \"acc_stderr\": 0.0047421851692647675,\n",
+ " \"acc_norm\": 0.4296952798247361,\n",
+ " \"acc_norm_stderr\": 0.004940208641372079\n",
+ " }\n",
+ " },\n",
+ " \"versions\": {\n",
+ " \"hellaswag\": 0\n",
+ " },\n",
+ " \"config\": {\n",
+ " \"model\": \"hf-causal\",\n",
+ " \"model_args\": \"pretrained=bigscience/bloom-1b1\",\n",
+ " \"num_fewshot\": 0,\n",
+ " \"batch_size\": null,\n",
+ " \"batch_sizes\": [],\n",
+ " \"device\": \"cuda:0\",\n",
+ " \"no_cache\": false,\n",
+ " \"limit\": null,\n",
+ " \"bootstrap_iters\": 100000,\n",
+ " \"description_dict\": {}\n",
+ " }\n",
+ "}\n",
+ "hf-causal (pretrained=bigscience/bloom-1b1), limit: None, provide_description: False, num_fewshot: 0, batch_size: None\n",
+ "| Task |Version| Metric |Value | |Stderr|\n",
+ "|---------|------:|--------|-----:|---|-----:|\n",
+ "|hellaswag| 0|acc |0.3445|± |0.0047|\n",
+ "| | |acc_norm|0.4297|± |0.0049|\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "!python main.py \\\n",
+ " --model hf-causal \\\n",
+ " --model_args pretrained=bigscience/bloom-1b1 \\\n",
+ " --tasks hellaswag \\\n",
+ " --device cuda:0"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "r0bFAUTWtOzc"
+ },
+ "source": [
+ "### Assignment Part 2\n",
+ "\n",
+ "Test your model on another task! The choice of task is up to you, but you'll need to explain it and determine the model's performance on that task.\n",
+ "\n",
+ "Again, this task will take a large amount of time to run!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "jctZJ2DJtd6l"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Selected Tasks: ['babi']\n",
+ "Using device 'cuda:0'\n",
+ "Downloading readme: 100%|██████████████████| 2.20k/2.20k [00:00<00:00, 3.90MB/s]\n",
+ "Repo card metadata block was not found. Setting CardData to empty.\n",
+ "Downloading data files: 0%| | 0/3 [00:00<?, ?it/s]\n",
+ "Downloading data: 0%| | 0.00/6.78M [00:00<?, ?B/s]\u001b[A\n",
+ "Downloading data: 100%|████████████████████| 6.78M/6.78M [00:00<00:00, 43.8MB/s]\u001b[A\n",
+ "Downloading data files: 33%|███████ | 1/3 [00:00<00:00, 6.41it/s]\n",
+ "Downloading data: 100%|██████████████████████| 747k/747k [00:00<00:00, 28.8MB/s]\u001b[A\n",
+ "\n",
+ "Downloading data: 0%| | 0.00/7.56M [00:00<?, ?B/s]\u001b[A\n",
+ "Downloading data: 100%|████████████████████| 7.56M/7.56M [00:00<00:00, 55.3MB/s]\u001b[A\n",
+ "Downloading data files: 100%|█████████████████████| 3/3 [00:00<00:00, 9.34it/s]\n",
+ "Extracting data files: 100%|████████████████████| 3/3 [00:00<00:00, 3468.28it/s]\n",
+ "Generating train split: 17109 examples [00:00, 892042.35 examples/s]\n",
+ "Generating validation split: 1891 examples [00:00, 623490.99 examples/s]\n",
+ "Generating test split: 19000 examples [00:00, 1446889.43 examples/s]\n",
+ "Task: babi; number of docs: 19000\n",
+ "Task: babi; document 0; context prompt (starting on next line):\n",
+ "Julius is a lion.\n",
+ "Greg is a frog.\n",
+ "Greg is white.\n",
+ "Julius is white.\n",
+ "Bernhard is a rhino.\n",
+ "Brian is a rhino.\n",
+ "Lily is a lion.\n",
+ "Brian is green.\n",
+ "Lily is gray.\n",
+ "What color is Bernhard?\n",
+ "(end of prompt on previous line)\n",
+ "Requests: Req_greedy_until('Julius is a lion.\\nGreg is a frog.\\nGreg is white.\\nJulius is white.\\nBernhard is a rhino.\\nBrian is a rhino.\\nLily is a lion.\\nBrian is green.\\nLily is gray.\\nWhat color is Bernhard?', ['\\n'])[None]\n",
+ "\n",
+ "Running greedy_until requests\n",
+ " 0%| | 0/17839 [00:00<?, ?it/s]\n",
+ "Traceback (most recent call last):\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/main.py\", line 93, in <module>\n",
+ " main()\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/main.py\", line 59, in main\n",
+ " results = evaluator.simple_evaluate(\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/utils.py\", line 243, in _wrapper\n",
+ " return fn(*args, **kwargs)\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/evaluator.py\", line 105, in simple_evaluate\n",
+ " results = evaluate(\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/utils.py\", line 243, in _wrapper\n",
+ " return fn(*args, **kwargs)\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/evaluator.py\", line 305, in evaluate\n",
+ " resps = getattr(lm, reqtype)([req.args for req in reqs])\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/base.py\", line 922, in fn\n",
+ " rem_res = getattr(self.lm, attr)(remaining_reqs)\n",
+ " File \"/home/ec2-user/SageMaker/FourthBrain/Building-With-LLMs-EXL-main/Week 3/lm-evaluation-harness/lm_eval/base.py\", line 429, in greedy_until\n",
+ " until = request_args[\"until\"]\n",
+ "TypeError: list indices must be integers or slices, not str\n"
+ ]
+ }
+ ],
+ "source": [
+ "### YOUR CODE HERE\n",
+ "!python main.py \\\n",
+ " --model hf-causal \\\n",
+ " --model_args pretrained=bigscience/bloom-1b1 \\\n",
+ " --tasks babi \\\n",
+ " --device cuda:0"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Selected Tasks: ['wsc']\n",
+ "Using device 'cuda:0'\n",
+ "Downloading builder script: 100%|██████████| 30.7k/30.7k [00:00<00:00, 39.8MB/s]\n",
+ "Downloading metadata: 100%|████████████████| 38.7k/38.7k [00:00<00:00, 39.8MB/s]\n",
+ "Downloading readme: 100%|██████████████████| 14.8k/14.8k [00:00<00:00, 24.6MB/s]\n",
+ "Downloading data: 100%|████████████████████| 32.8k/32.8k [00:00<00:00, 40.8MB/s]\n",
+ "Generating train split: 100%|███████| 554/554 [00:00<00:00, 14258.02 examples/s]\n",
+ "Generating validation split: 100%|██| 104/104 [00:00<00:00, 14349.88 examples/s]\n",
+ "Generating test split: 100%|████████| 146/146 [00:00<00:00, 16169.00 examples/s]\n",
+ "Task: wsc; number of docs: 104\n",
+ "Task: wsc; document 0; context prompt (starting on next line):\n",
+ "Passage: Meanwhile, in the forest, the elephants are calling and hunting high and low for Arthur and Celeste, and their mothers are very worried. Fortunately, in flying over the town, an old marabou bird has seen *them* and come back quickly to tell the news.\n",
+ "Question: In the passage above, does the pronoun \"*them*\" refer to \"*the elephants*\"?\n",
+ "Answer:\n",
+ "(end of prompt on previous line)\n",
+ "Requests: (Req_loglikelihood('Passage: Meanwhile, in the forest, the elephants are calling and hunting high and low for Arthur and Celeste, and their mothers are very worried. Fortunately, in flying over the town, an old marabou bird has seen *them* and come back quickly to tell the news.\\nQuestion: In the passage above, does the pronoun \"*them*\" refer to \"*the elephants*\"?\\nAnswer:', ' yes')[0]\n",
+ ", Req_loglikelihood('Passage: Meanwhile, in the forest, the elephants are calling and hunting high and low for Arthur and Celeste, and their mothers are very worried. Fortunately, in flying over the town, an old marabou bird has seen *them* and come back quickly to tell the news.\\nQuestion: In the passage above, does the pronoun \"*them*\" refer to \"*the elephants*\"?\\nAnswer:', ' no')[0]\n",
+ ")\n",
+ "Running loglikelihood requests\n",
+ "100%|█████████████████████████████████████████| 202/202 [00:05<00:00, 35.05it/s]\n",
+ "{\n",
+ " \"results\": {\n",
+ " \"wsc\": {\n",
+ " \"acc\": 0.36538461538461536,\n",
+ " \"acc_stderr\": 0.0474473339327792\n",
+ " }\n",
+ " },\n",
+ " \"versions\": {\n",
+ " \"wsc\": 0\n",
+ " },\n",
+ " \"config\": {\n",
+ " \"model\": \"hf-causal\",\n",
+ " \"model_args\": \"pretrained=bigscience/bloom-1b1\",\n",
+ " \"num_fewshot\": 0,\n",
+ " \"batch_size\": null,\n",
+ " \"batch_sizes\": [],\n",
+ " \"device\": \"cuda:0\",\n",
+ " \"no_cache\": false,\n",
+ " \"limit\": null,\n",
+ " \"bootstrap_iters\": 100000,\n",
+ " \"description_dict\": {}\n",
+ " }\n",
+ "}\n",
+ "hf-causal (pretrained=bigscience/bloom-1b1), limit: None, provide_description: False, num_fewshot: 0, batch_size: None\n",
+ "|Task|Version|Metric|Value | |Stderr|\n",
+ "|----|------:|------|-----:|---|-----:|\n",
+ "|wsc | 0|acc |0.3654|± |0.0474|\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "### YOUR CODE HERE\n",
+ "!python main.py \\\n",
+ " --model hf-causal \\\n",
+ " --model_args pretrained=bigscience/bloom-1b1 \\\n",
+ " --tasks wsc \\\n",
+ " --device cuda:0"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "gpuType": "A100",
+ "machine_shape": "hm",
+ "provenance": []
+ },
+ "kernelspec": {
+ "display_name": "conda_pytorch_p310",
+ "language": "python",
+ "name": "conda_pytorch_p310"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.12"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+ }
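The harness prints its results as a JSON object (the `"results"` / `"versions"` blocks in the cell outputs above) before rendering the summary table. A minimal sketch of turning that structure back into the table's task/metric lines, using the hellaswag numbers recorded above (the `summarize` helper is a hypothetical name, not part of the harness):

```python
# Results structure as printed by the harness for the hellaswag run above.
results = {
    "results": {
        "hellaswag": {
            "acc": 0.3444532961561442,
            "acc_stderr": 0.0047421851692647675,
            "acc_norm": 0.4296952798247361,
            "acc_norm_stderr": 0.004940208641372079,
        }
    },
    "versions": {"hellaswag": 0},
}


def summarize(results: dict) -> list[str]:
    """Format one line per task/metric pair, mirroring the harness' table."""
    lines = []
    for task, metrics in results["results"].items():
        version = results["versions"][task]
        for name, value in metrics.items():
            # Skip the *_stderr entries; they're attached to their metric below.
            if name.endswith("_stderr"):
                continue
            stderr = metrics.get(f"{name}_stderr")
            lines.append(f"{task} (v{version}): {name} = {value:.4f} ± {stderr:.4f}")
    return lines


for line in summarize(results):
    print(line)
```

This reproduces the `acc` and `acc_norm` rows of the markdown table the harness emitted, which can be handy when comparing runs across tasks programmatically instead of reading the printed tables.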