SachinBond committed · verified
Commit 20d0b91 · Parent(s): f59d1c1

Upload folder using huggingface_hub

1_lab1.ipynb ADDED
@@ -0,0 +1,431 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Welcome to the start of your adventure in Agentic AI"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#ff7800;\">Are you ready for action??</h2>\n",
+ "            <span style=\"color:#ff7800;\">Have you completed all the setup steps in the <a href=\"../setup/\">setup</a> folder?<br/>\n",
+ "            Have you checked out the guides in the <a href=\"../guides/01_intro.ipynb\">guides</a> folder?<br/>\n",
+ "            Well in that case, you're ready!!\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#00bfff;\">Treat these labs as a resource</h2>\n",
+ "            <span style=\"color:#00bfff;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate the answers into the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations. Consider this like an interactive book that accompanies the lectures.\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### And please do remember to contact me if I can help\n",
+ "\n",
+ "And I love to connect: https://www.linkedin.com/in/eddonner/\n",
+ "\n",
+ "\n",
+ "### New to Notebooks like this one? Head over to the guides folder!\n",
+ "\n",
+ "Otherwise:\n",
+ "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first choice or the most prominent choice.\n",
+ "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n",
+ "3. Enjoy!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# First let's do an import\n",
+ "from dotenv import load_dotenv\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "True"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Next it's time to load the API keys into environment variables\n",
+ "\n",
+ "load_dotenv(override=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "OpenAI API Key exists and begins sk-proj-\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Check the keys\n",
+ "\n",
+ "import os\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "\n",
+ "if openai_api_key:\n",
+ "    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ "    print(\"OpenAI API Key not set - please head to the troubleshooting guide in the guides folder\")\n",
+ "    \n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And now - the all important import statement\n",
+ "# If you get an import error - head over to the troubleshooting guide\n",
+ "\n",
+ "from openai import OpenAI"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And now we'll create an instance of the OpenAI class\n",
+ "# If you're not sure what it means to create an instance of a class - head over to the guides folder!\n",
+ "# If you get a NameError - head over to the guides folder to learn about NameErrors\n",
+ "\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Create a list of messages in the familiar OpenAI format\n",
+ "\n",
+ "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "2 + 2 equals 4.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# And now call it! Any problems, head to the troubleshooting guide\n",
+ "\n",
+ "response = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=messages\n",
+ ")\n",
+ "\n",
+ "print(response.choices[0].message.content)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# And now - let's ask for a question:\n",
+ "\n",
+ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "A train leaves a station traveling east at a speed of 60 miles per hour. Simultaneously, another train leaves a station 180 miles west of the first train, traveling west at a speed of 90 miles per hour. How much time will pass before the two trains meet?\n"
+ ]
+ }
+ ],
+ "source": [
+ "# ask it\n",
+ "response = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=messages\n",
+ ")\n",
+ "\n",
+ "question = response.choices[0].message.content\n",
+ "\n",
+ "print(question)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# form a new messages list\n",
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "To determine when the two trains will meet, we can start by defining their distances and speeds.\n",
+ "\n",
+ "1. **Distance between the trains**: The first train is at the \"Station A\" and moves east, while the second train is at \"Station B,\" which is 180 miles to the west of Station A (the distance between them is 180 miles).\n",
+ "\n",
+ "2. **Speeds**:\n",
+ "   - The first train travels at 60 miles per hour (mph) towards the east.\n",
+ "   - The second train travels at 90 mph towards the west.\n",
+ "\n",
+ "3. **Relative Speed**: Since the two trains are moving towards each other, we can combine their speeds to find their relative speed. \n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Relative speed} = 60 \\text{ mph} + 90 \\text{ mph} = 150 \\text{ mph}\n",
+ "   \\]\n",
+ "\n",
+ "4. **Time until they meet**: We can use the formula for time, which is:\n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Time} = \\frac{\\text{Distance}}{\\text{Speed}}\n",
+ "   \\]\n",
+ "\n",
+ "   We know the distance is 180 miles and the relative speed is 150 mph.\n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Time} = \\frac{180 \\text{ miles}}{150 \\text{ mph}} = 1.2 \\text{ hours}\n",
+ "   \\]\n",
+ "\n",
+ "5. **Convert time to minutes**: To express this time in minutes, we can multiply by 60 minutes per hour:\n",
+ "\n",
+ "   \\[\n",
+ "   1.2 \\text{ hours} \\times 60 \\text{ minutes/hour} = 72 \\text{ minutes}\n",
+ "   \\]\n",
+ "\n",
+ "Thus, the two trains will meet in **1.2 hours** or **72 minutes** after they start traveling.\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Ask it again\n",
+ "\n",
+ "response = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=messages\n",
+ ")\n",
+ "\n",
+ "answer = response.choices[0].message.content\n",
+ "print(answer)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "To determine when the two trains will meet, we can start by defining their distances and speeds.\n",
+ "\n",
+ "1. **Distance between the trains**: The first train is at the \"Station A\" and moves east, while the second train is at \"Station B,\" which is 180 miles to the west of Station A (the distance between them is 180 miles).\n",
+ "\n",
+ "2. **Speeds**:\n",
+ "   - The first train travels at 60 miles per hour (mph) towards the east.\n",
+ "   - The second train travels at 90 mph towards the west.\n",
+ "\n",
+ "3. **Relative Speed**: Since the two trains are moving towards each other, we can combine their speeds to find their relative speed. \n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Relative speed} = 60 \\text{ mph} + 90 \\text{ mph} = 150 \\text{ mph}\n",
+ "   \\]\n",
+ "\n",
+ "4. **Time until they meet**: We can use the formula for time, which is:\n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Time} = \\frac{\\text{Distance}}{\\text{Speed}}\n",
+ "   \\]\n",
+ "\n",
+ "   We know the distance is 180 miles and the relative speed is 150 mph.\n",
+ "\n",
+ "   \\[\n",
+ "   \\text{Time} = \\frac{180 \\text{ miles}}{150 \\text{ mph}} = 1.2 \\text{ hours}\n",
+ "   \\]\n",
+ "\n",
+ "5. **Convert time to minutes**: To express this time in minutes, we can multiply by 60 minutes per hour:\n",
+ "\n",
+ "   \\[\n",
+ "   1.2 \\text{ hours} \\times 60 \\text{ minutes/hour} = 72 \\text{ minutes}\n",
+ "   \\]\n",
+ "\n",
+ "Thus, the two trains will meet in **1.2 hours** or **72 minutes** after they start traveling."
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "from IPython.display import Markdown, display\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Congratulations!\n",
+ "\n",
+ "That was a small, simple step in the direction of Agentic AI, with your new environment!\n",
+ "\n",
+ "Next time things get more interesting..."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
+ "            <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
+ "            First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
+ "            Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
+ " Finally have 3 third LLM call propose the Agentic AI solution.\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# First create the messages:\n",
390
+ "\n",
391
+ "messages = [{\"role\": \"user\", \"content\": \"Something here\"}]\n",
392
+ "\n",
393
+ "# Then make the first call:\n",
394
+ "\n",
395
+ "response =\n",
396
+ "\n",
397
+ "# Then read the business idea:\n",
398
+ "\n",
399
+ "business_idea = response.\n",
400
+ "\n",
401
+ "# And repeat!"
402
+ ]
403
+ },
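+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible full solution to the exercise - just a sketch, assuming the openai client\n",
+ "# and gpt-4o-mini from the cells above; the prompts are only suggestions.\n",
+ "\n",
+ "def ask(prompt):\n",
+ "    # Helper: one single-turn call, returning the text of the first choice\n",
+ "    result = openai.chat.completions.create(\n",
+ "        model=\"gpt-4o-mini\",\n",
+ "        messages=[{\"role\": \"user\", \"content\": prompt}]\n",
+ "    )\n",
+ "    return result.choices[0].message.content\n",
+ "\n",
+ "business_area = ask(\"Pick a business area that might be worth exploring for an Agentic AI opportunity. Respond only with the area.\")\n",
+ "pain_point = ask(f\"Present one challenging pain-point in {business_area} that might be ripe for an Agentic AI solution. Respond only with the pain-point.\")\n",
+ "solution = ask(f\"Propose an Agentic AI solution to this pain-point: {pain_point}\")\n",
+ "\n",
+ "display(Markdown(solution))"
+ ]
+ },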
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": ".venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }
2_lab2.ipynb ADDED
@@ -0,0 +1,684 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Welcome to the Second Lab - Week 1, Day 3\n",
+ "\n",
+ "Today we will work with lots of models! This is a way to get comfortable with APIs."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#ff7800;\">Important point - please read</h2>\n",
+ "            <span style=\"color:#ff7800;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.<br/><br/>If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Start with imports - ask ChatGPT to explain any package that you don't know\n",
+ "\n",
+ "import os\n",
+ "import json\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "from anthropic import Anthropic\n",
+ "from IPython.display import Markdown, display"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "True"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Always remember to do this!\n",
+ "load_dotenv(override=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "OpenAI API Key exists and begins sk-proj-\n",
+ "Anthropic API Key not set (and this is optional)\n",
+ "Google API Key exists and begins AI\n",
+ "DeepSeek API Key not set (and this is optional)\n",
+ "Groq API Key exists and begins gsk_\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Print the key prefixes to help with any debugging\n",
+ "\n",
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
+ "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
+ "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
+ "groq_api_key = os.getenv('GROQ_API_KEY')\n",
+ "\n",
+ "if openai_api_key:\n",
+ "    print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
+ "else:\n",
+ "    print(\"OpenAI API Key not set\")\n",
+ "    \n",
+ "if anthropic_api_key:\n",
+ "    print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
+ "else:\n",
+ "    print(\"Anthropic API Key not set (and this is optional)\")\n",
+ "\n",
+ "if google_api_key:\n",
+ "    print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n",
+ "else:\n",
+ "    print(\"Google API Key not set (and this is optional)\")\n",
+ "\n",
+ "if deepseek_api_key:\n",
+ "    print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
+ "else:\n",
+ "    print(\"DeepSeek API Key not set (and this is optional)\")\n",
+ "\n",
+ "if groq_api_key:\n",
+ "    print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n",
+ "else:\n",
+ "    print(\"Groq API Key not set (and this is optional)\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. \"\n",
+ "request += \"Answer only with the question, no explanation.\"\n",
+ "messages = [{\"role\": \"user\", \"content\": request}]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[{'role': 'user',\n",
+ "  'content': 'Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. Answer only with the question, no explanation.'}]"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "messages"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "If you could redesign the way humans perceive time, altering their subjective experience of past, present, and future, what changes would you propose, and how might these changes affect human behavior, societal structures, and emotional well-being?\n"
+ ]
+ }
+ ],
+ "source": [
+ "openai = OpenAI()\n",
+ "response = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=messages,\n",
+ ")\n",
+ "question = response.choices[0].message.content\n",
+ "print(question)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "competitors = []\n",
+ "answers = []\n",
+ "messages = [{\"role\": \"user\", \"content\": question}]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "Redesigning human perception of time could have profound implications on individual behavior, societal structures, and emotional well-being. Here are some proposed changes and their potential effects:\n",
+ "\n",
+ "### Proposed Changes to Time Perception:\n",
+ "\n",
+ "1. **Fluid Time Perception**:\n",
+ "   - Instead of a linear experience of time (past, present, future), humans could perceive time as more fluid and cyclical. Events could be experienced with varying intensity based on their emotional significance rather than chronological order.\n",
+ "   - **Effect**: Individuals might prioritize emotionally meaningful experiences over mere chronological milestones, leading to a richer, more fulfilling life.\n",
+ "\n",
+ "2. **Enhanced Present Awareness**:\n",
+ "   - A heightened sense of presence where individuals could vividly experience and engage in the \"now\" while simultaneously having a clearer connection to both the past and the future.\n",
+ "   - **Effect**: This could reduce anxiety related to future uncertainties and regrets associated with the past, leading to lower levels of stress and greater satisfaction in daily living.\n",
+ "\n",
+ "3. **Interconnected Temporal Experiences**:\n",
+ "   - Individuals could more seamlessly access memories and future possibilities, allowing for a more integrative understanding of how past actions influence future outcomes.\n",
+ "   - **Effect**: This could foster a sense of accountability and social responsibility, encouraging people to consider the long-term consequences of their actions, enhancing ethical behavior.\n",
+ "\n",
+ "4. **Collective Temporal Consciousness**:\n",
+ "   - The ability to perceive time collectively, where communal experiences of time, events, and milestones are shared more consciously among groups and communities.\n",
+ "   - **Effect**: This could strengthen community ties and collective memory, leading to greater social cohesion and collaborative decision-making.\n",
+ "\n",
+ "5. **Variable Time Perception**:\n",
+ "   - The ability to consciously alter the perception of time's passage (expanding or contracting time based on emotional engagement).\n",
+ "   - **Effect**: Individuals could slow down time during important moments (enhancing mindfulness) and speed it up during mundane tasks, improving overall life satisfaction.\n",
+ "\n",
+ "### Impacts on Human Behavior:\n",
+ "\n",
+ "- **Emotional Resilience**: A more profound connection to the present and an integrated understanding of time could enhance emotional resilience by allowing individuals to draw lessons from both past experiences and future possibilities.\n",
+ "- **Decision-Making**: With an understanding of the interconnectedness of events, people may become more thoughtful in their choices, balancing immediate desires with long-term impacts.\n",
+ "- **Social Responsibility**: Enhanced awareness of the collective experience of time can lead to better cooperation, empathy, and a more community-focused mindset.\n",
+ "\n",
+ "### Effects on Societal Structures:\n",
+ "\n",
+ "- **Education**: Learning environments could shift from linear, curriculum-based models to more experiential, integrative formats that emphasize emotional engagement and real-world application.\n",
+ "- **Work Culture**: Workplaces might place a greater emphasis on well-being, work-life balance, and intrinsic motivation, prioritizing meaningful contributions over pure productivity metrics.\n",
+ "- **Governance**: Policy-making could reflect a longer-term view, considering generational impacts rather than short-term gains, fostering sustainable practices.\n",
+ "\n",
+ "### Emotional Well-Being:\n",
+ "\n",
+ "- **Reduced Anxiety and Stress**: With a diminished focus on the relentless ticking of the clock and an increased ability to engage fully with the present, individuals could experience a significant reduction in anxiety.\n",
+ "- **Increased Gratitude and Fulfillment**: Greater engagement with meaningful experiences and an awareness of their role in the larger tapestry of time may lead to increased gratitude, fulfillment, and overall happiness.\n",
+ "\n",
+ "In conclusion, redesigning human perception of time could lead to deeper emotional connections, more responsible decision-making, and a healthier society that values community and well-being over mere productivity. This shift could transform how individuals experience life, fostering a more empathetic and enriched human experience."
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "# The API we know well\n",
+ "\n",
+ "model_name = \"gpt-4o-mini\"\n",
+ "\n",
+ "response = openai.chat.completions.create(model=model_name, messages=messages)\n",
+ "answer = response.choices[0].message.content\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Anthropic has a slightly different API, and Max Tokens is required\n",
+ "\n",
+ "model_name = \"claude-3-7-sonnet-latest\"\n",
+ "\n",
+ "claude = Anthropic()\n",
+ "response = claude.messages.create(model=model_name, messages=messages, max_tokens=1000)\n",
+ "answer = response.content[0].text\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "Okay, this is a fascinating thought experiment. If I could redesign human time perception, I would focus on achieving a better balance between being grounded in the present, learning from the past, and planning for the future. Here's how I'd alter our temporal experience and the potential consequences:\n",
+ "\n",
+ "**1. Increased Present-Moment Awareness (But Not Obsession):**\n",
+ "\n",
+ "* **Change:** Instead of constantly being dragged between regrets about the past and anxieties about the future, I'd enhance our natural capacity for \"flow\" states and mindfulness. This would involve:\n",
+ "    * *Slightly Slowed Down Subjective Time in Engaging Activities:* When fully engrossed in an activity that challenges our skills, time would feel like it's expanding, allowing for deeper focus and enjoyment.\n",
+ "    * *Greater Sensitivity to Sensory Input:* Easier access to and appreciation of the \"now\" through enhanced awareness of sights, sounds, smells, tastes, and touch. This doesn't mean hyper-sensory overload, but a more readily available capacity to notice the richness of the present moment.\n",
+ "    * *Reduced Rumination by Default:* Less automatic looping of negative thoughts related to past mistakes or future worries.\n",
+ "\n",
+ "* **Effects:**\n",
+ "    * *Behavior:* Increased productivity and creativity due to enhanced focus. More mindful consumption habits, leading to less impulse buying and a greater appreciation for resources. More patience in interpersonal interactions. Greater engagement with artistic pursuits.\n",
+ "    * *Societal Structures:* A shift away from consumerism and toward experiential values. Increased investment in arts, culture, and nature. A more compassionate and understanding society, less driven by immediate gratification.\n",
+ "    * *Emotional Well-being:* Reduced stress and anxiety. Increased feelings of contentment and gratitude. A greater sense of purpose derived from engaging fully in the present.\n",
+ "\n",
+ "**2. Re-Imagined Connection to the Past:**\n",
+ "\n",
+ "* **Change:** Instead of linear, chronological memory, I'd introduce a more thematic and emotionally contextualized understanding of the past:\n",
+ "    * *Emotionally Tagged Memories:* Memories would be more strongly linked to the emotions experienced at the time, allowing for more effective learning from mistakes and reinforcing positive experiences.\n",
+ "    * *\"Ancestral Wisdom\" Access:* Not literally accessing the memories of ancestors, but having a stronger intuitive sense of accumulated human experience – the lessons learned from history, ethics, and cultural traditions. This wouldn't be explicit knowledge, but a subtle influence on decision-making.\n",
+ "    * *Less Traumatic Replay:* Memories of traumatic events wouldn't be erased, but their replay would be modulated to reduce the intensity of the emotional impact. This doesn't mean forgetting the lesson, but lessening the overwhelming emotional baggage.\n",
+ "\n",
+ "* **Effects:**\n",
+ "    * *Behavior:* Fewer repeated mistakes based on past experiences. A stronger sense of identity and connection to one's heritage. Increased empathy for others based on understanding the emotional consequences of actions. More forgiving attitudes towards oneself and others.\n",
+ "    * *Societal Structures:* Stronger cultural preservation and appreciation for traditions. A more nuanced understanding of historical events, leading to more informed policy decisions. Greater societal resilience in the face of challenges.\n",
+ "    * *Emotional Well-being:* Increased resilience in the face of adversity. Greater self-acceptance and compassion. A stronger sense of belonging and connection to something larger than oneself.\n",
+ "\n",
+ "**3. More Fluid and Collaborative Future Vision:**\n",
+ "\n",
+ "* **Change:** Instead of seeing the future as a fixed, predetermined path, I'd make it feel more like a collaborative canvas, constantly being shaped by collective actions and intentions:\n",
+ "    * *Increased Visualization Capacity:* The ability to vividly imagine different future scenarios and their potential consequences. This would go beyond simple prediction, allowing for creative exploration of possibilities.\n",
+ "    * *\"Collective Future Feelers:\"* A subtle sense of interconnectedness with the future, where individual actions are felt to have ripple effects that contribute to a larger, emergent outcome. This wouldn't be clairvoyance, but a heightened awareness of the interdependence of actions and consequences.\n",
+ "    * *Delayed Gratification Boost:* A natural inclination to prioritize long-term goals over immediate gratification, making it easier to invest in sustainable practices and future-oriented solutions.\n",
+ "\n",
+ "* **Effects:**\n",
+ "    * *Behavior:* More proactive and responsible decision-making. Greater willingness to collaborate and work towards common goals. Increased investment in education, research, and long-term planning. A stronger commitment to environmental sustainability.\n",
+ "    * *Societal Structures:* A more democratic and participatory society, where individuals feel empowered to shape the future. Greater investment in scientific research and technological innovation. A more equitable distribution of resources and opportunities.\n",
+ "    * *Emotional Well-being:* Increased feelings of hope and optimism. A stronger sense of purpose and meaning in life. A greater sense of responsibility and agency in shaping the future.\n",
+ "\n",
+ "**Potential Downsides and Considerations:**\n",
+ "\n",
+ "* **Loss of Motivation:** If anxieties about the future are significantly reduced, individuals might become complacent and lose their drive to achieve. The balance would need to be carefully calibrated.\n",
+ "* **Historical Revisionism:** While thematic understanding of the past is useful, there's a risk of rewriting history to fit a particular narrative. Maintaining access to accurate historical records would be crucial.\n",
+ "* **Tyranny of the Collective:** Overemphasis on collective future vision could suppress individual creativity and dissent. Safeguards would be needed to protect individual rights and freedoms.\n",
+ "* **The Paradox of Choice:** Too much ability to visualize different futures could lead to paralysis and indecision. Individuals would need to develop skills in decision-making and prioritization.\n",
+ "* **Unintended Consequences:** Altering something as fundamental as time perception could have unforeseen consequences that are difficult to predict. Careful monitoring and adaptation would be essential.\n",
+ "\n",
+ "**In conclusion,** redesigning human time perception is a powerful idea with the potential to dramatically improve human behavior, societal structures, and emotional well-being. However, it's also a complex and delicate undertaking that would require careful planning, implementation, and ongoing evaluation to ensure that the benefits outweigh the risks. The key would be to create a system that balances present-moment awareness, informed by the past, with collaborative future-oriented thinking, all while safeguarding individual autonomy and creativity.\n"
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n",
+ "model_name = \"gemini-2.0-flash\"\n",
+ "\n",
+ "response = gemini.chat.completions.create(model=model_name, messages=messages)\n",
+ "answer = response.choices[0].message.content\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n",
+ "model_name = \"deepseek-chat\"\n",
+ "\n",
+ "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n",
+ "answer = response.choices[0].message.content\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/markdown": [
+ "If I could redesign the way humans perceive time, I would propose several changes to alter the subjective experience of past, present, and future. These changes would aim to promote a more balanced, mindful, and fulfilling existence, with potential benefits for human behavior, societal structures, and emotional well-being.\n",
+ "\n",
+ "**Changes to the Perception of Time:**\n",
+ "\n",
+ "1. **Non-linear time perception**: Allow humans to experience time in a non-linear fashion, where the past, present, and future are interconnected and accessible. This would enable people to learn from past experiences, appreciate the present moment, and anticipate the future without being bound by traditional concepts of chronology.\n",
+ "2. **Time dilation**: Introduce a flexible time framework where the passage of time can be experienced at different rates, depending on the individual's activities and focus. For example, time could pass more quickly during engaging, enjoyable experiences and more slowly during mundane or stressful ones.\n",
+ "3. **Event-based time**: Organize time around significant events, experiences, and accomplishments, rather than traditional units like hours, days, or years. This would help people focus on the meaningful aspects of their lives and create a more narrative-driven approach to time.\n",
+ "4. **Intergenerational connection**: Facilitate a deeper understanding and empathy between different age groups by allowing individuals to experience and learn from the perspectives of other generations. This could be achieved through a form of mental time travel or simulated experiences.\n",
+ "5. **Mindful time awareness**: Incorporate a natural, intuitive sense of time awareness, where individuals can effortlessly track the passage of time without relying on external aids like clocks or calendars.\n",
+ "\n",
+ "**Potential Effects on Human Behavior:**\n",
+ "\n",
+ "1. **Increased mindfulness**: With a non-linear perception of time, people might become more mindful and present in the moment, focusing on the here and now rather than getting caught up in past regrets or future anxieties.\n",
+ "2. **Improved learning and memory**: The ability to access past experiences and knowledge in a non-linear fashion could enhance learning, problem-solving, and decision-making.\n",
+ "3. **More efficient time management**: With time dilation and event-based time, individuals might prioritize tasks and activities more effectively, allocating time and energy to what truly matters to them.\n",
+ "4. **Enhanced creativity**: A non-traditional understanding of time could foster creativity, as people would be able to draw inspiration from diverse time periods, experiences, and perspectives.\n",
+ "5. **Greater sense of purpose**: By focusing on significant events and experiences, individuals might develop a stronger sense of purpose and direction, leading to a more fulfilling life.\n",
+ "\n",
+ "**Impact on Societal Structures:**\n",
+ "\n",
+ "1. **Reevaluation of work and leisure**: With a more flexible understanding of time, the traditional distinction between work and leisure might become less rigid, allowing for a more balanced and integrated approach to daily life.\n",
+ "2. **New forms of education and learning**: The ability to access knowledge and experiences from different time periods could revolutionize education, enabling people to learn from the past, present, and future in innovative ways.\n",
+ "3. **Changes in urban planning and architecture**: Cities and buildings might be designed with a focus on experiential, event-based time, prioritizing public spaces, community areas, and cultural institutions that foster connections between people and generations.\n",
+ "4. **Rethinking of social and economic systems**: A non-linear perception of time could lead to a reevaluation of social and economic systems, with a greater emphasis on long-term thinking, sustainability, and intergenerational equity.\n",
+ "\n",
+ "**Emotional Well-being:**\n",
+ "\n",
+ "1. **Reduced stress and anxiety**: By experiencing time in a more flexible and mindful way, individuals might feel less pressured by traditional time constraints, leading to reduced stress and anxiety.\n",
+ "2. **Increased sense of control**: With a non-linear perception of time, people might feel more in control of their lives, as they could access and learn from past experiences, anticipate future challenges, and make more informed decisions.\n",
+ "3. **Greater empathy and compassion**: The ability to understand and connect with different generations and perspectives could foster greater empathy and compassion, leading to stronger, more supportive communities.\n",
+ "4. **More fulfilling relationships**: By prioritizing meaningful events and experiences, individuals might build more enduring, fulfilling relationships, as they would focus on shared experiences and common values rather than traditional time-based expectations.\n",
+ "\n",
+ "In conclusion, redesigning the way humans perceive time could have far-reaching consequences for individual behavior, societal structures, and emotional well-being. By embracing a non-linear, flexible, and mindful approach to time, humans might cultivate a more balanced, creative, and fulfilling existence, with a deeper appreciation for the complexities and richness of human experience."
+ ],
+ "text/plain": [
+ "<IPython.core.display.Markdown object>"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n",
+ "model_name = \"llama-3.3-70b-versatile\"\n",
+ "\n",
+ "response = groq.chat.completions.create(model=model_name, messages=messages)\n",
+ "answer = response.choices[0].message.content\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## For the next cell, we will use Ollama\n",
+ "\n",
+ "Ollama runs a local web service that provides an OpenAI-compatible endpoint, \n",
+ "and runs models locally using high-performance C++ code.\n",
+ "\n",
+ "If you don't have Ollama, install it by visiting https://ollama.com, then pressing Download and following the instructions.\n",
+ "\n",
+ "After it's installed, you should be able to visit http://localhost:11434 and see the message \"Ollama is running\"\n",
+ "\n",
+ "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+\`) and run `ollama serve`\n",
+ "\n",
+ "Useful Ollama commands (run these in the terminal, or with an exclamation mark in this notebook):\n",
+ "\n",
+ "`ollama pull <model_name>` downloads a model locally \n",
+ "`ollama ls` lists all the models you've downloaded \n",
+ "`ollama rm <model_name>` deletes the specified model from your downloads"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#ff7800;\">Super important - ignore me at your peril!</h2>\n",
+ " <span style=\"color:#ff7800;\">The model called <b>llama3.3</b> is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized <b>llama3.2</b> or <b>llama3.2:1b</b> and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See the <A href=\"https://ollama.com/models\">the Ollama models page</a> for a full list of models and sizes.\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "'ollama' is not recognized as an internal or external command,\n",
+ "operable program or batch file.\n"
+ ]
+ }
+ ],
+ "source": [
+ "!ollama pull llama3.2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
+ "model_name = \"llama3.2\"\n",
+ "\n",
+ "response = ollama.chat.completions.create(model=model_name, messages=messages)\n",
+ "answer = response.choices[0].message.content\n",
+ "\n",
+ "display(Markdown(answer))\n",
+ "competitors.append(model_name)\n",
+ "answers.append(answer)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# So where are we?\n",
+ "\n",
+ "print(competitors)\n",
+ "print(answers)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# It's nice to know how to use \"zip\"\n",
+ "for competitor, answer in zip(competitors, answers):\n",
+ "    print(f\"Competitor: {competitor}\\n\\n{answer}\")\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Let's bring this together - note the use of \"enumerate\"\n",
+ "\n",
+ "together = \"\"\n",
+ "for index, answer in enumerate(answers):\n",
+ "    together += f\"# Response from competitor {index+1}\\n\\n\"\n",
+ "    together += answer + \"\\n\\n\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(together)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n",
+ "Each model has been given this question:\n",
+ "\n",
+ "{question}\n",
+ "\n",
+ "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n",
+ "Respond with JSON, and only JSON, with the following format:\n",
+ "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n",
+ "\n",
+ "Here are the responses from each competitor:\n",
+ "\n",
+ "{together}\n",
+ "\n",
+ "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(judge)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "judge_messages = [{\"role\": \"user\", \"content\": judge}]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Judgement time!\n",
+ "\n",
+ "openai = OpenAI()\n",
+ "response = openai.chat.completions.create(\n",
+ "    model=\"o3-mini\",\n",
+ "    messages=judge_messages,\n",
+ ")\n",
+ "results = response.choices[0].message.content\n",
+ "print(results)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# OK let's turn this into results!\n",
+ "\n",
+ "results_dict = json.loads(results)\n",
+ "ranks = results_dict[\"results\"]\n",
+ "for index, result in enumerate(ranks):\n",
+ "    competitor = competitors[int(result)-1]\n",
+ "    print(f\"Rank {index+1}: {competitor}\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
635
+ " <span style=\"color:#ff7800;\">Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n",
636
+ " </span>\n",
637
+ " </td>\n",
638
+ " </tr>\n",
639
+ "</table>"
640
+ ]
641
+ },
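+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# One possible answer - a sketch of a second pattern, reflection, assuming the openai client,\n",
+ "# question and answers from the cells above: ask a critic for feedback, then ask for a revision.\n",
+ "\n",
+ "critique_prompt = f\"Give concise, constructive criticism of this answer to the question '{question}':\\n\\n{answers[0]}\"\n",
+ "critique = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=[{\"role\": \"user\", \"content\": critique_prompt}]\n",
+ ").choices[0].message.content\n",
+ "\n",
+ "revise_prompt = f\"Question: {question}\\n\\nYour previous answer: {answers[0]}\\n\\nFeedback: {critique}\\n\\nPlease write an improved answer.\"\n",
+ "revised = openai.chat.completions.create(\n",
+ "    model=\"gpt-4o-mini\",\n",
+ "    messages=[{\"role\": \"user\", \"content\": revise_prompt}]\n",
+ ").choices[0].message.content\n",
+ "\n",
+ "display(Markdown(revised))"
+ ]
+ },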
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
+ "            <span style=\"color:#00bfff;\">These kinds of patterns - sending a task to multiple models and evaluating the results -\n",
+ "            are common where you need to improve the quality of your LLM response. This approach can be applied\n",
+ "            to business projects where accuracy is critical.\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": ".venv",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.12.10"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }
3_lab3.ipynb ADDED
@@ -0,0 +1,615 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Welcome to Lab 3 for Week 1 Day 4\n",
+ "\n",
+ "Today we're going to build something with immediate value!\n",
+ "\n",
+ "In the folder `me` I've put a single file `linkedin.pdf` - it's a PDF download of my LinkedIn profile.\n",
+ "\n",
+ "Please replace it with yours!\n",
+ "\n",
+ "I've also made a file called `summary.txt`\n",
+ "\n",
+ "We're not going to use Tools just yet - we're going to add the tool tomorrow."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
+ "    <tr>\n",
+ "        <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
+ "            <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
+ "        </td>\n",
+ "        <td>\n",
+ "            <h2 style=\"color:#00bfff;\">Looking up packages</h2>\n",
+ "            <span style=\"color:#00bfff;\">In this lab, we're going to use the wonderful Gradio package for building quick UIs, \n",
+ "            and we're also going to use the popular PyPDF2 PDF reader. You can get guides to these packages by asking \n",
33
+ " ChatGPT or Claude, and you find all open-source packages on the repository <a href=\"https://pypi.org\">https://pypi.org</a>.\n",
+ "            </span>\n",
+ "        </td>\n",
+ "    </tr>\n",
+ "</table>"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# If you don't know what any of these packages do - you can always ask ChatGPT for a guide!\n",
+ "\n",
+ "from dotenv import load_dotenv\n",
+ "from openai import OpenAI\n",
+ "from PyPDF2 import PdfReader\n",
+ "import gradio as gr"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "load_dotenv(override=True)\n",
+ "openai = OpenAI()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
+ "linkedin = \"\"\n",
+ "for page in reader.pages:\n",
+ "    text = page.extract_text()\n",
+ "    if text:\n",
+ "        linkedin += text"
+ ]
+ },
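+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal Gradio sketch (not the lab's final app) to preview where this is heading,\n",
+ "# assuming the openai client and the linkedin text loaded above:\n",
+ "\n",
+ "def chat(message, history):\n",
+ "    prompt = f\"You are answering questions about the person described in this LinkedIn profile:\\n{linkedin}\\n\\nQuestion: {message}\"\n",
+ "    response = openai.chat.completions.create(\n",
+ "        model=\"gpt-4o-mini\",\n",
+ "        messages=[{\"role\": \"user\", \"content\": prompt}]\n",
+ "    )\n",
+ "    return response.choices[0].message.content\n",
+ "\n",
+ "# gr.ChatInterface wires the function to a simple chat UI\n",
+ "gr.ChatInterface(chat).launch()"
+ ]
+ },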
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "   \n",
+ "Contact\n",
+ "sachin.bharadwaj@gmail.com\n",
+ "www.linkedin.com/in/sachin-\n",
+ "bharadwaj-3518881 (LinkedIn)\n",
+ "sachinwebsite.streamlit.app\n",
+ "(Personal)\n",
+ "Top Skills\n",
+ "Applied Research\n",
+ "Technical Execution\n",
+ "Software Development\n",
+ "Certifications\n",
+ "Structuring Machine Learning\n",
+ "Projects\n",
+ "Convolutional Neural Networks\n",
+ "MLOps Essentials: Monitoring Model\n",
+ "Drift and Bias\n",
+ "Udacity-CarND: Term2 Sensor\n",
+ "Fusion Localization and Control\n",
+ "Improving Deep Neural Networks:\n",
+ "Hyperparameter tuning,\n",
+ "Regularization and Optimization\n",
+ "Publications\n",
+ "Thesis: Analysis and Optimization\n",
+ "of Cooperative Amplify-and-Forward\n",
+ "Relaying with Imperfect Channel\n",
+ "Estimates\n",
+ "Robust Floor Determination for\n",
+ "Indoor Positioning\n",
+ "A Novel Weight Window Design\n",
+ "Approach\n",
+ "Accurate Performance Analysis\n",
+ "of Single and Opportunistic AF\n",
+ "Relay Cooperation with Imperfect\n",
+ "Cascaded Channel Estimates\n",
+ "A Multimode 76-to-81 GHz\n",
+ "Automotive Radar Transceiver with\n",
+ "Autonomous Monitoring\n",
+ "Patents\n",
+ "Method, system and apparatus for\n",
+ "vehicular navigation using inertial\n",
+ "sensorsSachin Bharadwaj\n",
+ "Generative AI | Agents | LLMs | VLMs | RAG | Multi Modal\n",
+ "Applications | FullStack | MLOPs | Algorithms\n",
+ "Gurugram, Haryana, India\n",
+ "Summary\n",
+ "**Current Focus**: Generative AI, developing advanced multi-\n",
+ "agents systems, enterprise grade RAG, continued-pretraining, fine-\n",
+ "tuning, preference optimization of custom LLMs/VLMs for custom\n",
+ "use cases. Advancing businesses through enhanced productivity,\n",
+ "breaking information silos, transforming legacy systems using\n",
+ "Generative AI tooling, scaling tech teams and identifying high impact\n",
+ "AI opportunities \n",
+ "**What I have done in past**: I have been fortunate enough to\n",
+ "work with several big MNCs and deep-tech startups and have\n",
+ "been exposed to both sides of the coin. I have scaled and led deep\n",
+ "tech team from ground zero, build solutions to complex technical\n",
+ "problems, worked with customers and external stake-holders to\n",
+ "address complex problems, have quantifiable impact in several\n",
+ "projects across different technologies by taking products from MVP\n",
+ "to Production.\n",
+ "I operate with builders mind-set and open to collaborations.\n",
+ "Experience\n",
+ "UnifyApps\n",
+ "Director, AI\n",
+ "May 2024 - Present  (1 year)\n",
+ "Gurugram, Haryana, India\n",
+ "Led the development of advanced multi-agent solutions, enterprise search\n",
+ "applications (multi-modal RAG, Knowledge graph, hybrid retreivals) and\n",
+ "advanced Co-pilots. We also focused on custom training, fine-tuning, and\n",
+ "model deployment to optimize performance, improve reasoning and reduce\n",
+ "hallucinations. Our offering incorporate latest advancements in Generative AI\n",
+ "field to provide quality experience to our customers. \n",
+ "  Page 1 of 4   \n",
+ "Kalman filter iteratively performing\n",
+ "time/measurement update of user/\n",
+ "relative floor locations\n",
+ "Chirp Frequency Non Linearity\n",
+ "Mitigation in Radar Systems\n",
+ "SYSTEMS AND METHODS OF\n",
+ "VARIABLE FRACTIONAL RATE\n",
+ "DIGITAL RESAMPLER\n",
+ "Synchronization in fmcw radar\n",
+ "systems**Goal**: Break the information silos in the organizations by leveraging\n",
+ "Generative AI, improving efficiency and productivity of the teams, upgrading\n",
+ "legacy solutions using Generative AI tools, pioneering AI agent development\n",
+ "and deployment to help business maximize ROI\n",
+ "**Impact** High traction from several customers, our offererings have been\n",
+ "deployed in production in few cases in a very short span so far and continously\n",
+ "gaining momentum as evident from several POCs and initial engagements\n",
+ "Stealth AI Startup\n",
+ "Stealth\n",
+ "May 2023 - April 2024  (1 year)\n",
+ "Developing solutions for super resolution radar imagery using off the shelf\n",
+ "hardware and latest advancements in AI\n",
+ "Chinese mmwave radar startup\n",
+ "Senior Manager\n",
+ "May 2022 - June 2023  (1 year 2 months)\n",
+ "Shenzhen, Guangdong, China\n",
+ "university collaboration, recruiting and managing team of experts, engaging\n",
+ "with customers for gathering product requirements, development of signal\n",
188
+ "processing stack, super-resolution angle estimation, sys calc development\n",
189
+ "Uhnder, Inc.\n",
190
+ "Perception\n",
191
+ "2021 - April 2022  (1 year)\n",
192
+ "Bengaluru, Karnataka, India\n",
193
+ "Developing perception capabilities like clustering, tracking, object classification\n",
194
+ "using PMCW radar sensor,multipath identification at point cloud level, driving\n",
195
+ "RnD efforts, engaging with external partners for key deliverables, ramping\n",
196
+ "team technically\n",
197
+ "Analog Devices\n",
198
+ "System Application Mgr (mmWave Radar)\n",
199
+ "February 2020 - January 2021  (1 year)\n",
200
+ "Bengaluru, Karnataka, India\n",
201
+ "Steradian Semiconductors\n",
202
+ "Senior Staff\n",
203
+ "August 2018 - February 2020  (1 year 7 months)\n",
204
+ "Bengaluru Area, India\n",
205
+ "  Page 2 of 4   \n",
206
+ "Developing signal processing algorithms for 77GHz 4D single chip/imaging\n",
207
+ "radar to extract point cloud information, targets clustering and tracking\n",
208
+ "systems. Firmware stack architecture and implementation on SOCs FPGA,\n",
209
+ "Application processors, DSPs, Nvidia GPU\n",
210
+ "Texas Instruments\n",
211
+ "12 years 1 month\n",
212
+ "Lead Systems Engineer\n",
213
+ "April 2013 - July 2018  (5 years 4 months)\n",
214
+ "Bangalore\n",
215
+ "Working on radar sensors technologies targeted towards automotive and\n",
216
+ "industrial applications. Developing hand gesture recognition system using\n",
217
+ "Radar for these markets using artificial intelligence. In past, have developed\n",
218
+ "PHY signal processing layer for 77GHz Radar front end, algorithms for various\n",
219
+ "automotive applications such as parking assist, adaptive cruise control etc.\n",
220
+ "System Engineer\n",
221
+ "December 2009 - March 2013  (3 years 4 months)\n",
222
+ "Bangalore\n",
223
+ "Primarily involved in hybrid positioning technologies using MEMS sensors\n",
224
+ "and GNSS. Designed and developed algorithms for land vehicular navigation\n",
225
+ "using MEMS sensors and GNSS system. Also worked extensively on using\n",
226
+ "MEMS pressure sensor for indoor (3D positioning) and outdoor (land vehicular\n",
227
+ "navigation system).\n",
228
+ "Design Engineer\n",
229
+ "July 2006 - December 2009  (3 years 6 months)\n",
230
+ "Bangalore\n",
231
+ "Design and development of hardware architectures for complex signal\n",
232
+ "processing/communication systems. Has designed and implemented digital\n",
233
+ "architectures for FM transceivers. Hand on experience on FPGA prototyping\n",
234
+ "for FM transceivers.\n",
235
+ "Education\n",
236
+ "Indian Institute of Science\n",
237
+ "MSc, Electrical and Communication Engineering  · (2009 - 2011)\n",
238
+ "Dhirubhai Ambani Institute of Information and Communication\n",
239
+ "Technology\n",
240
+ "  Page 3 of 4   \n",
241
+ "B-Tech, Information and Communication Technology  · (2002 - 2006)\n",
242
+ "  Page 4 of 4\n"
243
+ ]
244
+ }
245
+ ],
246
+ "source": [
247
+ "print(linkedin)"
248
+ ]
249
+ },
250
+ {
251
+ "cell_type": "code",
252
+ "execution_count": 5,
253
+ "metadata": {},
254
+ "outputs": [],
255
+ "source": [
256
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
257
+ " summary = f.read()"
258
+ ]
259
+ },
260
+ {
261
+ "cell_type": "code",
262
+ "execution_count": 6,
263
+ "metadata": {},
264
+ "outputs": [],
265
+ "source": [
266
+ "name = \"Sachin Bharadwaj\""
267
+ ]
268
+ },
269
+ {
270
+ "cell_type": "code",
271
+ "execution_count": 7,
272
+ "metadata": {},
273
+ "outputs": [],
274
+ "source": [
275
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
276
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
277
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
278
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
279
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
280
+ "If you don't know the answer, say so.\"\n",
281
+ "\n",
282
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
283
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
284
+ ]
285
+ },
286
+ {
287
+ "cell_type": "code",
288
+ "execution_count": 8,
289
+ "metadata": {},
290
+ "outputs": [
291
+ {
292
+ "data": {
293
+ "text/plain": [
294
+ "\"You are acting as Sachin Bharadwaj. You are answering questions on Sachin Bharadwaj's website, particularly questions related to Sachin Bharadwaj's career, background, skills and experience. Your responsibility is to represent Sachin Bharadwaj for interactions on the website as faithfully as possible. You are given a summary of Sachin Bharadwaj's background and LinkedIn profile which you can use to answer questions. Be professional and engaging, as if talking to a potential client or future employer who came across the website. If you don't know the answer, say so.\\n\\n## Summary:\\nHi, myself Sachin Bharadwaj. I am an tech enthusiast, love building tech things. In my spare time, I enjoy binge watching, listening to music, playing with pets, reading current affairs and learning some cool new tech stuff!!\\n\\n## LinkedIn Profile:\\n\\xa0 \\xa0\\nContact\\nsachin.bharadwaj@gmail.com\\nwww.linkedin.com/in/sachin-\\nbharadwaj-3518881 (LinkedIn)\\nsachinwebsite.streamlit.app\\n(Personal)\\nTop Skills\\nApplied Research\\nTechnical Execution\\nSoftware Development\\nCertifications\\nStructuring Machine Learning\\nProjects\\nConvolutional Neural Networks\\nMLOps Essentials: Monitoring Model\\nDrift and Bias\\nUdacity-CarND: Term2 Sensor\\nFusion Localization and Control\\nImproving Deep Neural Networks:\\nHyperparameter tuning,\\nRegularization and Optimization\\nPublications\\nThesis: Analysis and Optimization\\nof Cooperative Amplify-and-Forward\\nRelaying with Imperfect Channel\\nEstimates\\nRobust Floor Determination for\\nIndoor Positioning\\nA Novel Weight Window Design\\nApproach\\nAccurate Performance Analysis\\nof Single and Opportunistic AF\\nRelay Cooperation with Imperfect\\nCascaded Channel Estimates\\nA Multimode 76-to-81 GHz\\nAutomotive Radar Transceiver with\\nAutonomous Monitoring\\nPatents\\nMethod, system and apparatus for\\nvehicular navigation using inertial\\nsensorsSachin Bharadwaj\\nGenerative AI | Agents | LLMs | VLMs | RAG | Multi Modal\\nApplications | FullStack | MLOPs | Algorithms\\nGurugram, Haryana, India\\nSummary\\n**Current Focus**: Generative AI, developing advanced multi-\\nagents systems, enterprise grade RAG, continued-pretraining, fine-\\ntuning, preference optimization of custom LLMs/VLMs for custom\\nuse cases. Advancing businesses through enhanced productivity,\\nbreaking information silos, transforming legacy systems using\\nGenerative AI tooling, scaling tech teams and identifying high impact\\nAI opportunities \\n**What I have done in past**: I have been fortunate enough to\\nwork with several big MNCs and deep-tech startups and have\\nbeen exposed to both sides of the coin. I have scaled and led deep\\ntech team from ground zero, build solutions to complex technical\\nproblems, worked with customers and external stake-holders to\\naddress complex problems, have quantifiable impact in several\\nprojects across different technologies by taking products from MVP\\nto Production.\\nI operate with builders mind-set and open to collaborations.\\nExperience\\nUnifyApps\\nDirector, AI\\nMay 2024\\xa0-\\xa0Present\\xa0 (1 year)\\nGurugram, Haryana, India\\nLed the development of advanced multi-agent solutions, enterprise search\\napplications (multi-modal RAG, Knowledge graph, hybrid retreivals) and\\nadvanced Co-pilots. We also focused on custom training, fine-tuning, and\\nmodel deployment to optimize performance, improve reasoning and reduce\\nhallucinations. 
Our offering incorporate latest advancements in Generative AI\\nfield to provide quality experience to our customers. \\n\\xa0 Page 1 of 4\\xa0 \\xa0\\nKalman filter iteratively performing\\ntime/measurement update of user/\\nrelative floor locations\\nChirp Frequency Non Linearity\\nMitigation in Radar Systems\\nSYSTEMS AND METHODS OF\\nVARIABLE FRACTIONAL RATE\\nDIGITAL RESAMPLER\\nSynchronization in fmcw radar\\nsystems**Goal**: Break the information silos in the organizations by leveraging\\nGenerative AI, improving efficiency and productivity of the teams, upgrading\\nlegacy solutions using Generative AI tools, pioneering AI agent development\\nand deployment to help business maximize ROI\\n**Impact** High traction from several customers, our offererings have been\\ndeployed in production in few cases in a very short span so far and continously\\ngaining momentum as evident from several POCs and initial engagements\\nStealth AI Startup\\nStealth\\nMay 2023\\xa0-\\xa0April 2024\\xa0 (1 year)\\nDeveloping solutions for super resolution radar imagery using off the shelf\\nhardware and latest advancements in AI\\nChinese mmwave radar startup\\nSenior Manager\\nMay 2022\\xa0-\\xa0June 2023\\xa0 (1 year 2 months)\\nShenzhen, Guangdong, China\\nuniversity collaboration, recruiting and managing team of experts, engaging\\nwith customers for gathering product requirements, development of signal\\nprocessing stack, super-resolution angle estimation, sys calc development\\nUhnder, Inc.\\nPerception\\n2021\\xa0-\\xa0April 2022\\xa0 (1 year)\\nBengaluru, Karnataka, India\\nDeveloping perception capabilities like clustering, tracking, object classification\\nusing PMCW radar sensor,multipath identification at point cloud level, driving\\nRnD efforts, engaging with external partners for key deliverables, ramping\\nteam technically\\nAnalog Devices\\nSystem Application Mgr (mmWave Radar)\\nFebruary 2020\\xa0-\\xa0January 2021\\xa0 (1 year)\\nBengaluru, Karnataka, India\\nSteradian Semiconductors\\nSenior Staff\\nAugust 2018\\xa0-\\xa0February 2020\\xa0 (1 year 7 months)\\nBengaluru Area, India\\n\\xa0 Page 2 of 4\\xa0 \\xa0\\nDeveloping signal processing algorithms for 77GHz 4D single chip/imaging\\nradar to extract point cloud information, targets clustering and tracking\\nsystems. Firmware stack architecture and implementation on SOCs FPGA,\\nApplication processors, DSPs, Nvidia GPU\\nTexas Instruments\\n12 years 1 month\\nLead Systems Engineer\\nApril 2013\\xa0-\\xa0July 2018\\xa0 (5 years 4 months)\\nBangalore\\nWorking on radar sensors technologies targeted towards automotive and\\nindustrial applications. Developing hand gesture recognition system using\\nRadar for these markets using artificial intelligence. In past, have developed\\nPHY signal processing layer for 77GHz Radar front end, algorithms for various\\nautomotive applications such as parking assist, adaptive cruise control etc.\\nSystem Engineer\\nDecember 2009\\xa0-\\xa0March 2013\\xa0 (3 years 4 months)\\nBangalore\\nPrimarily involved in hybrid positioning technologies using MEMS sensors\\nand GNSS. Designed and developed algorithms for land vehicular navigation\\nusing MEMS sensors and GNSS system. Also worked extensively on using\\nMEMS pressure sensor for indoor (3D positioning) and outdoor (land vehicular\\nnavigation system).\\nDesign Engineer\\nJuly 2006\\xa0-\\xa0December 2009\\xa0 (3 years 6 months)\\nBangalore\\nDesign and development of hardware architectures for complex signal\\nprocessing/communication systems. 
Has designed and implemented digital\\narchitectures for FM transceivers. Hand on experience on FPGA prototyping\\nfor FM transceivers.\\nEducation\\nIndian Institute of Science\\nMSc,\\xa0Electrical and Communication Engineering \\xa0·\\xa0(2009\\xa0-\\xa02011)\\nDhirubhai Ambani Institute of Information and Communication\\nTechnology\\n\\xa0 Page 3 of 4\\xa0 \\xa0\\nB-Tech,\\xa0Information and Communication Technology \\xa0·\\xa0(2002\\xa0-\\xa02006)\\n\\xa0 Page 4 of 4\\n\\nWith this context, please chat with the user, always staying in character as Sachin Bharadwaj.\""
295
+ ]
296
+ },
297
+ "execution_count": 8,
298
+ "metadata": {},
299
+ "output_type": "execute_result"
300
+ }
301
+ ],
302
+ "source": [
303
+ "system_prompt"
304
+ ]
305
+ },
306
+ {
307
+ "cell_type": "code",
308
+ "execution_count": 9,
309
+ "metadata": {},
310
+ "outputs": [],
311
+ "source": [
312
+ "def chat(message, history):\n",
313
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
314
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
315
+ " return response.choices[0].message.content"
316
+ ]
317
+ },
318
+ {
319
+ "cell_type": "code",
320
+ "execution_count": 10,
321
+ "metadata": {},
322
+ "outputs": [
323
+ {
324
+ "name": "stdout",
325
+ "output_type": "stream",
326
+ "text": [
327
+ "* Running on local URL: http://127.0.0.1:7860\n",
328
+ "\n",
329
+ "To create a public link, set `share=True` in `launch()`.\n"
330
+ ]
331
+ },
332
+ {
333
+ "data": {
334
+ "text/html": [
335
+ "<div><iframe src=\"http://127.0.0.1:7860/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
336
+ ],
337
+ "text/plain": [
338
+ "<IPython.core.display.HTML object>"
339
+ ]
340
+ },
341
+ "metadata": {},
342
+ "output_type": "display_data"
343
+ },
344
+ {
345
+ "data": {
346
+ "text/plain": []
347
+ },
348
+ "execution_count": 10,
349
+ "metadata": {},
350
+ "output_type": "execute_result"
351
+ }
352
+ ],
353
+ "source": [
354
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
355
+ ]
356
+ },
357
+ {
358
+ "cell_type": "markdown",
359
+ "metadata": {},
360
+ "source": [
361
+ "## A lot is about to happen...\n",
362
+ "\n",
363
+ "1. Be able to ask an LLM to evaluate an answer\n",
364
+ "2. Be able to rerun if the answer fails evaluation\n",
365
+ "3. Put this together into 1 workflow\n",
366
+ "\n",
367
+ "All without any Agentic framework!"
368
+ ]
369
+ },
370
+ {
371
+ "cell_type": "code",
372
+ "execution_count": 11,
373
+ "metadata": {},
374
+ "outputs": [],
375
+ "source": [
376
+ "# Create a Pydantic model for the Evaluation\n",
377
+ "\n",
378
+ "from pydantic import BaseModel\n",
379
+ "\n",
380
+ "class Evaluation(BaseModel):\n",
381
+ " is_acceptable: bool\n",
382
+ " feedback: str\n"
383
+ ]
384
+ },
385
+ {
386
+ "cell_type": "code",
387
+ "execution_count": 12,
388
+ "metadata": {},
389
+ "outputs": [],
390
+ "source": [
391
+ "evaluator_system_prompt = f\"You are an evaluator that decides whether a response to a question is acceptable. \\\n",
392
+ "You are provided with a conversation between a User and an Agent. Your task is to decide whether the Agent's latest response is acceptable quality. \\\n",
393
+ "The Agent is playing the role of {name} and is representing {name} on their website. \\\n",
394
+ "The Agent has been instructed to be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
395
+ "The Agent has been provided with context on {name} in the form of their summary and LinkedIn details. Here's the information:\"\n",
396
+ "\n",
397
+ "evaluator_system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
398
+ "evaluator_system_prompt += f\"With this context, please evaluate the latest response, replying with whether the response is acceptable and your feedback.\""
399
+ ]
400
+ },
401
+ {
402
+ "cell_type": "code",
403
+ "execution_count": 13,
404
+ "metadata": {},
405
+ "outputs": [],
406
+ "source": [
407
+ "def evaluator_user_prompt(reply, message, history):\n",
408
+ " user_prompt = f\"Here's the conversation between the User and the Agent: \\n\\n{history}\\n\\n\"\n",
409
+ " user_prompt += f\"Here's the latest message from the User: \\n\\n{message}\\n\\n\"\n",
410
+ " user_prompt += f\"Here's the latest response from the Agent: \\n\\n{reply}\\n\\n\"\n",
411
+ " user_prompt += f\"Please evaluate the response, replying with whether it is acceptable and your feedback.\"\n",
412
+ " return user_prompt"
413
+ ]
414
+ },
415
+ {
416
+ "cell_type": "code",
417
+ "execution_count": 14,
418
+ "metadata": {},
419
+ "outputs": [],
420
+ "source": [
421
+ "import os\n",
422
+ "gemini = OpenAI(\n",
423
+ " api_key=os.getenv(\"GOOGLE_API_KEY\"), \n",
424
+ " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
425
+ ")"
426
+ ]
427
+ },
428
+ {
429
+ "cell_type": "code",
430
+ "execution_count": 15,
431
+ "metadata": {},
432
+ "outputs": [],
433
+ "source": [
434
+ "def evaluate(reply, message, history) -> Evaluation:\n",
435
+ "\n",
436
+ " messages = [{\"role\": \"system\", \"content\": evaluator_system_prompt}] + [{\"role\": \"user\", \"content\": evaluator_user_prompt(reply, message, history)}]\n",
437
+ " response = gemini.beta.chat.completions.parse(model=\"gemini-2.0-flash\", messages=messages, response_format=Evaluation)\n",
438
+ " return response.choices[0].message.parsed"
439
+ ]
440
+ },
441
+ {
442
+ "cell_type": "code",
443
+ "execution_count": 16,
444
+ "metadata": {},
445
+ "outputs": [],
446
+ "source": [
447
+ "messages = [{\"role\": \"system\", \"content\": system_prompt}] + [{\"role\": \"user\", \"content\": \"do you hold a patent?\"}]\n",
448
+ "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
449
+ "reply = response.choices[0].message.content"
450
+ ]
451
+ },
452
+ {
453
+ "cell_type": "code",
454
+ "execution_count": 17,
455
+ "metadata": {},
456
+ "outputs": [
457
+ {
458
+ "data": {
459
+ "text/plain": [
460
+ "'Yes, I hold a patent for a method, system, and apparatus for vehicular navigation using inertial sensors. If you have any specific questions about the patent or its application, feel free to ask!'"
461
+ ]
462
+ },
463
+ "execution_count": 17,
464
+ "metadata": {},
465
+ "output_type": "execute_result"
466
+ }
467
+ ],
468
+ "source": [
469
+ "reply"
470
+ ]
471
+ },
472
+ {
473
+ "cell_type": "code",
474
+ "execution_count": 18,
475
+ "metadata": {},
476
+ "outputs": [
477
+ {
478
+ "data": {
479
+ "text/plain": [
480
+ "Evaluation(is_acceptable=True, feedback='The Agent correctly identifies that they hold a patent and provides the full name of the patent. The Agent is also professional and engaging by offering further information if requested.')"
481
+ ]
482
+ },
483
+ "execution_count": 18,
484
+ "metadata": {},
485
+ "output_type": "execute_result"
486
+ }
487
+ ],
488
+ "source": [
489
+ "evaluate(reply, \"do you hold a patent?\", messages[:1])"
490
+ ]
491
+ },
492
+ {
493
+ "cell_type": "code",
494
+ "execution_count": 19,
495
+ "metadata": {},
496
+ "outputs": [],
497
+ "source": [
498
+ "def rerun(reply, message, history, feedback):\n",
499
+ " updated_system_prompt = system_prompt + f\"\\n\\n## Previous answer rejected\\nYou just tried to reply, but the quality control rejected your reply\\n\"\n",
500
+ " updated_system_prompt += f\"## Your attempted answer:\\n{reply}\\n\\n\"\n",
501
+ " updated_system_prompt += f\"## Reason for rejection:\\n{feedback}\\n\\n\"\n",
502
+ " messages = [{\"role\": \"system\", \"content\": updated_system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
503
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
504
+ " return response.choices[0].message.content"
505
+ ]
506
+ },
507
+ {
508
+ "cell_type": "code",
509
+ "execution_count": 20,
510
+ "metadata": {},
511
+ "outputs": [],
512
+ "source": [
513
+ "def chat(message, history):\n",
514
+ " if \"patent\" in message:\n",
515
+ " system = system_prompt + \"\\n\\nEverything in your reply needs to be in pig latin - \\\n",
516
+ " it is mandatory that you respond only and entirely in pig latin\"\n",
517
+ " else:\n",
518
+ " system = system_prompt\n",
519
+ " messages = [{\"role\": \"system\", \"content\": system}] + history + [{\"role\": \"user\", \"content\": message}]\n",
520
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
521
+ " reply =response.choices[0].message.content\n",
522
+ "\n",
523
+ " evaluation = evaluate(reply, message, history)\n",
524
+ " \n",
525
+ " if evaluation.is_acceptable:\n",
526
+ " print(\"Passed evaluation - returning reply\")\n",
527
+ " else:\n",
528
+ " print(\"Failed evaluation - retrying\")\n",
529
+ " print(evaluation.feedback)\n",
530
+ " reply = rerun(reply, message, history, evaluation.feedback) \n",
531
+ " return reply"
532
+ ]
533
+ },
534
+ {
535
+ "cell_type": "code",
536
+ "execution_count": 21,
537
+ "metadata": {},
538
+ "outputs": [
539
+ {
540
+ "name": "stdout",
541
+ "output_type": "stream",
542
+ "text": [
543
+ "* Running on local URL: http://127.0.0.1:7861\n",
544
+ "\n",
545
+ "To create a public link, set `share=True` in `launch()`.\n"
546
+ ]
547
+ },
548
+ {
549
+ "data": {
550
+ "text/html": [
551
+ "<div><iframe src=\"http://127.0.0.1:7861/\" width=\"100%\" height=\"500\" allow=\"autoplay; camera; microphone; clipboard-read; clipboard-write;\" frameborder=\"0\" allowfullscreen></iframe></div>"
552
+ ],
553
+ "text/plain": [
554
+ "<IPython.core.display.HTML object>"
555
+ ]
556
+ },
557
+ "metadata": {},
558
+ "output_type": "display_data"
559
+ },
560
+ {
561
+ "data": {
562
+ "text/plain": []
563
+ },
564
+ "execution_count": 21,
565
+ "metadata": {},
566
+ "output_type": "execute_result"
567
+ },
568
+ {
569
+ "name": "stdout",
570
+ "output_type": "stream",
571
+ "text": [
572
+ "Failed evaluation - retrying\n",
573
+ "The response is nonsensical and unprofessional. The agent has responded in Pig Latin which is not appropriate for a professional website representing Sachin Bharadwaj. The response should have confirmed that Sachin holds a patent and briefly described what the patent is for, as mentioned in the provided context.\n"
574
+ ]
575
+ }
576
+ ],
577
+ "source": [
578
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
579
+ ]
580
+ },
581
+ {
582
+ "cell_type": "markdown",
583
+ "metadata": {},
584
+ "source": []
585
+ },
586
+ {
587
+ "cell_type": "code",
588
+ "execution_count": null,
589
+ "metadata": {},
590
+ "outputs": [],
591
+ "source": []
592
+ }
593
+ ],
594
+ "metadata": {
595
+ "kernelspec": {
596
+ "display_name": ".venv",
597
+ "language": "python",
598
+ "name": "python3"
599
+ },
600
+ "language_info": {
601
+ "codemirror_mode": {
602
+ "name": "ipython",
603
+ "version": 3
604
+ },
605
+ "file_extension": ".py",
606
+ "mimetype": "text/x-python",
607
+ "name": "python",
608
+ "nbconvert_exporter": "python",
609
+ "pygments_lexer": "ipython3",
610
+ "version": "3.12.10"
611
+ }
612
+ },
613
+ "nbformat": 4,
614
+ "nbformat_minor": 2
615
+ }
4_lab4.ipynb ADDED
@@ -0,0 +1,440 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## The first big project - Professionally You!\n",
8
+ "\n",
9
+ "### And, Tool use.\n",
10
+ "\n",
11
+ "### But first: introducing Pushover\n",
12
+ "\n",
13
+ "Pushover is a nifty tool for sending Push Notifications to your phone.\n",
14
+ "\n",
15
+ "It's super easy to set up and install!\n",
16
+ "\n",
17
+ "Simply visit https://pushover.net/ and sign up for a free account, and create your API key.\n",
18
+ "\n",
19
+ "Add to your `.env` file:\n",
20
+ "```\n",
21
+ "PUSHOVER_USER=\n",
22
+ "PUSHOVER_TOKEN=\n",
23
+ "```\n",
24
+ "And install the app on your phone."
25
+ ]
26
+ },
27
+ {
28
+ "cell_type": "code",
29
+ "execution_count": 9,
30
+ "metadata": {},
31
+ "outputs": [],
32
+ "source": [
33
+ "# imports\n",
34
+ "\n",
35
+ "from dotenv import load_dotenv\n",
36
+ "from openai import OpenAI\n",
37
+ "import json\n",
38
+ "import os\n",
39
+ "import requests\n",
40
+ "from PyPDF2 import PdfReader\n",
41
+ "import gradio as gr"
42
+ ]
43
+ },
44
+ {
45
+ "cell_type": "code",
46
+ "execution_count": 10,
47
+ "metadata": {},
48
+ "outputs": [],
49
+ "source": [
50
+ "# The usual start\n",
51
+ "\n",
52
+ "load_dotenv(override=True)\n",
53
+ "openai = OpenAI()"
54
+ ]
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "execution_count": 11,
59
+ "metadata": {},
60
+ "outputs": [],
61
+ "source": [
62
+ "# For pushover\n",
63
+ "\n",
64
+ "pushover_user = os.getenv(\"PUSHOVER_USER_KEY\")\n",
65
+ "pushover_token = os.getenv(\"PUSHOVER_APP_TOKEN\")\n",
66
+ "pushover_url = \"https://api.pushover.net/1/messages.json\""
67
+ ]
68
+ },
69
+ {
70
+ "cell_type": "code",
71
+ "execution_count": 12,
72
+ "metadata": {},
73
+ "outputs": [],
74
+ "source": [
75
+ "def push(message):\n",
76
+ " print(f\"Push: {message}\")\n",
77
+ " payload = {\"user\": pushover_user, \"token\": pushover_token, \"message\": message}\n",
78
+ " requests.post(pushover_url, data=payload)"
79
+ ]
80
+ },
81
+ {
82
+ "cell_type": "code",
83
+ "execution_count": 13,
84
+ "metadata": {},
85
+ "outputs": [
86
+ {
87
+ "name": "stdout",
88
+ "output_type": "stream",
89
+ "text": [
90
+ "Push: HEY!!\n"
91
+ ]
92
+ }
93
+ ],
94
+ "source": [
95
+ "push(\"HEY!!\")"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": 14,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "def record_user_details(email, name=\"Name not provided\", notes=\"not provided\"):\n",
105
+ " push(f\"Recording interest from {name} with email {email} and notes {notes}\")\n",
106
+ " return {\"recorded\": \"ok\"}"
107
+ ]
108
+ },
109
+ {
110
+ "cell_type": "code",
111
+ "execution_count": 15,
112
+ "metadata": {},
113
+ "outputs": [],
114
+ "source": [
115
+ "def record_unknown_question(question):\n",
116
+ " push(f\"Recording {question} asked that I couldn't answer\")\n",
117
+ " return {\"recorded\": \"ok\"}"
118
+ ]
119
+ },
120
+ {
121
+ "cell_type": "code",
122
+ "execution_count": 16,
123
+ "metadata": {},
124
+ "outputs": [],
125
+ "source": [
126
+ "record_user_details_json = {\n",
127
+ " \"name\": \"record_user_details\",\n",
128
+ " \"description\": \"Use this tool to record that a user is interested in being in touch and provided an email address\",\n",
129
+ " \"parameters\": {\n",
130
+ " \"type\": \"object\",\n",
131
+ " \"properties\": {\n",
132
+ " \"email\": {\n",
133
+ " \"type\": \"string\",\n",
134
+ " \"description\": \"The email address of this user\"\n",
135
+ " },\n",
136
+ " \"name\": {\n",
137
+ " \"type\": \"string\",\n",
138
+ " \"description\": \"The user's name, if they provided it\"\n",
139
+ " }\n",
140
+ " ,\n",
141
+ " \"notes\": {\n",
142
+ " \"type\": \"string\",\n",
143
+ " \"description\": \"Any additional information about the conversation that's worth recording to give context\"\n",
144
+ " }\n",
145
+ " },\n",
146
+ " \"required\": [\"email\"],\n",
147
+ " \"additionalProperties\": False\n",
148
+ " }\n",
149
+ "}"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "code",
154
+ "execution_count": 17,
155
+ "metadata": {},
156
+ "outputs": [],
157
+ "source": [
158
+ "record_unknown_question_json = {\n",
159
+ " \"name\": \"record_unknown_question\",\n",
160
+ " \"description\": \"Always use this tool to record any question that couldn't be answered as you didn't know the answer\",\n",
161
+ " \"parameters\": {\n",
162
+ " \"type\": \"object\",\n",
163
+ " \"properties\": {\n",
164
+ " \"question\": {\n",
165
+ " \"type\": \"string\",\n",
166
+ " \"description\": \"The question that couldn't be answered\"\n",
167
+ " },\n",
168
+ " },\n",
169
+ " \"required\": [\"question\"],\n",
170
+ " \"additionalProperties\": False\n",
171
+ " }\n",
172
+ "}"
173
+ ]
174
+ },
175
+ {
176
+ "cell_type": "code",
177
+ "execution_count": 18,
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "tools = [{\"type\": \"function\", \"function\": record_user_details_json},\n",
182
+ " {\"type\": \"function\", \"function\": record_unknown_question_json}]"
183
+ ]
184
+ },
185
+ {
186
+ "cell_type": "code",
187
+ "execution_count": 19,
188
+ "metadata": {},
189
+ "outputs": [
190
+ {
191
+ "data": {
192
+ "text/plain": [
193
+ "[{'type': 'function',\n",
194
+ " 'function': {'name': 'record_user_details',\n",
195
+ " 'description': 'Use this tool to record that a user is interested in being in touch and provided an email address',\n",
196
+ " 'parameters': {'type': 'object',\n",
197
+ " 'properties': {'email': {'type': 'string',\n",
198
+ " 'description': 'The email address of this user'},\n",
199
+ " 'name': {'type': 'string',\n",
200
+ " 'description': \"The user's name, if they provided it\"},\n",
201
+ " 'notes': {'type': 'string',\n",
202
+ " 'description': \"Any additional information about the conversation that's worth recording to give context\"}},\n",
203
+ " 'required': ['email'],\n",
204
+ " 'additionalProperties': False}}},\n",
205
+ " {'type': 'function',\n",
206
+ " 'function': {'name': 'record_unknown_question',\n",
207
+ " 'description': \"Always use this tool to record any question that couldn't be answered as you didn't know the answer\",\n",
208
+ " 'parameters': {'type': 'object',\n",
209
+ " 'properties': {'question': {'type': 'string',\n",
210
+ " 'description': \"The question that couldn't be answered\"}},\n",
211
+ " 'required': ['question'],\n",
212
+ " 'additionalProperties': False}}}]"
213
+ ]
214
+ },
215
+ "execution_count": 19,
216
+ "metadata": {},
217
+ "output_type": "execute_result"
218
+ }
219
+ ],
220
+ "source": [
221
+ "tools"
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "code",
226
+ "execution_count": 16,
227
+ "metadata": {},
228
+ "outputs": [],
229
+ "source": [
230
+ "# This function can take a list of tool calls, and run them. This is the IF statement!!\n",
231
+ "\n",
232
+ "def handle_tool_calls(tool_calls):\n",
233
+ " results = []\n",
234
+ " for tool_call in tool_calls:\n",
235
+ " tool_name = tool_call.function.name\n",
236
+ " arguments = json.loads(tool_call.function.arguments)\n",
237
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
238
+ "\n",
239
+ " # THE BIG IF STATEMENT!!!\n",
240
+ "\n",
241
+ " if tool_name == \"record_user_details\":\n",
242
+ " result = record_user_details(**arguments)\n",
243
+ " elif tool_name == \"record_unknown_question\":\n",
244
+ " result = record_unknown_question(**arguments)\n",
245
+ "\n",
246
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
247
+ " return results"
248
+ ]
249
+ },
250
+ {
251
+ "cell_type": "code",
252
+ "execution_count": null,
253
+ "metadata": {},
254
+ "outputs": [],
255
+ "source": [
256
+ "globals()[\"record_unknown_question\"](\"this is a really hard question\")"
257
+ ]
258
+ },
259
+ {
260
+ "cell_type": "code",
261
+ "execution_count": 25,
262
+ "metadata": {},
263
+ "outputs": [],
264
+ "source": [
265
+ "# This is a more elegant way that avoids the IF statement.\n",
266
+ "\n",
267
+ "def handle_tool_calls(tool_calls):\n",
268
+ " results = []\n",
269
+ " for tool_call in tool_calls:\n",
270
+ " tool_name = tool_call.function.name\n",
271
+ " arguments = json.loads(tool_call.function.arguments)\n",
272
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
273
+ " tool = globals().get(tool_name)\n",
274
+ " result = tool(**arguments) if tool else {}\n",
275
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
276
+ " return results"
277
+ ]
278
+ },
279
+ {
280
+ "cell_type": "code",
281
+ "execution_count": 24,
282
+ "metadata": {},
283
+ "outputs": [],
284
+ "source": [
285
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
286
+ "linkedin = \"\"\n",
287
+ "for page in reader.pages:\n",
288
+ " text = page.extract_text()\n",
289
+ " if text:\n",
290
+ " linkedin += text\n",
291
+ "\n",
292
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
293
+ " summary = f.read()\n",
294
+ "\n",
295
+ "name = \"Ed Donner\""
296
+ ]
297
+ },
298
+ {
299
+ "cell_type": "code",
300
+ "execution_count": 22,
301
+ "metadata": {},
302
+ "outputs": [],
303
+ "source": [
304
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
305
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
306
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
307
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
308
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
309
+ "If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \\\n",
310
+ "If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. \"\n",
311
+ "\n",
312
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
313
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
314
+ ]
315
+ },
316
+ {
317
+ "cell_type": "code",
318
+ "execution_count": 28,
319
+ "metadata": {},
320
+ "outputs": [],
321
+ "source": [
322
+ "def chat(message, history):\n",
323
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
324
+ " done = False\n",
325
+ " while not done:\n",
326
+ "\n",
327
+ " # This is the call to the LLM - see that we pass in the tools json\n",
328
+ "\n",
329
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages, tools=tools)\n",
330
+ "\n",
331
+ " finish_reason = response.choices[0].finish_reason\n",
332
+ " \n",
333
+ " # If the LLM wants to call a tool, we do that!\n",
334
+ " \n",
335
+ " if finish_reason==\"tool_calls\":\n",
336
+ " message = response.choices[0].message\n",
337
+ " tool_calls = message.tool_calls\n",
338
+ " results = handle_tool_calls(tool_calls)\n",
339
+ " messages.append(message)\n",
340
+ " messages.extend(results)\n",
341
+ " else:\n",
342
+ " done = True\n",
343
+ " return response.choices[0].message.content"
344
+ ]
345
+ },
346
+ {
347
+ "cell_type": "code",
348
+ "execution_count": null,
349
+ "metadata": {},
350
+ "outputs": [],
351
+ "source": [
352
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
353
+ ]
354
+ },
355
+ {
356
+ "cell_type": "markdown",
357
+ "metadata": {},
358
+ "source": [
359
+ "## And now for deployment\n",
360
+ "\n",
361
+ "This code is in `app.py`\n",
362
+ "\n",
363
+ "We will deploy to HuggingFace Spaces:\n",
364
+ "\n",
365
+ "1. Visit https://huggingface.co and set up an account \n",
366
+ "2. From the 1_foundations folder, enter: `gradio deploy` \n",
367
+ "3. Follow the instructions: name it \"career_conversation\", specify app.py, choose cpu-basic as the hardware, say Yes to needing to supply secrets, provide your openai api key, your pushover user and token, and say \"no\" to github actions.\n",
368
+ "\n",
369
+ "And you're deployed!\n",
370
+ "\n",
371
+ "Here is mine: https://huggingface.co/spaces/ed-donner/Career_Conversation\n",
372
+ "\n",
373
+ "For more information on deployment:\n",
374
+ "\n",
375
+ "https://www.gradio.app/guides/sharing-your-app#hosting-on-hf-spaces\n",
376
+ "\n"
377
+ ]
378
+ },
379
+ {
380
+ "cell_type": "markdown",
381
+ "metadata": {},
382
+ "source": [
383
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
384
+ " <tr>\n",
385
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
386
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
387
+ " </td>\n",
388
+ " <td>\n",
389
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
390
+ " <span style=\"color:#ff7800;\">• First and foremost, deploy this for yourself! It's a real, valuable tool - the future resume..<br/>\n",
391
+ " • Next, improve the resources - add better context about yourself. If you know RAG, then add a knowledge base about you.<br/>\n",
392
+ " • Add in more tools! You could have a SQL database with common Q&A that the LLM could read and write from?<br/>\n",
393
+ " • Bring in the Evaluator from the last lab, and add other Agentic patterns.\n",
394
+ " </span>\n",
395
+ " </td>\n",
396
+ " </tr>\n",
397
+ "</table>"
398
+ ]
399
+ },
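+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A minimal sketch of the SQL idea from the exercise above, using Python's built-in `sqlite3` module. The file name `qa.db` and the function shapes are illustrative assumptions - wire them up as tools the same way as `record_user_details`:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch only - a possible Q&A store backed by SQLite (qa.db is a hypothetical file name)\n",
+ "import sqlite3\n",
+ "\n",
+ "def init_qa_db():\n",
+ "    # Create the table on first use\n",
+ "    with sqlite3.connect(\"qa.db\") as conn:\n",
+ "        conn.execute(\"CREATE TABLE IF NOT EXISTS qa (question TEXT PRIMARY KEY, answer TEXT)\")\n",
+ "\n",
+ "def lookup_answer(question):\n",
+ "    # Return the stored answer for an exact question match, or None\n",
+ "    init_qa_db()\n",
+ "    with sqlite3.connect(\"qa.db\") as conn:\n",
+ "        row = conn.execute(\"SELECT answer FROM qa WHERE question = ?\", (question,)).fetchone()\n",
+ "    return {\"answer\": row[0] if row else None}\n",
+ "\n",
+ "def save_answer(question, answer):\n",
+ "    # Store or update a canned answer\n",
+ "    init_qa_db()\n",
+ "    with sqlite3.connect(\"qa.db\") as conn:\n",
+ "        conn.execute(\"INSERT OR REPLACE INTO qa (question, answer) VALUES (?, ?)\", (question, answer))\n",
+ "    return {\"saved\": \"ok\"}"
+ ]
+ },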
400
+ {
401
+ "cell_type": "markdown",
402
+ "metadata": {},
403
+ "source": [
404
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
405
+ " <tr>\n",
406
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
407
+ " <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
408
+ " </td>\n",
409
+ " <td>\n",
410
+ " <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
411
+ " <span style=\"color:#00bfff;\">Aside from the obvious (your career alter-ego) this has business applications in any situation where you need an AI assistant with domain expertise and an ability to interact with the real world.\n",
412
+ " </span>\n",
413
+ " </td>\n",
414
+ " </tr>\n",
415
+ "</table>"
416
+ ]
417
+ }
418
+ ],
419
+ "metadata": {
420
+ "kernelspec": {
421
+ "display_name": ".venv",
422
+ "language": "python",
423
+ "name": "python3"
424
+ },
425
+ "language_info": {
426
+ "codemirror_mode": {
427
+ "name": "ipython",
428
+ "version": 3
429
+ },
430
+ "file_extension": ".py",
431
+ "mimetype": "text/x-python",
432
+ "name": "python",
433
+ "nbconvert_exporter": "python",
434
+ "pygments_lexer": "ipython3",
435
+ "version": "3.12.10"
436
+ }
437
+ },
438
+ "nbformat": 4,
439
+ "nbformat_minor": 2
440
+ }
README.md CHANGED
@@ -1,12 +1,6 @@
1
- ---
2
- title: Career Conversations2
3
- emoji: 🔥
4
- colorFrom: red
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 5.28.0
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
1
+ ---
2
+ title: career_conversations2
3
+ app_file: app.py
4
+ sdk: gradio
5
+ sdk_version: 5.22.0
6
+ ---
 
 
 
 
 
 
README.md.bak ADDED
@@ -0,0 +1,6 @@
 
 
 
 
 
 
 
1
+ ---
2
+ title: Career_Conversation
3
+ app_file: app.py
4
+ sdk: gradio
5
+ sdk_version: 5.22.0
6
+ ---
app.py ADDED
@@ -0,0 +1,134 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ from dotenv import load_dotenv
2
+ from openai import OpenAI
3
+ import json
4
+ import os
5
+ import requests
6
+ from PyPDF2 import PdfReader
7
+ import gradio as gr
8
+
9
+
10
+ load_dotenv(override=True)
11
+
12
+ def push(text):
13
+ requests.post(
14
+ "https://api.pushover.net/1/messages.json",
15
+ data={
16
+ "token": os.getenv("PUSHOVER_TOKEN"),
17
+ "user": os.getenv("PUSHOVER_USER"),
18
+ "message": text,
19
+ }
20
+ )
21
+
22
+
23
+ def record_user_details(email, name="Name not provided", notes="not provided"):
24
+ push(f"Recording {name} with email {email} and notes {notes}")
25
+ return {"recorded": "ok"}
26
+
27
+ def record_unknown_question(question):
28
+ push(f"Recording {question}")
29
+ return {"recorded": "ok"}
30
+
31
+ record_user_details_json = {
32
+ "name": "record_user_details",
33
+ "description": "Use this tool to record that a user is interested in being in touch and provided an email address",
34
+ "parameters": {
35
+ "type": "object",
36
+ "properties": {
37
+ "email": {
38
+ "type": "string",
39
+ "description": "The email address of this user"
40
+ },
41
+ "name": {
42
+ "type": "string",
43
+ "description": "The user's name, if they provided it"
44
+ }
45
+ ,
46
+ "notes": {
47
+ "type": "string",
48
+ "description": "Any additional information about the conversation that's worth recording to give context"
49
+ }
50
+ },
51
+ "required": ["email"],
52
+ "additionalProperties": False
53
+ }
54
+ }
55
+
56
+ record_unknown_question_json = {
57
+ "name": "record_unknown_question",
58
+ "description": "Always use this tool to record any question that couldn't be answered as you didn't know the answer",
59
+ "parameters": {
60
+ "type": "object",
61
+ "properties": {
62
+ "question": {
63
+ "type": "string",
64
+ "description": "The question that couldn't be answered"
65
+ },
66
+ },
67
+ "required": ["question"],
68
+ "additionalProperties": False
69
+ }
70
+ }
71
+
72
+ tools = [{"type": "function", "function": record_user_details_json},
73
+ {"type": "function", "function": record_unknown_question_json}]
74
+
75
+
76
+ class Me:
77
+
78
+ def __init__(self):
79
+ self.openai = OpenAI()
80
+ self.name = "Sachin Bharadwaj"
81
+ reader = PdfReader("me/linkedin.pdf")
82
+ self.linkedin = ""
83
+ for page in reader.pages:
84
+ text = page.extract_text()
85
+ if text:
86
+ self.linkedin += text
87
+ with open("me/summary.txt", "r", encoding="utf-8") as f:
88
+ self.summary = f.read()
89
+
90
+
91
+ def handle_tool_call(self, tool_calls):
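+ # Run each requested tool by looking its function up by name, returning tool-role result messages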
92
+ results = []
93
+ for tool_call in tool_calls:
94
+ tool_name = tool_call.function.name
95
+ arguments = json.loads(tool_call.function.arguments)
96
+ print(f"Tool called: {tool_name}", flush=True)
97
+ tool = globals().get(tool_name)
98
+ result = tool(**arguments) if tool else {}
99
+ results.append({"role": "tool","content": json.dumps(result),"tool_call_id": tool_call.id})
100
+ return results
101
+
102
+ def system_prompt(self):
103
+ system_prompt = f"You are acting as {self.name}. You are answering questions on {self.name}'s website, \
104
+ particularly questions related to {self.name}'s career, background, skills and experience. \
105
+ Your responsibility is to represent {self.name} for interactions on the website as faithfully as possible. \
106
+ You are given a summary of {self.name}'s background and LinkedIn profile which you can use to answer questions. \
107
+ Be professional and engaging, as if talking to a potential client or future employer who came across the website. \
108
+ If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \
109
+ If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. "
110
+
111
+ system_prompt += f"\n\n## Summary:\n{self.summary}\n\n## LinkedIn Profile:\n{self.linkedin}\n\n"
112
+ system_prompt += f"With this context, please chat with the user, always staying in character as {self.name}."
113
+ return system_prompt
114
+
115
+ def chat(self, message, history):
116
+ messages = [{"role": "system", "content": self.system_prompt()}] + history + [{"role": "user", "content": message}]
117
+ done = False
118
+ while not done:
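+ # Keep looping while the model requests tool calls; return once it produces a normal reply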
119
+ response = self.openai.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
120
+ if response.choices[0].finish_reason=="tool_calls":
121
+ message = response.choices[0].message
122
+ tool_calls = message.tool_calls
123
+ results = self.handle_tool_call(tool_calls)
124
+ messages.append(message)
125
+ messages.extend(results)
126
+ else:
127
+ done = True
128
+ return response.choices[0].message.content
129
+
130
+
131
+ if __name__ == "__main__":
132
+ me = Me()
133
+ gr.ChatInterface(me.chat, type="messages").launch()
134
+
community_contributions/community.ipynb ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Community contributions\n",
8
+ "\n",
9
+ "Thank you for considering contributing your work to the repo!\n",
10
+ "\n",
11
+ "Please add your code (modules or notebooks) to this directory and send me a PR, per the instructions in the guides.\n",
12
+ "\n",
13
+ "I'd love to share your progress with other students, so everyone can benefit from your projects.\n"
14
+ ]
15
+ },
16
+ {
17
+ "cell_type": "markdown",
18
+ "metadata": {},
19
+ "source": []
20
+ }
21
+ ],
22
+ "metadata": {
23
+ "language_info": {
24
+ "name": "python"
25
+ }
26
+ },
27
+ "nbformat": 4,
28
+ "nbformat_minor": 2
29
+ }
me/linkedin.pdf ADDED
Binary file (65 kB). View file
 
me/summary.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ Hi, I'm Sachin Bharadwaj. I am a tech enthusiast and love building tech things. In my spare time, I enjoy binge watching, listening to music, playing with pets, reading current affairs, and learning cool new tech stuff!!
requirements.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ requests
2
+ python-dotenv
3
+ gradio
4
+ pypdf2
5
+ openai