santprac committed on
Commit cddf423 · verified · 1 Parent(s): c9d3388

Upload folder using huggingface_hub

0_googleapi.ipynb ADDED
File without changes
1_lab1.ipynb ADDED
@@ -0,0 +1,323 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Welcome to the start of your adventure in Agentic AI"
8
+ ]
9
+ },
10
+ {
11
+ "cell_type": "markdown",
12
+ "metadata": {},
13
+ "source": [
14
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
15
+ " <tr>\n",
16
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
17
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
18
+ " </td>\n",
19
+ " <td>\n",
20
+ " <h2 style=\"color:#ff7800;\">Are you ready for action??</h2>\n",
21
+ " <span style=\"color:#ff7800;\">Have you completed all the setup steps in the <a href=\"../setup/\">setup</a> folder?<br/>\n",
22
+ " Have you checked out the guides in the <a href=\"../guides/01_intro.ipynb\">guides</a> folder?<br/>\n",
23
+ " Well in that case, you're ready!!\n",
24
+ " </span>\n",
25
+ " </td>\n",
26
+ " </tr>\n",
27
+ "</table>"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "markdown",
32
+ "metadata": {},
33
+ "source": [
34
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
35
+ " <tr>\n",
36
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
37
+ " <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
38
+ " </td>\n",
39
+ " <td>\n",
40
+ " <h2 style=\"color:#00bfff;\">Treat these labs as a resource</h2>\n",
41
+ " <span style=\"color:#00bfff;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate the answers into the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations. Consider this like an interactive book that accompanies the lectures.\n",
42
+ " </span>\n",
43
+ " </td>\n",
44
+ " </tr>\n",
45
+ "</table>"
46
+ ]
47
+ },
48
+ {
49
+ "cell_type": "markdown",
50
+ "metadata": {},
51
+ "source": [
52
+ "### And please do remember to contact me if I can help\n",
53
+ "\n",
54
+ "And I love to connect: https://www.linkedin.com/in/eddonner/\n",
55
+ "\n",
56
+ "\n",
57
+ "### New to Notebooks like this one? Head over to the guides folder!\n",
58
+ "\n",
59
+ "First, check that you've added the Python and Jupyter extensions to Cursor. If they're not already installed:\n",
60
+ "- Open extensions (View >> extensions)\n",
61
+ "- Search for python, and when the results show, click on the ms-python one, and Install it if not already installed\n",
62
+ "- Search for jupyter, and when the results show, click on the Microsoft one, and Install it if not already installed \n",
63
+ "Then View >> Explorer to bring back the File Explorer.\n",
64
+ "\n",
65
+ "And then:\n",
66
+ "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first choice or the most prominent choice. You may need to choose \"Python Environments\" first.\n",
67
+ "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n",
68
+ "3. Enjoy!\n",
69
+ "\n",
70
+ "After you click \"Select Kernel\", if there is no option like `.venv (Python 3.12.9)` then please do the following: \n",
71
+ "1. From the Cursor menu, choose Settings >> VSCode Settings (NOTE: be sure to select `VSCode Settings` not `Cursor Settings`) \n",
72
+ "2. In the Settings search bar, type \"venv\" \n",
73
+ "3. In the field \"Path to folder with a list of Virtual Environments\" put the path to the project root, like C:\\Users\\username\\projects\\agents (on a Windows PC) or /Users/username/projects/agents (on Mac or Linux). \n",
74
+ "And then try again."
75
+ ]
76
+ },
77
+ {
78
+ "cell_type": "code",
79
+ "execution_count": 1,
80
+ "metadata": {},
81
+ "outputs": [],
82
+ "source": [
83
+ "# First let's do an import\n",
84
+ "from dotenv import load_dotenv\n"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "code",
89
+ "execution_count": null,
90
+ "metadata": {},
91
+ "outputs": [],
92
+ "source": [
93
+ "# Next it's time to load the API keys into environment variables\n",
94
+ "\n",
95
+ "load_dotenv(override=True)"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": null,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "# Check the keys\n",
105
+ "\n",
106
+ "import os\n",
107
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
108
+ "\n",
109
+ "if openai_api_key:\n",
110
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
111
+ "else:\n",
112
+ " print(\"OpenAI API Key not set - please head to the troubleshooting guide in the guides folder\")\n",
113
+ " \n"
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "code",
118
+ "execution_count": 5,
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "# And now - the all important import statement\n",
123
+ "# If you get an import error - head over to troubleshooting guide\n",
124
+ "\n",
125
+ "from openai import OpenAI"
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "code",
130
+ "execution_count": 6,
131
+ "metadata": {},
132
+ "outputs": [],
133
+ "source": [
134
+ "# And now we'll create an instance of the OpenAI class\n",
135
+ "# If you're not sure what it means to create an instance of a class - head over to the guides folder!\n",
136
+ "# If you get a NameError - head over to the guides folder to learn about NameErrors\n",
137
+ "\n",
138
+ "openai = OpenAI()"
139
+ ]
140
+ },
141
+ {
142
+ "cell_type": "code",
143
+ "execution_count": 16,
144
+ "metadata": {},
145
+ "outputs": [],
146
+ "source": [
147
+ "# Create a list of messages in the familiar OpenAI format\n",
148
+ "\n",
149
+ "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "code",
154
+ "execution_count": null,
155
+ "metadata": {},
156
+ "outputs": [],
157
+ "source": [
158
+ "# And now call it! Any problems, head to the troubleshooting guide\n",
159
+ "\n",
160
+ "response = openai.chat.completions.create(\n",
161
+ " model=\"gpt-4o-mini\",\n",
162
+ " messages=messages\n",
163
+ ")\n",
164
+ "\n",
165
+ "print(response.choices[0].message.content)\n"
166
+ ]
167
+ },
168
+ {
169
+ "cell_type": "code",
170
+ "execution_count": null,
171
+ "metadata": {},
172
+ "outputs": [],
173
+ "source": []
174
+ },
175
+ {
176
+ "cell_type": "code",
177
+ "execution_count": 18,
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "# And now - let's ask for a question:\n",
182
+ "\n",
183
+ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
184
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
185
+ ]
186
+ },
187
+ {
188
+ "cell_type": "code",
189
+ "execution_count": null,
190
+ "metadata": {},
191
+ "outputs": [],
192
+ "source": [
193
+ "# ask it\n",
194
+ "response = openai.chat.completions.create(\n",
195
+ " model=\"gpt-4o-mini\",\n",
196
+ " messages=messages\n",
197
+ ")\n",
198
+ "\n",
199
+ "question = response.choices[0].message.content\n",
200
+ "\n",
201
+ "print(question)\n"
202
+ ]
203
+ },
204
+ {
205
+ "cell_type": "code",
206
+ "execution_count": 28,
207
+ "metadata": {},
208
+ "outputs": [],
209
+ "source": [
210
+ "# form a new messages list\n",
211
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
212
+ ]
213
+ },
214
+ {
215
+ "cell_type": "code",
216
+ "execution_count": null,
217
+ "metadata": {},
218
+ "outputs": [],
219
+ "source": [
220
+ "# Ask it again\n",
221
+ "\n",
222
+ "response = openai.chat.completions.create(\n",
223
+ " model=\"gpt-4o-mini\",\n",
224
+ " messages=messages\n",
225
+ ")\n",
226
+ "\n",
227
+ "answer = response.choices[0].message.content\n",
228
+ "print(answer)\n"
229
+ ]
230
+ },
231
+ {
232
+ "cell_type": "code",
233
+ "execution_count": null,
234
+ "metadata": {},
235
+ "outputs": [],
236
+ "source": [
237
+ "from IPython.display import Markdown, display\n",
238
+ "\n",
239
+ "display(Markdown(answer))\n",
240
+ "\n"
241
+ ]
242
+ },
243
+ {
244
+ "cell_type": "markdown",
245
+ "metadata": {},
246
+ "source": [
247
+ "# Congratulations!\n",
248
+ "\n",
249
+ "That was a small, simple step in the direction of Agentic AI, with your new environment!\n",
250
+ "\n",
251
+ "Next time things get more interesting..."
252
+ ]
253
+ },
254
+ {
255
+ "cell_type": "markdown",
256
+ "metadata": {},
257
+ "source": [
258
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
259
+ " <tr>\n",
260
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
261
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
262
+ " </td>\n",
263
+ " <td>\n",
264
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
265
+ " <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
266
+ " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
267
+ " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
268
+ " Finally have a third LLM call propose the Agentic AI solution.\n",
269
+ " </span>\n",
270
+ " </td>\n",
271
+ " </tr>\n",
272
+ "</table>"
273
+ ]
274
+ },
275
+ {
276
+ "cell_type": "code",
277
+ "execution_count": null,
278
+ "metadata": {},
279
+ "outputs": [],
280
+ "source": [
281
+ "# First create the messages:\n",
282
+ "\n",
283
+ "messages = [{\"role\": \"user\", \"content\": \"Something here\"}]\n",
284
+ "\n",
285
+ "# Then make the first call:\n",
286
+ "\n",
287
+ "response =\n",
288
+ "\n",
289
+ "# Then read the business idea:\n",
290
+ "\n",
291
+ "business_idea = response.\n",
292
+ "\n",
293
+ "# And repeat!"
294
+ ]
295
+ },
296
+ {
297
+ "cell_type": "markdown",
298
+ "metadata": {},
299
+ "source": []
300
+ }
301
+ ],
302
+ "metadata": {
303
+ "kernelspec": {
304
+ "display_name": ".venv",
305
+ "language": "python",
306
+ "name": "python3"
307
+ },
308
+ "language_info": {
309
+ "codemirror_mode": {
310
+ "name": "ipython",
311
+ "version": 3
312
+ },
313
+ "file_extension": ".py",
314
+ "mimetype": "text/x-python",
315
+ "name": "python",
316
+ "nbconvert_exporter": "python",
317
+ "pygments_lexer": "ipython3",
318
+ "version": "3.12.9"
319
+ }
320
+ },
321
+ "nbformat": 4,
322
+ "nbformat_minor": 2
323
+ }
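
The exercise cell above is left as a deliberately incomplete skeleton (`response =` and `business_idea = response.`). Below is a minimal sketch of one possible completion, reusing the `openai` client and `gpt-4o-mini` model from the earlier cells; the `ask` helper and the exact prompts are assumptions for illustration, not part of the committed notebook.

# Sketch only - one way to chain the three LLM calls described in the exercise
from openai import OpenAI

openai = OpenAI()

def ask(prompt):
    # Hypothetical helper: send a single user message and return the reply text
    response = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# 1. Pick a business area worth exploring for an Agentic AI opportunity
business_idea = ask("Pick one business area that might be worth exploring for an Agentic AI opportunity. Reply with the area only.")

# 2. Present a pain-point in that industry
pain_point = ask(f"Present one challenging pain-point in the {business_idea} industry that might be ripe for an Agentic solution.")

# 3. Propose the Agentic AI solution
solution = ask(f"Propose an Agentic AI solution to this pain-point: {pain_point}")
print(solution)
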
2_lab2.ipynb ADDED
@@ -0,0 +1,474 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## Welcome to the Second Lab - Week 1, Day 3\n",
8
+ "\n",
9
+ "Today we will work with lots of models! This is a way to get comfortable with APIs."
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "markdown",
14
+ "metadata": {},
15
+ "source": [
16
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
17
+ " <tr>\n",
18
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
19
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
20
+ " </td>\n",
21
+ " <td>\n",
22
+ " <h2 style=\"color:#ff7800;\">Important point - please read</h2>\n",
23
+ " <span style=\"color:#ff7800;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.<br/><br/>If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
24
+ " </span>\n",
25
+ " </td>\n",
26
+ " </tr>\n",
27
+ "</table>"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "code",
32
+ "execution_count": 1,
33
+ "metadata": {},
34
+ "outputs": [],
35
+ "source": [
36
+ "# Start with imports - ask ChatGPT to explain any package that you don't know\n",
37
+ "\n",
38
+ "import os\n",
39
+ "import json\n",
40
+ "from dotenv import load_dotenv\n",
41
+ "from openai import OpenAI\n",
42
+ "from anthropic import Anthropic\n",
43
+ "from IPython.display import Markdown, display"
44
+ ]
45
+ },
46
+ {
47
+ "cell_type": "code",
48
+ "execution_count": null,
49
+ "metadata": {},
50
+ "outputs": [],
51
+ "source": [
52
+ "# Always remember to do this!\n",
53
+ "load_dotenv(override=True)"
54
+ ]
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "execution_count": null,
59
+ "metadata": {},
60
+ "outputs": [],
61
+ "source": [
62
+ "# Print the key prefixes to help with any debugging\n",
63
+ "\n",
64
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
65
+ "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
66
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
67
+ "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
68
+ "groq_api_key = os.getenv('GROQ_API_KEY')\n",
69
+ "\n",
70
+ "if openai_api_key:\n",
71
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
72
+ "else:\n",
73
+ " print(\"OpenAI API Key not set\")\n",
74
+ " \n",
75
+ "if anthropic_api_key:\n",
76
+ " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
77
+ "else:\n",
78
+ " print(\"Anthropic API Key not set (and this is optional)\")\n",
79
+ "\n",
80
+ "if google_api_key:\n",
81
+ " print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n",
82
+ "else:\n",
83
+ " print(\"Google API Key not set (and this is optional)\")\n",
84
+ "\n",
85
+ "if deepseek_api_key:\n",
86
+ " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
87
+ "else:\n",
88
+ " print(\"DeepSeek API Key not set (and this is optional)\")\n",
89
+ "\n",
90
+ "if groq_api_key:\n",
91
+ " print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n",
92
+ "else:\n",
93
+ " print(\"Groq API Key not set (and this is optional)\")"
94
+ ]
95
+ },
96
+ {
97
+ "cell_type": "code",
98
+ "execution_count": 4,
99
+ "metadata": {},
100
+ "outputs": [],
101
+ "source": [
102
+ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. \"\n",
103
+ "request += \"Answer only with the question, no explanation.\"\n",
104
+ "messages = [{\"role\": \"user\", \"content\": request}]"
105
+ ]
106
+ },
107
+ {
108
+ "cell_type": "code",
109
+ "execution_count": null,
110
+ "metadata": {},
111
+ "outputs": [],
112
+ "source": [
113
+ "messages"
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "code",
118
+ "execution_count": null,
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "openai = OpenAI()\n",
123
+ "response = openai.chat.completions.create(\n",
124
+ " model=\"gpt-4o-mini\",\n",
125
+ " messages=messages,\n",
126
+ ")\n",
127
+ "question = response.choices[0].message.content\n",
128
+ "print(question)\n"
129
+ ]
130
+ },
131
+ {
132
+ "cell_type": "code",
133
+ "execution_count": 7,
134
+ "metadata": {},
135
+ "outputs": [],
136
+ "source": [
137
+ "competitors = []\n",
138
+ "answers = []\n",
139
+ "messages = [{\"role\": \"user\", \"content\": question}]"
140
+ ]
141
+ },
142
+ {
143
+ "cell_type": "code",
144
+ "execution_count": null,
145
+ "metadata": {},
146
+ "outputs": [],
147
+ "source": [
148
+ "# The API we know well\n",
149
+ "\n",
150
+ "model_name = \"gpt-4o-mini\"\n",
151
+ "\n",
152
+ "response = openai.chat.completions.create(model=model_name, messages=messages)\n",
153
+ "answer = response.choices[0].message.content\n",
154
+ "\n",
155
+ "display(Markdown(answer))\n",
156
+ "competitors.append(model_name)\n",
157
+ "answers.append(answer)"
158
+ ]
159
+ },
160
+ {
161
+ "cell_type": "code",
162
+ "execution_count": null,
163
+ "metadata": {},
164
+ "outputs": [],
165
+ "source": [
166
+ "# Anthropic has a slightly different API, and Max Tokens is required\n",
167
+ "\n",
168
+ "model_name = \"claude-3-7-sonnet-latest\"\n",
169
+ "\n",
170
+ "claude = Anthropic()\n",
171
+ "response = claude.messages.create(model=model_name, messages=messages, max_tokens=1000)\n",
172
+ "answer = response.content[0].text\n",
173
+ "\n",
174
+ "display(Markdown(answer))\n",
175
+ "competitors.append(model_name)\n",
176
+ "answers.append(answer)"
177
+ ]
178
+ },
179
+ {
180
+ "cell_type": "code",
181
+ "execution_count": null,
182
+ "metadata": {},
183
+ "outputs": [],
184
+ "source": [
185
+ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n",
186
+ "model_name = \"gemini-2.0-flash\"\n",
187
+ "\n",
188
+ "response = gemini.chat.completions.create(model=model_name, messages=messages)\n",
189
+ "answer = response.choices[0].message.content\n",
190
+ "\n",
191
+ "display(Markdown(answer))\n",
192
+ "competitors.append(model_name)\n",
193
+ "answers.append(answer)"
194
+ ]
195
+ },
196
+ {
197
+ "cell_type": "code",
198
+ "execution_count": null,
199
+ "metadata": {},
200
+ "outputs": [],
201
+ "source": [
202
+ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n",
203
+ "model_name = \"deepseek-chat\"\n",
204
+ "\n",
205
+ "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n",
206
+ "answer = response.choices[0].message.content\n",
207
+ "\n",
208
+ "display(Markdown(answer))\n",
209
+ "competitors.append(model_name)\n",
210
+ "answers.append(answer)"
211
+ ]
212
+ },
213
+ {
214
+ "cell_type": "code",
215
+ "execution_count": null,
216
+ "metadata": {},
217
+ "outputs": [],
218
+ "source": [
219
+ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n",
220
+ "model_name = \"llama-3.3-70b-versatile\"\n",
221
+ "\n",
222
+ "response = groq.chat.completions.create(model=model_name, messages=messages)\n",
223
+ "answer = response.choices[0].message.content\n",
224
+ "\n",
225
+ "display(Markdown(answer))\n",
226
+ "competitors.append(model_name)\n",
227
+ "answers.append(answer)\n"
228
+ ]
229
+ },
230
+ {
231
+ "cell_type": "markdown",
232
+ "metadata": {},
233
+ "source": [
234
+ "## For the next cell, we will use Ollama\n",
235
+ "\n",
236
+ "Ollama runs a local web service that gives an OpenAI compatible endpoint, \n",
237
+ "and runs models locally using high performance C++ code.\n",
238
+ "\n",
239
+ "If you don't have Ollama, install it by visiting https://ollama.com, then pressing Download and following the instructions.\n",
240
+ "\n",
241
+ "After it's installed, you should be able to visit here: http://localhost:11434 and see the message \"Ollama is running\"\n",
242
+ "\n",
243
+ "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+\\`) and run `ollama serve`\n",
244
+ "\n",
245
+ "Useful Ollama commands (run these in the terminal, or with an exclamation mark in this notebook):\n",
246
+ "\n",
247
+ "`ollama pull <model_name>` downloads a model locally \n",
248
+ "`ollama ls` lists all the models you've downloaded \n",
249
+ "`ollama rm <model_name>` deletes the specified model from your downloads"
250
+ ]
251
+ },
252
+ {
253
+ "cell_type": "markdown",
254
+ "metadata": {},
255
+ "source": [
256
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
257
+ " <tr>\n",
258
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
259
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
260
+ " </td>\n",
261
+ " <td>\n",
262
+ " <h2 style=\"color:#ff7800;\">Super important - ignore me at your peril!</h2>\n",
263
+ " <span style=\"color:#ff7800;\">The model called <b>llama3.3</b> is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized <b>llama3.2</b> or <b>llama3.2:1b</b> and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See <a href=\"https://ollama.com/models\">the Ollama models page</a> for a full list of models and sizes.\n",
264
+ " </span>\n",
265
+ " </td>\n",
266
+ " </tr>\n",
267
+ "</table>"
268
+ ]
269
+ },
270
+ {
271
+ "cell_type": "code",
272
+ "execution_count": null,
273
+ "metadata": {},
274
+ "outputs": [],
275
+ "source": [
276
+ "!ollama pull llama3.2"
277
+ ]
278
+ },
279
+ {
280
+ "cell_type": "code",
281
+ "execution_count": null,
282
+ "metadata": {},
283
+ "outputs": [],
284
+ "source": [
285
+ "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
286
+ "model_name = \"llama3.2\"\n",
287
+ "\n",
288
+ "response = ollama.chat.completions.create(model=model_name, messages=messages)\n",
289
+ "answer = response.choices[0].message.content\n",
290
+ "\n",
291
+ "display(Markdown(answer))\n",
292
+ "competitors.append(model_name)\n",
293
+ "answers.append(answer)"
294
+ ]
295
+ },
296
+ {
297
+ "cell_type": "code",
298
+ "execution_count": null,
299
+ "metadata": {},
300
+ "outputs": [],
301
+ "source": [
302
+ "# So where are we?\n",
303
+ "\n",
304
+ "print(competitors)\n",
305
+ "print(answers)\n"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": null,
311
+ "metadata": {},
312
+ "outputs": [],
313
+ "source": [
314
+ "# It's nice to know how to use \"zip\"\n",
315
+ "for competitor, answer in zip(competitors, answers):\n",
316
+ " print(f\"Competitor: {competitor}\\n\\n{answer}\")\n"
317
+ ]
318
+ },
319
+ {
320
+ "cell_type": "code",
321
+ "execution_count": 20,
322
+ "metadata": {},
323
+ "outputs": [],
324
+ "source": [
325
+ "# Let's bring this together - note the use of \"enumerate\"\n",
326
+ "\n",
327
+ "together = \"\"\n",
328
+ "for index, answer in enumerate(answers):\n",
329
+ " together += f\"# Response from competitor {index+1}\\n\\n\"\n",
330
+ " together += answer + \"\\n\\n\""
331
+ ]
332
+ },
333
+ {
334
+ "cell_type": "code",
335
+ "execution_count": null,
336
+ "metadata": {},
337
+ "outputs": [],
338
+ "source": [
339
+ "print(together)"
340
+ ]
341
+ },
342
+ {
343
+ "cell_type": "code",
344
+ "execution_count": 22,
345
+ "metadata": {},
346
+ "outputs": [],
347
+ "source": [
348
+ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n",
349
+ "Each model has been given this question:\n",
350
+ "\n",
351
+ "{question}\n",
352
+ "\n",
353
+ "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n",
354
+ "Respond with JSON, and only JSON, with the following format:\n",
355
+ "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n",
356
+ "\n",
357
+ "Here are the responses from each competitor:\n",
358
+ "\n",
359
+ "{together}\n",
360
+ "\n",
361
+ "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n"
362
+ ]
363
+ },
364
+ {
365
+ "cell_type": "code",
366
+ "execution_count": null,
367
+ "metadata": {},
368
+ "outputs": [],
369
+ "source": [
370
+ "print(judge)"
371
+ ]
372
+ },
373
+ {
374
+ "cell_type": "code",
375
+ "execution_count": 29,
376
+ "metadata": {},
377
+ "outputs": [],
378
+ "source": [
379
+ "judge_messages = [{\"role\": \"user\", \"content\": judge}]"
380
+ ]
381
+ },
382
+ {
383
+ "cell_type": "code",
384
+ "execution_count": null,
385
+ "metadata": {},
386
+ "outputs": [],
387
+ "source": [
388
+ "# Judgement time!\n",
389
+ "\n",
390
+ "openai = OpenAI()\n",
391
+ "response = openai.chat.completions.create(\n",
392
+ " model=\"o3-mini\",\n",
393
+ " messages=judge_messages,\n",
394
+ ")\n",
395
+ "results = response.choices[0].message.content\n",
396
+ "print(results)\n"
397
+ ]
398
+ },
399
+ {
400
+ "cell_type": "code",
401
+ "execution_count": null,
402
+ "metadata": {},
403
+ "outputs": [],
404
+ "source": [
405
+ "# OK let's turn this into results!\n",
406
+ "\n",
407
+ "results_dict = json.loads(results)\n",
408
+ "ranks = results_dict[\"results\"]\n",
409
+ "for index, result in enumerate(ranks):\n",
410
+ " competitor = competitors[int(result)-1]\n",
411
+ " print(f\"Rank {index+1}: {competitor}\")"
412
+ ]
413
+ },
414
+ {
415
+ "cell_type": "markdown",
416
+ "metadata": {},
417
+ "source": [
418
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
419
+ " <tr>\n",
420
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
421
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
422
+ " </td>\n",
423
+ " <td>\n",
424
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
425
+ " <span style=\"color:#ff7800;\">Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n",
426
+ " </span>\n",
427
+ " </td>\n",
428
+ " </tr>\n",
429
+ "</table>"
430
+ ]
431
+ },
432
+ {
433
+ "cell_type": "markdown",
434
+ "metadata": {},
435
+ "source": [
436
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
437
+ " <tr>\n",
438
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
439
+ " <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
440
+ " </td>\n",
441
+ " <td>\n",
442
+ " <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
443
+ " <span style=\"color:#00bfff;\">These kinds of patterns - to send a task to multiple models, and evaluate results,\n",
444
+ " are common where you need to improve the quality of your LLM response. This approach can be widely applied\n",
445
+ " to business projects where accuracy is critical.\n",
446
+ " </span>\n",
447
+ " </td>\n",
448
+ " </tr>\n",
449
+ "</table>"
450
+ ]
451
+ }
452
+ ],
453
+ "metadata": {
454
+ "kernelspec": {
455
+ "display_name": ".venv",
456
+ "language": "python",
457
+ "name": "python3"
458
+ },
459
+ "language_info": {
460
+ "codemirror_mode": {
461
+ "name": "ipython",
462
+ "version": 3
463
+ },
464
+ "file_extension": ".py",
465
+ "mimetype": "text/x-python",
466
+ "name": "python",
467
+ "nbconvert_exporter": "python",
468
+ "pygments_lexer": "ipython3",
469
+ "version": "3.12.9"
470
+ }
471
+ },
472
+ "nbformat": 4,
473
+ "nbformat_minor": 2
474
+ }
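
The notebook above calls each OpenAI-compatible provider in its own near-identical cell. A compact variant of the same pattern is sketched below: it assumes the `openai`, `gemini`, `deepseek`, `groq` and `ollama` clients and the `messages` list defined earlier in the notebook, and leaves Anthropic out of the loop because its API differs.

# Sketch only - the "ask every model" pattern from the cells above, written as one loop
clients = [
    (openai, "gpt-4o-mini"),
    (gemini, "gemini-2.0-flash"),
    (deepseek, "deepseek-chat"),
    (groq, "llama-3.3-70b-versatile"),
    (ollama, "llama3.2"),
]

competitors, answers = [], []
for client, model_name in clients:
    # Same question to every model, collecting the answers for the judge step
    response = client.chat.completions.create(model=model_name, messages=messages)
    answer = response.choices[0].message.content
    competitors.append(model_name)
    answers.append(answer)
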
3_lab3.ipynb ADDED
@@ -0,0 +1,351 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## Welcome to Lab 3 for Week 1 Day 4\n",
8
+ "\n",
9
+ "Today we're going to build something with immediate value!\n",
10
+ "\n",
11
+ "In the folder `me` I've put a single file `linkedin.pdf` - it's a PDF download of my LinkedIn profile.\n",
12
+ "\n",
13
+ "Please replace it with yours!\n",
14
+ "\n",
15
+ "I've also made a file called `summary.txt`\n",
16
+ "\n",
17
+ "We're not going to use Tools just yet - we're going to add the tool tomorrow."
18
+ ]
19
+ },
20
+ {
21
+ "cell_type": "markdown",
22
+ "metadata": {},
23
+ "source": [
24
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
25
+ " <tr>\n",
26
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
27
+ " <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
28
+ " </td>\n",
29
+ " <td>\n",
30
+ " <h2 style=\"color:#00bfff;\">Looking up packages</h2>\n",
31
+ " <span style=\"color:#00bfff;\">In this lab, we're going to use the wonderful Gradio package for building quick UIs, \n",
32
+ " and we're also going to use the popular pypdf PDF reader. You can get guides to these packages by asking \n",
33
+ " ChatGPT or Claude, and you can find all open-source packages on the repository <a href=\"https://pypi.org\">https://pypi.org</a>.\n",
34
+ " </span>\n",
35
+ " </td>\n",
36
+ " </tr>\n",
37
+ "</table>"
38
+ ]
39
+ },
40
+ {
41
+ "cell_type": "code",
42
+ "execution_count": null,
43
+ "metadata": {},
44
+ "outputs": [],
45
+ "source": [
46
+ "# If you don't know what any of these packages do - you can always ask ChatGPT for a guide!\n",
47
+ "\n",
48
+ "from dotenv import load_dotenv\n",
49
+ "from openai import OpenAI\n",
50
+ "from pypdf import PdfReader\n",
51
+ "import gradio as gr"
52
+ ]
53
+ },
54
+ {
55
+ "cell_type": "code",
56
+ "execution_count": 3,
57
+ "metadata": {},
58
+ "outputs": [],
59
+ "source": [
60
+ "load_dotenv(override=True)\n",
61
+ "openai = OpenAI()"
62
+ ]
63
+ },
64
+ {
65
+ "cell_type": "code",
66
+ "execution_count": 4,
67
+ "metadata": {},
68
+ "outputs": [],
69
+ "source": [
70
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
71
+ "linkedin = \"\"\n",
72
+ "for page in reader.pages:\n",
73
+ " text = page.extract_text()\n",
74
+ " if text:\n",
75
+ " linkedin += text"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "code",
80
+ "execution_count": null,
81
+ "metadata": {},
82
+ "outputs": [],
83
+ "source": [
84
+ "print(linkedin)"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "code",
89
+ "execution_count": 5,
90
+ "metadata": {},
91
+ "outputs": [],
92
+ "source": [
93
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
94
+ " summary = f.read()"
95
+ ]
96
+ },
97
+ {
98
+ "cell_type": "code",
99
+ "execution_count": 6,
100
+ "metadata": {},
101
+ "outputs": [],
102
+ "source": [
103
+ "name = \"Ed Donner\""
104
+ ]
105
+ },
106
+ {
107
+ "cell_type": "code",
108
+ "execution_count": 7,
109
+ "metadata": {},
110
+ "outputs": [],
111
+ "source": [
112
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
113
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
114
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
115
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
116
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
117
+ "If you don't know the answer, say so.\"\n",
118
+ "\n",
119
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
120
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
121
+ ]
122
+ },
123
+ {
124
+ "cell_type": "code",
125
+ "execution_count": null,
126
+ "metadata": {},
127
+ "outputs": [],
128
+ "source": [
129
+ "system_prompt"
130
+ ]
131
+ },
132
+ {
133
+ "cell_type": "code",
134
+ "execution_count": 9,
135
+ "metadata": {},
136
+ "outputs": [],
137
+ "source": [
138
+ "def chat(message, history):\n",
139
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
140
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
141
+ " return response.choices[0].message.content"
142
+ ]
143
+ },
144
+ {
145
+ "cell_type": "code",
146
+ "execution_count": null,
147
+ "metadata": {},
148
+ "outputs": [],
149
+ "source": [
150
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
151
+ ]
152
+ },
153
+ {
154
+ "cell_type": "markdown",
155
+ "metadata": {},
156
+ "source": [
157
+ "## A lot is about to happen...\n",
158
+ "\n",
159
+ "1. Be able to ask an LLM to evaluate an answer\n",
160
+ "2. Be able to rerun if the answer fails evaluation\n",
161
+ "3. Put this together into 1 workflow\n",
162
+ "\n",
163
+ "All without any Agentic framework!"
164
+ ]
165
+ },
166
+ {
167
+ "cell_type": "code",
168
+ "execution_count": 11,
169
+ "metadata": {},
170
+ "outputs": [],
171
+ "source": [
172
+ "# Create a Pydantic model for the Evaluation\n",
173
+ "\n",
174
+ "from pydantic import BaseModel\n",
175
+ "\n",
176
+ "class Evaluation(BaseModel):\n",
177
+ " is_acceptable: bool\n",
178
+ " feedback: str\n"
179
+ ]
180
+ },
181
+ {
182
+ "cell_type": "code",
183
+ "execution_count": 23,
184
+ "metadata": {},
185
+ "outputs": [],
186
+ "source": [
187
+ "evaluator_system_prompt = f\"You are an evaluator that decides whether a response to a question is acceptable. \\\n",
188
+ "You are provided with a conversation between a User and an Agent. Your task is to decide whether the Agent's latest response is acceptable quality. \\\n",
189
+ "The Agent is playing the role of {name} and is representing {name} on their website. \\\n",
190
+ "The Agent has been instructed to be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
191
+ "The Agent has been provided with context on {name} in the form of their summary and LinkedIn details. Here's the information:\"\n",
192
+ "\n",
193
+ "evaluator_system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
194
+ "evaluator_system_prompt += f\"With this context, please evaluate the latest response, replying with whether the response is acceptable and your feedback.\""
195
+ ]
196
+ },
197
+ {
198
+ "cell_type": "code",
199
+ "execution_count": 24,
200
+ "metadata": {},
201
+ "outputs": [],
202
+ "source": [
203
+ "def evaluator_user_prompt(reply, message, history):\n",
204
+ " user_prompt = f\"Here's the conversation between the User and the Agent: \\n\\n{history}\\n\\n\"\n",
205
+ " user_prompt += f\"Here's the latest message from the User: \\n\\n{message}\\n\\n\"\n",
206
+ " user_prompt += f\"Here's the latest response from the Agent: \\n\\n{reply}\\n\\n\"\n",
207
+ " user_prompt += f\"Please evaluate the response, replying with whether it is acceptable and your feedback.\"\n",
208
+ " return user_prompt"
209
+ ]
210
+ },
211
+ {
212
+ "cell_type": "code",
213
+ "execution_count": 25,
214
+ "metadata": {},
215
+ "outputs": [],
216
+ "source": [
217
+ "import os\n",
218
+ "gemini = OpenAI(\n",
219
+ " api_key=os.getenv(\"GOOGLE_API_KEY\"), \n",
220
+ " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
221
+ ")"
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "code",
226
+ "execution_count": 26,
227
+ "metadata": {},
228
+ "outputs": [],
229
+ "source": [
230
+ "def evaluate(reply, message, history) -> Evaluation:\n",
231
+ "\n",
232
+ " messages = [{\"role\": \"system\", \"content\": evaluator_system_prompt}] + [{\"role\": \"user\", \"content\": evaluator_user_prompt(reply, message, history)}]\n",
233
+ " response = gemini.beta.chat.completions.parse(model=\"gemini-2.0-flash\", messages=messages, response_format=Evaluation)\n",
234
+ " return response.choices[0].message.parsed"
235
+ ]
236
+ },
237
+ {
238
+ "cell_type": "code",
239
+ "execution_count": 27,
240
+ "metadata": {},
241
+ "outputs": [],
242
+ "source": [
243
+ "messages = [{\"role\": \"system\", \"content\": system_prompt}] + [{\"role\": \"user\", \"content\": \"do you hold a patent?\"}]\n",
244
+ "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
245
+ "reply = response.choices[0].message.content"
246
+ ]
247
+ },
248
+ {
249
+ "cell_type": "code",
250
+ "execution_count": null,
251
+ "metadata": {},
252
+ "outputs": [],
253
+ "source": [
254
+ "reply"
255
+ ]
256
+ },
257
+ {
258
+ "cell_type": "code",
259
+ "execution_count": null,
260
+ "metadata": {},
261
+ "outputs": [],
262
+ "source": [
263
+ "evaluate(reply, \"do you hold a patent?\", messages[:1])"
264
+ ]
265
+ },
266
+ {
267
+ "cell_type": "code",
268
+ "execution_count": 30,
269
+ "metadata": {},
270
+ "outputs": [],
271
+ "source": [
272
+ "def rerun(reply, message, history, feedback):\n",
273
+ " updated_system_prompt = system_prompt + f\"\\n\\n## Previous answer rejected\\nYou just tried to reply, but the quality control rejected your reply\\n\"\n",
274
+ " updated_system_prompt += f\"## Your attempted answer:\\n{reply}\\n\\n\"\n",
275
+ " updated_system_prompt += f\"## Reason for rejection:\\n{feedback}\\n\\n\"\n",
276
+ " messages = [{\"role\": \"system\", \"content\": updated_system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
277
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
278
+ " return response.choices[0].message.content"
279
+ ]
280
+ },
281
+ {
282
+ "cell_type": "code",
283
+ "execution_count": 35,
284
+ "metadata": {},
285
+ "outputs": [],
286
+ "source": [
287
+ "def chat(message, history):\n",
288
+ " if \"patent\" in message:\n",
289
+ " system = system_prompt + \"\\n\\nEverything in your reply needs to be in pig latin - \\\n",
290
+ " it is mandatory that you respond only and entirely in pig latin\"\n",
291
+ " else:\n",
292
+ " system = system_prompt\n",
293
+ " messages = [{\"role\": \"system\", \"content\": system}] + history + [{\"role\": \"user\", \"content\": message}]\n",
294
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
295
+ " reply = response.choices[0].message.content\n",
296
+ "\n",
297
+ " evaluation = evaluate(reply, message, history)\n",
298
+ " \n",
299
+ " if evaluation.is_acceptable:\n",
300
+ " print(\"Passed evaluation - returning reply\")\n",
301
+ " else:\n",
302
+ " print(\"Failed evaluation - retrying\")\n",
303
+ " print(evaluation.feedback)\n",
304
+ " reply = rerun(reply, message, history, evaluation.feedback) \n",
305
+ " return reply"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": null,
311
+ "metadata": {},
312
+ "outputs": [],
313
+ "source": [
314
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
315
+ ]
316
+ },
317
+ {
318
+ "cell_type": "markdown",
319
+ "metadata": {},
320
+ "source": []
321
+ },
322
+ {
323
+ "cell_type": "code",
324
+ "execution_count": null,
325
+ "metadata": {},
326
+ "outputs": [],
327
+ "source": []
328
+ }
329
+ ],
330
+ "metadata": {
331
+ "kernelspec": {
332
+ "display_name": ".venv",
333
+ "language": "python",
334
+ "name": "python3"
335
+ },
336
+ "language_info": {
337
+ "codemirror_mode": {
338
+ "name": "ipython",
339
+ "version": 3
340
+ },
341
+ "file_extension": ".py",
342
+ "mimetype": "text/x-python",
343
+ "name": "python",
344
+ "nbconvert_exporter": "python",
345
+ "pygments_lexer": "ipython3",
346
+ "version": "3.12.9"
347
+ }
348
+ },
349
+ "nbformat": 4,
350
+ "nbformat_minor": 2
351
+ }
4_lab4.ipynb ADDED
@@ -0,0 +1,422 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## The first big project - Professionally You!\n",
8
+ "\n",
9
+ "### And, Tool use.\n",
10
+ "\n",
11
+ "### But first: introducing Pushover\n",
12
+ "\n",
13
+ "Pushover is a nifty tool for sending Push Notifications to your phone.\n",
14
+ "\n",
15
+ "It's super easy to set up and install!\n",
16
+ "\n",
17
+ "Simply visit https://pushover.net/ and sign up for a free account, and create your API keys.\n",
18
+ "\n",
19
+ "As student Ron pointed out (thank you Ron!) there are actually 2 tokens to create in Pushover: \n",
20
+ "1. The User token which you get from the home page of Pushover\n",
21
+ "2. The Application token which you get by going to https://pushover.net/apps/build and creating an app \n",
22
+ "\n",
23
+ "(This is so you could choose to organize your push notifications into different apps in the future.)\n",
24
+ "\n",
25
+ "\n",
26
+ "Add to your `.env` file:\n",
27
+ "```\n",
28
+ "PUSHOVER_USER=put_your_user_token_here\n",
29
+ "PUSHOVER_TOKEN=put_the_application_level_token_here\n",
30
+ "```\n",
31
+ "\n",
32
+ "And install the Pushover app on your phone."
33
+ ]
34
+ },
35
+ {
36
+ "cell_type": "code",
37
+ "execution_count": 1,
38
+ "metadata": {},
39
+ "outputs": [],
40
+ "source": [
41
+ "# imports\n",
42
+ "\n",
43
+ "from dotenv import load_dotenv\n",
44
+ "from openai import OpenAI\n",
45
+ "import json\n",
46
+ "import os\n",
47
+ "import requests\n",
48
+ "from pypdf import PdfReader\n",
49
+ "import gradio as gr"
50
+ ]
51
+ },
52
+ {
53
+ "cell_type": "code",
54
+ "execution_count": 2,
55
+ "metadata": {},
56
+ "outputs": [],
57
+ "source": [
58
+ "# The usual start\n",
59
+ "\n",
60
+ "load_dotenv(override=True)\n",
61
+ "openai = OpenAI()"
62
+ ]
63
+ },
64
+ {
65
+ "cell_type": "code",
66
+ "execution_count": 3,
67
+ "metadata": {},
68
+ "outputs": [],
69
+ "source": [
70
+ "# For pushover\n",
71
+ "\n",
72
+ "pushover_user = os.getenv(\"PUSHOVER_USER\")\n",
73
+ "pushover_token = os.getenv(\"PUSHOVER_TOKEN\")\n",
74
+ "pushover_url = \"https://api.pushover.net/1/messages.json\""
75
+ ]
76
+ },
77
+ {
78
+ "cell_type": "code",
79
+ "execution_count": 4,
80
+ "metadata": {},
81
+ "outputs": [],
82
+ "source": [
83
+ "def push(message):\n",
84
+ " print(f\"Push: {message}\")\n",
85
+ " payload = {\"user\": pushover_user, \"token\": pushover_token, \"message\": message}\n",
86
+ " requests.post(pushover_url, data=payload)"
87
+ ]
88
+ },
89
+ {
90
+ "cell_type": "code",
91
+ "execution_count": null,
92
+ "metadata": {},
93
+ "outputs": [],
94
+ "source": [
95
+ "push(\"HEY!!\")"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": 9,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "def record_user_details(email, name=\"Name not provided\", notes=\"not provided\"):\n",
105
+ " push(f\"Recording interest from {name} with email {email} and notes {notes}\")\n",
106
+ " return {\"recorded\": \"ok\"}"
107
+ ]
108
+ },
109
+ {
110
+ "cell_type": "code",
111
+ "execution_count": 4,
112
+ "metadata": {},
113
+ "outputs": [],
114
+ "source": [
115
+ "def record_unknown_question(question):\n",
116
+ " push(f\"Recording {question} asked that I couldn't answer\")\n",
117
+ " return {\"recorded\": \"ok\"}"
118
+ ]
119
+ },
120
+ {
121
+ "cell_type": "code",
122
+ "execution_count": 5,
123
+ "metadata": {},
124
+ "outputs": [],
125
+ "source": [
126
+ "record_user_details_json = {\n",
127
+ " \"name\": \"record_user_details\",\n",
128
+ " \"description\": \"Use this tool to record that a user is interested in being in touch and provided an email address\",\n",
129
+ " \"parameters\": {\n",
130
+ " \"type\": \"object\",\n",
131
+ " \"properties\": {\n",
132
+ " \"email\": {\n",
133
+ " \"type\": \"string\",\n",
134
+ " \"description\": \"The email address of this user\"\n",
135
+ " },\n",
136
+ " \"name\": {\n",
137
+ " \"type\": \"string\",\n",
138
+ " \"description\": \"The user's name, if they provided it\"\n",
139
+ " }\n",
140
+ " ,\n",
141
+ " \"notes\": {\n",
142
+ " \"type\": \"string\",\n",
143
+ " \"description\": \"Any additional information about the conversation that's worth recording to give context\"\n",
144
+ " }\n",
145
+ " },\n",
146
+ " \"required\": [\"email\"],\n",
147
+ " \"additionalProperties\": False\n",
148
+ " }\n",
149
+ "}"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "code",
154
+ "execution_count": 6,
155
+ "metadata": {},
156
+ "outputs": [],
157
+ "source": [
158
+ "record_unknown_question_json = {\n",
159
+ " \"name\": \"record_unknown_question\",\n",
160
+ " \"description\": \"Always use this tool to record any question that couldn't be answered as you didn't know the answer\",\n",
161
+ " \"parameters\": {\n",
162
+ " \"type\": \"object\",\n",
163
+ " \"properties\": {\n",
164
+ " \"question\": {\n",
165
+ " \"type\": \"string\",\n",
166
+ " \"description\": \"The question that couldn't be answered\"\n",
167
+ " },\n",
168
+ " },\n",
169
+ " \"required\": [\"question\"],\n",
170
+ " \"additionalProperties\": False\n",
171
+ " }\n",
172
+ "}"
173
+ ]
174
+ },
175
+ {
176
+ "cell_type": "code",
177
+ "execution_count": 7,
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "tools = [{\"type\": \"function\", \"function\": record_user_details_json},\n",
182
+ " {\"type\": \"function\", \"function\": record_unknown_question_json}]"
183
+ ]
184
+ },
185
+ {
186
+ "cell_type": "code",
187
+ "execution_count": null,
188
+ "metadata": {},
189
+ "outputs": [],
190
+ "source": [
191
+ "tools"
192
+ ]
193
+ },
194
+ {
195
+ "cell_type": "code",
196
+ "execution_count": 16,
197
+ "metadata": {},
198
+ "outputs": [],
199
+ "source": [
200
+ "# This function can take a list of tool calls, and run them. This is the IF statement!!\n",
201
+ "\n",
202
+ "def handle_tool_calls(tool_calls):\n",
203
+ " results = []\n",
204
+ " for tool_call in tool_calls:\n",
205
+ " tool_name = tool_call.function.name\n",
206
+ " arguments = json.loads(tool_call.function.arguments)\n",
207
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
208
+ "\n",
209
+ " # THE BIG IF STATEMENT!!!\n",
210
+ "\n",
211
+ " if tool_name == \"record_user_details\":\n",
212
+ " result = record_user_details(**arguments)\n",
213
+ " elif tool_name == \"record_unknown_question\":\n",
214
+ " result = record_unknown_question(**arguments)\n",
215
+ "\n",
216
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
217
+ " return results"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "code",
222
+ "execution_count": null,
223
+ "metadata": {},
224
+ "outputs": [],
225
+ "source": [
226
+ "globals()[\"record_unknown_question\"](\"this is a really hard question\")"
227
+ ]
228
+ },
229
+ {
230
+ "cell_type": "code",
231
+ "execution_count": 25,
232
+ "metadata": {},
233
+ "outputs": [],
234
+ "source": [
235
+ "# This is a more elegant way that avoids the IF statement.\n",
236
+ "\n",
237
+ "def handle_tool_calls(tool_calls):\n",
238
+ " results = []\n",
239
+ " for tool_call in tool_calls:\n",
240
+ " tool_name = tool_call.function.name\n",
241
+ " arguments = json.loads(tool_call.function.arguments)\n",
242
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
243
+ " tool = globals().get(tool_name)\n",
244
+ " result = tool(**arguments) if tool else {}\n",
245
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
246
+ " return results"
247
+ ]
248
+ },
249
+ {
250
+ "cell_type": "code",
251
+ "execution_count": 4,
252
+ "metadata": {},
253
+ "outputs": [],
254
+ "source": [
255
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
256
+ "linkedin = \"\"\n",
257
+ "for page in reader.pages:\n",
258
+ " text = page.extract_text()\n",
259
+ " if text:\n",
260
+ " linkedin += text\n",
261
+ "\n",
262
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
263
+ " summary = f.read()\n",
264
+ "\n",
265
+ "name = \"Ed Donner\""
266
+ ]
267
+ },
268
+ {
269
+ "cell_type": "code",
270
+ "execution_count": 22,
271
+ "metadata": {},
272
+ "outputs": [],
273
+ "source": [
274
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
275
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
276
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
277
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
278
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
279
+ "If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \\\n",
280
+ "If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. \"\n",
281
+ "\n",
282
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
283
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
284
+ ]
285
+ },
286
+ {
287
+ "cell_type": "code",
288
+ "execution_count": 28,
289
+ "metadata": {},
290
+ "outputs": [],
291
+ "source": [
292
+ "def chat(message, history):\n",
293
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
294
+ " done = False\n",
295
+ " while not done:\n",
296
+ "\n",
297
+ " # This is the call to the LLM - see that we pass in the tools json\n",
298
+ "\n",
299
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages, tools=tools)\n",
300
+ "\n",
301
+ " finish_reason = response.choices[0].finish_reason\n",
302
+ " \n",
303
+ " # If the LLM wants to call a tool, we do that!\n",
304
+ " \n",
305
+ " if finish_reason==\"tool_calls\":\n",
306
+ " message = response.choices[0].message\n",
307
+ " tool_calls = message.tool_calls\n",
308
+ " results = handle_tool_calls(tool_calls)\n",
309
+ " messages.append(message)\n",
310
+ " messages.extend(results)\n",
311
+ " else:\n",
312
+ " done = True\n",
313
+ " return response.choices[0].message.content"
314
+ ]
315
+ },
316
+ {
317
+ "cell_type": "code",
318
+ "execution_count": null,
319
+ "metadata": {},
320
+ "outputs": [],
321
+ "source": [
322
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
323
+ ]
324
+ },
325
+ {
326
+ "cell_type": "markdown",
327
+ "metadata": {},
328
+ "source": [
329
+ "## And now for deployment\n",
330
+ "\n",
331
+ "This code is in `app.py`\n",
332
+ "\n",
333
+ "We will deploy to HuggingFace Spaces. Thank you student Robert M for improving these instructions.\n",
334
+ "\n",
335
+ "Before you start: remember to update the files in the \"me\" directory - your LinkedIn profile and summary.txt - so that it talks about you!\n",
336
+ "\n",
337
+ "1. Visit https://huggingface.co and set up an account \n",
338
+ "2. From the Avatar menu on the top right, choose Access Tokens. Choose \"Create New Token\". Give it WRITE permissions.\n",
339
+ "3. Take this token and add it to your .env file: `HF_TOKEN=hf_xxx`\n",
340
+ "4. From the 1_foundations folder, enter: `gradio deploy` \n",
341
+ "5. Follow the instructions: name it \"career_conversation\", specify app.py, choose cpu-basic as the hardware, say Yes to needing to supply secrets, provide your openai api key, your pushover user and token, and say \"no\" to github actions.\n",
342
+ "\n",
343
+ "And you're deployed!\n",
344
+ "\n",
345
+ "Here is mine: https://huggingface.co/spaces/ed-donner/Career_Conversation\n",
346
+ "\n",
347
+ "I just got a push notification that a student asked me how they can become President of their country 😂😂\n",
348
+ "\n",
349
+ "For more information on deployment:\n",
350
+ "\n",
351
+ "https://www.gradio.app/guides/sharing-your-app#hosting-on-hf-spaces\n",
352
+ "\n",
353
+ "To delete your Space in the future: \n",
354
+ "1. Log in to HuggingFace\n",
355
+ "2. From the Avatar menu, select your profile\n",
356
+ "3. Click on the Space itself\n",
357
+ "4. Click the settings wheel on the top right\n",
358
+ "5. Scroll to the Delete section at the bottom\n"
359
+ ]
360
+ },
361
+ {
362
+ "cell_type": "markdown",
363
+ "metadata": {},
364
+ "source": [
365
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
366
+ " <tr>\n",
367
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
368
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
369
+ " </td>\n",
370
+ " <td>\n",
371
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
372
+ " <span style=\"color:#ff7800;\">• First and foremost, deploy this for yourself! It's a real, valuable tool - the future resume.<br/>\n",
373
+ " • Next, improve the resources - add better context about yourself. If you know RAG, then add a knowledge base about you.<br/>\n",
374
+ " • Add in more tools! You could have a SQL database with common Q&A that the LLM could read from and write to.<br/>\n",
375
+ " • Bring in the Evaluator from the last lab, and add other Agentic patterns.\n",
376
+ " </span>\n",
377
+ " </td>\n",
378
+ " </tr>\n",
379
+ "</table>"
380
+ ]
381
+ },
382
+ {
383
+ "cell_type": "markdown",
384
+ "metadata": {},
385
+ "source": [
386
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
387
+ " <tr>\n",
388
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
389
+ " <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
390
+ " </td>\n",
391
+ " <td>\n",
392
+ " <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
393
+ " <span style=\"color:#00bfff;\">Aside from the obvious (your career alter-ego) this has business applications in any situation where you need an AI assistant with domain expertise and an ability to interact with the real world.\n",
394
+ " </span>\n",
395
+ " </td>\n",
396
+ " </tr>\n",
397
+ "</table>"
398
+ ]
399
+ }
400
+ ],
401
+ "metadata": {
402
+ "kernelspec": {
403
+ "display_name": ".venv",
404
+ "language": "python",
405
+ "name": "python3"
406
+ },
407
+ "language_info": {
408
+ "codemirror_mode": {
409
+ "name": "ipython",
410
+ "version": 3
411
+ },
412
+ "file_extension": ".py",
413
+ "mimetype": "text/x-python",
414
+ "name": "python",
415
+ "nbconvert_exporter": "python",
416
+ "pygments_lexer": "ipython3",
417
+ "version": "3.12.9"
418
+ }
419
+ },
420
+ "nbformat": 4,
421
+ "nbformat_minor": 2
422
+ }
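
The exercise above suggests adding more tools. A sketch of how one extra tool could slot into the same pattern is shown below; the `record_feedback` tool is hypothetical and not part of the committed code, but it follows the same JSON-schema shape as `record_user_details`, and `handle_tool_calls` would pick it up automatically via `globals()`.

# Sketch only - a hypothetical extra tool following the pattern above
def record_feedback(comment):
    push(f"Feedback received: {comment}")
    return {"recorded": "ok"}

record_feedback_json = {
    "name": "record_feedback",
    "description": "Use this tool to record any feedback the user gives about the website",
    "parameters": {
        "type": "object",
        "properties": {
            "comment": {"type": "string", "description": "The feedback the user provided"}
        },
        "required": ["comment"],
        "additionalProperties": False
    }
}

# Register it alongside the existing tools
tools.append({"type": "function", "function": record_feedback_json})
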
README.md CHANGED
@@ -1,12 +1,6 @@
  ---
- title: Career Conversations
- emoji: 📉
- colorFrom: indigo
- colorTo: purple
+ title: career_conversations
+ app_file: app.py
  sdk: gradio
  sdk_version: 5.29.0
- app_file: app.py
- pinned: false
  ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
app.py ADDED
@@ -0,0 +1,135 @@
1
+ from dotenv import load_dotenv
2
+ from openai import OpenAI
3
+ import json
4
+ import os
5
+ import requests
6
+ from pypdf import PdfReader
7
+ import gradio as gr
8
+
9
+
10
+ load_dotenv(override=True)
11
+
12
+
13
+ def push(text):
14
+ requests.post(
15
+ "https://api.pushover.net/1/messages.json",
16
+ data={
17
+ "token": os.getenv("PUSHOVER_TOKEN"),
18
+ "user": os.getenv("PUSHOVER_USER"),
19
+ "message": text,
20
+ }
21
+ )
22
+
23
+
24
+ def record_user_details(email, name="Name not provided", notes="not provided"):
25
+ push(f"Recording {name} with email {email} and notes {notes}")
26
+ return {"recorded": "ok"}
27
+
28
+ def record_unknown_question(question):
29
+ push(f"Recording {question}")
30
+ return {"recorded": "ok"}
31
+
32
+ record_user_details_json = {
33
+ "name": "record_user_details",
34
+ "description": "Use this tool to record that a user is interested in being in touch and provided an email address",
35
+ "parameters": {
36
+ "type": "object",
37
+ "properties": {
38
+ "email": {
39
+ "type": "string",
40
+ "description": "The email address of this user"
41
+ },
42
+ "name": {
43
+ "type": "string",
44
+ "description": "The user's name, if they provided it"
45
+ }
46
+ ,
47
+ "notes": {
48
+ "type": "string",
49
+ "description": "Any additional information about the conversation that's worth recording to give context"
50
+ }
51
+ },
52
+ "required": ["email"],
53
+ "additionalProperties": False
54
+ }
55
+ }
56
+
57
+ record_unknown_question_json = {
58
+ "name": "record_unknown_question",
59
+ "description": "Always use this tool to record any question that couldn't be answered as you didn't know the answer",
60
+ "parameters": {
61
+ "type": "object",
62
+ "properties": {
63
+ "question": {
64
+ "type": "string",
65
+ "description": "The question that couldn't be answered"
66
+ },
67
+ },
68
+ "required": ["question"],
69
+ "additionalProperties": False
70
+ }
71
+ }
72
+
73
+ tools = [{"type": "function", "function": record_user_details_json},
74
+ {"type": "function", "function": record_unknown_question_json}]
75
+
76
+
77
+ class Me:
78
+
79
+ def __init__(self):
80
+ self.openai = OpenAI()
81
+ self.name = "Santosh Kumar"
82
+ reader = PdfReader("me/linkedin_santosh.pdf")
83
+ self.linkedin = ""
84
+ for page in reader.pages:
85
+ text = page.extract_text()
86
+ if text:
87
+ self.linkedin += text
88
+ with open("me/summary_santosh.txt", "r", encoding="utf-8") as f:
89
+ self.summary = f.read()
90
+
91
+
92
+ def handle_tool_call(self, tool_calls):
93
+ results = []
94
+ for tool_call in tool_calls:
95
+ tool_name = tool_call.function.name
96
+ arguments = json.loads(tool_call.function.arguments)
97
+ print(f"Tool called: {tool_name}", flush=True)
98
+ tool = globals().get(tool_name)
99
+ result = tool(**arguments) if tool else {}
100
+ results.append({"role": "tool","content": json.dumps(result),"tool_call_id": tool_call.id})
101
+ return results
102
+
103
+ def system_prompt(self):
104
+ system_prompt = f"You are acting as {self.name}. You are answering questions on {self.name}'s website, \
105
+ particularly questions related to {self.name}'s career, background, skills and experience. \
106
+ Your responsibility is to represent {self.name} for interactions on the website as faithfully as possible. \
107
+ You are given a summary of {self.name}'s background and LinkedIn profile which you can use to answer questions. \
108
+ Be professional and engaging, as if talking to a potential client or future employer who came across the website. \
109
+ If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \
110
+ If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. "
111
+
112
+ system_prompt += f"\n\n## Summary:\n{self.summary}\n\n## LinkedIn Profile:\n{self.linkedin}\n\n"
113
+ system_prompt += f"With this context, please chat with the user, always staying in character as {self.name}."
114
+ return system_prompt
115
+
116
+ def chat(self, message, history):
117
+ messages = [{"role": "system", "content": self.system_prompt()}] + history + [{"role": "user", "content": message}]
118
+ done = False
119
+ while not done:
120
+ response = self.openai.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
121
+ if response.choices[0].finish_reason=="tool_calls":
122
+ message = response.choices[0].message
123
+ tool_calls = message.tool_calls
124
+ results = self.handle_tool_call(tool_calls)
125
+ messages.append(message)
126
+ messages.extend(results)
127
+ else:
128
+ done = True
129
+ return response.choices[0].message.content
130
+
131
+
132
+ if __name__ == "__main__":
133
+ me = Me()
134
+ gr.ChatInterface(me.chat, type="messages").launch()
135
+
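app.py reads `OPENAI_API_KEY` through the OpenAI client, `PUSHOVER_USER` and `PUSHOVER_TOKEN` via `os.getenv`, and loads `me/linkedin_santosh.pdf` and `me/summary_santosh.txt`. A quick way to sanity-check it locally before running `gradio deploy` is a small smoke test; this is a sketch that assumes those files and `.env` entries exist.

```python
# Minimal local smoke test for app.py (a sketch - not part of the deployed Space).
# Assumes .env contains OPENAI_API_KEY, PUSHOVER_USER and PUSHOVER_TOKEN,
# and that me/linkedin_santosh.pdf and me/summary_santosh.txt are present.
from app import Me

me = Me()
reply = me.chat("What kind of work are you doing at the moment?", history=[])
print(reply)
```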
career_conversations/.gitattributes ADDED
@@ -0,0 +1,35 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.bin filter=lfs diff=lfs merge=lfs -text
4
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
5
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
6
+ *.ftz filter=lfs diff=lfs merge=lfs -text
7
+ *.gz filter=lfs diff=lfs merge=lfs -text
8
+ *.h5 filter=lfs diff=lfs merge=lfs -text
9
+ *.joblib filter=lfs diff=lfs merge=lfs -text
10
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
11
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
12
+ *.model filter=lfs diff=lfs merge=lfs -text
13
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
14
+ *.npy filter=lfs diff=lfs merge=lfs -text
15
+ *.npz filter=lfs diff=lfs merge=lfs -text
16
+ *.onnx filter=lfs diff=lfs merge=lfs -text
17
+ *.ot filter=lfs diff=lfs merge=lfs -text
18
+ *.parquet filter=lfs diff=lfs merge=lfs -text
19
+ *.pb filter=lfs diff=lfs merge=lfs -text
20
+ *.pickle filter=lfs diff=lfs merge=lfs -text
21
+ *.pkl filter=lfs diff=lfs merge=lfs -text
22
+ *.pt filter=lfs diff=lfs merge=lfs -text
23
+ *.pth filter=lfs diff=lfs merge=lfs -text
24
+ *.rar filter=lfs diff=lfs merge=lfs -text
25
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
26
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
27
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
28
+ *.tar filter=lfs diff=lfs merge=lfs -text
29
+ *.tflite filter=lfs diff=lfs merge=lfs -text
30
+ *.tgz filter=lfs diff=lfs merge=lfs -text
31
+ *.wasm filter=lfs diff=lfs merge=lfs -text
32
+ *.xz filter=lfs diff=lfs merge=lfs -text
33
+ *.zip filter=lfs diff=lfs merge=lfs -text
34
+ *.zst filter=lfs diff=lfs merge=lfs -text
35
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
career_conversations/0_googleapi.ipynb ADDED
File without changes
career_conversations/1_lab1.ipynb ADDED
@@ -0,0 +1,323 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Welcome to the start of your adventure in Agentic AI"
8
+ ]
9
+ },
10
+ {
11
+ "cell_type": "markdown",
12
+ "metadata": {},
13
+ "source": [
14
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
15
+ " <tr>\n",
16
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
17
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
18
+ " </td>\n",
19
+ " <td>\n",
20
+ " <h2 style=\"color:#ff7800;\">Are you ready for action??</h2>\n",
21
+ " <span style=\"color:#ff7800;\">Have you completed all the setup steps in the <a href=\"../setup/\">setup</a> folder?<br/>\n",
22
+ " Have you checked out the guides in the <a href=\"../guides/01_intro.ipynb\">guides</a> folder?<br/>\n",
23
+ " Well in that case, you're ready!!\n",
24
+ " </span>\n",
25
+ " </td>\n",
26
+ " </tr>\n",
27
+ "</table>"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "markdown",
32
+ "metadata": {},
33
+ "source": [
34
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
35
+ " <tr>\n",
36
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
37
+ " <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
38
+ " </td>\n",
39
+ " <td>\n",
40
+ " <h2 style=\"color:#00bfff;\">Treat these labs as a resource</h2>\n",
41
+ " <span style=\"color:#00bfff;\">I push updates to the code regularly. When people ask questions or have problems, I incorporate it in the code, adding more examples or improved commentary. As a result, you'll notice that the code below isn't identical to the videos. Everything from the videos is here; but in addition, I've added more steps and better explanations. Consider this like an interactive book that accompanies the lectures.\n",
42
+ " </span>\n",
43
+ " </td>\n",
44
+ " </tr>\n",
45
+ "</table>"
46
+ ]
47
+ },
48
+ {
49
+ "cell_type": "markdown",
50
+ "metadata": {},
51
+ "source": [
52
+ "### And please do remember to contact me if I can help\n",
53
+ "\n",
54
+ "And I love to connect: https://www.linkedin.com/in/eddonner/\n",
55
+ "\n",
56
+ "\n",
57
+ "### New to Notebooks like this one? Head over to the guides folder!\n",
58
+ "\n",
59
+ "Just to check you've already added the Python and Jupyter extensions to Cursor, if not already installed:\n",
60
+ "- Open extensions (View >> extensions)\n",
61
+ "- Search for python, and when the results show, click on the ms-python one, and Install it if not already installed\n",
62
+ "- Search for jupyter, and when the results show, click on the Microsoft one, and Install it if not already installed \n",
63
+ "Then View >> Explorer to bring back the File Explorer.\n",
64
+ "\n",
65
+ "And then:\n",
66
+ "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first choice or the most prominent choice. You may need to choose \"Python Environments\" first.\n",
67
+ "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n",
68
+ "3. Enjoy!\n",
69
+ "\n",
70
+ "After you click \"Select Kernel\", if there is no option like `.venv (Python 3.12.9)` then please do the following: \n",
71
+ "1. From the Cursor menu, choose Settings >> VSCode Settings (NOTE: be sure to select `VSCode Settings` not `Cursor Settings`) \n",
72
+ "2. In the Settings search bar, type \"venv\" \n",
73
+ "3. In the field \"Path to folder with a list of Virtual Environments\" put the path to the project root, like C:\\Users\\username\\projects\\agents (on a Windows PC) or /Users/username/projects/agents (on Mac or Linux). \n",
74
+ "And then try again."
75
+ ]
76
+ },
77
+ {
78
+ "cell_type": "code",
79
+ "execution_count": 1,
80
+ "metadata": {},
81
+ "outputs": [],
82
+ "source": [
83
+ "# First let's do an import\n",
84
+ "from dotenv import load_dotenv\n"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "code",
89
+ "execution_count": null,
90
+ "metadata": {},
91
+ "outputs": [],
92
+ "source": [
93
+ "# Next it's time to load the API keys into environment variables\n",
94
+ "\n",
95
+ "load_dotenv(override=True)"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": null,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "# Check the keys\n",
105
+ "\n",
106
+ "import os\n",
107
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
108
+ "\n",
109
+ "if openai_api_key:\n",
110
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
111
+ "else:\n",
112
+ " print(\"OpenAI API Key not set - please head to the troubleshooting guide in the guides folder\")\n",
113
+ " \n"
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "code",
118
+ "execution_count": 5,
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "# And now - the all important import statement\n",
123
+ "# If you get an import error - head over to troubleshooting guide\n",
124
+ "\n",
125
+ "from openai import OpenAI"
126
+ ]
127
+ },
128
+ {
129
+ "cell_type": "code",
130
+ "execution_count": 6,
131
+ "metadata": {},
132
+ "outputs": [],
133
+ "source": [
134
+ "# And now we'll create an instance of the OpenAI class\n",
135
+ "# If you're not sure what it means to create an instance of a class - head over to the guides folder!\n",
136
+ "# If you get a NameError - head over to the guides folder to learn about NameErrors\n",
137
+ "\n",
138
+ "openai = OpenAI()"
139
+ ]
140
+ },
141
+ {
142
+ "cell_type": "code",
143
+ "execution_count": 16,
144
+ "metadata": {},
145
+ "outputs": [],
146
+ "source": [
147
+ "# Create a list of messages in the familiar OpenAI format\n",
148
+ "\n",
149
+ "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "code",
154
+ "execution_count": null,
155
+ "metadata": {},
156
+ "outputs": [],
157
+ "source": [
158
+ "# And now call it! Any problems, head to the troubleshooting guide\n",
159
+ "\n",
160
+ "response = openai.chat.completions.create(\n",
161
+ " model=\"gpt-4o-mini\",\n",
162
+ " messages=messages\n",
163
+ ")\n",
164
+ "\n",
165
+ "print(response.choices[0].message.content)\n"
166
+ ]
167
+ },
168
+ {
169
+ "cell_type": "code",
170
+ "execution_count": null,
171
+ "metadata": {},
172
+ "outputs": [],
173
+ "source": []
174
+ },
175
+ {
176
+ "cell_type": "code",
177
+ "execution_count": 18,
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "# And now - let's ask for a question:\n",
182
+ "\n",
183
+ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
184
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
185
+ ]
186
+ },
187
+ {
188
+ "cell_type": "code",
189
+ "execution_count": null,
190
+ "metadata": {},
191
+ "outputs": [],
192
+ "source": [
193
+ "# ask it\n",
194
+ "response = openai.chat.completions.create(\n",
195
+ " model=\"gpt-4o-mini\",\n",
196
+ " messages=messages\n",
197
+ ")\n",
198
+ "\n",
199
+ "question = response.choices[0].message.content\n",
200
+ "\n",
201
+ "print(question)\n"
202
+ ]
203
+ },
204
+ {
205
+ "cell_type": "code",
206
+ "execution_count": 28,
207
+ "metadata": {},
208
+ "outputs": [],
209
+ "source": [
210
+ "# form a new messages list\n",
211
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
212
+ ]
213
+ },
214
+ {
215
+ "cell_type": "code",
216
+ "execution_count": null,
217
+ "metadata": {},
218
+ "outputs": [],
219
+ "source": [
220
+ "# Ask it again\n",
221
+ "\n",
222
+ "response = openai.chat.completions.create(\n",
223
+ " model=\"gpt-4o-mini\",\n",
224
+ " messages=messages\n",
225
+ ")\n",
226
+ "\n",
227
+ "answer = response.choices[0].message.content\n",
228
+ "print(answer)\n"
229
+ ]
230
+ },
231
+ {
232
+ "cell_type": "code",
233
+ "execution_count": null,
234
+ "metadata": {},
235
+ "outputs": [],
236
+ "source": [
237
+ "from IPython.display import Markdown, display\n",
238
+ "\n",
239
+ "display(Markdown(answer))\n",
240
+ "\n"
241
+ ]
242
+ },
243
+ {
244
+ "cell_type": "markdown",
245
+ "metadata": {},
246
+ "source": [
247
+ "# Congratulations!\n",
248
+ "\n",
249
+ "That was a small, simple step in the direction of Agentic AI, with your new environment!\n",
250
+ "\n",
251
+ "Next time things get more interesting..."
252
+ ]
253
+ },
254
+ {
255
+ "cell_type": "markdown",
256
+ "metadata": {},
257
+ "source": [
258
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
259
+ " <tr>\n",
260
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
261
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
262
+ " </td>\n",
263
+ " <td>\n",
264
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
265
+ " <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
266
+ " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
267
+ " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
268
+ " Finally have 3 third LLM call propose the Agentic AI solution.\n",
269
+ " </span>\n",
270
+ " </td>\n",
271
+ " </tr>\n",
272
+ "</table>"
273
+ ]
274
+ },
275
+ {
276
+ "cell_type": "code",
277
+ "execution_count": null,
278
+ "metadata": {},
279
+ "outputs": [],
280
+ "source": [
281
+ "# First create the messages:\n",
282
+ "\n",
283
+ "messages = [{\"role\": \"user\", \"content\": \"Something here\"}]\n",
284
+ "\n",
285
+ "# Then make the first call:\n",
286
+ "\n",
287
+ "response =\n",
288
+ "\n",
289
+ "# Then read the business idea:\n",
290
+ "\n",
291
+ "business_idea = response.\n",
292
+ "\n",
293
+ "# And repeat!"
294
+ ]
295
+ },
296
+ {
297
+ "cell_type": "markdown",
298
+ "metadata": {},
299
+ "source": []
300
+ }
301
+ ],
302
+ "metadata": {
303
+ "kernelspec": {
304
+ "display_name": ".venv",
305
+ "language": "python",
306
+ "name": "python3"
307
+ },
308
+ "language_info": {
309
+ "codemirror_mode": {
310
+ "name": "ipython",
311
+ "version": 3
312
+ },
313
+ "file_extension": ".py",
314
+ "mimetype": "text/x-python",
315
+ "name": "python",
316
+ "nbconvert_exporter": "python",
317
+ "pygments_lexer": "ipython3",
318
+ "version": "3.12.9"
319
+ }
320
+ },
321
+ "nbformat": 4,
322
+ "nbformat_minor": 2
323
+ }
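The exercise cell above is deliberately left as a skeleton (`response =`, `business_idea = response.`). One possible completion, sketched here with illustrative prompts and variable names, chains the three calls exactly as the exercise describes.

```python
# A sketch of the three-step exercise: industry -> pain-point -> Agentic solution.
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv(override=True)
openai = OpenAI()

def ask(prompt):
    # One chat completion call, returning just the text of the reply
    response = openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

business_idea = ask("Pick one business area that might be worth exploring for an Agentic AI opportunity. Answer briefly.")
pain_point = ask(f"Present one challenging pain-point in this industry that might be ripe for an Agentic solution: {business_idea}")
solution = ask(f"Propose an Agentic AI solution to this pain-point: {pain_point}")
print(solution)
```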
career_conversations/2_lab2.ipynb ADDED
@@ -0,0 +1,474 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## Welcome to the Second Lab - Week 1, Day 3\n",
8
+ "\n",
9
+ "Today we will work with lots of models! This is a way to get comfortable with APIs."
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "markdown",
14
+ "metadata": {},
15
+ "source": [
16
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
17
+ " <tr>\n",
18
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
19
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
20
+ " </td>\n",
21
+ " <td>\n",
22
+ " <h2 style=\"color:#ff7800;\">Important point - please read</h2>\n",
23
+ " <span style=\"color:#ff7800;\">The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, <b>after</b> watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.<br/><br/>If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n",
24
+ " </span>\n",
25
+ " </td>\n",
26
+ " </tr>\n",
27
+ "</table>"
28
+ ]
29
+ },
30
+ {
31
+ "cell_type": "code",
32
+ "execution_count": 1,
33
+ "metadata": {},
34
+ "outputs": [],
35
+ "source": [
36
+ "# Start with imports - ask ChatGPT to explain any package that you don't know\n",
37
+ "\n",
38
+ "import os\n",
39
+ "import json\n",
40
+ "from dotenv import load_dotenv\n",
41
+ "from openai import OpenAI\n",
42
+ "from anthropic import Anthropic\n",
43
+ "from IPython.display import Markdown, display"
44
+ ]
45
+ },
46
+ {
47
+ "cell_type": "code",
48
+ "execution_count": null,
49
+ "metadata": {},
50
+ "outputs": [],
51
+ "source": [
52
+ "# Always remember to do this!\n",
53
+ "load_dotenv(override=True)"
54
+ ]
55
+ },
56
+ {
57
+ "cell_type": "code",
58
+ "execution_count": null,
59
+ "metadata": {},
60
+ "outputs": [],
61
+ "source": [
62
+ "# Print the key prefixes to help with any debugging\n",
63
+ "\n",
64
+ "openai_api_key = os.getenv('OPENAI_API_KEY')\n",
65
+ "anthropic_api_key = os.getenv('ANTHROPIC_API_KEY')\n",
66
+ "google_api_key = os.getenv('GOOGLE_API_KEY')\n",
67
+ "deepseek_api_key = os.getenv('DEEPSEEK_API_KEY')\n",
68
+ "groq_api_key = os.getenv('GROQ_API_KEY')\n",
69
+ "\n",
70
+ "if openai_api_key:\n",
71
+ " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n",
72
+ "else:\n",
73
+ " print(\"OpenAI API Key not set\")\n",
74
+ " \n",
75
+ "if anthropic_api_key:\n",
76
+ " print(f\"Anthropic API Key exists and begins {anthropic_api_key[:7]}\")\n",
77
+ "else:\n",
78
+ " print(\"Anthropic API Key not set (and this is optional)\")\n",
79
+ "\n",
80
+ "if google_api_key:\n",
81
+ " print(f\"Google API Key exists and begins {google_api_key[:2]}\")\n",
82
+ "else:\n",
83
+ " print(\"Google API Key not set (and this is optional)\")\n",
84
+ "\n",
85
+ "if deepseek_api_key:\n",
86
+ " print(f\"DeepSeek API Key exists and begins {deepseek_api_key[:3]}\")\n",
87
+ "else:\n",
88
+ " print(\"DeepSeek API Key not set (and this is optional)\")\n",
89
+ "\n",
90
+ "if groq_api_key:\n",
91
+ " print(f\"Groq API Key exists and begins {groq_api_key[:4]}\")\n",
92
+ "else:\n",
93
+ " print(\"Groq API Key not set (and this is optional)\")"
94
+ ]
95
+ },
96
+ {
97
+ "cell_type": "code",
98
+ "execution_count": 4,
99
+ "metadata": {},
100
+ "outputs": [],
101
+ "source": [
102
+ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. \"\n",
103
+ "request += \"Answer only with the question, no explanation.\"\n",
104
+ "messages = [{\"role\": \"user\", \"content\": request}]"
105
+ ]
106
+ },
107
+ {
108
+ "cell_type": "code",
109
+ "execution_count": null,
110
+ "metadata": {},
111
+ "outputs": [],
112
+ "source": [
113
+ "messages"
114
+ ]
115
+ },
116
+ {
117
+ "cell_type": "code",
118
+ "execution_count": null,
119
+ "metadata": {},
120
+ "outputs": [],
121
+ "source": [
122
+ "openai = OpenAI()\n",
123
+ "response = openai.chat.completions.create(\n",
124
+ " model=\"gpt-4o-mini\",\n",
125
+ " messages=messages,\n",
126
+ ")\n",
127
+ "question = response.choices[0].message.content\n",
128
+ "print(question)\n"
129
+ ]
130
+ },
131
+ {
132
+ "cell_type": "code",
133
+ "execution_count": 7,
134
+ "metadata": {},
135
+ "outputs": [],
136
+ "source": [
137
+ "competitors = []\n",
138
+ "answers = []\n",
139
+ "messages = [{\"role\": \"user\", \"content\": question}]"
140
+ ]
141
+ },
142
+ {
143
+ "cell_type": "code",
144
+ "execution_count": null,
145
+ "metadata": {},
146
+ "outputs": [],
147
+ "source": [
148
+ "# The API we know well\n",
149
+ "\n",
150
+ "model_name = \"gpt-4o-mini\"\n",
151
+ "\n",
152
+ "response = openai.chat.completions.create(model=model_name, messages=messages)\n",
153
+ "answer = response.choices[0].message.content\n",
154
+ "\n",
155
+ "display(Markdown(answer))\n",
156
+ "competitors.append(model_name)\n",
157
+ "answers.append(answer)"
158
+ ]
159
+ },
160
+ {
161
+ "cell_type": "code",
162
+ "execution_count": null,
163
+ "metadata": {},
164
+ "outputs": [],
165
+ "source": [
166
+ "# Anthropic has a slightly different API, and Max Tokens is required\n",
167
+ "\n",
168
+ "model_name = \"claude-3-7-sonnet-latest\"\n",
169
+ "\n",
170
+ "claude = Anthropic()\n",
171
+ "response = claude.messages.create(model=model_name, messages=messages, max_tokens=1000)\n",
172
+ "answer = response.content[0].text\n",
173
+ "\n",
174
+ "display(Markdown(answer))\n",
175
+ "competitors.append(model_name)\n",
176
+ "answers.append(answer)"
177
+ ]
178
+ },
179
+ {
180
+ "cell_type": "code",
181
+ "execution_count": null,
182
+ "metadata": {},
183
+ "outputs": [],
184
+ "source": [
185
+ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n",
186
+ "model_name = \"gemini-2.0-flash\"\n",
187
+ "\n",
188
+ "response = gemini.chat.completions.create(model=model_name, messages=messages)\n",
189
+ "answer = response.choices[0].message.content\n",
190
+ "\n",
191
+ "display(Markdown(answer))\n",
192
+ "competitors.append(model_name)\n",
193
+ "answers.append(answer)"
194
+ ]
195
+ },
196
+ {
197
+ "cell_type": "code",
198
+ "execution_count": null,
199
+ "metadata": {},
200
+ "outputs": [],
201
+ "source": [
202
+ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n",
203
+ "model_name = \"deepseek-chat\"\n",
204
+ "\n",
205
+ "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n",
206
+ "answer = response.choices[0].message.content\n",
207
+ "\n",
208
+ "display(Markdown(answer))\n",
209
+ "competitors.append(model_name)\n",
210
+ "answers.append(answer)"
211
+ ]
212
+ },
213
+ {
214
+ "cell_type": "code",
215
+ "execution_count": null,
216
+ "metadata": {},
217
+ "outputs": [],
218
+ "source": [
219
+ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n",
220
+ "model_name = \"llama-3.3-70b-versatile\"\n",
221
+ "\n",
222
+ "response = groq.chat.completions.create(model=model_name, messages=messages)\n",
223
+ "answer = response.choices[0].message.content\n",
224
+ "\n",
225
+ "display(Markdown(answer))\n",
226
+ "competitors.append(model_name)\n",
227
+ "answers.append(answer)\n"
228
+ ]
229
+ },
230
+ {
231
+ "cell_type": "markdown",
232
+ "metadata": {},
233
+ "source": [
234
+ "## For the next cell, we will use Ollama\n",
235
+ "\n",
236
+ "Ollama runs a local web service that gives an OpenAI compatible endpoint, \n",
237
+ "and runs models locally using high performance C++ code.\n",
238
+ "\n",
239
+ "If you don't have Ollama, install it here by visiting https://ollama.com then pressing Download and following the instructions.\n",
240
+ "\n",
241
+ "After it's installed, you should be able to visit here: http://localhost:11434 and see the message \"Ollama is running\"\n",
242
+ "\n",
243
+ "You might need to restart Cursor (and maybe reboot). Then open a Terminal (control+\\`) and run `ollama serve`\n",
244
+ "\n",
245
+ "Useful Ollama commands (run these in the terminal, or with an exclamation mark in this notebook):\n",
246
+ "\n",
247
+ "`ollama pull <model_name>` downloads a model locally \n",
248
+ "`ollama ls` lists all the models you've downloaded \n",
249
+ "`ollama rm <model_name>` deletes the specified model from your downloads"
250
+ ]
251
+ },
252
+ {
253
+ "cell_type": "markdown",
254
+ "metadata": {},
255
+ "source": [
256
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
257
+ " <tr>\n",
258
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
259
+ " <img src=\"../assets/stop.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
260
+ " </td>\n",
261
+ " <td>\n",
262
+ " <h2 style=\"color:#ff7800;\">Super important - ignore me at your peril!</h2>\n",
263
+ " <span style=\"color:#ff7800;\">The model called <b>llama3.3</b> is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized <b>llama3.2</b> or <b>llama3.2:1b</b> and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See the <A href=\"https://ollama.com/models\">the Ollama models page</a> for a full list of models and sizes.\n",
264
+ " </span>\n",
265
+ " </td>\n",
266
+ " </tr>\n",
267
+ "</table>"
268
+ ]
269
+ },
270
+ {
271
+ "cell_type": "code",
272
+ "execution_count": null,
273
+ "metadata": {},
274
+ "outputs": [],
275
+ "source": [
276
+ "!ollama pull llama3.2"
277
+ ]
278
+ },
279
+ {
280
+ "cell_type": "code",
281
+ "execution_count": null,
282
+ "metadata": {},
283
+ "outputs": [],
284
+ "source": [
285
+ "ollama = OpenAI(base_url='http://localhost:11434/v1', api_key='ollama')\n",
286
+ "model_name = \"llama3.2\"\n",
287
+ "\n",
288
+ "response = ollama.chat.completions.create(model=model_name, messages=messages)\n",
289
+ "answer = response.choices[0].message.content\n",
290
+ "\n",
291
+ "display(Markdown(answer))\n",
292
+ "competitors.append(model_name)\n",
293
+ "answers.append(answer)"
294
+ ]
295
+ },
296
+ {
297
+ "cell_type": "code",
298
+ "execution_count": null,
299
+ "metadata": {},
300
+ "outputs": [],
301
+ "source": [
302
+ "# So where are we?\n",
303
+ "\n",
304
+ "print(competitors)\n",
305
+ "print(answers)\n"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": null,
311
+ "metadata": {},
312
+ "outputs": [],
313
+ "source": [
314
+ "# It's nice to know how to use \"zip\"\n",
315
+ "for competitor, answer in zip(competitors, answers):\n",
316
+ " print(f\"Competitor: {competitor}\\n\\n{answer}\")\n"
317
+ ]
318
+ },
319
+ {
320
+ "cell_type": "code",
321
+ "execution_count": 20,
322
+ "metadata": {},
323
+ "outputs": [],
324
+ "source": [
325
+ "# Let's bring this together - note the use of \"enumerate\"\n",
326
+ "\n",
327
+ "together = \"\"\n",
328
+ "for index, answer in enumerate(answers):\n",
329
+ " together += f\"# Response from competitor {index+1}\\n\\n\"\n",
330
+ " together += answer + \"\\n\\n\""
331
+ ]
332
+ },
333
+ {
334
+ "cell_type": "code",
335
+ "execution_count": null,
336
+ "metadata": {},
337
+ "outputs": [],
338
+ "source": [
339
+ "print(together)"
340
+ ]
341
+ },
342
+ {
343
+ "cell_type": "code",
344
+ "execution_count": 22,
345
+ "metadata": {},
346
+ "outputs": [],
347
+ "source": [
348
+ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n",
349
+ "Each model has been given this question:\n",
350
+ "\n",
351
+ "{question}\n",
352
+ "\n",
353
+ "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n",
354
+ "Respond with JSON, and only JSON, with the following format:\n",
355
+ "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n",
356
+ "\n",
357
+ "Here are the responses from each competitor:\n",
358
+ "\n",
359
+ "{together}\n",
360
+ "\n",
361
+ "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n"
362
+ ]
363
+ },
364
+ {
365
+ "cell_type": "code",
366
+ "execution_count": null,
367
+ "metadata": {},
368
+ "outputs": [],
369
+ "source": [
370
+ "print(judge)"
371
+ ]
372
+ },
373
+ {
374
+ "cell_type": "code",
375
+ "execution_count": 29,
376
+ "metadata": {},
377
+ "outputs": [],
378
+ "source": [
379
+ "judge_messages = [{\"role\": \"user\", \"content\": judge}]"
380
+ ]
381
+ },
382
+ {
383
+ "cell_type": "code",
384
+ "execution_count": null,
385
+ "metadata": {},
386
+ "outputs": [],
387
+ "source": [
388
+ "# Judgement time!\n",
389
+ "\n",
390
+ "openai = OpenAI()\n",
391
+ "response = openai.chat.completions.create(\n",
392
+ " model=\"o3-mini\",\n",
393
+ " messages=judge_messages,\n",
394
+ ")\n",
395
+ "results = response.choices[0].message.content\n",
396
+ "print(results)\n"
397
+ ]
398
+ },
399
+ {
400
+ "cell_type": "code",
401
+ "execution_count": null,
402
+ "metadata": {},
403
+ "outputs": [],
404
+ "source": [
405
+ "# OK let's turn this into results!\n",
406
+ "\n",
407
+ "results_dict = json.loads(results)\n",
408
+ "ranks = results_dict[\"results\"]\n",
409
+ "for index, result in enumerate(ranks):\n",
410
+ " competitor = competitors[int(result)-1]\n",
411
+ " print(f\"Rank {index+1}: {competitor}\")"
412
+ ]
413
+ },
414
+ {
415
+ "cell_type": "markdown",
416
+ "metadata": {},
417
+ "source": [
418
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
419
+ " <tr>\n",
420
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
421
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
422
+ " </td>\n",
423
+ " <td>\n",
424
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
425
+ " <span style=\"color:#ff7800;\">Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n",
426
+ " </span>\n",
427
+ " </td>\n",
428
+ " </tr>\n",
429
+ "</table>"
430
+ ]
431
+ },
432
+ {
433
+ "cell_type": "markdown",
434
+ "metadata": {},
435
+ "source": [
436
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
437
+ " <tr>\n",
438
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
439
+ " <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
440
+ " </td>\n",
441
+ " <td>\n",
442
+ " <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
443
+ " <span style=\"color:#00bfff;\">These kinds of patterns - to send a task to multiple models, and evaluate results,\n",
444
+ " and common where you need to improve the quality of your LLM response. This approach can be universally applied\n",
445
+ " to business projects where accuracy is critical.\n",
446
+ " </span>\n",
447
+ " </td>\n",
448
+ " </tr>\n",
449
+ "</table>"
450
+ ]
451
+ }
452
+ ],
453
+ "metadata": {
454
+ "kernelspec": {
455
+ "display_name": ".venv",
456
+ "language": "python",
457
+ "name": "python3"
458
+ },
459
+ "language_info": {
460
+ "codemirror_mode": {
461
+ "name": "ipython",
462
+ "version": 3
463
+ },
464
+ "file_extension": ".py",
465
+ "mimetype": "text/x-python",
466
+ "name": "python",
467
+ "nbconvert_exporter": "python",
468
+ "pygments_lexer": "ipython3",
469
+ "version": "3.12.9"
470
+ }
471
+ },
472
+ "nbformat": 4,
473
+ "nbformat_minor": 2
474
+ }
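The lab above repeats the same few lines for each provider. A compact way to express that pattern is a single helper that works with any OpenAI-compatible endpoint; this is a sketch with an illustrative helper name, while the base URLs and model names are the ones from the notebook.

```python
# One helper to query any OpenAI-compatible endpoint and collect the answer.
from openai import OpenAI

def ask_model(client, model_name, question, competitors, answers):
    response = client.chat.completions.create(
        model=model_name,
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    competitors.append(model_name)
    answers.append(answer)
    return answer

# Usage, mirroring the notebook cells:
# competitors, answers = [], []
# ask_model(OpenAI(), "gpt-4o-mini", question, competitors, answers)
# ask_model(OpenAI(api_key=groq_api_key, base_url="https://api.groq.com/openai/v1"),
#           "llama-3.3-70b-versatile", question, competitors, answers)
```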
career_conversations/3_lab3.ipynb ADDED
@@ -0,0 +1,351 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## Welcome to Lab 3 for Week 1 Day 4\n",
8
+ "\n",
9
+ "Today we're going to build something with immediate value!\n",
10
+ "\n",
11
+ "In the folder `me` I've put a single file `linkedin.pdf` - it's a PDF download of my LinkedIn profile.\n",
12
+ "\n",
13
+ "Please replace it with yours!\n",
14
+ "\n",
15
+ "I've also made a file called `summary.txt`\n",
16
+ "\n",
17
+ "We're not going to use Tools just yet - we're going to add the tool tomorrow."
18
+ ]
19
+ },
20
+ {
21
+ "cell_type": "markdown",
22
+ "metadata": {},
23
+ "source": [
24
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
25
+ " <tr>\n",
26
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
27
+ " <img src=\"../assets/tools.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
28
+ " </td>\n",
29
+ " <td>\n",
30
+ " <h2 style=\"color:#00bfff;\">Looking up packages</h2>\n",
31
+ " <span style=\"color:#00bfff;\">In this lab, we're going to use the wonderful Gradio package for building quick UIs, \n",
32
+ " and we're also going to use the popular PyPDF2 PDF reader. You can get guides to these packages by asking \n",
33
+ " ChatGPT or Claude, and you find all open-source packages on the repository <a href=\"https://pypi.org\">https://pypi.org</a>.\n",
34
+ " </span>\n",
35
+ " </td>\n",
36
+ " </tr>\n",
37
+ "</table>"
38
+ ]
39
+ },
40
+ {
41
+ "cell_type": "code",
42
+ "execution_count": null,
43
+ "metadata": {},
44
+ "outputs": [],
45
+ "source": [
46
+ "# If you don't know what any of these packages do - you can always ask ChatGPT for a guide!\n",
47
+ "\n",
48
+ "from dotenv import load_dotenv\n",
49
+ "from openai import OpenAI\n",
50
+ "from pypdf import PdfReader\n",
51
+ "import gradio as gr"
52
+ ]
53
+ },
54
+ {
55
+ "cell_type": "code",
56
+ "execution_count": 3,
57
+ "metadata": {},
58
+ "outputs": [],
59
+ "source": [
60
+ "load_dotenv(override=True)\n",
61
+ "openai = OpenAI()"
62
+ ]
63
+ },
64
+ {
65
+ "cell_type": "code",
66
+ "execution_count": 4,
67
+ "metadata": {},
68
+ "outputs": [],
69
+ "source": [
70
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
71
+ "linkedin = \"\"\n",
72
+ "for page in reader.pages:\n",
73
+ " text = page.extract_text()\n",
74
+ " if text:\n",
75
+ " linkedin += text"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "code",
80
+ "execution_count": null,
81
+ "metadata": {},
82
+ "outputs": [],
83
+ "source": [
84
+ "print(linkedin)"
85
+ ]
86
+ },
87
+ {
88
+ "cell_type": "code",
89
+ "execution_count": 5,
90
+ "metadata": {},
91
+ "outputs": [],
92
+ "source": [
93
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
94
+ " summary = f.read()"
95
+ ]
96
+ },
97
+ {
98
+ "cell_type": "code",
99
+ "execution_count": 6,
100
+ "metadata": {},
101
+ "outputs": [],
102
+ "source": [
103
+ "name = \"Ed Donner\""
104
+ ]
105
+ },
106
+ {
107
+ "cell_type": "code",
108
+ "execution_count": 7,
109
+ "metadata": {},
110
+ "outputs": [],
111
+ "source": [
112
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
113
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
114
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
115
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
116
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
117
+ "If you don't know the answer, say so.\"\n",
118
+ "\n",
119
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
120
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
121
+ ]
122
+ },
123
+ {
124
+ "cell_type": "code",
125
+ "execution_count": null,
126
+ "metadata": {},
127
+ "outputs": [],
128
+ "source": [
129
+ "system_prompt"
130
+ ]
131
+ },
132
+ {
133
+ "cell_type": "code",
134
+ "execution_count": 9,
135
+ "metadata": {},
136
+ "outputs": [],
137
+ "source": [
138
+ "def chat(message, history):\n",
139
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
140
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
141
+ " return response.choices[0].message.content"
142
+ ]
143
+ },
144
+ {
145
+ "cell_type": "code",
146
+ "execution_count": null,
147
+ "metadata": {},
148
+ "outputs": [],
149
+ "source": [
150
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
151
+ ]
152
+ },
153
+ {
154
+ "cell_type": "markdown",
155
+ "metadata": {},
156
+ "source": [
157
+ "## A lot is about to happen...\n",
158
+ "\n",
159
+ "1. Be able to ask an LLM to evaluate an answer\n",
160
+ "2. Be able to rerun if the answer fails evaluation\n",
161
+ "3. Put this together into 1 workflow\n",
162
+ "\n",
163
+ "All without any Agentic framework!"
164
+ ]
165
+ },
166
+ {
167
+ "cell_type": "code",
168
+ "execution_count": 11,
169
+ "metadata": {},
170
+ "outputs": [],
171
+ "source": [
172
+ "# Create a Pydantic model for the Evaluation\n",
173
+ "\n",
174
+ "from pydantic import BaseModel\n",
175
+ "\n",
176
+ "class Evaluation(BaseModel):\n",
177
+ " is_acceptable: bool\n",
178
+ " feedback: str\n"
179
+ ]
180
+ },
181
+ {
182
+ "cell_type": "code",
183
+ "execution_count": 23,
184
+ "metadata": {},
185
+ "outputs": [],
186
+ "source": [
187
+ "evaluator_system_prompt = f\"You are an evaluator that decides whether a response to a question is acceptable. \\\n",
188
+ "You are provided with a conversation between a User and an Agent. Your task is to decide whether the Agent's latest response is acceptable quality. \\\n",
189
+ "The Agent is playing the role of {name} and is representing {name} on their website. \\\n",
190
+ "The Agent has been instructed to be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
191
+ "The Agent has been provided with context on {name} in the form of their summary and LinkedIn details. Here's the information:\"\n",
192
+ "\n",
193
+ "evaluator_system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
194
+ "evaluator_system_prompt += f\"With this context, please evaluate the latest response, replying with whether the response is acceptable and your feedback.\""
195
+ ]
196
+ },
197
+ {
198
+ "cell_type": "code",
199
+ "execution_count": 24,
200
+ "metadata": {},
201
+ "outputs": [],
202
+ "source": [
203
+ "def evaluator_user_prompt(reply, message, history):\n",
204
+ " user_prompt = f\"Here's the conversation between the User and the Agent: \\n\\n{history}\\n\\n\"\n",
205
+ " user_prompt += f\"Here's the latest message from the User: \\n\\n{message}\\n\\n\"\n",
206
+ " user_prompt += f\"Here's the latest response from the Agent: \\n\\n{reply}\\n\\n\"\n",
207
+ " user_prompt += f\"Please evaluate the response, replying with whether it is acceptable and your feedback.\"\n",
208
+ " return user_prompt"
209
+ ]
210
+ },
211
+ {
212
+ "cell_type": "code",
213
+ "execution_count": 25,
214
+ "metadata": {},
215
+ "outputs": [],
216
+ "source": [
217
+ "import os\n",
218
+ "gemini = OpenAI(\n",
219
+ " api_key=os.getenv(\"GOOGLE_API_KEY\"), \n",
220
+ " base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\"\n",
221
+ ")"
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "code",
226
+ "execution_count": 26,
227
+ "metadata": {},
228
+ "outputs": [],
229
+ "source": [
230
+ "def evaluate(reply, message, history) -> Evaluation:\n",
231
+ "\n",
232
+ " messages = [{\"role\": \"system\", \"content\": evaluator_system_prompt}] + [{\"role\": \"user\", \"content\": evaluator_user_prompt(reply, message, history)}]\n",
233
+ " response = gemini.beta.chat.completions.parse(model=\"gemini-2.0-flash\", messages=messages, response_format=Evaluation)\n",
234
+ " return response.choices[0].message.parsed"
235
+ ]
236
+ },
237
+ {
238
+ "cell_type": "code",
239
+ "execution_count": 27,
240
+ "metadata": {},
241
+ "outputs": [],
242
+ "source": [
243
+ "messages = [{\"role\": \"system\", \"content\": system_prompt}] + [{\"role\": \"user\", \"content\": \"do you hold a patent?\"}]\n",
244
+ "response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
245
+ "reply = response.choices[0].message.content"
246
+ ]
247
+ },
248
+ {
249
+ "cell_type": "code",
250
+ "execution_count": null,
251
+ "metadata": {},
252
+ "outputs": [],
253
+ "source": [
254
+ "reply"
255
+ ]
256
+ },
257
+ {
258
+ "cell_type": "code",
259
+ "execution_count": null,
260
+ "metadata": {},
261
+ "outputs": [],
262
+ "source": [
263
+ "evaluate(reply, \"do you hold a patent?\", messages[:1])"
264
+ ]
265
+ },
266
+ {
267
+ "cell_type": "code",
268
+ "execution_count": 30,
269
+ "metadata": {},
270
+ "outputs": [],
271
+ "source": [
272
+ "def rerun(reply, message, history, feedback):\n",
273
+ " updated_system_prompt = system_prompt + f\"\\n\\n## Previous answer rejected\\nYou just tried to reply, but the quality control rejected your reply\\n\"\n",
274
+ " updated_system_prompt += f\"## Your attempted answer:\\n{reply}\\n\\n\"\n",
275
+ " updated_system_prompt += f\"## Reason for rejection:\\n{feedback}\\n\\n\"\n",
276
+ " messages = [{\"role\": \"system\", \"content\": updated_system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
277
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
278
+ " return response.choices[0].message.content"
279
+ ]
280
+ },
281
+ {
282
+ "cell_type": "code",
283
+ "execution_count": 35,
284
+ "metadata": {},
285
+ "outputs": [],
286
+ "source": [
287
+ "def chat(message, history):\n",
288
+ " if \"patent\" in message:\n",
289
+ " system = system_prompt + \"\\n\\nEverything in your reply needs to be in pig latin - \\\n",
290
+ " it is mandatory that you respond only and entirely in pig latin\"\n",
291
+ " else:\n",
292
+ " system = system_prompt\n",
293
+ " messages = [{\"role\": \"system\", \"content\": system}] + history + [{\"role\": \"user\", \"content\": message}]\n",
294
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages)\n",
295
+ " reply =response.choices[0].message.content\n",
296
+ "\n",
297
+ " evaluation = evaluate(reply, message, history)\n",
298
+ " \n",
299
+ " if evaluation.is_acceptable:\n",
300
+ " print(\"Passed evaluation - returning reply\")\n",
301
+ " else:\n",
302
+ " print(\"Failed evaluation - retrying\")\n",
303
+ " print(evaluation.feedback)\n",
304
+ " reply = rerun(reply, message, history, evaluation.feedback) \n",
305
+ " return reply"
306
+ ]
307
+ },
308
+ {
309
+ "cell_type": "code",
310
+ "execution_count": null,
311
+ "metadata": {},
312
+ "outputs": [],
313
+ "source": [
314
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
315
+ ]
316
+ },
317
+ {
318
+ "cell_type": "markdown",
319
+ "metadata": {},
320
+ "source": []
321
+ },
322
+ {
323
+ "cell_type": "code",
324
+ "execution_count": null,
325
+ "metadata": {},
326
+ "outputs": [],
327
+ "source": []
328
+ }
329
+ ],
330
+ "metadata": {
331
+ "kernelspec": {
332
+ "display_name": ".venv",
333
+ "language": "python",
334
+ "name": "python3"
335
+ },
336
+ "language_info": {
337
+ "codemirror_mode": {
338
+ "name": "ipython",
339
+ "version": 3
340
+ },
341
+ "file_extension": ".py",
342
+ "mimetype": "text/x-python",
343
+ "name": "python",
344
+ "nbconvert_exporter": "python",
345
+ "pygments_lexer": "ipython3",
346
+ "version": "3.12.9"
347
+ }
348
+ },
349
+ "nbformat": 4,
350
+ "nbformat_minor": 2
351
+ }
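The chat function above retries exactly once when the evaluator rejects a reply. A sketch that generalizes it to a configurable number of retries, reusing `evaluate()` and `rerun()` from the notebook (`max_retries` and the function name are illustrative):

```python
# Evaluator-optimizer loop with a bounded number of retries.
def chat_with_retries(message, history, max_retries=2):
    messages = [{"role": "system", "content": system_prompt}] + history + [{"role": "user", "content": message}]
    response = openai.chat.completions.create(model="gpt-4o-mini", messages=messages)
    reply = response.choices[0].message.content

    for _ in range(max_retries):
        evaluation = evaluate(reply, message, history)
        if evaluation.is_acceptable:
            break
        print("Failed evaluation - retrying")
        print(evaluation.feedback)
        reply = rerun(reply, message, history, evaluation.feedback)
    return reply
```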
career_conversations/4_lab4.ipynb ADDED
@@ -0,0 +1,422 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "## The first big project - Professionally You!\n",
8
+ "\n",
9
+ "### And, Tool use.\n",
10
+ "\n",
11
+ "### But first: introducing Pushover\n",
12
+ "\n",
13
+ "Pushover is a nifty tool for sending Push Notifications to your phone.\n",
14
+ "\n",
15
+ "It's super easy to set up and install!\n",
16
+ "\n",
17
+ "Simply visit https://pushover.net/ and sign up for a free account, and create your API keys.\n",
18
+ "\n",
19
+ "As student Ron pointed out (thank you Ron!) there are actually 2 tokens to create in Pushover: \n",
20
+ "1. The User token which you get from the home page of Pushover\n",
21
+ "2. The Application token which you get by going to https://pushover.net/apps/build and creating an app \n",
22
+ "\n",
23
+ "(This is so you could choose to organize your push notifications into different apps in the future.)\n",
24
+ "\n",
25
+ "\n",
26
+ "Add to your `.env` file:\n",
27
+ "```\n",
28
+ "PUSHOVER_USER=put_your_user_token_here\n",
29
+ "PUSHOVER_TOKEN=put_the_application_level_token_here\n",
30
+ "```\n",
31
+ "\n",
32
+ "And install the Pushover app on your phone."
33
+ ]
34
+ },
35
+ {
36
+ "cell_type": "code",
37
+ "execution_count": 1,
38
+ "metadata": {},
39
+ "outputs": [],
40
+ "source": [
41
+ "# imports\n",
42
+ "\n",
43
+ "from dotenv import load_dotenv\n",
44
+ "from openai import OpenAI\n",
45
+ "import json\n",
46
+ "import os\n",
47
+ "import requests\n",
48
+ "from pypdf import PdfReader\n",
49
+ "import gradio as gr"
50
+ ]
51
+ },
52
+ {
53
+ "cell_type": "code",
54
+ "execution_count": 2,
55
+ "metadata": {},
56
+ "outputs": [],
57
+ "source": [
58
+ "# The usual start\n",
59
+ "\n",
60
+ "load_dotenv(override=True)\n",
61
+ "openai = OpenAI()"
62
+ ]
63
+ },
64
+ {
65
+ "cell_type": "code",
66
+ "execution_count": 3,
67
+ "metadata": {},
68
+ "outputs": [],
69
+ "source": [
70
+ "# For pushover\n",
71
+ "\n",
72
+ "pushover_user = os.getenv(\"PUSHOVER_USER\")\n",
73
+ "pushover_token = os.getenv(\"PUSHOVER_TOKEN\")\n",
74
+ "pushover_url = \"https://api.pushover.net/1/messages.json\""
75
+ ]
76
+ },
77
+ {
78
+ "cell_type": "code",
79
+ "execution_count": 4,
80
+ "metadata": {},
81
+ "outputs": [],
82
+ "source": [
83
+ "def push(message):\n",
84
+ " print(f\"Push: {message}\")\n",
85
+ " payload = {\"user\": pushover_user, \"token\": pushover_token, \"message\": message}\n",
86
+ " requests.post(pushover_url, data=payload)"
87
+ ]
88
+ },
89
+ {
90
+ "cell_type": "code",
91
+ "execution_count": null,
92
+ "metadata": {},
93
+ "outputs": [],
94
+ "source": [
95
+ "push(\"HEY!!\")"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": 9,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "def record_user_details(email, name=\"Name not provided\", notes=\"not provided\"):\n",
105
+ " push(f\"Recording interest from {name} with email {email} and notes {notes}\")\n",
106
+ " return {\"recorded\": \"ok\"}"
107
+ ]
108
+ },
109
+ {
110
+ "cell_type": "code",
111
+ "execution_count": 4,
112
+ "metadata": {},
113
+ "outputs": [],
114
+ "source": [
115
+ "def record_unknown_question(question):\n",
116
+ " push(f\"Recording {question} asked that I couldn't answer\")\n",
117
+ " return {\"recorded\": \"ok\"}"
118
+ ]
119
+ },
120
+ {
121
+ "cell_type": "code",
122
+ "execution_count": 5,
123
+ "metadata": {},
124
+ "outputs": [],
125
+ "source": [
126
+ "record_user_details_json = {\n",
127
+ " \"name\": \"record_user_details\",\n",
128
+ " \"description\": \"Use this tool to record that a user is interested in being in touch and provided an email address\",\n",
129
+ " \"parameters\": {\n",
130
+ " \"type\": \"object\",\n",
131
+ " \"properties\": {\n",
132
+ " \"email\": {\n",
133
+ " \"type\": \"string\",\n",
134
+ " \"description\": \"The email address of this user\"\n",
135
+ " },\n",
136
+ " \"name\": {\n",
137
+ " \"type\": \"string\",\n",
138
+ " \"description\": \"The user's name, if they provided it\"\n",
139
+ " }\n",
140
+ " ,\n",
141
+ " \"notes\": {\n",
142
+ " \"type\": \"string\",\n",
143
+ " \"description\": \"Any additional information about the conversation that's worth recording to give context\"\n",
144
+ " }\n",
145
+ " },\n",
146
+ " \"required\": [\"email\"],\n",
147
+ " \"additionalProperties\": False\n",
148
+ " }\n",
149
+ "}"
150
+ ]
151
+ },
152
+ {
153
+ "cell_type": "code",
154
+ "execution_count": 6,
155
+ "metadata": {},
156
+ "outputs": [],
157
+ "source": [
158
+ "record_unknown_question_json = {\n",
159
+ " \"name\": \"record_unknown_question\",\n",
160
+ " \"description\": \"Always use this tool to record any question that couldn't be answered as you didn't know the answer\",\n",
161
+ " \"parameters\": {\n",
162
+ " \"type\": \"object\",\n",
163
+ " \"properties\": {\n",
164
+ " \"question\": {\n",
165
+ " \"type\": \"string\",\n",
166
+ " \"description\": \"The question that couldn't be answered\"\n",
167
+ " },\n",
168
+ " },\n",
169
+ " \"required\": [\"question\"],\n",
170
+ " \"additionalProperties\": False\n",
171
+ " }\n",
172
+ "}"
173
+ ]
174
+ },
175
+ {
176
+ "cell_type": "code",
177
+ "execution_count": 7,
178
+ "metadata": {},
179
+ "outputs": [],
180
+ "source": [
181
+ "tools = [{\"type\": \"function\", \"function\": record_user_details_json},\n",
182
+ " {\"type\": \"function\", \"function\": record_unknown_question_json}]"
183
+ ]
184
+ },
185
+ {
186
+ "cell_type": "code",
187
+ "execution_count": null,
188
+ "metadata": {},
189
+ "outputs": [],
190
+ "source": [
191
+ "tools"
192
+ ]
193
+ },
194
+ {
195
+ "cell_type": "code",
196
+ "execution_count": 16,
197
+ "metadata": {},
198
+ "outputs": [],
199
+ "source": [
200
+ "# This function can take a list of tool calls, and run them. This is the IF statement!!\n",
201
+ "\n",
202
+ "def handle_tool_calls(tool_calls):\n",
203
+ " results = []\n",
204
+ " for tool_call in tool_calls:\n",
205
+ " tool_name = tool_call.function.name\n",
206
+ " arguments = json.loads(tool_call.function.arguments)\n",
207
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
208
+ "\n",
209
+ " # THE BIG IF STATEMENT!!!\n",
210
+ "\n",
211
+ " if tool_name == \"record_user_details\":\n",
212
+ " result = record_user_details(**arguments)\n",
213
+ " elif tool_name == \"record_unknown_question\":\n",
214
+ " result = record_unknown_question(**arguments)\n",
215
+ "\n",
216
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
217
+ " return results"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "code",
222
+ "execution_count": null,
223
+ "metadata": {},
224
+ "outputs": [],
225
+ "source": [
226
+ "globals()[\"record_unknown_question\"](\"this is a really hard question\")"
227
+ ]
228
+ },
229
+ {
230
+ "cell_type": "code",
231
+ "execution_count": 25,
232
+ "metadata": {},
233
+ "outputs": [],
234
+ "source": [
235
+ "# This is a more elegant way that avoids the IF statement.\n",
236
+ "\n",
237
+ "def handle_tool_calls(tool_calls):\n",
238
+ " results = []\n",
239
+ " for tool_call in tool_calls:\n",
240
+ " tool_name = tool_call.function.name\n",
241
+ " arguments = json.loads(tool_call.function.arguments)\n",
242
+ " print(f\"Tool called: {tool_name}\", flush=True)\n",
243
+ " tool = globals().get(tool_name)\n",
244
+ " result = tool(**arguments) if tool else {}\n",
245
+ " results.append({\"role\": \"tool\",\"content\": json.dumps(result),\"tool_call_id\": tool_call.id})\n",
246
+ " return results"
247
+ ]
248
+ },
249
+ {
250
+ "cell_type": "code",
251
+ "execution_count": 4,
252
+ "metadata": {},
253
+ "outputs": [],
254
+ "source": [
255
+ "reader = PdfReader(\"me/linkedin.pdf\")\n",
256
+ "linkedin = \"\"\n",
257
+ "for page in reader.pages:\n",
258
+ " text = page.extract_text()\n",
259
+ " if text:\n",
260
+ " linkedin += text\n",
261
+ "\n",
262
+ "with open(\"me/summary.txt\", \"r\", encoding=\"utf-8\") as f:\n",
263
+ " summary = f.read()\n",
264
+ "\n",
265
+ "name = \"Ed Donner\""
266
+ ]
267
+ },
268
+ {
269
+ "cell_type": "code",
270
+ "execution_count": 22,
271
+ "metadata": {},
272
+ "outputs": [],
273
+ "source": [
274
+ "system_prompt = f\"You are acting as {name}. You are answering questions on {name}'s website, \\\n",
275
+ "particularly questions related to {name}'s career, background, skills and experience. \\\n",
276
+ "Your responsibility is to represent {name} for interactions on the website as faithfully as possible. \\\n",
277
+ "You are given a summary of {name}'s background and LinkedIn profile which you can use to answer questions. \\\n",
278
+ "Be professional and engaging, as if talking to a potential client or future employer who came across the website. \\\n",
279
+ "If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \\\n",
280
+ "If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. \"\n",
281
+ "\n",
282
+ "system_prompt += f\"\\n\\n## Summary:\\n{summary}\\n\\n## LinkedIn Profile:\\n{linkedin}\\n\\n\"\n",
283
+ "system_prompt += f\"With this context, please chat with the user, always staying in character as {name}.\"\n"
284
+ ]
285
+ },
286
+ {
287
+ "cell_type": "code",
288
+ "execution_count": 28,
289
+ "metadata": {},
290
+ "outputs": [],
291
+ "source": [
292
+ "def chat(message, history):\n",
293
+ " messages = [{\"role\": \"system\", \"content\": system_prompt}] + history + [{\"role\": \"user\", \"content\": message}]\n",
294
+ " done = False\n",
295
+ " while not done:\n",
296
+ "\n",
297
+ " # This is the call to the LLM - see that we pass in the tools json\n",
298
+ "\n",
299
+ " response = openai.chat.completions.create(model=\"gpt-4o-mini\", messages=messages, tools=tools)\n",
300
+ "\n",
301
+ " finish_reason = response.choices[0].finish_reason\n",
302
+ " \n",
303
+ " # If the LLM wants to call a tool, we do that!\n",
304
+ " \n",
305
+ " if finish_reason==\"tool_calls\":\n",
306
+ " message = response.choices[0].message\n",
307
+ " tool_calls = message.tool_calls\n",
308
+ " results = handle_tool_calls(tool_calls)\n",
309
+ " messages.append(message)\n",
310
+ " messages.extend(results)\n",
311
+ " else:\n",
312
+ " done = True\n",
313
+ " return response.choices[0].message.content"
314
+ ]
315
+ },
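+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional: try one turn of the agentic loop directly before wiring it into Gradio.\n",
+ "# This makes a real OpenAI API call; an empty list stands in for the chat history.\n",
+ "\n",
+ "chat(\"What did you most enjoy about your career so far?\", [])"
+ ]
+ },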
316
+ {
317
+ "cell_type": "code",
318
+ "execution_count": null,
319
+ "metadata": {},
320
+ "outputs": [],
321
+ "source": [
322
+ "gr.ChatInterface(chat, type=\"messages\").launch()"
323
+ ]
324
+ },
325
+ {
326
+ "cell_type": "markdown",
327
+ "metadata": {},
328
+ "source": [
329
+ "## And now for deployment\n",
330
+ "\n",
331
+ "This code is in `app.py`\n",
332
+ "\n",
333
+ "We will deploy to HuggingFace Spaces. Thank you student Robert M for improving these instructions.\n",
334
+ "\n",
335
+ "Before you start: remember to update the files in the \"me\" directory - your LinkedIn profile and summary.txt - so that it talks about you!\n",
336
+ "\n",
337
+ "1. Visit https://huggingface.co and set up an account \n",
338
+ "2. From the Avatar menu on the top right, choose Access Tokens. Choose \"Create New Token\". Give it WRITE permissions.\n",
339
+ "3. Take this token and add it to your .env file: `HF_TOKEN=hf_xxx`\n",
340
+ "4. From the 1_foundations folder, enter: `gradio deploy` \n",
341
+ "5. Follow the instructions: name it \"career_conversation\", specify app.py, choose cpu-basic as the hardware, say Yes to needing to supply secrets, provide your openai api key, your pushover user and token, and say \"no\" to github actions.\n",
342
+ "\n",
343
+ "And you're deployed!\n",
344
+ "\n",
345
+ "Here is mine: https://huggingface.co/spaces/ed-donner/Career_Conversation\n",
346
+ "\n",
347
+ "I just got a push notification that a student asked me how they can become President of their country 😂😂\n",
348
+ "\n",
349
+ "For more information on deployment:\n",
350
+ "\n",
351
+ "https://www.gradio.app/guides/sharing-your-app#hosting-on-hf-spaces\n",
352
+ "\n",
353
+ "To delete your Space in the future: \n",
354
+ "1. Log in to HuggingFace\n",
355
+ "2. From the Avatar menu, select your profile\n",
356
+ "3. Click on the Space itself\n",
357
+ "4. Click the settings wheel on the top right\n",
358
+ "5. Scroll to the Delete section at the bottom\n"
359
+ ]
360
+ },
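+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Optional pre-deployment check, assuming you added HF_TOKEN=hf_xxx to your .env file as described above.\n",
+ "# It only confirms the token is visible to the notebook; it doesn't print the token itself.\n",
+ "\n",
+ "import os\n",
+ "from dotenv import load_dotenv\n",
+ "\n",
+ "load_dotenv(override=True)\n",
+ "print(\"HF_TOKEN is set\" if os.getenv(\"HF_TOKEN\") else \"HF_TOKEN is not set\")"
+ ]
+ },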
361
+ {
362
+ "cell_type": "markdown",
363
+ "metadata": {},
364
+ "source": [
365
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
366
+ " <tr>\n",
367
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
368
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
369
+ " </td>\n",
370
+ " <td>\n",
371
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
372
+ " <span style=\"color:#ff7800;\">• First and foremost, deploy this for yourself! It's a real, valuable tool - the future resume..<br/>\n",
373
+ " • Next, improve the resources - add better context about yourself. If you know RAG, then add a knowledge base about you.<br/>\n",
374
+ " • Add in more tools! You could have a SQL database with common Q&A that the LLM could read and write from?<br/>\n",
375
+ " • Bring in the Evaluator from the last lab, and add other Agentic patterns.\n",
376
+ " </span>\n",
377
+ " </td>\n",
378
+ " </tr>\n",
379
+ "</table>"
380
+ ]
381
+ },
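+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# A minimal sketch of one extra tool, as suggested in the exercise above.\n",
+ "# The SQLite file, table name and schema below are illustrative assumptions, not part of the course code -\n",
+ "# the point is just the shape: a Python function plus a JSON schema that can be added to the tools list.\n",
+ "\n",
+ "import sqlite3\n",
+ "\n",
+ "def lookup_faq(question):\n",
+ " with sqlite3.connect(\"me/faq.db\") as conn:\n",
+ " row = conn.execute(\"SELECT answer FROM faq WHERE question = ?\", (question,)).fetchone()\n",
+ " return {\"answer\": row[0]} if row else {\"answer\": \"not found\"}\n",
+ "\n",
+ "lookup_faq_json = {\n",
+ " \"name\": \"lookup_faq\",\n",
+ " \"description\": \"Look up a stored answer to a frequently asked question\",\n",
+ " \"parameters\": {\n",
+ " \"type\": \"object\",\n",
+ " \"properties\": {\n",
+ " \"question\": {\"type\": \"string\", \"description\": \"The question to look up\"}\n",
+ " },\n",
+ " \"required\": [\"question\"],\n",
+ " \"additionalProperties\": False\n",
+ " }\n",
+ "}\n",
+ "\n",
+ "# tools.append({\"type\": \"function\", \"function\": lookup_faq_json})"
+ ]
+ },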
382
+ {
383
+ "cell_type": "markdown",
384
+ "metadata": {},
385
+ "source": [
386
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
387
+ " <tr>\n",
388
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
389
+ " <img src=\"../assets/business.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
390
+ " </td>\n",
391
+ " <td>\n",
392
+ " <h2 style=\"color:#00bfff;\">Commercial implications</h2>\n",
393
+ " <span style=\"color:#00bfff;\">Aside from the obvious (your career alter-ego) this has business applications in any situation where you need an AI assistant with domain expertise and an ability to interact with the real world.\n",
394
+ " </span>\n",
395
+ " </td>\n",
396
+ " </tr>\n",
397
+ "</table>"
398
+ ]
399
+ }
400
+ ],
401
+ "metadata": {
402
+ "kernelspec": {
403
+ "display_name": ".venv",
404
+ "language": "python",
405
+ "name": "python3"
406
+ },
407
+ "language_info": {
408
+ "codemirror_mode": {
409
+ "name": "ipython",
410
+ "version": 3
411
+ },
412
+ "file_extension": ".py",
413
+ "mimetype": "text/x-python",
414
+ "name": "python",
415
+ "nbconvert_exporter": "python",
416
+ "pygments_lexer": "ipython3",
417
+ "version": "3.12.9"
418
+ }
419
+ },
420
+ "nbformat": 4,
421
+ "nbformat_minor": 2
422
+ }
career_conversations/README.md ADDED
@@ -0,0 +1,6 @@
1
+ ---
2
+ title: career_conversations
3
+ app_file: app.py
4
+ sdk: gradio
5
+ sdk_version: 5.29.0
6
+ ---
career_conversations/app.py ADDED
@@ -0,0 +1,134 @@
1
+ from dotenv import load_dotenv
2
+ from openai import OpenAI
3
+ import json
4
+ import os
5
+ import requests
6
+ from pypdf import PdfReader
7
+ import gradio as gr
8
+
9
+
10
+ load_dotenv(override=True)
11
+
12
+ def push(text):
13
+ requests.post(
14
+ "https://api.pushover.net/1/messages.json",
15
+ data={
16
+ "token": os.getenv("PUSHOVER_TOKEN"),
17
+ "user": os.getenv("PUSHOVER_USER"),
18
+ "message": text,
19
+ }
20
+ )
21
+
22
+
23
+ def record_user_details(email, name="Name not provided", notes="not provided"):
24
+ push(f"Recording {name} with email {email} and notes {notes}")
25
+ return {"recorded": "ok"}
26
+
27
+ def record_unknown_question(question):
28
+ push(f"Recording {question}")
29
+ return {"recorded": "ok"}
30
+
31
+ record_user_details_json = {
32
+ "name": "record_user_details",
33
+ "description": "Use this tool to record that a user is interested in being in touch and provided an email address",
34
+ "parameters": {
35
+ "type": "object",
36
+ "properties": {
37
+ "email": {
38
+ "type": "string",
39
+ "description": "The email address of this user"
40
+ },
41
+ "name": {
42
+ "type": "string",
43
+ "description": "The user's name, if they provided it"
44
+ }
45
+ ,
46
+ "notes": {
47
+ "type": "string",
48
+ "description": "Any additional information about the conversation that's worth recording to give context"
49
+ }
50
+ },
51
+ "required": ["email"],
52
+ "additionalProperties": False
53
+ }
54
+ }
55
+
56
+ record_unknown_question_json = {
57
+ "name": "record_unknown_question",
58
+ "description": "Always use this tool to record any question that couldn't be answered as you didn't know the answer",
59
+ "parameters": {
60
+ "type": "object",
61
+ "properties": {
62
+ "question": {
63
+ "type": "string",
64
+ "description": "The question that couldn't be answered"
65
+ },
66
+ },
67
+ "required": ["question"],
68
+ "additionalProperties": False
69
+ }
70
+ }
71
+
72
+ tools = [{"type": "function", "function": record_user_details_json},
73
+ {"type": "function", "function": record_unknown_question_json}]
74
+
75
+
76
+ class Me:
77
+
78
+ def __init__(self):
79
+ self.openai = OpenAI()
80
+ self.name = "Santosh Kumar"
81
+ reader = PdfReader("me/linkedin_santosh.pdf")
82
+ self.linkedin = ""
83
+ for page in reader.pages:
84
+ text = page.extract_text()
85
+ if text:
86
+ self.linkedin += text
87
+ with open("me/summary_santosh.txt", "r", encoding="utf-8") as f:
88
+ self.summary = f.read()
89
+
90
+
91
+ def handle_tool_call(self, tool_calls):
92
+ results = []
93
+ for tool_call in tool_calls:
94
+ tool_name = tool_call.function.name
95
+ arguments = json.loads(tool_call.function.arguments)
96
+ print(f"Tool called: {tool_name}", flush=True)
97
+ tool = globals().get(tool_name)
98
+ result = tool(**arguments) if tool else {}
99
+ results.append({"role": "tool","content": json.dumps(result),"tool_call_id": tool_call.id})
100
+ return results
101
+
102
+ def system_prompt(self):
103
+ system_prompt = f"You are acting as {self.name}. You are answering questions on {self.name}'s website, \
104
+ particularly questions related to {self.name}'s career, background, skills and experience. \
105
+ Your responsibility is to represent {self.name} for interactions on the website as faithfully as possible. \
106
+ You are given a summary of {self.name}'s background and LinkedIn profile which you can use to answer questions. \
107
+ Be professional and engaging, as if talking to a potential client or future employer who came across the website. \
108
+ If you don't know the answer to any question, use your record_unknown_question tool to record the question that you couldn't answer, even if it's about something trivial or unrelated to career. \
109
+ If the user is engaging in discussion, try to steer them towards getting in touch via email; ask for their email and record it using your record_user_details tool. "
110
+
111
+ system_prompt += f"\n\n## Summary:\n{self.summary}\n\n## LinkedIn Profile:\n{self.linkedin}\n\n"
112
+ system_prompt += f"With this context, please chat with the user, always staying in character as {self.name}."
113
+ return system_prompt
114
+
115
+ def chat(self, message, history):
116
+ messages = [{"role": "system", "content": self.system_prompt()}] + history + [{"role": "user", "content": message}]
117
+ done = False
118
+ while not done:
119
+ response = self.openai.chat.completions.create(model="gpt-4o-mini", messages=messages, tools=tools)
120
+ if response.choices[0].finish_reason=="tool_calls":
121
+ message = response.choices[0].message
122
+ tool_calls = message.tool_calls
123
+ results = self.handle_tool_call(tool_calls)
124
+ messages.append(message)
125
+ messages.extend(results)
126
+ else:
127
+ done = True
128
+ return response.choices[0].message.content
129
+
130
+
131
+ if __name__ == "__main__":
132
+ me = Me()
133
+ gr.ChatInterface(me.chat, type="messages").launch()
134
+
career_conversations/community_contributions/1_lab1_groq_llama.ipynb ADDED
@@ -0,0 +1,296 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# First Agentic AI workflow with Groq and Llama-3.3 LLM(Free of cost) "
8
+ ]
9
+ },
10
+ {
11
+ "cell_type": "code",
12
+ "execution_count": 1,
13
+ "metadata": {},
14
+ "outputs": [],
15
+ "source": [
16
+ "# First let's do an import\n",
17
+ "from dotenv import load_dotenv"
18
+ ]
19
+ },
20
+ {
21
+ "cell_type": "code",
22
+ "execution_count": null,
23
+ "metadata": {},
24
+ "outputs": [],
25
+ "source": [
26
+ "# Next it's time to load the API keys into environment variables\n",
27
+ "\n",
28
+ "load_dotenv(override=True)"
29
+ ]
30
+ },
31
+ {
32
+ "cell_type": "code",
33
+ "execution_count": null,
34
+ "metadata": {},
35
+ "outputs": [],
36
+ "source": [
37
+ "# Check the Groq API key\n",
38
+ "\n",
39
+ "import os\n",
40
+ "groq_api_key = os.getenv('GROQ_API_KEY')\n",
41
+ "\n",
42
+ "if groq_api_key:\n",
43
+ " print(f\"GROQ API Key exists and begins {groq_api_key[:8]}\")\n",
44
+ "else:\n",
45
+ " print(\"GROQ API Key not set\")\n",
46
+ " \n"
47
+ ]
48
+ },
49
+ {
50
+ "cell_type": "code",
51
+ "execution_count": 4,
52
+ "metadata": {},
53
+ "outputs": [],
54
+ "source": [
55
+ "# And now - the all important import statement\n",
56
+ "# If you get an import error - head over to troubleshooting guide\n",
57
+ "\n",
58
+ "from groq import Groq"
59
+ ]
60
+ },
61
+ {
62
+ "cell_type": "code",
63
+ "execution_count": 5,
64
+ "metadata": {},
65
+ "outputs": [],
66
+ "source": [
67
+ "# Create a Groq instance\n",
68
+ "groq = Groq()"
69
+ ]
70
+ },
71
+ {
72
+ "cell_type": "code",
73
+ "execution_count": 6,
74
+ "metadata": {},
75
+ "outputs": [],
76
+ "source": [
77
+ "# Create a list of messages in the familiar Groq format\n",
78
+ "\n",
79
+ "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
80
+ ]
81
+ },
82
+ {
83
+ "cell_type": "code",
84
+ "execution_count": null,
85
+ "metadata": {},
86
+ "outputs": [],
87
+ "source": [
88
+ "# And now call it!\n",
89
+ "\n",
90
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
91
+ "print(response.choices[0].message.content)\n"
92
+ ]
93
+ },
94
+ {
95
+ "cell_type": "code",
96
+ "execution_count": null,
97
+ "metadata": {},
98
+ "outputs": [],
99
+ "source": []
100
+ },
101
+ {
102
+ "cell_type": "code",
103
+ "execution_count": 8,
104
+ "metadata": {},
105
+ "outputs": [],
106
+ "source": [
107
+ "# And now - let's ask for a question:\n",
108
+ "\n",
109
+ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
110
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
111
+ ]
112
+ },
113
+ {
114
+ "cell_type": "code",
115
+ "execution_count": null,
116
+ "metadata": {},
117
+ "outputs": [],
118
+ "source": [
119
+ "# ask it\n",
120
+ "response = groq.chat.completions.create(\n",
121
+ " model=\"llama-3.3-70b-versatile\",\n",
122
+ " messages=messages\n",
123
+ ")\n",
124
+ "\n",
125
+ "question = response.choices[0].message.content\n",
126
+ "\n",
127
+ "print(question)\n"
128
+ ]
129
+ },
130
+ {
131
+ "cell_type": "code",
132
+ "execution_count": 10,
133
+ "metadata": {},
134
+ "outputs": [],
135
+ "source": [
136
+ "# form a new messages list\n",
137
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
138
+ ]
139
+ },
140
+ {
141
+ "cell_type": "code",
142
+ "execution_count": null,
143
+ "metadata": {},
144
+ "outputs": [],
145
+ "source": [
146
+ "# Ask it again\n",
147
+ "\n",
148
+ "response = groq.chat.completions.create(\n",
149
+ " model=\"llama-3.3-70b-versatile\",\n",
150
+ " messages=messages\n",
151
+ ")\n",
152
+ "\n",
153
+ "answer = response.choices[0].message.content\n",
154
+ "print(answer)\n"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": null,
160
+ "metadata": {},
161
+ "outputs": [],
162
+ "source": [
163
+ "from IPython.display import Markdown, display\n",
164
+ "\n",
165
+ "display(Markdown(answer))\n",
166
+ "\n"
167
+ ]
168
+ },
169
+ {
170
+ "cell_type": "markdown",
171
+ "metadata": {},
172
+ "source": [
173
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
174
+ " <tr>\n",
175
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
176
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
177
+ " </td>\n",
178
+ " <td>\n",
179
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
180
+ " <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
181
+ " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
182
+ " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
183
+ " Finally have 3 third LLM call propose the Agentic AI solution.\n",
184
+ " </span>\n",
185
+ " </td>\n",
186
+ " </tr>\n",
187
+ "</table>"
188
+ ]
189
+ },
190
+ {
191
+ "cell_type": "code",
192
+ "execution_count": 17,
193
+ "metadata": {},
194
+ "outputs": [],
195
+ "source": [
196
+ "# First create the messages:\n",
197
+ "\n",
198
+ "messages = [{\"role\": \"user\", \"content\": \"Give me a business area that might be ripe for an Agentic AI solution.\"}]\n",
199
+ "\n",
200
+ "# Then make the first call:\n",
201
+ "\n",
202
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
203
+ "\n",
204
+ "# Then read the business idea:\n",
205
+ "\n",
206
+ "business_idea = response.choices[0].message.content\n",
207
+ "\n",
208
+ "\n",
209
+ "# And repeat!"
210
+ ]
211
+ },
212
+ {
213
+ "cell_type": "code",
214
+ "execution_count": null,
215
+ "metadata": {},
216
+ "outputs": [],
217
+ "source": [
218
+ "\n",
219
+ "display(Markdown(business_idea))"
220
+ ]
221
+ },
222
+ {
223
+ "cell_type": "code",
224
+ "execution_count": 19,
225
+ "metadata": {},
226
+ "outputs": [],
227
+ "source": [
228
+ "# Update the message with the business idea from previous step\n",
229
+ "messages = [{\"role\": \"user\", \"content\": \"What is the pain point in the business area of \" + business_idea + \"?\"}]"
230
+ ]
231
+ },
232
+ {
233
+ "cell_type": "code",
234
+ "execution_count": 20,
235
+ "metadata": {},
236
+ "outputs": [],
237
+ "source": [
238
+ "# Make the second call\n",
239
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
240
+ "# Read the pain point\n",
241
+ "pain_point = response.choices[0].message.content\n"
242
+ ]
243
+ },
244
+ {
245
+ "cell_type": "code",
246
+ "execution_count": null,
247
+ "metadata": {},
248
+ "outputs": [],
249
+ "source": [
250
+ "display(Markdown(pain_point))\n"
251
+ ]
252
+ },
253
+ {
254
+ "cell_type": "code",
255
+ "execution_count": null,
256
+ "metadata": {},
257
+ "outputs": [],
258
+ "source": [
259
+ "# Make the third call\n",
260
+ "messages = [{\"role\": \"user\", \"content\": \"What is the Agentic AI solution for the pain point of \" + pain_point + \"?\"}]\n",
261
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
262
+ "# Read the agentic solution\n",
263
+ "agentic_solution = response.choices[0].message.content\n",
264
+ "display(Markdown(agentic_solution))"
265
+ ]
266
+ },
267
+ {
268
+ "cell_type": "code",
269
+ "execution_count": null,
270
+ "metadata": {},
271
+ "outputs": [],
272
+ "source": []
273
+ }
274
+ ],
275
+ "metadata": {
276
+ "kernelspec": {
277
+ "display_name": ".venv",
278
+ "language": "python",
279
+ "name": "python3"
280
+ },
281
+ "language_info": {
282
+ "codemirror_mode": {
283
+ "name": "ipython",
284
+ "version": 3
285
+ },
286
+ "file_extension": ".py",
287
+ "mimetype": "text/x-python",
288
+ "name": "python",
289
+ "nbconvert_exporter": "python",
290
+ "pygments_lexer": "ipython3",
291
+ "version": "3.12.10"
292
+ }
293
+ },
294
+ "nbformat": 4,
295
+ "nbformat_minor": 2
296
+ }
career_conversations/community_contributions/community.ipynb ADDED
@@ -0,0 +1,29 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Community contributions\n",
8
+ "\n",
9
+ "Thank you for considering contributing your work to the repo!\n",
10
+ "\n",
11
+ "Please add your code (modules or notebooks) to this directory and send me a PR, per the instructions in the guides.\n",
12
+ "\n",
13
+ "I'd love to share your progress with other students, so everyone can benefit from your projects.\n"
14
+ ]
15
+ },
16
+ {
17
+ "cell_type": "markdown",
18
+ "metadata": {},
19
+ "source": []
20
+ }
21
+ ],
22
+ "metadata": {
23
+ "language_info": {
24
+ "name": "python"
25
+ }
26
+ },
27
+ "nbformat": 4,
28
+ "nbformat_minor": 2
29
+ }
career_conversations/gradio.ipynb ADDED
File without changes
career_conversations/haggingfacekey.py ADDED
@@ -0,0 +1,12 @@
1
+ from dotenv import load_dotenv
2
+ from openai import OpenAI
3
+ import json
4
+ import os
5
+ import requests
6
+ from pypdf import PdfReader
7
+ import gradio as gr
8
+
9
+
10
+ load_dotenv(override=True)
11
+
12
+ print(os.getenv("HF_TOKEN"))
career_conversations/me/linkedin.pdf ADDED
Binary file (69.7 kB).
 
career_conversations/me/linkedin_santosh.pdf ADDED
Binary file (52.4 kB).
 
career_conversations/me/summary.txt ADDED
@@ -0,0 +1,2 @@
1
+ My name is Ed Donner. I'm an entrepreneur, software engineer and data scientist. I'm originally from London, England, but I moved to NYC in 2000.
2
+ I love all foods, particularly French food, but strangely I'm repelled by almost all forms of cheese. I'm not allergic, I just hate the taste! I make an exception for cream cheese and mozzarella though - cheesecake and pizza are the greatest.
career_conversations/me/summary_santosh.txt ADDED
File without changes
career_conversations/requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ requests
2
+ python-dotenv
3
+ gradio
4
+ pypdf
5
+ openai
community_contributions/1_lab1_groq_llama.ipynb ADDED
@@ -0,0 +1,296 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# First Agentic AI workflow with Groq and Llama-3.3 LLM(Free of cost) "
8
+ ]
9
+ },
10
+ {
11
+ "cell_type": "code",
12
+ "execution_count": 1,
13
+ "metadata": {},
14
+ "outputs": [],
15
+ "source": [
16
+ "# First let's do an import\n",
17
+ "from dotenv import load_dotenv"
18
+ ]
19
+ },
20
+ {
21
+ "cell_type": "code",
22
+ "execution_count": null,
23
+ "metadata": {},
24
+ "outputs": [],
25
+ "source": [
26
+ "# Next it's time to load the API keys into environment variables\n",
27
+ "\n",
28
+ "load_dotenv(override=True)"
29
+ ]
30
+ },
31
+ {
32
+ "cell_type": "code",
33
+ "execution_count": null,
34
+ "metadata": {},
35
+ "outputs": [],
36
+ "source": [
37
+ "# Check the Groq API key\n",
38
+ "\n",
39
+ "import os\n",
40
+ "groq_api_key = os.getenv('GROQ_API_KEY')\n",
41
+ "\n",
42
+ "if groq_api_key:\n",
43
+ " print(f\"GROQ API Key exists and begins {groq_api_key[:8]}\")\n",
44
+ "else:\n",
45
+ " print(\"GROQ API Key not set\")\n",
46
+ " \n"
47
+ ]
48
+ },
49
+ {
50
+ "cell_type": "code",
51
+ "execution_count": 4,
52
+ "metadata": {},
53
+ "outputs": [],
54
+ "source": [
55
+ "# And now - the all important import statement\n",
56
+ "# If you get an import error - head over to troubleshooting guide\n",
57
+ "\n",
58
+ "from groq import Groq"
59
+ ]
60
+ },
61
+ {
62
+ "cell_type": "code",
63
+ "execution_count": 5,
64
+ "metadata": {},
65
+ "outputs": [],
66
+ "source": [
67
+ "# Create a Groq instance\n",
68
+ "groq = Groq()"
69
+ ]
70
+ },
71
+ {
72
+ "cell_type": "code",
73
+ "execution_count": 6,
74
+ "metadata": {},
75
+ "outputs": [],
76
+ "source": [
77
+ "# Create a list of messages in the familiar Groq format\n",
78
+ "\n",
79
+ "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]"
80
+ ]
81
+ },
82
+ {
83
+ "cell_type": "code",
84
+ "execution_count": null,
85
+ "metadata": {},
86
+ "outputs": [],
87
+ "source": [
88
+ "# And now call it!\n",
89
+ "\n",
90
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
91
+ "print(response.choices[0].message.content)\n"
92
+ ]
93
+ },
94
+ {
95
+ "cell_type": "code",
96
+ "execution_count": null,
97
+ "metadata": {},
98
+ "outputs": [],
99
+ "source": []
100
+ },
101
+ {
102
+ "cell_type": "code",
103
+ "execution_count": 8,
104
+ "metadata": {},
105
+ "outputs": [],
106
+ "source": [
107
+ "# And now - let's ask for a question:\n",
108
+ "\n",
109
+ "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n",
110
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
111
+ ]
112
+ },
113
+ {
114
+ "cell_type": "code",
115
+ "execution_count": null,
116
+ "metadata": {},
117
+ "outputs": [],
118
+ "source": [
119
+ "# ask it\n",
120
+ "response = groq.chat.completions.create(\n",
121
+ " model=\"llama-3.3-70b-versatile\",\n",
122
+ " messages=messages\n",
123
+ ")\n",
124
+ "\n",
125
+ "question = response.choices[0].message.content\n",
126
+ "\n",
127
+ "print(question)\n"
128
+ ]
129
+ },
130
+ {
131
+ "cell_type": "code",
132
+ "execution_count": 10,
133
+ "metadata": {},
134
+ "outputs": [],
135
+ "source": [
136
+ "# form a new messages list\n",
137
+ "messages = [{\"role\": \"user\", \"content\": question}]\n"
138
+ ]
139
+ },
140
+ {
141
+ "cell_type": "code",
142
+ "execution_count": null,
143
+ "metadata": {},
144
+ "outputs": [],
145
+ "source": [
146
+ "# Ask it again\n",
147
+ "\n",
148
+ "response = groq.chat.completions.create(\n",
149
+ " model=\"llama-3.3-70b-versatile\",\n",
150
+ " messages=messages\n",
151
+ ")\n",
152
+ "\n",
153
+ "answer = response.choices[0].message.content\n",
154
+ "print(answer)\n"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": null,
160
+ "metadata": {},
161
+ "outputs": [],
162
+ "source": [
163
+ "from IPython.display import Markdown, display\n",
164
+ "\n",
165
+ "display(Markdown(answer))\n",
166
+ "\n"
167
+ ]
168
+ },
169
+ {
170
+ "cell_type": "markdown",
171
+ "metadata": {},
172
+ "source": [
173
+ "<table style=\"margin: 0; text-align: left; width:100%\">\n",
174
+ " <tr>\n",
175
+ " <td style=\"width: 150px; height: 150px; vertical-align: middle;\">\n",
176
+ " <img src=\"../assets/exercise.png\" width=\"150\" height=\"150\" style=\"display: block;\" />\n",
177
+ " </td>\n",
178
+ " <td>\n",
179
+ " <h2 style=\"color:#ff7800;\">Exercise</h2>\n",
180
+ " <span style=\"color:#ff7800;\">Now try this commercial application:<br/>\n",
181
+ " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.<br/>\n",
182
+ " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.<br/>\n",
183
+ " Finally have 3 third LLM call propose the Agentic AI solution.\n",
184
+ " </span>\n",
185
+ " </td>\n",
186
+ " </tr>\n",
187
+ "</table>"
188
+ ]
189
+ },
190
+ {
191
+ "cell_type": "code",
192
+ "execution_count": 17,
193
+ "metadata": {},
194
+ "outputs": [],
195
+ "source": [
196
+ "# First create the messages:\n",
197
+ "\n",
198
+ "messages = [{\"role\": \"user\", \"content\": \"Give me a business area that might be ripe for an Agentic AI solution.\"}]\n",
199
+ "\n",
200
+ "# Then make the first call:\n",
201
+ "\n",
202
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
203
+ "\n",
204
+ "# Then read the business idea:\n",
205
+ "\n",
206
+ "business_idea = response.choices[0].message.content\n",
207
+ "\n",
208
+ "\n",
209
+ "# And repeat!"
210
+ ]
211
+ },
212
+ {
213
+ "cell_type": "code",
214
+ "execution_count": null,
215
+ "metadata": {},
216
+ "outputs": [],
217
+ "source": [
218
+ "\n",
219
+ "display(Markdown(business_idea))"
220
+ ]
221
+ },
222
+ {
223
+ "cell_type": "code",
224
+ "execution_count": 19,
225
+ "metadata": {},
226
+ "outputs": [],
227
+ "source": [
228
+ "# Update the message with the business idea from previous step\n",
229
+ "messages = [{\"role\": \"user\", \"content\": \"What is the pain point in the business area of \" + business_idea + \"?\"}]"
230
+ ]
231
+ },
232
+ {
233
+ "cell_type": "code",
234
+ "execution_count": 20,
235
+ "metadata": {},
236
+ "outputs": [],
237
+ "source": [
238
+ "# Make the second call\n",
239
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
240
+ "# Read the pain point\n",
241
+ "pain_point = response.choices[0].message.content\n"
242
+ ]
243
+ },
244
+ {
245
+ "cell_type": "code",
246
+ "execution_count": null,
247
+ "metadata": {},
248
+ "outputs": [],
249
+ "source": [
250
+ "display(Markdown(pain_point))\n"
251
+ ]
252
+ },
253
+ {
254
+ "cell_type": "code",
255
+ "execution_count": null,
256
+ "metadata": {},
257
+ "outputs": [],
258
+ "source": [
259
+ "# Make the third call\n",
260
+ "messages = [{\"role\": \"user\", \"content\": \"What is the Agentic AI solution for the pain point of \" + pain_point + \"?\"}]\n",
261
+ "response = groq.chat.completions.create(model='llama-3.3-70b-versatile', messages=messages)\n",
262
+ "# Read the agentic solution\n",
263
+ "agentic_solution = response.choices[0].message.content\n",
264
+ "display(Markdown(agentic_solution))"
265
+ ]
266
+ },
267
+ {
268
+ "cell_type": "code",
269
+ "execution_count": null,
270
+ "metadata": {},
271
+ "outputs": [],
272
+ "source": []
273
+ }
274
+ ],
275
+ "metadata": {
276
+ "kernelspec": {
277
+ "display_name": ".venv",
278
+ "language": "python",
279
+ "name": "python3"
280
+ },
281
+ "language_info": {
282
+ "codemirror_mode": {
283
+ "name": "ipython",
284
+ "version": 3
285
+ },
286
+ "file_extension": ".py",
287
+ "mimetype": "text/x-python",
288
+ "name": "python",
289
+ "nbconvert_exporter": "python",
290
+ "pygments_lexer": "ipython3",
291
+ "version": "3.12.10"
292
+ }
293
+ },
294
+ "nbformat": 4,
295
+ "nbformat_minor": 2
296
+ }
community_contributions/community.ipynb ADDED
@@ -0,0 +1,29 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {},
6
+ "source": [
7
+ "# Community contributions\n",
8
+ "\n",
9
+ "Thank you for considering contributing your work to the repo!\n",
10
+ "\n",
11
+ "Please add your code (modules or notebooks) to this directory and send me a PR, per the instructions in the guides.\n",
12
+ "\n",
13
+ "I'd love to share your progress with other students, so everyone can benefit from your projects.\n"
14
+ ]
15
+ },
16
+ {
17
+ "cell_type": "markdown",
18
+ "metadata": {},
19
+ "source": []
20
+ }
21
+ ],
22
+ "metadata": {
23
+ "language_info": {
24
+ "name": "python"
25
+ }
26
+ },
27
+ "nbformat": 4,
28
+ "nbformat_minor": 2
29
+ }
gradio.ipynb ADDED
File without changes
haggingfacekey.py ADDED
@@ -0,0 +1,12 @@
1
+ from dotenv import load_dotenv
2
+ from openai import OpenAI
3
+ import json
4
+ import os
5
+ import requests
6
+ from pypdf import PdfReader
7
+ import gradio as gr
8
+
9
+
10
+ load_dotenv(override=True)
11
+
12
+ print(os.getenv("HF_TOKEN"))
me/linkedin.pdf ADDED
Binary file (69.7 kB).
 
me/linkedin_santosh.pdf ADDED
Binary file (52.4 kB).
 
me/summary.txt ADDED
@@ -0,0 +1,2 @@
1
+ My name is Ed Donner. I'm an entrepreneur, software engineer and data scientist. I'm originally from London, England, but I moved to NYC in 2000.
2
+ I love all foods, particularly French food, but strangely I'm repelled by almost all forms of cheese. I'm not allergic, I just hate the taste! I make an exception for cream cheese and mozzarella though - cheesecake and pizza are the greatest.
me/summary_santosh.txt ADDED
File without changes
requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ requests
2
+ python-dotenv
3
+ gradio
4
+ pypdf
5
+ openai