{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Welcome to the start of your adventure in Agentic AI" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Are you ready for action??

\n", " Have you completed all the setup steps in the setup folder?
\n", " Have you read the README? Many common questions are answered here!
\n", " Have you checked out the guides in the guides folder?
\n", " Well in that case, you're ready!!\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

This code is a live resource - keep an eye out for my updates

\n", " I push updates regularly. As people ask questions or have problems, I add more examples and improve explanations. As a result, the code below might not be identical to the videos, as I've added more steps and better comments. Consider this like an interactive book that accompanies the lectures.

\n", " I try to send emails regularly with important updates related to the course. You can find this in the 'Announcements' section of Udemy in the left sidebar. You can also choose to receive my emails via your Notification Settings in Udemy. I'm respectful of your inbox and always try to add value with my emails!\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### And please do remember to contact me if I can help\n", "\n", "And I love to connect: https://www.linkedin.com/in/eddonner/\n", "\n", "\n", "### New to Notebooks like this one? Head over to the guides folder!\n", "\n", "Just to check you've already added the Python and Jupyter extensions to Cursor, if not already installed:\n", "- Open extensions (View >> extensions)\n", "- Search for python, and when the results show, click on the ms-python one, and Install it if not already installed\n", "- Search for jupyter, and when the results show, click on the Microsoft one, and Install it if not already installed \n", "Then View >> Explorer to bring back the File Explorer.\n", "\n", "And then:\n", "1. Click where it says \"Select Kernel\" near the top right, and select the option called `.venv (Python 3.12.9)` or similar, which should be the first choice or the most prominent choice. You may need to choose \"Python Environments\" first.\n", "2. Click in each \"cell\" below, starting with the cell immediately below this text, and press Shift+Enter to run\n", "3. Enjoy!\n", "\n", "After you click \"Select Kernel\", if there is no option like `.venv (Python 3.12.9)` then please do the following: \n", "1. On Mac: From the Cursor menu, choose Settings >> VS Code Settings (NOTE: be sure to select `VSCode Settings` not `Cursor Settings`); \n", "On Windows PC: From the File menu, choose Preferences >> VS Code Settings(NOTE: be sure to select `VSCode Settings` not `Cursor Settings`) \n", "2. In the Settings search bar, type \"venv\" \n", "3. In the field \"Path to folder with a list of Virtual Environments\" put the path to the project root, like C:\\Users\\username\\projects\\agents (on a Windows PC) or /Users/username/projects/agents (on Mac or Linux). \n", "And then try again.\n", "\n", "Having problems with missing Python versions in that list? Have you ever used Anaconda before? It might be interferring. 
Quit Cursor, bring up a new command line, and make sure that your Anaconda environment is deactivated: \n", "`conda deactivate` \n", "And if you still have any problems with conda and python versions, it's possible that you will need to run this too: \n", "`conda config --set auto_activate_base false` \n", "and then from within the Agents directory, you should be able to run `uv python list` and see the Python 3.12 version." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# First let's do an import. If you get an Import Error, double check that your Kernel is correct..\n", "\n", "from dotenv import load_dotenv" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# Next it's time to load the API keys into environment variables\n", "# If this returns false, see the next cell!\n", "\n", "load_dotenv(override=True)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Wait, did that just output `False`??\n", "\n", "If so, the most common reason is that you didn't save your `.env` file after adding the key! Be sure to have saved.\n", "\n", "Also, make sure the `.env` file is named precisely `.env` and is in the project root directory (`agents`)\n", "\n", "By the way, your `.env` file should have a stop symbol next to it in Cursor on the left, and that's actually a good thing: that's Cursor saying to you, \"hey, I realize this is a file filled with secret information, and I'm not going to send it to an external AI to suggest changes, because your keys should not be shown to anyone else.\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Final reminders

\n", " 1. If you're not confident about Environment Variables or Web Endpoints / APIs, please read Topics 3 and 5 in this technical foundations guide.
\n", " 2. If you want to use AIs other than OpenAI, like Gemini, DeepSeek or Ollama (free), please see the first section in this AI APIs guide.
\n", " 3. If you ever get a Name Error in Python, you can always fix it immediately; see the last section of this Python Foundations guide and follow both tutorials and exercises.
\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "OpenAI API Key exists and begins sk-proj-\n" ] } ], "source": [ "# Check the key - if you're not using OpenAI, check whichever key you're using! Ollama doesn't need a key.\n", "\n", "import os\n", "openai_api_key = os.getenv('OPENAI_API_KEY')\n", "\n", "if openai_api_key:\n", " print(f\"OpenAI API Key exists and begins {openai_api_key[:8]}\")\n", "else:\n", " print(\"OpenAI API Key not set - please head to the troubleshooting guide in the setup folder\")\n", " \n" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# And now - the all important import statement\n", "# If you get an import error - head over to troubleshooting in the Setup folder\n", "# Even for other LLM providers like Gemini, you still use this OpenAI import - see Guide 9 for why\n", "\n", "from openai import OpenAI" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# And now we'll create an instance of the OpenAI class\n", "# If you're not sure what it means to create an instance of a class - head over to the guides folder (guide 6)!\n", "# If you get a NameError - head over to the guides folder (guide 6)to learn about NameErrors - always instantly fixable\n", "# If you're not using OpenAI, you just need to slightly modify this - precise instructions are in the AI APIs guide (guide 9)\n", "\n", "openai = OpenAI()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Create a list of messages in the familiar OpenAI format\n", "\n", "messages = [{\"role\": \"user\", \"content\": \"What is 2+2?\"}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "2 + 2 equals 4.\n" ] } ], "source": [ "# And now call it! 
Any problems, head to the troubleshooting guide\n", "# This uses GPT 4.1 nano, the incredibly cheap model\n", "# The APIs guide (guide 9) has exact instructions for using even cheaper or free alternatives to OpenAI\n", "# If you get a NameError, head to the guides folder (guide 6) to learn about NameErrors - always instantly fixable\n", "\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4.1-nano\",\n", " messages=messages\n", ")\n", "\n", "print(response.choices[0].message.content)\n" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# And now - let's ask for a question:\n", "\n", "question = \"Please propose a hard, challenging question to assess someone's IQ. Respond only with the question.\"\n", "messages = [{\"role\": \"user\", \"content\": question}]\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# ask it - this uses GPT 4.1 mini, still cheap but more powerful than nano\n", "\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4.1-mini\",\n", " messages=messages\n", ")\n", "\n", "question = response.choices[0].message.content\n", "\n", "print(question)\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "# form a new messages list\n", "messages = [{\"role\": \"user\", \"content\": question}]\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Ask it again\n", "\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4.1-mini\",\n", " messages=messages\n", ")\n", "\n", "answer = response.choices[0].message.content\n", "print(answer)\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from IPython.display import Markdown, display\n", "\n", "display(Markdown(answer))\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "# Congratulations!\n", "\n", "That was a small, 
simple step in the direction of Agentic AI, with your new environment!\n", "\n", "Next time things get more interesting..." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Exercise

\n", " Now try this commercial application:
\n", " First ask the LLM to pick a business area that might be worth exploring for an Agentic AI opportunity.
\n", " Then ask the LLM to present a pain-point in that industry - something challenging that might be ripe for an Agentic solution.
\n", " Finally have 3 third LLM call propose the Agentic AI solution.
\n", " We will cover this at up-coming labs, so don't worry if you're unsure.. just give it a try!\n", "
\n", "
" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [], "source": [ "# Helper function to create bilingual messages\n", "def create_bilingual_messages(user_content):\n", " \"\"\"\n", " Creates a messages list with system prompt for bilingual (Korean/English) responses\n", " \"\"\"\n", " return [\n", " {\n", " \"role\": \"system\", \n", " \"content\": \"You must always respond in both Korean and English. Provide your answer in Korean first, then provide the same answer in English. Use clear section headers like '### 한국어:' and '### English:' to separate the languages.\"\n", " },\n", " {\n", " \"role\": \"user\", \n", " \"content\": user_content\n", " }\n", " ]\n", "\n", "# Example usage:\n", "# messages = create_bilingual_messages(\"Your question here\")\n" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### 한국어: \n", "WPT(무선 전력 전송) 분야에서 에이전틱 AI(Agentic AI, 자율적 인공지능) 기회가 있을 만한 비즈니스 영역 중 하나는 **스마트 전력 네트워크 최적화 및 관리**입니다.\n", "\n", "무선 전력 전송 시스템은 여러 장치에 비효율 없이 전력을 분배하는 것이 중요합니다. 에이전틱 AI는 실시간으로 여러 센서와 디바이스 데이터를 분석하여 최적의 전력 배분, 네트워크 장애 감지, 예측적 유지보수, 그리고 동적 환경 변화에 따른 효율적인 전력 조절 등을 자율적으로 수행할 수 있습니다. 특히 스마트 시티, IoT 디바이스 혹은 전기차 충전 인프라에서 무선 전력 전송 네트워크의 효율성을 극대화하는 데 큰 역할을 할 수 있습니다.\n", "\n", "이외에도 에이전틱 AI가 WPT 및 관련 인프라의 보안 강화, 사용자 맞춤 전력 서비스 제공, 에너지 소비 패턴 분석 및 최적화 등 다양한 영역에서 혁신을 이끌 수 있습니다.\n", "\n", "### English: \n", "One promising business area in the WPT (Wireless Power Transmission) field for an Agentic AI opportunity is **smart power network optimization and management**.\n", "\n", "Wireless power transmission systems require efficient distribution of power across multiple devices. Agentic AI can autonomously analyze real-time data from various sensors and devices to optimize power allocation, detect network faults, perform predictive maintenance, and dynamically adjust power flow according to environmental changes. 
This is particularly valuable in smart cities, IoT devices, or electric vehicle charging infrastructures, where maximizing the efficiency of wireless power networks is critical.\n", "\n", "Additionally, Agentic AI can drive innovation in WPT by enhancing security of wireless power systems and infrastructure, delivering personalized power services to users, and optimizing energy consumption patterns among other possibilities." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# First create the messages:\n", "\n", "messages = create_bilingual_messages(\"Pick a business area in WPT (Wireless power transmission) field that might worth exploring for an Agentic AI opportunity.\")\n", "\n", "# Then make the first call:\n", "\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4.1-mini\",\n", " messages=messages\n", ")\n", "\n", "# Then read the business idea:\n", "\n", "business_idea = response.choices[0].message.content\n", "\n", "display(Markdown(business_idea))\n", "\n" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### 한국어: \n", "WPT(무선 전력 전송) 분야에서 중요한 페인 포인트 중 하나는 **복잡한 다중 장치 전력 분배의 실시간 최적화와 장애 대응의 어려움**입니다. \n", "무선 전력 네트워크가 여러 디바이스에 동시에 전력을 공급할 때, 각 장치의 전력 요구량과 네트워크 상태가 지속적으로 변하기 때문에 전력 분배의 효율성을 유지하기 어렵습니다. 또한, 네트워크 내 작은 이상 신호나 장애를 빠르게 감지하고 대응하지 못하면 전력 낭비나 서비스 중단으로 이어지는 위험이 큽니다. \n", "이 문제는 특히 IoT가 확대되고, 전기차 충전 및 스마트 시티 인프라가 복잡해질수록 더욱 심각해지며, 수동적인 관리 체계로는 한계가 있습니다.\n", "\n", "에이전틱 AI는 이러한 상황에서 실시간 데이터를 자율적으로 분석하고, 동적 환경 변화에 맞춰 최적의 전력 분배 전략을 실행하며, 장애를 조기에 감지하여 예측 가능한 유지보수를 가능하게 할 수 있습니다.\n", "\n", "### English: \n", "A major pain point in the WPT (Wireless Power Transmission) industry is **the difficulty of real-time optimization and fault response in complex multi-device power distribution**. 
\n", "When wireless power networks supply power to multiple devices simultaneously, the power demands and network conditions of each device continuously fluctuate, making it challenging to maintain efficient power allocation. Additionally, failure to promptly detect and address minor anomalies or faults within the network can lead to power wastage or service interruptions. \n", "This issue becomes increasingly critical as IoT expands, electric vehicle charging and smart city infrastructures become more complex, and purely manual management systems reach their limits.\n", "\n", "Agentic AI can autonomously analyze real-time data in such situations, execute optimal power distribution strategies adapted to dynamic environmental changes, and detect faults early enough to enable predictive maintenance." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# ask the LLM to propose a pain-point in the given industry\n", "\n", "messages = create_bilingual_messages(f\"Please propose a pain-point in the given industry: {business_idea}\")\n", "\n", "response = openai.chat.completions.create(\n", " model=\"gpt-4.1-mini\",\n", " messages=messages)\n", "\n", "pain_point = response.choices[0].message.content\n", "\n", "display(Markdown(pain_point))\n", "\n" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/markdown": [ "### 한국어: \n", "Agentic AI 솔루션 제안: \n", "\n", "1. **실시간 데이터 통합 및 분석 에이전트** \n", "다중 센서와 IoT 디바이스로부터 전력 사용량, 환경 상태, 네트워크 상태 데이터를 수집하는 에이전트를 배치합니다. 이 에이전트는 실시간으로 데이터를 통합하고 이상 징후를 탐지하며, 복잡한 다변량 시계열 데이터를 AI 기반 예측 모델에 입력합니다. \n", "\n", "2. **동적 전력 분배 최적화 에이전트** \n", "수집된 데이터를 바탕으로 각 디바이스별 전력 요구량과 네트워크 상태를 고려한 최적 전력 분배 계획을 실시간으로 산출합니다. 강화학습(RL) 또는 최적화 알고리즘을 활용해 에너지 효율과 서비스 품질을 극대화하는 전략을 개발, 적용합니다. \n", "\n", "3. **장애 예측 및 대응 에이전트** \n", "이상 신호나 장애 패턴을 빠르게 탐지해 자동으로 경고를 발송하고, 자체 진단 후 재분배 전략을 실행하거나 문제 발생 가능 구간을 사전에 차단하여 장애 확산을 방지합니다. 또한, 단순 알림을 넘어 예측 유지보수까지 실행할 수 있도록 설계합니다. \n", "\n", "4. 
**모듈화된 협업 시스템** \n", "각 에이전트가 독립적으로 작업하면서도 상호 연동하는 구조를 가집니다. 예를 들어, 장애 예측 에이전트가 이슈를 발견하면 동적 분배 에이전트에 즉시 정보를 전달하여 전력 재배분을 유도합니다. \n", "\n", "5. **인간-에이전트 인터페이스** \n", "운영자가 에이전트의 권고사항을 모니터링하고 수동 개입할 수 있는 대시보드를 제공합니다. AI의 결정 과정과 현재 상태를 투명하게 시각화하여 신뢰도를 높이며, 비상 상황에서는 신속한 대응을 가능하게 합니다. \n", "\n", "이러한 Agentic AI 시스템은 무선 전력 네트워크의 복잡한 환경 변화에 유연하게 대응하며, 수동 처리 한계를 극복해 전력 분배 효율성과 신뢰성을 획기적으로 개선할 수 있습니다. \n", "\n", "---\n", "\n", "### English: \n", "Proposed Agentic AI Solution: \n", "\n", "1. **Real-time Data Integration and Analysis Agent** \n", "Deploy agents that gather power consumption, environmental conditions, and network status data from multiple sensors and IoT devices. These agents integrate real-time data, detect anomalies, and feed complex multivariate time-series data into AI-based predictive models. \n", "\n", "2. **Dynamic Power Distribution Optimization Agent** \n", "Based on collected data, the agent calculates real-time optimal power allocation plans considering each device’s power demand and network conditions. It uses reinforcement learning or optimization algorithms to develop and apply strategies maximizing energy efficiency and service quality. \n", "\n", "3. **Fault Prediction and Response Agent** \n", "Rapidly detects abnormal signals or fault patterns, automatically issues alerts, performs self-diagnosis, and executes redistribution strategies or pre-emptively isolates potential fault zones to prevent fault propagation. It is designed to enable predictive maintenance beyond simple notifications. \n", "\n", "4. **Modular Collaborative System** \n", "Each agent operates independently but interacts seamlessly. For instance, the fault prediction agent immediately communicates detected issues to the dynamic distribution agent, prompting power reallocation. \n", "\n", "5. **Human-Agent Interface** \n", "Provide dashboards where operators can monitor agent recommendations and intervene manually if needed. 
Visualization of AI decision processes and current system status enhances trust and allows swift response during emergencies. \n", "\n", "This Agentic AI system flexibly adapts to complex changes within wireless power networks, overcoming manual management limitations to drastically improve power distribution efficiency and reliability." ], "text/plain": [ "" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "# have a third LLM call propose the Agentic AI solution. \n", "\n", "messages = create_bilingual_messages(f\"Propose an Agentic AI solution for this pain point: {pain_point}\")\n", "\n", "response = openai.chat.completions.create(\n", "    model=\"gpt-4.1-mini\",\n", "    messages=messages)\n", "\n", "agentic_solution = response.choices[0].message.content\n", "\n", "display(Markdown(agentic_solution))\n", "\n" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.12" } }, "nbformat": 4, "nbformat_minor": 2 }