{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Unica Chatbot for Q&A" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This chatbot handles `Q&A`, and it is not limited to the knowledge in its training data: using `tavily`, it can also search the internet for relevant, up-to-date information." ] }, { "cell_type": "markdown", "metadata": { "jp-MarkdownHeadingCollapsed": true }, "source": [ "## Installing all the packages" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Packages installed successfully.\n" ] } ], "source": [ "# Adding all packages\n", "import sys\n", "import subprocess\n", "\n", "# Install packages from requirements.txt\n", "def install_packages(requirements_file='requirements.txt'):\n", "    try:\n", "        subprocess.check_call([sys.executable, '-m', 'pip', 'install', '-r', requirements_file])\n", "        print(\"Packages installed successfully.\")\n", "    except subprocess.CalledProcessError as e:\n", "        print(f\"Error installing packages: {e}\")\n", "\n", "# Call the function to install packages\n", "install_packages()\n" ] }, { "cell_type": "markdown", "metadata": { "jp-MarkdownHeadingCollapsed": true }, "source": [ "## Chatbot Logic" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# Importing the packages to be used\n", "import logging\n", "import os\n", "import markdown\n", "from dotenv import load_dotenv\n", "import gradio as gr\n", "from langchain_groq import ChatGroq\n", "from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper\n", "from langchain_community.tools.tavily_search import TavilySearchResults\n", "from langchain_core.messages import AnyMessage, SystemMessage, HumanMessage\n" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# Set up logging configuration\n", "logging.basicConfig(\n", "    level=logging.INFO,\n", "    
format='%(asctime)s - %(levelname)s - %(message)s',\n", "    datefmt='%Y-%m-%d %H:%M:%S'\n", ")\n", "logger = logging.getLogger(__name__)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# Initializing the API keys from the environment variables\n", "# Load environment variables from the .env.local file\n", "load_dotenv('.env.local')\n", "\n", "# Access the variables\n", "groq_api_key = os.getenv('GROQ_API_KEY')\n", "tavily_api_key = os.getenv('TAVILY_API_KEY')\n", "\n", "# LLM Initialization\n", "llm = ChatGroq(\n", "    model_name=\"llama-3.3-70b-versatile\",\n", "    groq_api_key=groq_api_key,\n", "    temperature=0\n", ")\n", "\n", "# Tavily search engine for the LLM\n", "tavily_search = TavilySearchAPIWrapper(tavily_api_key=tavily_api_key)\n", "search_tool = TavilySearchResults(max_results=3, api_wrapper=tavily_search)" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "# System Prompt\n", "system_prompt = \"\"\"\n", "You are Unica, a friendly and helpful assistant designed to support students on the Moodle platform. Your primary goal is to provide quick and accurate answers to students' study-related questions, helping them navigate their courses and resources efficiently.\n", "\n", "Guidelines:\n", "1. Understand the context of Moodle and the student's coursework.\n", "2. Be concise and clear in your responses.\n", "3. Provide relevant information directly addressing the student's question.\n", "4. Maintain a positive and encouraging tone.\n", "5. Offer study tips when appropriate.\n", "6. Handle unknowns gracefully by suggesting resources or encouraging further inquiry.\n", "7. Respect privacy and maintain a professional demeanor.\n", "8. Encourage engagement with course materials and resources.\n", "9. 
Use Markdown formatting to enhance the readability of your responses.\n", "\n", "Example Responses:\n", "- Student: \"How do I submit my assignment on Moodle?\"\n", "  Unica: \"To submit your assignment, navigate to the course page, find the assignment link, and click on **'Submit assignment'**. Follow the prompts to upload your file. If you encounter any issues, feel free to ask for further assistance!\"\n", "\n", "- Student: \"What are the upcoming deadlines for my course?\"\n", "  Unica: \"To view upcoming deadlines, check the course calendar or the announcements section on your Moodle dashboard. If you have specific questions about a deadline, it's best to contact your instructor.\"\n", "\"\"\"" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "# Agent Class\n", "from langchain_core.messages import ToolMessage\n", "\n", "class Agent:\n", "    def __init__(self, model, tools, system_prompt=\"\"):\n", "        self.system_prompt = system_prompt\n", "        # Bind the tools so the model can request them (e.g. the Tavily search tool)\n", "        self.model = model.bind_tools(tools) if tools else model\n", "        self.tools = {t.name: t for t in tools}\n", "\n", "    def call_groq(self, messages):\n", "        if self.system_prompt:\n", "            messages = [SystemMessage(content=self.system_prompt)] + messages\n", "        logger.info(f\"Calling Groq with messages: {messages}\")\n", "        message = self.model.invoke(messages)\n", "        logger.info(f\"Groq response: {message}\")\n", "        return message\n", "\n", "    def handle_query(self, user_query):\n", "        messages = [HumanMessage(content=user_query)]\n", "        response = self.call_groq(messages)\n", "        # If the model requested a tool (e.g. a web search), run it and feed\n", "        # the result back so the model can compose its final answer\n", "        while getattr(response, 'tool_calls', None):\n", "            messages.append(response)\n", "            for tool_call in response.tool_calls:\n", "                tool = self.tools[tool_call['name']]\n", "                result = tool.invoke(tool_call['args'])\n", "                messages.append(ToolMessage(content=str(result), tool_call_id=tool_call['id']))\n", "            response = self.call_groq(messages)\n", "        return response.content\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "# Initialize Agent\n", "agent = Agent(model=llm, tools=[search_tool], system_prompt=system_prompt)\n", "\n", "def render_markdown(markdown_text, is_user=False):\n", "    # Convert Markdown to HTML\n", "    html_content = markdown.markdown(markdown_text)\n", "    # Wrap the HTML content in a chat bubble layout\n", "    bubble_class = \"user-bubble\" if is_user else \"assistant-bubble\"\n", "    bubble_html = 
f\"\"\"\n", "