Semantic Kernel Chatbot with Cosmos DB

A sophisticated AI-powered chatbot built with Microsoft Semantic Kernel that intelligently queries databases using both predefined SQL templates and dynamic query generation. The system includes RAG capabilities, analytics dashboards, and semantic query clustering.

Overview

This project implements an intelligent database query assistant that leverages Large Language Models (LLMs) to interact with data stored in Azure Cosmos DB. The chatbot can understand natural language queries and either use predefined SQL templates or generate custom queries on the fly.

Key Features

Intelligent Query System

  • Semantic Kernel Integration: Built on Microsoft Semantic Kernel framework for orchestrating AI workflows
  • Template-based: Predefined SQL queries with parameter filling by the LLM
  • Dynamic Generation: LLM generates custom SQL queries for complex or novel requests
  • Cosmos DB Backend: All data stored and managed in Azure Cosmos DB
  • Query History Storage: All queries stored in Cosmos DB for continuous learning and semantic clustering
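The template-based path above can be sketched as a lookup into a set of predefined Cosmos DB SQL templates, with the LLM-extracted values bound as query parameters rather than spliced into the SQL string. The template names and the `fill_template` helper below are illustrative, not the project's actual code:

```python
# Minimal sketch of template-based querying: the LLM picks a template name
# and supplies parameter values; the code binds them in the parameterized
# form the Azure Cosmos DB SDK accepts (query text + parameters list).
QUERY_TEMPLATES = {
    "orders_by_customer": (
        "SELECT * FROM c WHERE c.customerId = @customerId "
        "AND c.orderDate >= @since"
    ),
}

def fill_template(name: str, params: dict) -> dict:
    """Return the SQL text and Cosmos-style parameter list for a template."""
    sql = QUERY_TEMPLATES[name]
    return {
        "query": sql,
        "parameters": [{"name": f"@{k}", "value": v} for k, v in params.items()],
    }

spec = fill_template("orders_by_customer", {"customerId": "c-42", "since": "2024-01-01"})
```

Binding parameters this way, instead of interpolating LLM output directly into SQL text, also limits the injection surface of model-generated values.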

RAG Implementation

  • Local Ollama Integration: Run models locally for enhanced privacy and reduced costs
  • Retrieval-Augmented Generation: Improves response accuracy by retrieving relevant context
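The retrieval step can be sketched as cosine similarity between an embedding of the new question and embeddings of stored queries. The embedding function is deliberately left out here (in this project the vectors would come from the local Ollama model); the tiny hand-written vectors are stand-ins:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, history, k=2):
    """Return the k stored query texts most similar to the new query vector.
    `history` is a list of (text, embedding_vector) pairs."""
    ranked = sorted(history, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Stand-in embeddings; a real system would get these from the embedding model.
history = [
    ("list all orders", [1.0, 0.0]),
    ("count users", [0.0, 1.0]),
    ("show recent orders", [0.9, 0.1]),
]
top = retrieve([1.0, 0.0], history, k=2)  # → ["list all orders", "show recent orders"]
```

The retrieved texts would then be prepended to the LLM prompt as context, which is the "augmented" part of retrieval-augmented generation.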

Analytics Dashboard

  • Query Monitoring: Track all queries made to the system
  • Error Tracking: Comprehensive error monitoring and logging
  • Semantic Clustering: Queries are semantically clustered using RAG to identify patterns and common use cases
  • Usage Insights: Understand how users interact with the chatbot
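The semantic clustering step can be sketched as greedy grouping: each embedded query joins the first existing cluster whose centroid is within a similarity threshold, otherwise it starts a new cluster. This is a simplification for illustration, not necessarily the algorithm the dashboard uses:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_queries(embedded, threshold=0.9):
    """Greedy threshold clustering.
    `embedded` is a list of (query_text, vector); returns lists of texts."""
    clusters = []  # each cluster: {"centroid": first member's vector, "members": [...]}
    for text, vec in embedded:
        for c in clusters:
            if cosine(vec, c["centroid"]) >= threshold:
                c["members"].append(text)
                break
        else:
            clusters.append({"centroid": vec, "members": [text]})
    return [c["members"] for c in clusters]

groups = cluster_queries([
    ("list open orders", [1.0, 0.0]),
    ("show open orders", [0.99, 0.05]),
    ("count users", [0.0, 1.0]),
])
```

Grouping like this is what lets the dashboard surface "common use cases": each cluster corresponds to one family of semantically equivalent questions.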

Architecture

[Diagram] Semantic Kernel chatbot architecture

[Diagram] LangChain RAG + Ollama chatbot architecture

Technical Stack

Cloud Deployment

  • AI Framework: Microsoft Semantic Kernel
  • LLM: Azure AI Foundry - GPT-4o-mini
  • Database: Azure Cosmos DB
  • Analytics: Custom dashboard for monitoring and clustering

Local Deployment

  • AI Framework: LangChain
  • LLM: Ollama - Llama 3
  • RAG: Vector embeddings for semantic search

Core Components

  1. Semantic Kernel Plugins

The system includes custom plugins that enable the LLM to interact with the database:

  • Predefined Query Plugin: Contains template SQL queries with parameters that the LLM fills based on user intent
  • Dynamic Query Generator: Allows the LLM to construct SQL queries from scratch for complex requests
  • RAG Plugin: Retrieves relevant context from historical queries to improve responses
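A plugin of this kind can be sketched as a class whose public methods the kernel exposes to the model as callable tools. In Semantic Kernel's Python SDK each method would carry a `@kernel_function` decorator with a description the planner uses; the decorator is omitted here to keep the sketch dependency-free, and all names are illustrative:

```python
class DatabasePlugin:
    """Sketch of the plugin surface offered to the LLM: one method per capability."""

    def __init__(self, templates, history):
        self.templates = templates  # template name -> parameterized SQL text
        self.history = history      # past (query_text, result) pairs for RAG

    # In Semantic Kernel this would be decorated with @kernel_function(description=...)
    # so the model knows when to call it and with which arguments.
    def predefined_query(self, name: str) -> str:
        """Return the SQL template the LLM selected for parameter filling."""
        return self.templates[name]

    def dynamic_query(self, sql: str) -> str:
        """Accept LLM-generated SQL. Real code would validate it and execute
        it against Cosmos DB; the sketch just echoes it back."""
        return sql

    def recall_similar(self, text: str) -> list:
        """Stand-in for RAG recall: a real system would rank past queries by
        embedding similarity; this keyword filter only mimics the shape."""
        return [q for q, _ in self.history if text.lower() in q.lower()]

plugin = DatabasePlugin(
    {"count_users": "SELECT VALUE COUNT(1) FROM c WHERE c.type = @type"},
    [("show open orders", "..."), ("count active users", "...")],
)
```

Once registered with the kernel, these methods become the LLM's only path to the database, which is what makes the query flow auditable.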

  2. Query Processing Flow

  1. User submits a natural language query
  2. Semantic Kernel analyzes intent
  3. System decides between:
     • Using a predefined SQL template (parameter filling)
     • Generating a new SQL query dynamically
  4. Query executes against Cosmos DB
  5. Query and result are stored for analytics and RAG
  6. Response is returned to the user
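The decision step in the flow above can be sketched as a simple router: if the classified intent matches a known template name, use the template path; otherwise fall back to dynamic generation. The intent string and the generator are stubbed, since in the real system both are produced by LLM calls:

```python
def route_query(intent: str, params: dict, templates: dict, generate_sql) -> dict:
    """Pick the predefined template when one matches the intent,
    otherwise delegate to the (stubbed) dynamic SQL generator."""
    if intent in templates:
        return {"mode": "template", "query": templates[intent], "parameters": params}
    return {"mode": "dynamic", "query": generate_sql(intent, params), "parameters": {}}

# Stub standing in for the LLM's dynamic query generation.
fake_generator = lambda intent, params: f"SELECT * FROM c /* generated for: {intent} */"

templates = {"top_customers": "SELECT TOP 10 * FROM c ORDER BY c.totalSpend DESC"}
routed = route_query("top_customers", {}, templates, fake_generator)
```

Returning a `mode` field alongside the query makes it trivial to log which path was taken, which feeds directly into the analytics described below.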

  3. Analytics & Monitoring

  • Query Storage: All queries logged to Cosmos DB with metadata
  • Error Monitoring: Track and analyze failed queries
  • Semantic Clustering: Queries grouped by semantic similarity using RAG embeddings
  • Usage Patterns: Identify common query types and user behaviors
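The logged record might look like the following document shape (field names are assumptions, not the project's actual schema); storing the embedding alongside the query text is what later enables both RAG retrieval and semantic clustering from the same collection:

```python
import datetime
import uuid

def make_query_log(user_query, sql, mode, success, error=None, embedding=None):
    """Build a Cosmos DB-ready document describing one handled query."""
    return {
        "id": str(uuid.uuid4()),   # Cosmos DB documents need a unique id
        "userQuery": user_query,
        "sql": sql,
        "mode": mode,              # "template" or "dynamic"
        "success": success,
        "error": error,            # populated only for failed queries
        "embedding": embedding,    # vector reused for clustering and RAG
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = make_query_log("show my orders", "SELECT * FROM c", "template", True)
```

Filtering this collection on `success == False` gives the error-monitoring view, and grouping on the stored embeddings gives the clustering view, so one log schema serves all three dashboard features.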
