SemanticKernelFinal
Convert natural language queries to Cosmos DB SQL
None defined yet.
A sophisticated AI-powered chatbot built with Microsoft Semantic Kernel that intelligently queries databases using both predefined SQL templates and dynamic query generation. The system includes RAG capabilities, analytics dashboards, and semantic query clustering.
This project implements an intelligent database query assistant that leverages Large Language Models (LLMs) to interact with data stored in Azure Cosmos DB. The chatbot can understand natural language queries and either use predefined SQL templates or generate custom queries on the fly.
Langchain RAG + Ollama Chatbot

Cloud Deployment -
Local Deployment -
- Predefined Query Plugin: contains template SQL queries with parameters that the LLM fills based on user intent
- Dynamic Query Generator: allows the LLM to construct SQL queries from scratch for complex requests
- RAG Plugin: retrieves relevant context from historical queries to improve responses
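A minimal sketch of how a predefined-query plugin might map the LLM's parameter choices onto parameterized Cosmos DB SQL. The template names and SQL below are illustrative assumptions, not the project's actual templates; the output shape matches what the `azure-cosmos` SDK's `query_items` call accepts.

```python
# Hypothetical template registry: names and SQL are illustrative, not the
# project's actual plugin definitions.
QUERY_TEMPLATES = {
    "converters_by_type": {
        "sql": "SELECT * FROM c WHERE c.type = @type",
        "parameters": ["@type"],
    },
}

def build_query(template_name: str, **values) -> dict:
    """Fill a predefined SQL template with parameters chosen by the LLM,
    returning the query text plus the parameter list azure-cosmos expects."""
    template = QUERY_TEMPLATES[template_name]
    missing = [p for p in template["parameters"] if p.lstrip("@") not in values]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return {
        "query": template["sql"],
        "parameters": [
            {"name": p, "value": values[p.lstrip("@")]}
            for p in template["parameters"]
        ],
    }
```

Using Cosmos DB's `@name` parameter syntax rather than string interpolation keeps LLM-supplied values out of the SQL text itself; the returned dict can be passed straight to `container.query_items(query=..., parameters=...)`.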
1. User submits a natural language query
2. Semantic Kernel analyzes intent
3. System decides between:
   - Using a predefined SQL template (parameter filling)
   - Generating a new SQL query dynamically
4. Query executes against Cosmos DB
5. Query and result are stored for analytics and RAG
6. Response is returned to the user
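The routing decision in the flow above can be sketched as a simple dispatcher: if the detected intent has a predefined template, use it; otherwise fall back to dynamic generation. The intent names and the `generate_sql` callback are hypothetical stand-ins for the project's actual intent detection and LLM generation step.

```python
# Illustrative intent-to-template mapping; the real system's intents and
# SQL are not shown in this README.
TEMPLATE_INTENTS = {
    "find_converters": "SELECT * FROM c WHERE c.type = @type",
}

def route(intent: str, user_query: str, generate_sql) -> tuple:
    """Decide between a predefined SQL template and dynamic generation.

    Returns a (strategy, sql) pair, where strategy is "template" or
    "dynamic" and generate_sql is the LLM-backed fallback generator.
    """
    if intent in TEMPLATE_INTENTS:
        return ("template", TEMPLATE_INTENTS[intent])
    # No template matches this intent: let the LLM construct SQL from scratch.
    return ("dynamic", generate_sql(user_query))
```

The returned strategy label is also useful downstream, since the analytics layer can log which path served each request.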
- Query Storage: all queries logged to Cosmos DB with metadata
- Error Monitoring: track and analyze failed queries
- Semantic Clustering: queries grouped by semantic similarity using RAG embeddings
- Usage Patterns: identify common query types and user behaviors
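The semantic-clustering step can be illustrated with a small sketch: given embedding vectors for logged queries (produced by whatever embedding model the RAG plugin uses), group queries whose cosine similarity to a cluster's first member exceeds a threshold. This greedy scheme is an assumption for illustration, not necessarily the project's clustering algorithm.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def cluster_queries(embeddings, threshold=0.85):
    """Greedily group query embeddings: each query joins the first cluster
    whose representative (first member) is similar enough, else starts a
    new cluster. Returns clusters as lists of query indices."""
    clusters = []
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            if cosine(embeddings[cluster[0]], emb) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

With real query logs, each cluster's members can then be counted to surface the common query types mentioned above.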
- Convert natural language queries to Cosmos DB SQL
- Add, update, or delete converter data with metadata sync
- Analyze and visualize chatbot interactions
- Convert natural language queries to Cosmos DB SQL
- Find LED converter information and recommendations