santanche committed on
Commit a30a065 · 1 parent: 6459d83

feat (start): first setup

Files changed (6):
  1. .gitignore +45 -0
  2. Dockerfile +53 -0
  3. README.md +67 -5
  4. requirements.txt +7 -0
  5. server.py +185 -0
  6. static/index.html +278 -0
.gitignore ADDED
@@ -0,0 +1,45 @@
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ env/
+ venv/
+ ENV/
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Ollama models (large files)
+ .ollama/
+
+ # Logs
+ *.log
+
+ # Environment variables
+ .env
+ .env.local
Dockerfile ADDED
@@ -0,0 +1,53 @@
+ FROM python:3.11-slim
+
+ # Set working directory
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     curl \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Install Ollama
+ RUN curl -fsSL https://ollama.ai/install.sh | sh
+
+ # Copy requirements first for better caching
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Copy application files
+ COPY server.py .
+ COPY static/ ./static/
+
+ # Create directory for Ollama models
+ RUN mkdir -p /root/.ollama
+
+ # Expose port
+ EXPOSE 7860
+
+ # Set environment variables
+ ENV OLLAMA_HOST=0.0.0.0:11434
+ ENV PYTHONUNBUFFERED=1
+
+ # Create startup script
+ RUN echo '#!/bin/bash\n\
+ set -e\n\
+ \n\
+ echo "Starting Ollama server..."\n\
+ ollama serve &\n\
+ OLLAMA_PID=$!\n\
+ \n\
+ echo "Waiting for Ollama to be ready..."\n\
+ sleep 5\n\
+ \n\
+ echo "Pulling phi3 model..."\n\
+ ollama pull phi3\n\
+ \n\
+ echo "Starting FastAPI server..."\n\
+ uvicorn server:app --host 0.0.0.0 --port 7860\n\
+ ' > /app/start.sh && chmod +x /app/start.sh
+
+ # Run the startup script
+ CMD ["/app/start.sh"]
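For readability, the startup script that the `RUN echo` line writes to `/app/start.sh` expands to the following (a direct unescaping of the embedded string, shown here for reference):

```shell
#!/bin/bash
set -e

echo "Starting Ollama server..."
ollama serve &
OLLAMA_PID=$!

echo "Waiting for Ollama to be ready..."
sleep 5

echo "Pulling phi3 model..."
ollama pull phi3

echo "Starting FastAPI server..."
uvicorn server:app --host 0.0.0.0 --port 7860
```

Note that the fixed `sleep 5` is a best-effort wait, not a readiness check; the `phi3` pull happens at container start, so first boot is slow.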
README.md CHANGED
@@ -1,12 +1,74 @@
  ---
- title: Customizable Sml Agents
- emoji: 🌍
- colorFrom: red
- colorTo: yellow
  sdk: docker
  pinned: false
  license: gpl-3.0
  short_description: Customizable SML Agents
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: Customizable SML Agents
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: indigo
  sdk: docker
  pinned: false
  license: gpl-3.0
  short_description: Customizable SML Agents
  ---

+ # Natural Language to SQL - Multi-Agent System
+
+ A multi-agent system powered by small language models (SLMs) that converts natural language questions into SQL queries.
+
+ ## Features
+
+ - 🔍 **Schema Analyzer Agent**: Identifies relevant tables and columns
+ - ⚙️ **Query Generator Agent**: Converts natural language to SQL
+ - ✓ **Syntax Validator Agent**: Validates SQL syntax
+ - 🎯 **Semantic Verifier Agent**: Ensures the query answers the question
+ - 🔄 **Iterative Refinement**: Up to 3 attempts to get a correct query
+ - 📋 **Real-time Logging**: See each agent's input and output
+
+ ## Architecture
+
+ This system demonstrates how autonomous agents can interact with Small Language Models (SLMs) running locally to solve complex problems. Each agent has a specific role, and they work together through an iterative pipeline.
+
+ ## Technology Stack
+
+ - **Backend**: FastAPI
+ - **Frontend**: React with Tailwind CSS
+ - **LLM**: Ollama (phi3 model)
+ - **Agent Framework**: LangChain
+
+ ## Usage
+
+ 1. Enter your database schema
+ 2. Customize agent prompts (optional)
+ 3. Type your question in natural language
+ 4. Click "Execute Pipeline"
+ 5. Watch the agents work together to generate SQL
+
+ ## Example
+
+ **Question**: "What were the top 5 products by revenue in 2024?"
+
+ **Schema**:
+ ```
+ - products (id, name, category, price)
+ - orders (id, customer_id, order_date, total)
+ - order_items (id, order_id, product_id, quantity, price)
+ ```
+
+ **Generated SQL**:
+ ```sql
+ SELECT p.name, SUM(oi.quantity * oi.price) AS revenue
+ FROM products p
+ JOIN order_items oi ON p.id = oi.product_id
+ JOIN orders o ON oi.order_id = o.id
+ WHERE YEAR(o.order_date) = 2024
+ GROUP BY p.id, p.name
+ ORDER BY revenue DESC
+ LIMIT 5
+ ```
+
+ ## Educational Purpose
+
+ This application is designed for teaching concepts related to:
+ - Multi-agent systems
+ - Small Language Models (SLMs)
+ - Agent orchestration patterns
+ - Iterative refinement techniques
+ - Tool integration with LLMs
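The iterative-refinement pattern the README describes (generate, gate on syntax, verify semantics, retry) can be sketched independently of LangChain. Everything below is illustrative: `generate`, `check_syntax`, and `check_semantics` are stand-in callables, not functions from this commit.

```python
def refine(question, generate, check_syntax, check_semantics, max_iterations=3):
    """Generate-validate-verify loop: retry until both checks pass."""
    sql = None
    for attempt in range(1, max_iterations + 1):
        sql = generate(question, attempt)
        if not check_syntax(sql):            # deterministic gate (cf. Syntax Validator)
            continue                         # regenerate on syntax failure
        if check_semantics(question, sql):   # judgment call (cf. Semantic Verifier)
            return sql, attempt              # accepted query
    return sql, max_iterations               # best effort after all attempts

# Toy run: the first draft fails the syntax gate, the second passes both checks.
drafts = {1: "SELEC * FROM products", 2: "SELECT name FROM products"}
sql, used = refine(
    "list product names",
    generate=lambda q, n: drafts[n],
    check_syntax=lambda s: s.upper().startswith("SELECT"),
    check_semantics=lambda q, s: True,
)
```

The shape matches the server's loop: a syntax failure restarts generation, a semantic pass breaks out early, and the last draft is returned if all attempts are exhausted.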
requirements.txt ADDED
@@ -0,0 +1,7 @@
+ fastapi==0.109.0
+ uvicorn[standard]==0.27.0
+ langchain==0.1.0
+ langchain-community==0.0.13
+ sqlparse==0.4.4
+ pydantic==2.5.3
+ aiofiles==23.2.1
server.py ADDED
@@ -0,0 +1,185 @@
+ from fastapi import FastAPI, HTTPException
+ from fastapi.middleware.cors import CORSMiddleware
+ from fastapi.responses import StreamingResponse, FileResponse
+ from fastapi.staticfiles import StaticFiles
+ from pydantic import BaseModel
+ from langchain_community.llms import Ollama
+ from langchain.prompts import PromptTemplate
+ from langchain.chains import LLMChain
+ import sqlparse
+ import json
+ import asyncio
+ from typing import AsyncGenerator
+ from pathlib import Path
+ import os
+
+ app = FastAPI(title="NL to SQL Multi-Agent System")
+
+ # Enable CORS for web interface
+ app.add_middleware(
+     CORSMiddleware,
+     allow_origins=["*"],
+     allow_credentials=True,
+     allow_methods=["*"],
+     allow_headers=["*"],
+ )
+
+ # Mount static files directory
+ static_dir = Path(__file__).parent / "static"
+ static_dir.mkdir(exist_ok=True)
+
+ app.mount("/static", StaticFiles(directory=static_dir), name="static")
+
+ # Request model
+ class ExecutionRequest(BaseModel):
+     schema: str
+     schema_prompt: str
+     query_prompt: str
+     syntax_prompt: str
+     semantic_prompt: str
+     question: str
+     model: str = "phi3"
+     max_iterations: int = 3
+
+ # Initialize LLM
+ def get_llm(model_name: str):
+     return Ollama(model=model_name, temperature=0.1)
+
+ # Syntax validator (deterministic)
+ def validate_syntax(sql_query: str):
+     try:
+         parsed = sqlparse.parse(sql_query)
+         if not parsed:
+             return False, "Empty or invalid SQL"
+         return True, "Syntax valid"
+     except Exception as e:
+         return False, f"Syntax error: {str(e)}"
+
+ # Stream event helper
+ def create_event(event_type: str, **kwargs):
+     data = {"type": event_type, **kwargs}
+     return f"data: {json.dumps(data)}\n\n"
+
+ # Main agent pipeline
+ async def execute_pipeline(request: ExecutionRequest) -> AsyncGenerator[str, None]:
+     try:
+         # Initialize LLM
+         llm = get_llm(request.model)
+
+         yield create_event("agent_start", agent="Schema Analyzer")
+         yield create_event("agent_input", content=f"Schema: {request.schema[:100]}... | Question: {request.question}")
+
+         # Agent 1: Schema Analyzer
+         schema_prompt = PromptTemplate(
+             input_variables=["schema", "question"],
+             template=request.schema_prompt
+         )
+         schema_chain = LLMChain(llm=llm, prompt=schema_prompt)
+
+         relevant_schema_result = schema_chain.invoke({
+             "schema": request.schema,
+             "question": request.question
+         })
+         relevant_schema = relevant_schema_result.get('text', relevant_schema_result) if isinstance(relevant_schema_result, dict) else relevant_schema_result
+
+         yield create_event("agent_output", content=relevant_schema.strip())
+
+         # Iteration loop
+         sql_query = None
+         for iteration in range(request.max_iterations):
+             yield create_event("iteration", iteration=iteration + 1)
+
+             # Agent 2: Query Generator
+             yield create_event("agent_start", agent="Query Generator")
+             yield create_event("agent_input", content=f"Relevant schema: {relevant_schema[:100]}...")
+
+             query_prompt = PromptTemplate(
+                 input_variables=["question", "relevant_schema"],
+                 template=request.query_prompt
+             )
+             sql_chain = LLMChain(llm=llm, prompt=query_prompt)
+
+             sql_result = sql_chain.invoke({
+                 "question": request.question,
+                 "relevant_schema": relevant_schema
+             })
+             sql_query = sql_result.get('text', sql_result) if isinstance(sql_result, dict) else sql_result
+             sql_query = sql_query.strip()
+
+             yield create_event("agent_output", content=sql_query)
+
+             # Agent 3: Syntax Validator
+             yield create_event("agent_start", agent="Syntax Validator")
+             is_valid, syntax_msg = validate_syntax(sql_query)
+
+             if is_valid:
+                 yield create_event("validation", content=syntax_msg, status="pass")
+             else:
+                 yield create_event("validation", content=syntax_msg, status="fail")
+                 continue
+
+             # Agent 4: Semantic Verifier
+             yield create_event("agent_start", agent="Semantic Verifier")
+             yield create_event("agent_input", content=f"Checking if SQL answers: {request.question}")
+
+             verify_prompt = PromptTemplate(
+                 input_variables=["question", "sql_query"],
+                 template=request.semantic_prompt
+             )
+             verify_chain = LLMChain(llm=llm, prompt=verify_prompt)
+
+             verification_result = verify_chain.invoke({
+                 "question": request.question,
+                 "sql_query": sql_query
+             })
+             verification = verification_result.get('text', verification_result) if isinstance(verification_result, dict) else verification_result
+
+             yield create_event("agent_output", content=verification.strip())
+
+             if "YES" in verification.upper():
+                 yield create_event("validation", content="Query is semantically correct", status="pass")
+                 break
+             else:
+                 yield create_event("validation", content="Query has semantic issues", status="fail")
+
+         # Final result
+         yield create_event("final_result", sql=sql_query if sql_query else "No valid SQL generated")
+
+     except Exception as e:
+         yield create_event("error", message=str(e))
+
+ @app.get("/")
+ async def root():
+     """Serve the main web interface"""
+     index_file = static_dir / "index.html"
+     if index_file.exists():
+         return FileResponse(index_file)
+     return {"message": "Place index.html in the static/ directory"}
+
+ @app.post("/execute")
+ async def execute(request: ExecutionRequest):
+     """Execute the multi-agent NL to SQL pipeline with streaming logs"""
+     return StreamingResponse(
+         execute_pipeline(request),
+         media_type="text/event-stream"
+     )
+
+ @app.get("/health")
+ async def health():
+     """Health check endpoint"""
+     return {"status": "ok"}
+
+ @app.get("/models")
+ async def list_models():
+     """List available Ollama models"""
+     # This would require calling ollama CLI or API
+     # For now, return common models
+     return {
+         "models": ["phi3", "llama3.2:3b", "gemma2:2b", "mistral"]
+     }
+
+ if __name__ == "__main__":
+     import uvicorn
+     # HuggingFace Spaces uses port 7860
+     port = int(os.environ.get("PORT", 7860))
+     uvicorn.run(app, host="0.0.0.0", port=port)
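The `/execute` endpoint streams server-sent events in the `data: {json}\n\n` shape produced by `create_event` in server.py. A non-browser client can recover the events with a few lines of parsing; this helper is an illustration and not part of the commit:

```python
import json

def parse_sse(chunk: str):
    """Extract JSON payloads from an SSE text chunk (`data: ...` lines)."""
    events = []
    for line in chunk.split("\n"):
        if line.startswith("data: "):
            try:
                events.append(json.loads(line[len("data: "):]))
            except json.JSONDecodeError:
                pass  # skip partial/malformed lines, as the frontend does
    return events

# Example chunk in the same format the server emits
chunk = (
    'data: {"type": "agent_start", "agent": "Schema Analyzer"}\n\n'
    'data: {"type": "final_result", "sql": "SELECT 1"}\n\n'
)
events = parse_sse(chunk)
```

This mirrors the `line.startsWith('data: ')` logic in static/index.html; a production client would also buffer incomplete lines across chunks, which neither sketch nor frontend does.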
static/index.html ADDED
@@ -0,0 +1,278 @@
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+   <meta charset="UTF-8">
+   <meta name="viewport" content="width=device-width, initial-scale=1.0">
+   <title>NL to SQL Multi-Agent System</title>
+   <script crossorigin src="https://unpkg.com/react@18/umd/react.production.min.js"></script>
+   <script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.production.min.js"></script>
+   <script src="https://unpkg.com/@babel/standalone/babel.min.js"></script>
+   <script src="https://cdn.tailwindcss.com"></script>
+ </head>
+ <body>
+   <div id="root"></div>
+
+   <script type="text/babel">
+     const { useState } = React;
+
+     const NLtoSQLApp = () => {
+       const [isExecuting, setIsExecuting] = useState(false);
+       const [logs, setLogs] = useState('');
+
+       const [schema, setSchema] = useState(`Tables:
+ - products (id, name, category, price)
+ - orders (id, customer_id, order_date, total)
+ - order_items (id, order_id, product_id, quantity, price)`);
+
+       const [schemaPrompt, setSchemaPrompt] = useState(`Given this database schema:
+ {schema}
+
+ For the question: "{question}"
+
+ List only the relevant tables and columns needed. Be concise.
+
+ Relevant schema:`);
+
+       const [queryPrompt, setQueryPrompt] = useState(`Convert this question to SQL using only these tables/columns:
+ {relevant_schema}
+
+ Question: {question}
+
+ SQL query (return only the query, no explanation):`);
+
+       const [syntaxPrompt, setSyntaxPrompt] = useState(`Check if this SQL query is syntactically valid:
+ {sql_query}
+
+ Answer with "VALID" or "INVALID" and explain any issues.
+
+ Answer:`);
+
+       const [semanticPrompt, setSemanticPrompt] = useState(`Does this SQL query correctly answer the question?
+
+ Question: {question}
+ SQL: {sql_query}
+
+ Answer with "YES" or "NO" and briefly explain why.
+
+ Answer:`);
+
+       const [question, setQuestion] = useState("What were the top 5 products by revenue in 2024?");
+
+       const addLog = (message, type = 'info') => {
+         const timestamp = new Date().toLocaleTimeString();
+         const prefix = type === 'error' ? '❌' : type === 'success' ? '✅' : type === 'agent' ? '🤖' : 'ℹ️';
+         setLogs(prev => `${prev}[${timestamp}] ${prefix} ${message}\n`);
+       };
+
+       const executeAgentPipeline = async () => {
+         setIsExecuting(true);
+         setLogs('');
+
+         addLog('Starting NL to SQL conversion...', 'info');
+         addLog(`Question: "${question}"`, 'info');
+
+         try {
+           const response = await fetch('/execute', {
+             method: 'POST',
+             headers: {
+               'Content-Type': 'application/json',
+             },
+             body: JSON.stringify({
+               schema,
+               schema_prompt: schemaPrompt,
+               query_prompt: queryPrompt,
+               syntax_prompt: syntaxPrompt,
+               semantic_prompt: semanticPrompt,
+               question
+             }),
+           });
+
+           if (!response.ok) {
+             throw new Error(`HTTP error! status: ${response.status}`);
+           }
+
+           const reader = response.body.getReader();
+           const decoder = new TextDecoder();
+
+           while (true) {
+             const { done, value } = await reader.read();
+             if (done) break;
+
+             const chunk = decoder.decode(value);
+             const lines = chunk.split('\n');
+
+             for (const line of lines) {
+               if (line.startsWith('data: ')) {
+                 try {
+                   const data = JSON.parse(line.slice(6));
+
+                   if (data.type === 'agent_start') {
+                     addLog(`Agent: ${data.agent}`, 'agent');
+                   } else if (data.type === 'agent_input') {
+                     addLog(`Input: ${data.content}`, 'info');
+                   } else if (data.type === 'agent_output') {
+                     addLog(`Output: ${data.content}`, 'info');
+                   } else if (data.type === 'validation') {
+                     addLog(`Validation: ${data.content}`, data.status === 'pass' ? 'success' : 'error');
+                   } else if (data.type === 'iteration') {
+                     addLog(`--- Iteration ${data.iteration} ---`, 'info');
+                   } else if (data.type === 'final_result') {
+                     addLog(`\n=== FINAL SQL QUERY ===\n${data.sql}`, 'success');
+                   } else if (data.type === 'error') {
+                     addLog(`Error: ${data.message}`, 'error');
+                   }
+                 } catch (e) {
+                   // Skip malformed JSON
+                 }
+               }
+             }
+           }
+
+         } catch (error) {
+           addLog(`Error: ${error.message}`, 'error');
+         } finally {
+           setIsExecuting(false);
+         }
+       };
+
+       return (
+         <div className="min-h-screen bg-gradient-to-br from-blue-50 to-indigo-100 p-6">
+           <div className="max-w-7xl mx-auto">
+             <div className="bg-white rounded-lg shadow-xl p-6 mb-6">
+               <div className="flex items-center gap-3 mb-6">
+                 <svg className="w-8 h-8 text-indigo-600" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                   <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M4 7v10c0 2.21 3.582 4 8 4s8-1.79 8-4V7M4 7c0 2.21 3.582 4 8 4s8-1.79 8-4M4 7c0-2.21 3.582-4 8-4s8 1.79 8 4m0 5c0 2.21-3.582 4-8 4s-8-1.79-8-4" />
+                 </svg>
+                 <h1 className="text-3xl font-bold text-gray-800">Natural Language to SQL</h1>
+                 <svg className="w-8 h-8 text-indigo-600" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                   <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M9.663 17h4.673M12 3v1m6.364 1.636l-.707.707M21 12h-1M4 12H3m3.343-5.657l-.707-.707m2.828 9.9a5 5 0 117.072 0l-.548.547A3.374 3.374 0 0014 18.469V19a2 2 0 11-4 0v-.531c0-.895-.356-1.754-.988-2.386l-.548-.547z" />
+                 </svg>
+               </div>
+               <p className="text-gray-600 mb-4">Multi-Agent System with Small Language Models</p>
+             </div>
+
+             <div className="grid grid-cols-1 lg:grid-cols-2 gap-6">
+               {/* Left Column - Configuration */}
+               <div className="space-y-4">
+                 {/* Database Schema */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     📊 Database Schema
+                   </label>
+                   <textarea
+                     value={schema}
+                     onChange={(e) => setSchema(e.target.value)}
+                     className="w-full h-32 p-3 border border-gray-300 rounded-lg font-mono text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                     placeholder="Enter database schema..."
+                   />
+                 </div>
+
+                 {/* Schema Analyzer Prompt */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     🔍 Schema Analyzer Prompt
+                   </label>
+                   <textarea
+                     value={schemaPrompt}
+                     onChange={(e) => setSchemaPrompt(e.target.value)}
+                     className="w-full h-32 p-3 border border-gray-300 rounded-lg font-mono text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                   />
+                 </div>
+
+                 {/* Query Generator Prompt */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     ⚙️ Query Generator Prompt
+                   </label>
+                   <textarea
+                     value={queryPrompt}
+                     onChange={(e) => setQueryPrompt(e.target.value)}
+                     className="w-full h-32 p-3 border border-gray-300 rounded-lg font-mono text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                   />
+                 </div>
+
+                 {/* Syntax Validator Prompt */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     ✓ Syntax Validator Prompt
+                   </label>
+                   <textarea
+                     value={syntaxPrompt}
+                     onChange={(e) => setSyntaxPrompt(e.target.value)}
+                     className="w-full h-32 p-3 border border-gray-300 rounded-lg font-mono text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                   />
+                 </div>
+
+                 {/* Semantic Verifier Prompt */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     🎯 Semantic Verifier Prompt
+                   </label>
+                   <textarea
+                     value={semanticPrompt}
+                     onChange={(e) => setSemanticPrompt(e.target.value)}
+                     className="w-full h-32 p-3 border border-gray-300 rounded-lg font-mono text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                   />
+                 </div>
+
+                 {/* User Question */}
+                 <div className="bg-white rounded-lg shadow p-4">
+                   <label className="block text-sm font-semibold text-gray-700 mb-2">
+                     💬 User Question
+                   </label>
+                   <textarea
+                     value={question}
+                     onChange={(e) => setQuestion(e.target.value)}
+                     className="w-full h-24 p-3 border border-gray-300 rounded-lg text-sm focus:ring-2 focus:ring-indigo-500 focus:border-transparent"
+                     placeholder="Enter your question in natural language..."
+                   />
+                 </div>
+
+                 {/* Execute Button */}
+                 <button
+                   onClick={executeAgentPipeline}
+                   disabled={isExecuting}
+                   className="w-full bg-indigo-600 hover:bg-indigo-700 disabled:bg-gray-400 text-white font-semibold py-3 px-6 rounded-lg shadow-lg transition-colors duration-200 flex items-center justify-center gap-2"
+                 >
+                   {isExecuting ? (
+                     <>
+                       <svg className="w-5 h-5 animate-spin" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                         <circle className="opacity-25" cx="12" cy="12" r="10" stroke="currentColor" strokeWidth="4"></circle>
+                         <path className="opacity-75" fill="currentColor" d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"></path>
+                       </svg>
+                       Executing...
+                     </>
+                   ) : (
+                     <>
+                       <svg className="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
+                         <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M14.752 11.168l-3.197-2.132A1 1 0 0010 9.87v4.263a1 1 0 001.555.832l3.197-2.132a1 1 0 000-1.664z" />
+                         <path strokeLinecap="round" strokeLinejoin="round" strokeWidth={2} d="M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
+                       </svg>
+                       Execute Pipeline
+                     </>
+                   )}
+                 </button>
+               </div>
+
+               {/* Right Column - Logs */}
+               <div className="bg-white rounded-lg shadow p-4">
+                 <label className="block text-sm font-semibold text-gray-700 mb-2">
+                   📋 Execution Log
+                 </label>
+                 <textarea
+                   value={logs}
+                   readOnly
+                   className="w-full h-[calc(100vh-200px)] p-3 border border-gray-300 rounded-lg font-mono text-xs bg-gray-50 focus:outline-none overflow-auto"
+                   placeholder="Logs will appear here when you execute the pipeline..."
+                 />
+               </div>
+             </div>
+           </div>
+         </div>
+       );
+     };
+
+     ReactDOM.render(<NLtoSQLApp />, document.getElementById('root'));
+   </script>
+ </body>
+ </html>