minatosnow committed on
Commit 21307f1 · verified · 1 Parent(s): afb47cd

Upload folder using huggingface_hub

Files changed (17):
  1. .gitignore +7 -0
  2. Dockerfile +25 -0
  3. README.md +111 -8
  4. app.py +79 -0
  5. data_ingestion.py +38 -0
  6. deploy_to_hf.py +42 -0
  7. evaluate.py +107 -0
  8. graph.py +318 -0
  9. knowledge.md +3 -0
  10. requirements.txt +15 -0
  11. results.md +8 -0
  12. run_gemini.py +42 -0
  13. run_ollama.py +35 -0
  14. static/index.html +108 -0
  15. static/script.js +250 -0
  16. static/style.css +492 -0
  17. test_data.csv +2 -0
.gitignore ADDED
@@ -0,0 +1,7 @@
+ chroma_db
+ .venv
+ __pycache__
+ .env
+ data.csv
+ evaluation_results.csv
+ tariff.pdf
Dockerfile ADDED
@@ -0,0 +1,25 @@
+ FROM python:3.11-slim
+
+ # Create a user to run the app
+ RUN useradd -m -u 1000 user
+
+ # Set home to the user's home directory
+ ENV HOME=/home/user \
+     PATH=/home/user/.local/bin:$PATH
+
+ WORKDIR $HOME/app
+
+ # Copy the application code
+ COPY --chown=user:user . $HOME/app/
+
+ # Switch to the new user
+ USER user
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Expose the default Hugging Face Spaces port
+ EXPOSE 7860
+
+ # Command to run the FastAPI application
+ CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"]
README.md CHANGED
@@ -1,11 +1,114 @@
  ---
- title: Hts
- emoji: 💻
- colorFrom: pink
- colorTo: yellow
- sdk: docker
- pinned: false
- short_description: Canadian Harmonized Tariff Schedule (HTS) Code Identifier
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Multi-Agent Canadian HTS Classification System
+
+ This repository contains a multi-agent AI pipeline that predicts Canadian Harmonized Tariff Schedule (HTS) codes from plain-text item descriptions. The system leverages **LangGraph**, **LangChain**, and **Chroma** vector databases to achieve high-accuracy classifications through a combination of live web research, official document retrieval, and self-consistency voting.
+
+ ## 🚀 Key Features
+
+ * **Multi-Agent Orchestration**: Powered by LangGraph for structured, stateful workflows.
+ * **Search Agent**: Real-time web browsing using DuckDuckGo and BeautifulSoup to gather material composition and industry context.
+ * **RAG Agent**: Contextual retrieval from the official 2024 Canadian Tariff Schedule (PDF) via ChromaDB.
+ * **Self-Consistency Voting**: Uses a 5-instance ensemble with prompt perturbations to calculate confidence scores for each two-digit element of the HTS code.
+ * **Human-in-the-Loop Escalation**: Automatically triggers a clarification chat interface if the AI's confidence in any HTS element falls below 60%.
+ * **Modern Web UI**: A glassmorphic, responsive web interface built with FastAPI and vanilla CSS/JS for real-time interaction and reasoning visualization.
+ * **Model Agnostic**: Supports Google Gemini (cloud) and Ollama (local) pipelines.
+
+ ---
+
+ ## 🏗️ Architecture
+
+ The classification workflow follows a directed acyclic graph (DAG):
+
+ 1. **Search Node**: Formulates optimized queries, scrapes results, and extracts relevant technical specifications.
+ 2. **RAG Node**: Vectorizes the item description to pull specific legal headings and subheadings from the local 1,500+ page tariff schedule.
+ 3. **Decision Node**:
+     * Synthesizes web + PDF context.
+     * Runs 5 independent classification attempts.
+     * Performs element-wise voting to determine the final 10-digit code (`XXXX.XX.XX.XX`).
+     * Calculates a confidence score for each segment (Chapter, Heading, Subheading, etc.).
+ 4. **Escalation Logic**: If consensus is not reached, it prepares a targeted clarifying question for the user.
+
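The element-wise voting step described above can be sketched as follows. This is an illustrative sketch, not the repository's exact implementation (the real logic lives in `decision_node` in `graph.py`); the `runs` data below is hypothetical:

```python
from collections import Counter

def vote_elements(runs):
    """Element-wise majority vote across ensemble runs.

    runs: list of dicts like {"chapter": "85", "heading": "17", ...}
    Returns {element: (winning_value, confidence)} where confidence is
    the fraction of runs that voted for the winner.
    """
    elements = ["chapter", "heading", "subheading",
                "additional_subheading", "statistical_suffix"]
    out = {}
    for el in elements:
        counts = Counter(r.get(el, "") for r in runs)
        value, votes = counts.most_common(1)[0]
        out[el] = (value, votes / len(runs))
    return out

# Hypothetical ensemble output: 5 runs, with one disagreement per element
runs = [
    {"chapter": "85", "heading": "17"},
    {"chapter": "85", "heading": "17"},
    {"chapter": "85", "heading": "18"},
    {"chapter": "84", "heading": "17"},
    {"chapter": "85", "heading": "17"},
]
result = vote_elements(runs)
# "85" and "17" each win 4 of 5 votes, so both carry 0.8 confidence,
# which clears the 0.6 escalation threshold
```

An element whose winner drops below 0.6 (fewer than 3 of 5 agreeing runs) would trigger the escalation chat.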
  ---
+
+ ## 🛠️ Setup & Installation
+
+ ### 1. Environment Configuration
+
+ Ensure you have Python 3.11+ installed. We recommend using a virtual environment.
+
+ ```bash
+ # Install dependencies
+ pip install -r requirements.txt
+ # Note: ensure fastapi, uvicorn, and duckduckgo-search (ddgs) are also installed.
+ ```
+
+ ### 2. API Credentials
+
+ Create a `.env` file in the root directory:
+
+ ```env
+ GEMINI_API_KEY=your_google_ai_studio_api_key
+ ```
+
+ ### 3. Data Ingestion (First-Time Only)
+
+ Populate the Chroma vector store with extracts from the official `tariff.pdf`:
+
+ ```bash
+ python data_ingestion.py
+ ```
+ *This creates the `chroma_db/` directory.*
+
+ ---
+
+ ## 🖥️ Usage
+
+ ### Option A: The Web Application (Recommended)
+
+ Start the interactive web interface to experience the human-escalation chatbot and visual reasoning chain.
+
+ ```bash
+ python app.py
+ ```
+ Access the UI at: `http://localhost:8001`
+
+ ### Option B: Interactive CLI Wrappers
+
+ Test individual descriptions and view full internal logs in the terminal.
+
+ * **Gemini (Cloud)**: `python run_gemini.py`
+ * **Ollama (Local)**: `python run_ollama.py`
+
+ ### Option C: Batch Evaluation
+
+ Benchmark the system against `data.csv` to calculate accuracy and generate performance reports for a specific model. You must specify the model using the `--model` argument:
+
+ **For Gemini:**
+ ```bash
+ python evaluate.py --model gemini-3.1-flash-lite-preview
+ ```
+
+ **For Ollama:**
+ ```bash
+ python evaluate.py --model qwen3:4b
+ ```
+ *Results are saved to a dynamically named CSV, e.g., `gemini-3.1-flash-lite-preview_evaluation_results.csv`.*
+
+ ---
+
+ ## 📂 Project Structure
+
+ | File | Description |
+ | :--- | :--- |
+ | `app.py` | FastAPI application serving the Web UI and API endpoints. |
+ | `graph.py` | Core LangGraph logic, agent node definitions, and escalation chat. |
+ | `data_ingestion.py` | PDF parsing and vector database (RAG) initialization. |
+ | `evaluate.py` | Metric generation and batch testing script. |
+ | `run_*.py` | CLI entry points for specific LLM backends. |
+ | `static/` | Frontend assets (HTML/CSS/JS) for the web interface. |
+ | `tariff.pdf` | The official reference document for Canadian HTS codes. |
+ | `requirements.txt` | Python dependencies. |
+
  ---

+ ## ⚖️ License
+ This project is intended for research and internal logistics optimization. Always verify final HTS codes with official customs documentation for legal compliance.
app.py ADDED
@@ -0,0 +1,80 @@
+ import os
+ from fastapi import FastAPI, HTTPException
+ from fastapi.staticfiles import StaticFiles
+ from fastapi.responses import FileResponse
+ from pydantic import BaseModel
+ from dotenv import load_dotenv
+ from langchain_google_genai import ChatGoogleGenerativeAI
+ from graph import run_pipeline, process_escalation_chat
+ from typing import List, Dict
+
+ # Initialize FastAPI
+ app = FastAPI(title="HTS Code Generator API")
+
+ # Load environment variables
+ load_dotenv()
+ gemini_api_key = os.environ.get("GEMINI_API_KEY")
+ if gemini_api_key:
+     os.environ["GOOGLE_API_KEY"] = gemini_api_key
+
+ # Mount static directory for the frontend
+ app.mount("/static", StaticFiles(directory="static"), name="static")
+
+ # LLM setup
+ gemini_llm = ChatGoogleGenerativeAI(
+     model="gemini-3.1-flash-lite-preview",
+     disable_streaming=True
+ )
+
+ class ClassifyRequest(BaseModel):
+     description: str
+
+ class EscalationRequest(BaseModel):
+     description: str
+     search_results: str
+     rag_context: str
+     chat_history: List[Dict[str, str]]
+
+ @app.get("/")
+ def read_index():
+     return FileResponse("static/index.html")
+
+ @app.post("/api/classify")
+ def classify_hts(request: ClassifyRequest):
+     if not request.description.strip():
+         raise HTTPException(status_code=400, detail="Description is required.")
+
+     try:
+         # Run the LangGraph pipeline
+         result = run_pipeline(request.description, gemini_llm)
+         return {
+             "success": True,
+             "final_hts_code": result.get("final_hts_code"),
+             "element_confidences": result.get("element_confidences"),
+             "escalation_needed": result.get("escalation_needed"),
+             "escalation_question": result.get("escalation_question"),
+             "reasoning_steps": result.get("reasoning_steps"),
+             "search_results": result.get("search_results"),
+             "rag_context": result.get("rag_context"),
+             "search_queries": result.get("search_queries")
+         }
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=str(e))
+
+ @app.post("/api/escalation")
+ def handle_escalation(request: EscalationRequest):
+     try:
+         result = process_escalation_chat(
+             request.description,
+             request.search_results,
+             request.rag_context,
+             request.chat_history,
+             gemini_llm
+         )
+         return {"success": True, "result": result}
+     except Exception as e:
+         raise HTTPException(status_code=500, detail=str(e))
+
+ if __name__ == "__main__":
+     import uvicorn
+     uvicorn.run(app, host="0.0.0.0", port=8001)
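The two endpoints exchange plain JSON shaped by the `ClassifyRequest` and `EscalationRequest` models above. A sketch of hypothetical client payloads (the field values are made up; only the field names come from the models):

```python
import json

# Payload for POST /api/classify (shape from ClassifyRequest)
classify_req = {"description": "ceramic coffee mug, glazed, 350 ml"}

# Payload for POST /api/escalation (shape from EscalationRequest);
# chat_history alternates "assistant" questions and "user" answers
escalation_req = {
    "description": classify_req["description"],
    "search_results": "Source (...): ceramic drinkware context",
    "rag_context": "Ceramic tableware ...",
    "chat_history": [
        {"role": "assistant", "content": "Is the mug porcelain or other ceramic?"},
        {"role": "user", "content": "Other ceramic (stoneware)."},
    ],
}

# Both payloads travel as JSON bodies on the wire
wire = json.dumps(escalation_req)
roundtrip = json.loads(wire)
```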
data_ingestion.py ADDED
@@ -0,0 +1,38 @@
+ import os
+ from langchain_community.document_loaders import PyPDFLoader
+ from langchain_text_splitters import RecursiveCharacterTextSplitter
+ from langchain_huggingface import HuggingFaceEmbeddings
+ from langchain_chroma import Chroma
+
+ def ingest_data(pdf_path="tariff.pdf", persist_directory="./chroma_db"):
+     print(f"Loading {pdf_path}...")
+     loader = PyPDFLoader(pdf_path)
+     documents = loader.load()
+
+     print(f"Loaded {len(documents)} pages. Splitting text...")
+     text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
+     splits = text_splitter.split_documents(documents)
+
+     print(f"Created {len(splits)} chunks. Generating embeddings and storing in VectorDB...")
+
+     # Use a fast, local embedding model that both the Gemini and Ollama agents can share
+     embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
+
+     vectorstore = Chroma.from_documents(
+         documents=splits,
+         embedding=embeddings,
+         persist_directory=persist_directory
+     )
+
+     print(f"Successfully ingested {len(splits)} chunks into {persist_directory}")
+     return vectorstore
+
+ if __name__ == "__main__":
+     # Ensure the current working directory is the project root
+     script_dir = os.path.dirname(os.path.abspath(__file__))
+     os.chdir(script_dir)
+
+     if os.path.exists("./chroma_db"):
+         print("VectorDB already exists at ./chroma_db. Skipping ingestion.")
+     else:
+         ingest_data()
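The `chunk_size=1000, chunk_overlap=200` settings above mean consecutive chunks share a 200-character window, so a tariff line split at a chunk boundary still appears whole in one of them. A rough illustration of that overlap arithmetic (the real `RecursiveCharacterTextSplitter` also prefers paragraph and sentence boundaries, so this fixed-stride version is only an approximation):

```python
def sliding_chunks(text, chunk_size=1000, chunk_overlap=200):
    """Fixed-size chunking with overlap: each chunk starts
    chunk_size - chunk_overlap characters after the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]

# A 2500-character toy document with varied content
doc = "".join(chr(97 + i % 26) for i in range(2500))
chunks = sliding_chunks(doc)
# Chunks start at offsets 0, 800, 1600; the last one is shorter (900 chars)
```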
deploy_to_hf.py ADDED
@@ -0,0 +1,42 @@
+ import os
+ import argparse
+ from huggingface_hub import HfApi
+
+ def deploy(token, repo_id):
+     api = HfApi(token=token)
+
+     print(f"Deploying to {repo_id}...")
+
+     # Create the repo if it doesn't exist yet
+     try:
+         api.create_repo(repo_id=repo_id, repo_type="space", space_sdk="docker", exist_ok=True)
+         print(f"Ensured Space repo {repo_id} exists.")
+     except Exception as e:
+         print(f"Note on create_repo: {e}")
+
+     # Upload all files in the current directory,
+     # excluding secret or unnecessary files
+     api.upload_folder(
+         folder_path=".",
+         repo_id=repo_id,
+         repo_type="space",
+         ignore_patterns=[
+             ".git",
+             ".git/*",
+             ".venv",
+             ".venv/*",
+             "__pycache__",
+             "__pycache__/*",
+             ".env",
+             "*.pyc"
+         ],
+     )
+     print(f"Deployment successful! Check out your Space at: https://huggingface.co/spaces/{repo_id}")
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description="Deploy app to Hugging Face Spaces")
+     parser.add_argument("--token", required=True, help="Your Hugging Face User Access Token (with Write permissions)")
+     parser.add_argument("--repo-id", default="minatosnow/hts", help="The Hugging Face Space ID (e.g., username/repo_name)")
+     args = parser.parse_args()
+
+     deploy(args.token, args.repo_id)
evaluate.py ADDED
@@ -0,0 +1,107 @@
+ import os
+ import argparse
+ import pandas as pd
+ from dotenv import load_dotenv
+ from graph import run_pipeline
+ from langchain_google_genai import ChatGoogleGenerativeAI
+ from langchain_ollama import ChatOllama
+
+ def main():
+     parser = argparse.ArgumentParser(description="Evaluate HTS Classification Pipeline")
+     parser.add_argument("--model", type=str, required=True, help="Model name to evaluate (e.g., gemini-3.1-flash-lite-preview, qwen3:4b)")
+     args = parser.parse_args()
+
+     model_name = args.model
+
+     load_dotenv()
+     if "gemini" in model_name.lower():
+         gemini_api_key = os.environ.get("GEMINI_API_KEY")
+         if gemini_api_key:
+             os.environ["GOOGLE_API_KEY"] = gemini_api_key
+
+         print(f"Initializing Gemini pipeline with model: {model_name}")
+         llm = ChatGoogleGenerativeAI(
+             model=model_name,
+             disable_streaming=True
+         )
+     else:
+         print(f"Initializing Ollama pipeline with model: {model_name}")
+         llm = ChatOllama(model=model_name)
+
+     # Load dataset
+     df = pd.read_csv("data.csv")
+
+     results = []
+     element_correct = {
+         "Chapter (first 2)": 0,
+         "Heading (next 2)": 0,
+         "Subheading 1 (next 2)": 0,
+         "Subheading 2 (next 2)": 0,
+         "Statistical Suffix (last 2)": 0,
+         "Whole (10 digits)": 0
+     }
+     total = len(df)
+
+     for idx, row in df.iterrows():
+         desc = str(row['desc'])
+         label = str(row['label']).strip()
+         print(f"\n[{idx+1}/{total}] Processing item: {desc}")
+         print(f"Target Label: {label}")
+
+         try:
+             result = run_pipeline(desc, llm, disable_escalation=True)
+             pred = result["final_hts_code"]
+         except Exception as e:
+             print(f"Error ({model_name}): {e}")
+             pred = "ERROR"
+
+         print(f"Prediction: {pred}")
+
+         # Evaluation logic:
+         # some HTS codes have spaces or dashes, so keep only digits
+         clean_label = ''.join(filter(str.isdigit, label)).ljust(10, '0')[:10]
+
+         # Clean predictions to match the label format exactly (10 digits)
+         clean_pred = ''.join(filter(str.isdigit, pred)).ljust(10, '0')[:10]
+
+         match_chapter = clean_label[0:2] == clean_pred[0:2]
+         match_heading = clean_label[2:4] == clean_pred[2:4]
+         match_subheading1 = clean_label[4:6] == clean_pred[4:6]
+         match_subheading2 = clean_label[6:8] == clean_pred[6:8]
+         match_statistical_suffix = clean_label[8:10] == clean_pred[8:10]
+         match_whole = clean_label == clean_pred
+
+         if match_chapter: element_correct["Chapter (first 2)"] += 1
+         if match_heading: element_correct["Heading (next 2)"] += 1
+         if match_subheading1: element_correct["Subheading 1 (next 2)"] += 1
+         if match_subheading2: element_correct["Subheading 2 (next 2)"] += 1
+         if match_statistical_suffix: element_correct["Statistical Suffix (last 2)"] += 1
+         if match_whole: element_correct["Whole (10 digits)"] += 1
+
+         results.append({
+             "Description": desc,
+             "Target_HTS": clean_label,
+             "Prediction": clean_pred,
+             "Correct_Chapter": match_chapter,
+             "Correct_Heading": match_heading,
+             "Correct_Subheading_1": match_subheading1,
+             "Correct_Subheading_2": match_subheading2,
+             "Correct_Statistical_Suffix": match_statistical_suffix,
+             "Correct_Whole": match_whole
+         })
+
+     print("\n--- Final Evaluation ---")
+     print(f"Model: {model_name}")
+     print(f"Total Items: {total}")
+     for el, count in element_correct.items():
+         print(f"{el} Accuracy: {count}/{total} ({count/total*100:.2f}%)")
+
+     safe_model_name = model_name.replace(":", "_").replace("/", "_")
+     output_filename = f"{safe_model_name}_evaluation_results.csv"
+
+     results_df = pd.DataFrame(results)
+     results_df.to_csv(output_filename, index=False)
+     print(f"Saved results to {output_filename}")
+
+ if __name__ == "__main__":
+     main()
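The per-element scoring above normalizes both label and prediction to exactly 10 digits before slicing. The core of that comparison, condensed into a standalone sketch (helper names are illustrative, not from the script):

```python
def normalize_hts(code: str) -> str:
    """Keep digits only, then pad/truncate to 10 characters,
    mirroring the cleanup in evaluate.py."""
    digits = ''.join(filter(str.isdigit, code))
    return digits.ljust(10, '0')[:10]

def element_matches(label: str, pred: str) -> dict:
    """Compare each two-digit HTS element plus the whole code."""
    a, b = normalize_hts(label), normalize_hts(pred)
    spans = {"chapter": (0, 2), "heading": (2, 4), "subheading1": (4, 6),
             "subheading2": (6, 8), "statistical_suffix": (8, 10)}
    out = {name: a[i:j] == b[i:j] for name, (i, j) in spans.items()}
    out["whole"] = a == b
    return out

# Hypothetical example: prediction differs only in the first subheading
m = element_matches("8517.13.00.10", "8517.62.00.10")
# chapter and heading match, subheading1 does not, so "whole" is False
```

This is why the hierarchical accuracies in `results.md` decrease from chapter to whole-code: each later element can only match if its digits agree independently, and the whole-code metric requires all five.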
graph.py ADDED
@@ -0,0 +1,318 @@
+ import os
+ import json
+ import requests
+ from bs4 import BeautifulSoup
+ from typing import TypedDict, List, Any, Dict
+ from langgraph.graph import StateGraph, START, END
+ from ddgs import DDGS
+ from langchain_chroma import Chroma
+ from langchain_huggingface import HuggingFaceEmbeddings
+ from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
+ import logging
+ import warnings
+
+ # Suppress harmless warnings
+ os.environ["TOKENIZERS_PARALLELISM"] = "false"
+ os.environ["TRANSFORMERS_VERBOSITY"] = "error"
+ logging.getLogger("transformers").setLevel(logging.ERROR)
+ logging.getLogger("transformers.modeling_utils").setLevel(logging.ERROR)
+ logging.getLogger("sentence_transformers").setLevel(logging.ERROR)
+ warnings.filterwarnings("ignore")
+
+ class GraphState(TypedDict):
+     item_description: str
+     search_queries: List[str]
+     search_results: str
+     rag_context: str
+     reasoning_steps: str
+     final_hts_code: str
+     element_confidences: Dict[str, Any]
+     escalation_needed: bool
+     escalation_question: str
+     disable_escalation: bool
+     llm: Any
+
+ def search_node(state: GraphState):
+     llm = state["llm"]
+     item_desc = state["item_description"]
+
+     # 1. Ask the LLM to generate a search query
+     sys_msg = SystemMessage(content="You are an expert search assistant. Given an item description, generate a concise DuckDuckGo search query to find its Canadian HTS code, material composition, or tariff classification. Output ONLY the query string. Do not include quotes or extra text.")
+     user_msg = HumanMessage(content=item_desc)
+
+     response = llm.invoke([sys_msg, user_msg])
+     content = response.content
+     if isinstance(content, list):
+         content = content[0].get("text", "") if isinstance(content[0], dict) else str(content[0])
+     query = content.strip().strip('"').strip("'")
+
+     # 2. Search DuckDuckGo
+     ddgs = DDGS()
+     try:
+         results = list(ddgs.text(query + " Canadian HTS code", max_results=3))
+     except Exception:
+         results = []
+
+     # 3. Scrape the results
+     scraped_content = ""
+     for r in results:
+         url = r.get("href")
+         if not url:
+             continue
+         try:
+             resp = requests.get(url, timeout=5)
+             soup = BeautifulSoup(resp.content, "html.parser")
+             text = " ".join([p.text for p in soup.find_all("p")])
+             scraped_content += f"\nSource ({url}):\n{text[:1000]}\n"
+         except Exception:
+             # Fall back to the search snippet if the page can't be fetched
+             scraped_content += f"\nSource ({url}): {r.get('body')}\n"
+
+     if not scraped_content.strip():
+         scraped_content = "No relevant web results found."
+
+     return {"search_queries": [query], "search_results": scraped_content}
+
+ def rag_node(state: GraphState):
+     item_desc = state["item_description"]
+
+     embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
+     vectorstore = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
+     retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
+
+     docs = retriever.invoke(item_desc)
+     rag_context = "\n\n".join([d.page_content for d in docs])
+
+     return {"rag_context": rag_context}
+
+ def decision_node(state: GraphState):
+     # Retrieve the base LLM
+     base_llm = state["llm"]
+     item_desc = state["item_description"]
+     search_results = state["search_results"]
+     rag_context = state["rag_context"]
+
+     sys_msg = SystemMessage(content="""You are an expert Canadian customs classification agent.
+ Analyze the item description, web search context, and official Canadian Tariff PDF context to determine the 10-digit Canadian HTS code.
+ Output your response as STRICT JSON with the following structure:
+ {
+ "reasoning": "Step-by-step logic for the classification...",
+ "chapter": "First 2 digits",
+ "heading": "Next 2 digits",
+ "subheading": "Next 2 digits",
+ "additional_subheading": "Next 2 digits",
+ "statistical_suffix": "Last 2 digits"
+ }
+ Ensure all digits combined make exactly 10 digits. Do not include markdown formatting like ```json. Output ONLY the raw JSON object.
+ """)
+     user_msg = HumanMessage(content=f"Item Description: {item_desc}\n\nWeb Search Context:\n{search_results}\n\nTariff Context:\n{rag_context}")
+
+     # 5-run ensemble voting for self-consistency:
+     # simulate perturbations with five slightly different variations of the prompt.
+     perturbations = [
+         "What is the HTS code for this item?",
+         "Carefully re-evaluate the material composition and function. What is the 10-digit HTS code?",
+         "Please provide the most accurate 10-digit Tariff classification based on the provided evidence.",
+         "Considering all the retrieved context and the item's main characteristics, determine its exact 10-digit HTS code.",
+         "Synthesize the search context and tariff information to deduce the most specific 10-digit Canadian HTS classification."
+     ]
+
+     results_list = []
+     first_reasoning = ""
+
+     for i, pert in enumerate(perturbations):
+         perturbed_user_msg = HumanMessage(content=f"{user_msg.content}\n\n{pert}")
+         # Note: ideally we would also vary temperature, but ChatGoogleGenerativeAI would
+         # need re-instantiation for that. Varying the user prompt is a standard perturbation.
+         try:
+             response = base_llm.invoke([sys_msg, perturbed_user_msg])
+             content = response.content
+             if isinstance(content, list):
+                 content = content[0].get("text", "") if isinstance(content[0], dict) else str(content[0])
+
+             # Extract the JSON object from the raw output
+             start = content.find('{')
+             end = content.rfind('}') + 1
+             json_str = content[start:end] if start != -1 and end != 0 else content
+
+             data = None
+             try:
+                 data = json.loads(json_str)
+             except Exception:
+                 try:
+                     import ast
+                     # Fallback for Python dict strings (e.g. single quotes)
+                     data = ast.literal_eval(json_str)
+                 except Exception:
+                     try:
+                         # Fallback for minor quote replacement
+                         data = json.loads(json_str.replace("'", '"'))
+                     except Exception:
+                         pass  # Parsing completely failed
+
+             if not isinstance(data, dict):
+                 continue  # Skip this ensemble run if it didn't produce a dictionary
+
+             if i == 0:
+                 first_reasoning = data.get("reasoning", "")
+             results_list.append(data)
+         except Exception:
+             # Silently skip this ensemble run on an API or severe extraction error
+             pass
+
+     # Voting logic
+     elements = ["chapter", "heading", "subheading", "additional_subheading", "statistical_suffix"]
+     votes = {el: {} for el in elements}
+
+     for res in results_list:
+         if not isinstance(res, dict):
+             continue
+         for el in elements:
+             val = res.get(el, "")
+             # Ensure it's exactly 2 digits
+             val = ''.join(filter(str.isdigit, str(val)))[:2].zfill(2)
+             votes[el][val] = votes[el].get(val, 0) + 1
+
+     final_elements = {}
+     element_confidences = {}
+     escalation_needed = False
+
+     for el in elements:
+         if not votes[el]:
+             # Fallback if all runs failed
+             final_elements[el] = "00"
+             element_confidences[el] = {"value": "00", "confidence": "0% (0/5)", "score": 0.0}
+             escalation_needed = True
+             continue
+
+         # Take the value with the most votes
+         top_val = max(votes[el], key=votes[el].get)
+         vote_count = votes[el][top_val]
+         total_runs = len(results_list) if len(results_list) > 0 else 5
+         confidence_fraction = vote_count / total_runs
+
+         final_elements[el] = top_val
+         element_confidences[el] = {
+             "value": top_val,
+             "confidence": f"{int(confidence_fraction*100)}% ({vote_count}/{total_runs})",
+             "score": confidence_fraction
+         }
+
+         # Escalate if fewer than 3 of the 5 runs agree (confidence below 0.6)
+         if confidence_fraction < 0.6:
+             escalation_needed = True
+
+     final_hts_code = f"{final_elements['chapter']}{final_elements['heading']}.{final_elements['subheading']}.{final_elements['additional_subheading']}.{final_elements['statistical_suffix']}"
+
+     escalation_question = ""
+     # Check the disable_escalation flag to conditionally skip question generation
+     disable_escalation = state.get("disable_escalation", False)
+
+     if escalation_needed and not disable_escalation:
+         # Generate the first escalation question
+         esc_sys_msg = SystemMessage(content="You are an expert Canadian customs classification assistant. Our AI ensemble could not confidently classify an item. Given the item context and the elements we are struggling with, ask the user ONE targeted, clarifying question to help determine the correct 10-digit HTS code.")
+         esc_user_msg = HumanMessage(content=f"Item: {item_desc}\n\nSearch Context: {search_results}\n\nTariff Context: {rag_context}\n\nCurrent Best Confidences:\n{json.dumps(element_confidences, indent=2)}\n\nWhat ONE question should I ask the user to clarify the classification?")
+         try:
+             esc_response = base_llm.invoke([esc_sys_msg, esc_user_msg])
+             content = esc_response.content
+             escalation_question = content[0].get("text", "") if isinstance(content, list) else str(content).strip()
+         except Exception:
+             escalation_question = "Could you please provide more details about the material composition or specific function of this item?"
+
+     return {
+         "final_hts_code": final_hts_code,
+         "reasoning_steps": first_reasoning,
+         "element_confidences": element_confidences,
+         "escalation_needed": escalation_needed,
+         "escalation_question": escalation_question
+     }
+
+ def process_escalation_chat(item_desc: str, search_results: str, rag_context: str, chat_history: List[Dict[str, str]], llm: Any):
+     # Chat history is a list of dicts: [{"role": "user"|"assistant", "content": "..."}]
+
+     sys_msg = SystemMessage(content="""You are an expert Canadian customs classification agent interacting with a human.
+ Your goal is to determine the 10-digit Canadian HTS code.
+ If you are still unsure, ask ONE clarifying question.
+ If you are confident you know the 10-digit code, you MUST output a STRICT JSON payload with exactly this structure:
+ {
+ "reasoning": "Step-by-step logic",
+ "chapter": "First 2 digits",
+ "heading": "Next 2 digits",
+ "subheading": "Next 2 digits",
+ "additional_subheading": "Next 2 digits",
+ "statistical_suffix": "Last 2 digits",
+ "final_hts_code": "XXXX.XX.XX.XX",
+ "element_confidences": {
+ "chapter": { "value": "XX", "confidence": "100% (Human Verified)", "score": 1.0 },
+ "heading": { "value": "XX", "confidence": "100% (Human Verified)", "score": 1.0 },
+ "subheading": { "value": "XX", "confidence": "100% (Human Verified)", "score": 1.0 },
+ "additional_subheading": { "value": "XX", "confidence": "100% (Human Verified)", "score": 1.0 },
+ "statistical_suffix": { "value": "XX", "confidence": "100% (Human Verified)", "score": 1.0 }
+ },
+ "is_final": true
+ }
+ Do NOT output JSON until you are absolutely confident. Otherwise, just output conversational text asking the clarifying question.
+ """)
+
+     messages = [sys_msg]
+     messages.append(HumanMessage(content=f"Initial Item Description: {item_desc}\n\nWeb Search Context:\n{search_results}\n\nTariff Context:\n{rag_context}"))
+
+     for msg in chat_history:
+         if msg["role"] == "user":
+             messages.append(HumanMessage(content=msg["content"]))
+         elif msg["role"] == "assistant":
+             messages.append(AIMessage(content=msg["content"]))
+
+     response = llm.invoke(messages)
+     content = response.content
+     if isinstance(content, list):
+         content = content[0].get("text", "") if isinstance(content[0], dict) else str(content[0])
+
+     content = content.strip()
+
+     # Try parsing as JSON
+     try:
+         start = content.find('{')
+         end = content.rfind('}') + 1
+         if start != -1 and end != 0:
+             json_str = content[start:end]
+             data = json.loads(json_str)
+             if data.get("is_final") or data.get("final_hts_code"):
+                 return {"is_final": True, "data": data}
+     except Exception:
+         pass
+
+     # If the output is not JSON (or failed to parse), treat it as a clarifying question
+     return {"is_final": False, "message": content}
+
+ def build_graph():
+     builder = StateGraph(GraphState)
+     # builder.add_node("search", search_node)
+     builder.add_node("rag", rag_node)
+     builder.add_node("decision", decision_node)
+
+     # builder.add_edge(START, "search")
+     # builder.add_edge("search", "rag")
+     builder.add_edge(START, "rag")
+     builder.add_edge("rag", "decision")
+     builder.add_edge("decision", END)
+
+     return builder.compile()
+
+ # Instantiate the executable graph
+ graph = build_graph()
+
+ def run_pipeline(item_description: str, llm: Any, disable_escalation: bool = False):
+     initial_state = {
+         "item_description": item_description,
+         "llm": llm,
+         "search_queries": [],
+         "search_results": "",
+         "rag_context": "",
+         "reasoning_steps": "",
+         "final_hts_code": "",
+         "element_confidences": {},
+         "escalation_needed": False,
+         "escalation_question": "",
+         "disable_escalation": disable_escalation
+     }
+     result = graph.invoke(initial_state)
+     return result
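The JSON-extraction fallback chain used in `decision_node` (brace slicing, then `json.loads`, then `ast.literal_eval` for Python-style dicts, then a naive quote swap) can be isolated into a standalone sketch; the function name here is illustrative:

```python
import ast
import json

def parse_model_json(content: str):
    """Extract a dict from raw LLM output, mirroring the
    fallback chain in decision_node."""
    # Slice from the first '{' to the last '}' to drop prose around the JSON
    start, end = content.find('{'), content.rfind('}') + 1
    json_str = content[start:end] if start != -1 and end != 0 else content
    attempts = (
        json.loads,                                   # strict JSON
        ast.literal_eval,                             # Python-style dicts
        lambda s: json.loads(s.replace("'", '"')),    # naive quote fix
    )
    for attempt in attempts:
        try:
            data = attempt(json_str)
            if isinstance(data, dict):
                return data
        except Exception:
            continue
    return None  # parsing completely failed

out = parse_model_json("Sure! {'chapter': '85', 'heading': '17'}")
# ast.literal_eval rescues the single-quoted dict that strict JSON rejects
```

Runs whose output survives none of the three attempts are simply dropped from the ensemble, which is why the voting denominator is `len(results_list)` rather than a fixed 5.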
knowledge.md ADDED
@@ -0,0 +1,3 @@
+ To activate the Python environment:
+ - run the `bash` command
+ - run the `source .venv/bin/activate` command
requirements.txt ADDED
@@ -0,0 +1,15 @@
+ langchain
+ langgraph
+ langchain-google-genai
+ langchain-ollama
+ chromadb
+ pypdf
+ sentence-transformers
+ ddgs
+ pandas
+ beautifulsoup4
+ requests
+ fastapi
+ uvicorn
+ python-multipart
+ langchain-huggingface
results.md ADDED
@@ -0,0 +1,8 @@
+ Model: gemini-3.1-flash-lite-preview
+ Total Items: 144
+ Chapter (first 2) Accuracy: 107/144 (74.31%)
+ Heading (next 2) Accuracy: 89/144 (61.81%)
+ Subheading 1 (next 2) Accuracy: 75/144 (52.08%)
+ Subheading 2 (next 2) Accuracy: 88/144 (61.11%)
+ Statistical Suffix (last 2) Accuracy: 69/144 (47.92%)
+ Whole (10 digits) Accuracy: 28/144 (19.44%)
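The per-segment numbers above come from comparing each 2-digit slice of the predicted 10-digit code against the ground-truth code, plus an exact-match check on the whole code. A minimal sketch of that scoring (`segment_accuracy` and the segment names are illustrative, not the exact code in `evaluate.py`):

```python
def segment_accuracy(pairs):
    # pairs: list of (predicted, expected) 10-digit HTS code strings.
    # Each 2-digit slice is scored independently, plus the whole code.
    segments = ["chapter", "heading", "subheading1", "subheading2", "suffix"]
    counts = {name: 0 for name in segments}
    counts["whole"] = 0
    for pred, expected in pairs:
        for i, name in enumerate(segments):
            if pred[2 * i:2 * i + 2] == expected[2 * i:2 * i + 2]:
                counts[name] += 1
        if pred == expected:
            counts["whole"] += 1
    return counts
```

Dividing each count by the number of items yields the percentages reported above.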
run_gemini.py ADDED
@@ -0,0 +1,44 @@
+ import os
+ from dotenv import load_dotenv
+ from graph import run_pipeline
+ from langchain_google_genai import ChatGoogleGenerativeAI
+
+ def main():
+     load_dotenv()
+     gemini_api_key = os.environ.get("GEMINI_API_KEY")
+     if not gemini_api_key:
+         raise SystemExit("GEMINI_API_KEY is not set; add it to your .env file.")
+     os.environ["GOOGLE_API_KEY"] = gemini_api_key
+
+     # Initialize Gemini LLM
+     gemini_llm = ChatGoogleGenerativeAI(
+         model="gemini-3.1-flash-lite-preview",
+         disable_streaming=True
+     )
+
+     print("Welcome to the Gemini HTS Code Generator!")
+     print("Enter 'quit' to exit.")
+
+     while True:
+         desc = input("\nEnter item description: ")
+         if desc.lower() in ["quit", "exit", "q"]:
+             break
+
+         print("Running pipeline (Search + RAG)...")
+         try:
+             result = run_pipeline(desc, gemini_llm)
+             print("\n" + "="*50)
+             print("REASONING STEPS & CONTEXT")
+             print("="*50)
+             print(f"Generated Search Queries: {result.get('search_queries', [])}")
+             print("\n--- Websites Read (Search Context) ---")
+             print(result.get("search_results", "None"))
+             print("\n--- Tariff PDF (RAG Context) ---")
+             print(result.get("rag_context", "None"))
+             print("="*50)
+             print(f"\n=> Predicted HTS Code: {result.get('final_hts_code', 'ERROR')}")
+         except Exception as e:
+             print(f"An error occurred: {e}")
+
+ if __name__ == "__main__":
+     main()
run_ollama.py ADDED
@@ -0,0 +1,34 @@
+ from graph import run_pipeline
+ from langchain_ollama import ChatOllama
+
+ def main():
+     # Initialize Ollama LLM
+     ollama_llm = ChatOllama(model="qwen3:4b")
+
+     print("Welcome to the Ollama (Qwen) HTS Code Generator!")
+     print("Note: Ensure you have run 'ollama serve' and have the model pulled.")
+     print("Enter 'quit' to exit.")
+
+     while True:
+         desc = input("\nEnter item description: ")
+         if desc.lower() in ["quit", "exit", "q"]:
+             break
+
+         print("Running pipeline (Search + RAG)...")
+         try:
+             result = run_pipeline(desc, ollama_llm)
+             print("\n" + "="*50)
+             print("REASONING STEPS & CONTEXT")
+             print("="*50)
+             print(f"Generated Search Queries: {result.get('search_queries', [])}")
+             print("\n--- Websites Read (Search Context) ---")
+             print(result.get("search_results", "None"))
+             print("\n--- Tariff PDF (RAG Context) ---")
+             print(result.get("rag_context", "None"))
+             print("="*50)
+             print(f"\n=> Predicted HTS Code: {result.get('final_hts_code', 'ERROR')}")
+         except Exception as e:
+             print(f"An error occurred: {e}")
+
+ if __name__ == "__main__":
+     main()
static/index.html ADDED
@@ -0,0 +1,108 @@
+ <!DOCTYPE html>
+ <html lang="en">
+
+ <head>
+     <meta charset="UTF-8">
+     <meta name="viewport" content="width=device-width, initial-scale=1.0">
+     <title>AI HTS Code Generator</title>
+     <link rel="stylesheet" href="/static/style.css">
+     <link href="https://fonts.googleapis.com/css2?family=Outfit:wght@300;400;600;700&display=swap" rel="stylesheet">
+     <script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
+ </head>
+
+ <body>
+     <div class="blob shape1"></div>
+     <div class="blob shape2"></div>
+
+     <div class="container">
+         <header>
+             <h1>Global Trade HTS Assistant</h1>
+             <p>Describe your product to generate a 10-digit Harmonized Tariff Schedule code using self-consistency AI.
+             </p>
+         </header>
+
+         <main>
+             <div class="input-section glass-panel">
+                 <textarea id="product-desc" placeholder="e.g., Men's 100% cotton woven long sleeve shirt..."
+                     rows="3"></textarea>
+                 <button id="generate-btn" class="primary-btn">Lookup HTS Code</button>
+             </div>
+
+             <div id="results-section" class="results-section hidden">
+                 <div class="glass-panel output-panel">
+                     <div id="final-code-container">
+                         <h2>Final HTS Code</h2>
+                         <div id="hts-code-display" class="hts-code-display"></div>
+                         <div class="confidence-grid" id="confidence-grid">
+                             <!-- Confidence bars injected here -->
+                         </div>
+                     </div>
+
+                     <div id="escalation-warning" class="hidden warning-box">
+                         <svg viewBox="0 0 24 24" class="icon">
+                             <path fill="currentColor" d="M12 2L1 21h22M12 6l7.5 13h-15M11 10v4h2v-4m-2 6v2h2v-2"></path>
+                         </svg>
+                         <span>Human Review Escalation Required: Our AI needs your help to finalize this code.</span>
+                     </div>
+
+                     <!-- Chatbot UI for Escalation -->
+                     <div id="escalation-chat-container" class="hidden">
+                         <div class="accordion">
+                             <div class="accordion-item active" id="chat-accordion-item">
+                                 <button class="accordion-header" id="chat-accordion-btn">
+                                     Clarification Chat
+                                     <span class="arrow">▼</span>
+                                 </button>
+                                 <div class="accordion-content">
+                                     <div class="escalation-chat">
+                                         <div id="chat-messages" class="chat-messages">
+                                             <!-- Messages injected here -->
+                                         </div>
+                                         <div class="chat-input-area" id="chat-input-area">
+                                             <input type="text" id="chat-input"
+                                                 placeholder="Type your response here..." />
+                                             <button id="send-chat-btn" class="chat-btn">Send</button>
+                                         </div>
+                                     </div>
+                                 </div>
+                             </div>
+                         </div>
+                     </div>
+                 </div>
+
+                 <div class="glass-panel reasoning-panel" id="reasoning-panel-container">
+                     <h3>AI Reasoning Chain</h3>
+                     <div id="reasoning-text" class="markdown-body"></div>
+
+                     <div class="accordion">
+                         <div class="accordion-item">
+                             <button class="accordion-header">
+                                 View Search & RAG References
+                                 <span class="arrow">▼</span>
+                             </button>
+                             <div class="accordion-content">
+                                 <h4>Search Queries Used</h4>
+                                 <ul id="search-queries"></ul>
+
+                                 <h4>Web Context</h4>
+                                 <pre id="web-context"></pre>
+
+                                 <h4>Canadian Tariff PDF Context</h4>
+                                 <pre id="rag-context"></pre>
+                             </div>
+                         </div>
+                     </div>
+                 </div>
+             </div>
+
+             <div id="loader" class="loader-container hidden">
+                 <div class="spinner"></div>
+                 <p id="loader-message">Analyzing product context...</p>
+             </div>
+         </main>
+     </div>
+
+     <script src="/static/script.js"></script>
+ </body>
+
+ </html>
static/script.js ADDED
@@ -0,0 +1,250 @@
+ document.addEventListener('DOMContentLoaded', () => {
+     const generateBtn = document.getElementById('generate-btn');
+     const productDesc = document.getElementById('product-desc');
+     const resultsSection = document.getElementById('results-section');
+     const loader = document.getElementById('loader');
+
+     // Accordion Logic
+     document.querySelectorAll('.accordion-header').forEach(header => {
+         header.addEventListener('click', () => {
+             const item = header.parentElement;
+             item.classList.toggle('active');
+             const content = item.querySelector('.accordion-content');
+             if (item.classList.contains('active')) {
+                 content.style.maxHeight = content.scrollHeight + "px";
+             } else {
+                 content.style.maxHeight = "0";
+             }
+         });
+     });
+
+     generateBtn.addEventListener('click', async () => {
+         const desc = productDesc.value.trim();
+         if (!desc) return alert("Please enter a product description!");
+
+         // UI Reset
+         resultsSection.classList.add('hidden');
+         loader.classList.remove('hidden');
+         generateBtn.disabled = true;
+
+         const loaderMessage = document.getElementById('loader-message');
+         const loadingSteps = [
+             "Parsing item description...",
+             "Searching DuckDuckGo for context...",
+             "Retrieving official Canadian Tariff schedules...",
+             "Running self-consistency ensemble 1 of 3...",
+             "Running self-consistency ensemble 2 of 3...",
+             "Running self-consistency ensemble 3 of 3...",
+             "Cross-verifying confidence and vote share...",
+             "Finalizing HTS code and extracting reasoning..."
+         ];
+         let stepIdx = 0;
+         loaderMessage.textContent = loadingSteps[0];
+
+         const loaderInterval = setInterval(() => {
+             stepIdx++;
+             if (stepIdx < loadingSteps.length) {
+                 loaderMessage.style.opacity = '0';
+                 setTimeout(() => {
+                     loaderMessage.textContent = loadingSteps[stepIdx];
+                     loaderMessage.style.transition = 'opacity 0.3s ease';
+                     loaderMessage.style.opacity = '1';
+                 }, 300);
+             } else {
+                 loaderMessage.style.opacity = '0';
+                 setTimeout(() => {
+                     loaderMessage.textContent = "Almost done... rendering final classification.";
+                     loaderMessage.style.opacity = '1';
+                 }, 300);
+             }
+         }, 3000);
+
+         try {
+             const res = await fetch('/api/classify', {
+                 method: 'POST',
+                 headers: { 'Content-Type': 'application/json' },
+                 body: JSON.stringify({ description: desc })
+             });
+             const data = await res.json();
+
+             if (res.ok) {
+                 renderResults(data);
+             } else {
+                 alert("Error: " + data.detail);
+             }
+         } catch (err) {
+             console.error(err);
+             alert("An error occurred while calling the API.");
+         } finally {
+             clearInterval(loaderInterval);
+             loader.classList.add('hidden');
+             generateBtn.disabled = false;
+         }
+     });
+
+     function renderResults(data) {
+         resultsSection.classList.remove('hidden');
+
+         // HTS Code & Warning
+         document.getElementById('hts-code-display').textContent = data.final_hts_code || "XXXX.XX.XX.XX";
+
+         const warningBox = document.getElementById('escalation-warning');
+         if (data.escalation_needed) {
+             warningBox.classList.remove('hidden');
+         } else {
+             warningBox.classList.add('hidden');
+         }
+
+         // Confidence Grid
+         renderConfidenceGrid(data.element_confidences);
+
+         // Reasoning & Context
+         document.getElementById('reasoning-text').innerText = data.reasoning_steps || "No reasoning provided.";
+         document.getElementById('search-queries').innerHTML = (data.search_queries || []).map(q => `<li>${q}</li>`).join('');
+         document.getElementById('web-context').innerText = data.search_results || "None.";
+         document.getElementById('rag-context').innerText = data.rag_context || "None.";
+
+         // Chatbot Initialization
+         const chatContainer = document.getElementById('escalation-chat-container');
+         const chatMessages = document.getElementById('chat-messages');
+         const chatInput = document.getElementById('chat-input');
+         const sendChatBtn = document.getElementById('send-chat-btn');
+         chatMessages.innerHTML = "";
+
+         const finalCodeContainer = document.getElementById('final-code-container');
+         const reasoningPanelContainer = document.getElementById('reasoning-panel-container');
+
+         if (data.escalation_needed) {
+             chatContainer.classList.remove('hidden');
+             finalCodeContainer.classList.add('hidden');
+             reasoningPanelContainer.classList.add('hidden');
+
+             window.currentEscalationContext = {
+                 description: document.getElementById('product-desc').value.trim(),
+                 search_results: data.search_results,
+                 rag_context: data.rag_context,
+                 chat_history: []
+             };
+
+             appendChatMessage("assistant", data.escalation_question || "I need more details to finalize this code.");
+         } else {
+             chatContainer.classList.add('hidden');
+             finalCodeContainer.classList.remove('hidden');
+             reasoningPanelContainer.classList.remove('hidden');
+         }
+
+         // Chat Handlers
+         sendChatBtn.onclick = async () => {
+             const text = chatInput.value.trim();
+             if (!text) return;
+
+             appendChatMessage("user", text);
+             chatInput.value = "";
+             sendChatBtn.disabled = true;
+
+             try {
+                 const res = await fetch('/api/escalation', {
+                     method: 'POST',
+                     headers: { 'Content-Type': 'application/json' },
+                     body: JSON.stringify(window.currentEscalationContext)
+                 });
+                 const escData = await res.json();
+
+                 if (res.ok && escData.success) {
+                     const r = escData.result;
+                     if (r.is_final) {
+                         appendChatMessage("assistant", "Excellent, I have enough confidence now! Updating the final code.");
+                         document.getElementById('hts-code-display').textContent = r.data.final_hts_code;
+                         document.getElementById('escalation-warning').classList.add('hidden');
+                         document.getElementById('chat-input-area').classList.add('hidden');
+
+                         // Show AI reasoning chain
+                         if (r.data.reasoning) {
+                             document.getElementById('reasoning-text').innerText = r.data.reasoning;
+                         }
+
+                         // Re-render confidence grid with human-verified values
+                         if (r.data.element_confidences) {
+                             renderConfidenceGrid(r.data.element_confidences);
+                         }
+
+                         // Output the final HTS code and reasoning, show the confidence grid
+                         document.getElementById('final-code-container').classList.remove('hidden');
+                         document.getElementById('reasoning-panel-container').classList.remove('hidden');
+                         document.getElementById('confidence-grid').classList.remove('hidden');
+
+                         // Collapse the Chat Accordion
+                         const chatAccordionItem = document.getElementById('chat-accordion-item');
+                         if (chatAccordionItem) {
+                             chatAccordionItem.classList.remove('active');
+                             const content = chatAccordionItem.querySelector('.accordion-content');
+                             if (content) content.style.maxHeight = "0";
+                         }
+                     } else {
+                         appendChatMessage("assistant", r.message);
+                     }
+                 } else {
+                     alert("Chat error: " + (escData.detail || "Unknown error"));
+                 }
+             } catch (err) {
+                 console.error(err);
+                 alert("Error sending chat message.");
+             } finally {
+                 sendChatBtn.disabled = false;
+             }
+         };
+
+         chatInput.onkeypress = (e) => {
+             if (e.key === 'Enter') sendChatBtn.click();
+         };
+     }
+
+     function renderConfidenceGrid(element_confidences) {
+         if (!element_confidences) return;
+
+         const grid = document.getElementById('confidence-grid');
+         grid.innerHTML = '';
+         const order = [
+             { key: 'chapter', label: 'Chapter' },
+             { key: 'heading', label: 'Heading' },
+             { key: 'subheading', label: 'Subheading' },
+             { key: 'additional_subheading', label: 'Additional' },
+             { key: 'statistical_suffix', label: 'Statistical' }
+         ];
+
+         order.forEach(item => {
+             const conf = element_confidences[item.key];
+             if (!conf) return;
+
+             const isLow = conf.score < 0.6;
+             const card = document.createElement('div');
+             card.className = `conf-card ${isLow ? 'low-conf' : ''}`;
+
+             card.innerHTML = `
+                 <div class="label">${item.label}</div>
+                 <div class="value">${conf.value}</div>
+                 <div class="score">${conf.confidence}</div>
+             `;
+             grid.appendChild(card);
+         });
+     }
+
+     function appendChatMessage(role, text) {
+         const chatMessages = document.getElementById('chat-messages');
+         const div = document.createElement('div');
+         div.className = `chat-msg msg-${role}`;
+
+         if (role === 'assistant' && typeof window.marked !== 'undefined') {
+             div.innerHTML = window.marked.parse(text);
+         } else {
+             div.textContent = text;
+         }
+
+         chatMessages.appendChild(div);
+         chatMessages.scrollTop = chatMessages.scrollHeight;
+
+         if (window.currentEscalationContext) {
+             window.currentEscalationContext.chat_history.push({ role, content: text });
+         }
+     }
+ });
static/style.css ADDED
@@ -0,0 +1,492 @@
+ :root {
+     --primary: #059669;
+     --primary-glow: rgba(5, 150, 105, 0.2);
+     --bg-light: #f1f5f9;
+     --glass-bg: rgba(255, 255, 255, 0.7);
+     --glass-border: rgba(255, 255, 255, 0.4);
+     --text-main: #0f172a;
+     --text-muted: #475569;
+     --danger: #e11d48;
+     --warning: #d97706;
+     --card-bg: rgba(255, 255, 255, 0.5);
+ }
+
+ * {
+     box-sizing: border-box;
+     margin: 0;
+     padding: 0;
+ }
+
+ body {
+     font-family: 'Outfit', sans-serif;
+     background-color: var(--bg-light);
+     color: var(--text-main);
+     min-height: 100vh;
+     overflow-x: hidden;
+     position: relative;
+ }
+
+ /* Dynamic Background Blobs */
+ .blob {
+     position: absolute;
+     filter: blur(80px);
+     z-index: -1;
+     opacity: 0.4;
+ }
+
+ .shape1 {
+     top: -100px;
+     left: -100px;
+     width: 500px;
+     height: 500px;
+     background: radial-gradient(circle, #bae6fd, transparent);
+     animation: float 10s ease-in-out infinite;
+ }
+
+ .shape2 {
+     bottom: -150px;
+     right: -50px;
+     width: 600px;
+     height: 600px;
+     background: radial-gradient(circle, #ddd6fe, transparent);
+     animation: float 12s ease-in-out infinite reverse;
+ }
+
+ @keyframes float {
+     0% {
+         transform: translate(0, 0);
+     }
+
+     50% {
+         transform: translate(40px, -60px);
+     }
+
+     100% {
+         transform: translate(0, 0);
+     }
+ }
+
+ .container {
+     max-width: 1200px;
+     margin: 0 auto;
+     padding: 3rem 1.5rem;
+ }
+
+ header {
+     text-align: center;
+     margin-bottom: 3.5rem;
+ }
+
+ header h1 {
+     font-size: 2.8rem;
+     font-weight: 700;
+     margin-bottom: 0.75rem;
+     background: linear-gradient(to right, #059669, #2563eb);
+     -webkit-background-clip: text;
+     background-clip: text;
+     -webkit-text-fill-color: transparent;
+     letter-spacing: -0.5px;
+ }
+
+ header p {
+     color: var(--text-muted);
+     font-size: 1.15rem;
+     max-width: 600px;
+     margin: 0 auto;
+ }
+
+ .glass-panel {
+     background: var(--glass-bg);
+     backdrop-filter: blur(16px);
+     -webkit-backdrop-filter: blur(16px);
+     border: 1px solid var(--glass-border);
+     border-radius: 24px;
+     padding: 2.5rem;
+     box-shadow: 0 10px 40px rgba(0, 0, 0, 0.05), inset 0 0 0 1px rgba(255, 255, 255, 0.4);
+ }
+
+ .input-section {
+     display: flex;
+     flex-direction: column;
+     gap: 1.5rem;
+     margin-bottom: 2.5rem;
+ }
+
+ textarea {
+     width: 100%;
+     background: rgba(255, 255, 255, 0.8);
+     border: 1px solid rgba(15, 23, 42, 0.1);
+     border-radius: 12px;
+     color: var(--text-main);
+     padding: 1.25rem;
+     font-family: inherit;
+     font-size: 1.05rem;
+     resize: vertical;
+     outline: none;
+     transition: all 0.3s ease;
+     box-shadow: 0 2px 10px rgba(0, 0, 0, 0.02);
+ }
+
+ textarea:focus {
+     border-color: var(--primary);
+     box-shadow: 0 0 0 4px var(--primary-glow);
+ }
+
+ .primary-btn {
+     background: linear-gradient(135deg, #059669 0%, #10b981 100%);
+     color: white;
+     border: none;
+     padding: 1.1rem 2.5rem;
+     font-size: 1.1rem;
+     font-weight: 600;
+     border-radius: 12px;
+     cursor: pointer;
+     transition: all 0.3s ease;
+     align-self: flex-end;
+     box-shadow: 0 4px 15px rgba(16, 185, 129, 0.3);
+ }
+
+ .primary-btn:hover {
+     transform: translateY(-2px);
+     box-shadow: 0 6px 20px rgba(16, 185, 129, 0.4);
+ }
+
+ .primary-btn:active {
+     transform: translateY(0);
+ }
+
+ /* Results Section */
+ .hidden {
+     display: none !important;
+ }
+
+ .results-section {
+     display: flex;
+     flex-direction: column;
+     gap: 2.5rem;
+     animation: fadeIn 0.5s ease-out;
+ }
+
+ @keyframes fadeIn {
+     from {
+         opacity: 0;
+         transform: translateY(10px);
+     }
+
+     to {
+         opacity: 1;
+         transform: translateY(0);
+     }
+ }
+
+ .hts-code-display {
+     font-size: 3.5rem;
+     letter-spacing: 5px;
+     text-align: center;
+     font-weight: 800;
+     margin: 1.5rem 0;
+     color: var(--primary);
+     filter: drop-shadow(0 0 1px rgba(5, 150, 105, 0.3));
+ }
+
+ .warning-box {
+     background: rgba(225, 29, 72, 0.05);
+     border: 1px solid rgba(225, 29, 72, 0.1);
+     border-left: 5px solid var(--danger);
+     padding: 1.25rem;
+     border-radius: 12px;
+     display: flex;
+     align-items: center;
+     gap: 1rem;
+     color: #9f1239;
+     margin-bottom: 2rem;
+ }
+
+ .warning-box svg {
+     width: 28px;
+     height: 28px;
+     flex-shrink: 0;
+ }
+
+ .confidence-grid {
+     display: grid;
+     grid-template-columns: repeat(auto-fit, minmax(170px, 1fr));
+     gap: 1.25rem;
+     margin-top: 2rem;
+ }
+
+ .conf-card {
+     background: var(--card-bg);
+     padding: 1.5rem;
+     border-radius: 16px;
+     text-align: center;
+     border: 1px solid var(--glass-border);
+     box-shadow: 0 4px 12px rgba(0, 0, 0, 0.03);
+     transition: transform 0.3s ease;
+ }
+
+ .conf-card:hover {
+     transform: translateY(-4px);
+ }
+
+ .conf-card.low-conf {
+     border-color: rgba(225, 29, 72, 0.3);
+     background: rgba(225, 29, 72, 0.03);
+ }
+
+ .conf-card .label {
+     font-size: 0.85rem;
+     color: var(--text-muted);
+     text-transform: uppercase;
+     letter-spacing: 1.2px;
+     font-weight: 600;
+     margin-bottom: 0.5rem;
+ }
+
+ .conf-card .value {
+     font-size: 1.75rem;
+     font-weight: 700;
+     margin: 0.5rem 0;
+     color: var(--text-main);
+ }
+
+ .conf-card .score {
+     font-size: 0.95rem;
+     font-weight: 600;
+     color: var(--primary);
+ }
+
+ .conf-card.low-conf .score {
+     color: var(--danger);
+ }
+
+ /* Accordion */
+ .accordion {
+     margin-top: 1rem;
+ }
+
+ .accordion-item {
+     border: 1px solid var(--glass-border);
+     border-radius: 12px;
+     margin-bottom: 1rem;
+     background: rgba(255, 255, 255, 0.3);
+     overflow: hidden;
+ }
+
+ .accordion-header {
+     width: 100%;
+     padding: 1.25rem;
+     background: transparent;
+     border: none;
+     font-size: 1.05rem;
+     font-weight: 600;
+     color: var(--text-main);
+     cursor: pointer;
+     display: flex;
+     justify-content: space-between;
+     align-items: center;
+     transition: background 0.3s ease;
+ }
+
+ .accordion-header:hover {
+     background: rgba(255, 255, 255, 0.4);
+ }
+
+ .accordion-content {
+     padding: 0 1.25rem;
+     max-height: 0;
+     overflow: hidden;
+     transition: all 0.4s cubic-bezier(0.4, 0, 0.2, 1);
+ }
+
+ .accordion-item.active .accordion-content {
+     padding: 1.25rem;
+     max-height: 3000px;
+ }
+
+ .accordion-item.active .arrow {
+     transform: rotate(180deg);
+ }
+
+ .arrow {
+     transition: transform 0.3s ease;
+     font-size: 0.8rem;
+     opacity: 0.5;
+ }
+
+ #reasoning-text {
+     line-height: 1.7;
+     color: var(--text-main);
+     font-size: 1.05rem;
+     margin-bottom: 2rem;
+     white-space: pre-wrap;
+ }
+
+ pre {
+     background: #f8fafc;
+     border: 1px solid #e2e8f0;
+     padding: 1.25rem;
+     border-radius: 8px;
+     overflow-x: auto;
+     font-size: 0.9rem;
+     color: #475569;
+     white-space: pre-wrap;
+     margin-top: 1rem;
+     margin-bottom: 1.5rem;
+     font-family: 'Fira Code', monospace;
+ }
+
+ h3 {
+     font-size: 1.4rem;
+     color: var(--text-main);
+     margin-bottom: 1.25rem;
+ }
+
+ h4 {
+     color: var(--text-main);
+     margin-bottom: 0.75rem;
+     font-weight: 600;
+ }
+
+ /* Loader */
+ .loader-container {
+     display: flex;
+     flex-direction: column;
+     align-items: center;
+     justify-content: center;
+     padding: 5rem 0;
+ }
+
+ .spinner {
+     width: 60px;
+     height: 60px;
+     border: 5px solid rgba(5, 150, 105, 0.1);
+     border-top-color: var(--primary);
+     border-radius: 50%;
+     animation: spin 1s cubic-bezier(0.5, 0.1, 0.4, 0.9) infinite;
+     margin-bottom: 1.5rem;
+     box-shadow: 0 0 20px rgba(5, 150, 105, 0.1);
+ }
+
+ @keyframes spin {
+     100% {
+         transform: rotate(360deg);
+     }
+ }
+
+ .loader-container p {
+     color: var(--text-muted);
+     font-weight: 500;
+     animation: pulse 2s infinite;
+ }
+
+ @keyframes pulse {
+
+     0%,
+     100% {
+         opacity: 1;
+     }
+
+     50% {
+         opacity: 0.6;
+     }
+ }
+
+ #search-queries li {
+     margin-bottom: 0.5rem;
+     margin-left: 1.5rem;
+     color: var(--text-muted);
+ }
+
+ /* Escalation Chat */
+ .escalation-chat {
+     margin-top: 2rem;
+     border-top: 1px solid var(--glass-border);
+     padding-top: 1.5rem;
+ }
+
+ .escalation-chat h3 {
+     font-size: 1.25rem;
+     margin-bottom: 1rem;
+     color: var(--warning);
+ }
+
+ .chat-messages {
+     max-height: 300px;
+     overflow-y: auto;
+     padding: 1rem;
+     background: rgba(255, 255, 255, 0.4);
+     border-radius: 12px;
+     margin-bottom: 1rem;
+     display: flex;
+     flex-direction: column;
+     gap: 0.75rem;
+     border: 1px solid var(--glass-border);
+ }
+
+ .chat-msg {
+     padding: 0.85rem 1.25rem;
+     border-radius: 16px;
+     max-width: 80%;
+     font-size: 0.95rem;
+     line-height: 1.5;
+     animation: fadeIn 0.3s ease;
+ }
+
+ .msg-assistant {
+     background: var(--card-bg);
+     border: 1px solid var(--glass-border);
+     color: var(--text-main);
+     align-self: flex-start;
+     border-bottom-left-radius: 4px;
+ }
+
+ .msg-user {
+     background: var(--primary);
+     color: white;
+     align-self: flex-end;
+     border-bottom-right-radius: 4px;
+     box-shadow: 0 4px 10px rgba(5, 150, 105, 0.2);
+ }
+
+ .chat-input-area {
+     display: flex;
+     gap: 0.5rem;
+ }
+
+ .chat-input-area input {
+     flex: 1;
+     background: rgba(255, 255, 255, 0.8);
+     border: 1px solid rgba(15, 23, 42, 0.1);
+     border-radius: 12px;
+     padding: 1rem;
+     font-family: inherit;
+     font-size: 1rem;
+     outline: none;
+     transition: all 0.3s ease;
+ }
+
+ .chat-input-area input:focus {
+     border-color: var(--primary);
+     box-shadow: 0 0 0 4px var(--primary-glow);
+ }
+
+ .chat-btn {
+     background: var(--text-main);
+     color: white;
+     border: none;
+     border-radius: 12px;
+     padding: 0 1.5rem;
+     font-weight: 600;
+     cursor: pointer;
+     transition: all 0.2s;
+ }
+
+ .chat-btn:hover {
+     background: #1e293b;
+ }
+
+ .chat-btn:disabled {
+     background: #cbd5e1;
+     cursor: not-allowed;
+ }
test_data.csv ADDED
@@ -0,0 +1,2 @@
+ desc,label
+ sports balls latex,9506620090