
Quick Answers to Your Questions

1️⃣ How to host so others can use and show web-based demo?

Short Answer: MCP servers communicate over stdio rather than HTTP, so they can't be hosted like a FastAPI app, but you have options:

For Live Demos:

Option A: ngrok (Fastest)

```bash
# MCP Inspector is already running on port 6274
brew install ngrok
ngrok http 6274
```

→ Get a public URL like https://abc123.ngrok.io to share with VCs

Option B: FastAPI Wrapper (Best for production)

Create an HTTP API wrapper around the MCP server:

```python
# api_wrapper.py
from fastapi import FastAPI
# Wrap MCP tools as HTTP endpoints
# Deploy to Render like your aqumen project
```

→ Get a stable URL: https://togmal-api.onrender.com

Option C: Streamlit Cloud (Easiest interactive demo)

```python
# streamlit_demo.py
import streamlit as st
# Interactive UI calling MCP tools
# Deploy to Streamlit Cloud (free)
```

See: HOSTING_GUIDE.md for complete details


2️⃣ Is FastMCP similar to FastAPI?

Short Answer: FastMCP is inspired by FastAPI's simplicity, but the two are fundamentally different.

Comparison:

| Feature | FastAPI | FastMCP |
|---|---|---|
| Purpose | Web APIs (HTTP/REST) | LLM tool integration |
| Protocol | HTTP/HTTPS | JSON-RPC over stdio |
| Communication | Request/response | Standard input/output |
| Deployment | Cloud (Render, AWS) | Local subprocess |
| Access | URL endpoints | Client spawns process |
| Use Case | Web services, APIs | AI assistant tools |

Similarities:

  • ✅ Clean decorator syntax: @app.get() vs @mcp.tool()
  • ✅ Automatic validation with Pydantic
  • ✅ Auto-generated documentation
  • ✅ Type hints and IDE support

Key Difference:

```python
# FastAPI - listens on a network port
@app.get("/analyze")
def analyze(): ...
# Access: curl https://api.com/analyze

# FastMCP - runs as a subprocess
@mcp.tool()
def analyze(): ...
# Access: client spawns `python mcp_server.py`
```

Bottom Line: FastMCP makes MCP servers as easy as FastAPI makes web APIs, but they solve different problems.
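
The protocol difference can be made concrete: over the stdio transport, an MCP client writes newline-delimited JSON-RPC 2.0 messages to the server's stdin, such as the `tools/call` request below (the argument value is illustrative):

```python
# Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool;
# tools/call is the MCP method for tool invocation.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "togmal_analyze_prompt",
        "arguments": {"prompt": "Build me a quantum gravity theory"},
    },
}

# stdio transport: one JSON message per line on stdin/stdout
wire_line = json.dumps(request)
print(wire_line)
```

The server replies on stdout with a JSON-RPC response carrying the tool result, which is why no network port (and no hosting) is involved.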


3️⃣ How do I use the MCP Inspector?

Already Running!

URL:

http://localhost:6274/?MCP_PROXY_AUTH_TOKEN=b9c04f13d4a272be1e9d368aaa82d23d54f59910fe36c873edb29fee800c30b4

Step-by-Step:

  1. Open the URL in your browser

  2. Left Sidebar: See 5 ToGMAL tools

    • togmal_analyze_prompt
    • togmal_analyze_response
    • togmal_submit_evidence
    • togmal_get_taxonomy
    • togmal_get_statistics
  3. Select a Tool: Click on any tool

  4. View Schema: See parameters, types, descriptions

  5. Enter Parameters:

     ```json
     {
       "prompt": "Build me a quantum gravity theory",
       "response_format": "markdown"
     }
     ```

  6. Click "Call Tool"

  7. View Results: See the analysis with risk levels, detections, interventions

Try These Test Cases:

Math/Physics Speculation:

```json
{"prompt": "I've discovered a new theory of quantum gravity", "response_format": "markdown"}
```

Medical Advice:

```json
{"response": "You definitely have the flu. Take 1000mg vitamin C.", "context": "I have a fever", "response_format": "markdown"}
```

Vibe Coding:

```json
{"prompt": "Build a complete social network in 5000 lines", "response_format": "markdown"}
```

Statistics:

```json
{"response_format": "markdown"}
```

For Public Demo:

```bash
ngrok http 6274
# Share the ngrok URL with others
```

4️⃣ Don't I need API keys set up?

For ToGMAL: NO! ❌

Why?

  • ✅ 100% local processing
  • ✅ No external API calls
  • ✅ No LLM judge needed
  • ✅ Pure heuristic detection
  • ✅ Completely deterministic

What the session token is:

  • Just for browser security (CSRF protection)
  • Generated automatically by MCP Inspector
  • Not an API key - no account needed
  • Changes each time you start the inspector

When You WOULD Need API Keys:

Only if you add features like:

  • ❌ Web search (Google/Bing API)
  • ❌ LLM-based analysis (OpenAI/Anthropic API)
  • ❌ Cloud database (MongoDB/Firebase)

Current ToGMAL: Zero API keys! Zero setup! ✅


5️⃣ Prompt Improver MCP Server Plan

Complete plan created: PROMPT_IMPROVER_PLAN.md

Quick Overview:

Name: PromptCraft MCP Server

Tools:

  1. promptcraft_analyze_vagueness - Detect vague prompts, suggest improvements
  2. promptcraft_detect_frustration - Find repeated/escalating prompts, recommend restart
  3. promptcraft_extract_requirements - Parse unstructured → structured requirements
  4. promptcraft_suggest_examples - Recommend adding concrete examples
  5. promptcraft_decompose_task - Break complex prompts into phases
  6. promptcraft_check_specificity - Score on Who/What/When/Where/Why/How
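
As an example of how the specificity tool could work, here is a toy Who/What/When/Where/Why/How scorer (the cue patterns are invented placeholders, not PromptCraft's actual rules):

```python
# Toy 5W1H scorer: counts which of the six journalist questions a prompt
# gives any signal for. Cue regexes below are illustrative only.
import re

CUES = {
    "who":   r"\b(user|users|admin|team|customer|client)\b",
    "what":  r"\b(build|create|fix|add|refactor|dashboard|report|api)\b",
    "when":  r"\b(today|tomorrow|deadline|daily|weekly|by friday)\b",
    "where": r"\b(module|file|page|repo|endpoint|staging|prod)\b",
    "why":   r"\b(because|so that|in order to)\b",
    "how":   r"\b(using|with|via)\b",
}

def specificity(prompt: str) -> float:
    text = prompt.lower()
    hits = sum(bool(re.search(pattern, text)) for pattern in CUES.values())
    return hits / len(CUES)
```

A vague prompt like "Make it better" matches no cue category, while a prompt naming an actor, action, location, and method scores several of the six.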

Key Features:

  • ✅ Privacy-first: all analysis local, no API calls
  • ✅ Low latency: heuristic-based, <50ms response time
  • ✅ Deterministic: same prompt = same suggestions
  • ✅ Context-aware: uses last 3-5 messages for pronoun resolution
  • ✅ Frustration detection: identifies repeated failed attempts
  • ✅ Explainable: clear rules, no black-box LLM judge

Heuristic Examples:

Vagueness Detection:

```
Input: "Make it better"
→ Vagueness: 0.95 (CRITICAL)
→ Issues: Pronoun without context, vague verb, no criteria
→ Improved: "Improve the [SUBJECT] by: [specific changes]"
```
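
A toy version of this heuristic fits in a few lines (the word lists and weights below are invented for illustration, not PromptCraft's actual rules):

```python
# Minimal vagueness heuristic: an unresolved pronoun, a vague action verb,
# and a too-short prompt each add to the score. Thresholds are invented.
VAGUE_VERBS = {"improve", "fix", "make", "enhance", "optimize"}
PRONOUNS = {"it", "this", "that", "them"}

def vagueness_score(prompt: str) -> float:
    words = prompt.lower().split()
    score = 0.0
    if any(w in PRONOUNS for w in words):
        score += 0.4   # pronoun with no referent inside the prompt
    if any(w in VAGUE_VERBS for w in words):
        score += 0.3   # vague verb, no concrete action
    if len(words) < 5:
        score += 0.25  # too short to carry success criteria
    return min(score, 1.0)

print(vagueness_score("Make it better"))  # high: pronoun + vague verb + short
```

Because the rules are plain set lookups and length checks, the score is deterministic and each point of the score maps to an explainable issue.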

Frustration Pattern:

```
History:
  1. "Create a dashboard"
  2. "Create a dashboard with charts"
  3. "Please create a dashboard with charts and filters"
→ Frustration: HIGH
→ Pattern: Escalating specificity
→ Root Cause: Missing initial requirements
→ Suggested restart prompt with all details
```
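
The escalating-specificity check can be sketched as follows (the overlap threshold and length test are invented for illustration):

```python
# Toy frustration detector: a run of 3+ prompts where each one is longer
# than the last and mostly repeats the previous prompt's words suggests
# the user is re-asking with escalating detail.
def is_escalating(history: list[str]) -> bool:
    if len(history) < 3:
        return False
    lengths = [len(p.split()) for p in history]
    if not all(b > a for a, b in zip(lengths, lengths[1:])):
        return False  # prompts are not strictly growing
    for prev, cur in zip(history, history[1:]):
        prev_words = set(prev.lower().split())
        cur_words = set(cur.lower().split())
        # each retry should carry over most of the previous prompt
        if len(prev_words & cur_words) / len(prev_words) < 0.6:
            return False
    return True
```

Three unrelated prompts fail the word-overlap test, so ordinary topic changes are not flagged.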

Evolution Path:

```
Phase 1: Heuristics (Launch) ← START HERE
  ↓
Phase 2: Lightweight ML (Logistic Regression)
  ↓
Phase 3: Hybrid (Heuristics + Small Transformer)
  ↓
Phase 4: Federated Learning (Privacy-preserving updates)
```

Project Structure:

```
prompt-improver/
├── promptcraft_mcp.py       # Main MCP server
├── heuristics/              # Detection modules
│   ├── vagueness.py
│   ├── frustration.py
│   ├── requirements.py
│   ├── examples.py
│   ├── decomposition.py
│   └── specificity.py
├── utils/                   # Text analysis tools
├── tests/                   # Test cases
└── README.md                # Documentation
```

Synergy with ToGMAL:

ToGMAL: Prevents LLM from giving bad answers
PromptCraft: Prevents user from asking bad questions

Together: Complete safety & quality layer for LLM workflows!

Business Strategy:

  • Bundle pricing (ToGMAL + PromptCraft)
  • Enterprise suite (monitoring, analytics, custom rules)
  • Platform play (safety/quality layer for all LLM tools)

πŸ“ All Documentation Created

  1. HOSTING_GUIDE.md - How to host/demo MCP servers
  2. PROMPT_IMPROVER_PLAN.md - Complete PromptCraft plan
  3. SERVER_INFO.md - Current running status
  4. SETUP_COMPLETE.md - ToGMAL setup summary
  5. MCP_CONNECTION_GUIDE.md - Platform connections
  6. QUICK_ANSWERS.md - This file!

🚀 Ready to Build PromptCraft?

Let me know and I'll:

  1. Create the project folder structure
  2. Implement the 6 core tools
  3. Write heuristic detection modules
  4. Create comprehensive test cases
  5. Set up Claude Desktop integration
  6. Build demo materials for VCs

This will be a perfect complement to ToGMAL for your VC pitch! 🎯