---
title: IITM LLM Quiz Solver
emoji: 🧠
colorFrom: green
colorTo: blue
sdk: docker
sdk_version: '0'
app_file: app/main.py
pinned: false
---
A Python project that exposes a FastAPI endpoint to automatically solve dynamic quiz tasks using a headless browser and optional LLM reasoning.
## Features

- FastAPI-based REST API
- Playwright for headless browser automation
- OpenAI GPT integration for complex reasoning
- Data processing (CSV, JSON, PDF, etc.)
- Recursive quiz solving
- Async/await for performance
- Docker support for easy deployment
## Project Structure

```
app/
  main.py    # FastAPI server
  solver.py  # All quiz-solving logic (browser, LLM, calculations, handlers)
Dockerfile
requirements.txt
README.md
LICENSE
```
## Installation

### Local Development

- Clone the repository:

```bash
git clone <repository-url>
cd IITMTdsPrj2
```

- Install Python dependencies:

```bash
pip install -r requirements.txt
```

- Install Playwright browsers:

```bash
playwright install chromium
```

- Set environment variables:

Quick setup (Windows PowerShell):

```powershell
.\setup_env.ps1
```

Quick setup (Linux/Mac):

```bash
source setup_env.sh
```

Manual setup (choose whichever LLM provider you prefer):

```powershell
# Windows PowerShell
$env:QUIZ_SECRET = "your_secret_key"
$env:OPENAI_API_KEY = "sk-your-openai-api-key"     # Optional - OpenAI
$env:OPENROUTER_API_KEY = "sk-or-your-openrouter"  # Optional - OpenRouter GPT-5-nano
```

```bash
# Linux/Mac
export QUIZ_SECRET="your_secret_key"
export OPENAI_API_KEY="sk-your-openai-api-key"     # Optional
export OPENROUTER_API_KEY="sk-or-your-openrouter"  # Optional
```

Or use a `.env` file:

- Copy `.env.example` to `.env` (if available)
- Fill in your values
- The app will load it automatically

See ENV_SETUP.md for detailed instructions.

- Run the server:

```bash
python -m app.main
# or
uvicorn app.main:app --host 0.0.0.0 --port 8000
```
## API Endpoints

### POST /solve

Main endpoint to solve a quiz.

Request body:

```json
{
  "email": "user@example.com",
  "secret": "your_secret",
  "url": "https://example.com/quiz"
}
```

Responses:

- `200 OK`: Quiz solved successfully
- `400 Bad Request`: Invalid request format
- `403 Forbidden`: Invalid secret
- `500 Internal Server Error`: Server error
- `504 Gateway Timeout`: Request timeout (>3 minutes)
### POST /demo

Demo endpoint for testing (same as /solve but with more lenient error handling).

Request body: same as /solve.

### GET /health

Health check endpoint.

Response:

```json
{
  "status": "healthy"
}
```
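The validation order behind these status codes (malformed input first, then the secret check) can be sketched as plain Python. `validate_solve_request` and `EXPECTED_SECRET` are illustrative stand-ins, not the project's actual code; in the app the secret comes from the `QUIZ_SECRET` environment variable.

```python
import re

EXPECTED_SECRET = "your_secret"  # stand-in for QUIZ_SECRET

def validate_solve_request(payload: dict):
    """Return (http_status, message) for a /solve request body."""
    email = payload.get("email", "")
    url = payload.get("url", "")
    # 400 for malformed input...
    if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
        return 400, "invalid email format"
    if not url.startswith(("http://", "https://")):
        return 400, "invalid url"
    # ...then 403 for a wrong secret
    if payload.get("secret") != EXPECTED_SECRET:
        return 403, "invalid secret"
    return 200, "ok"
```

Note that format errors are reported before the secret is checked, matching the 400-before-403 ordering above.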
## Deployment on Hugging Face Spaces

### Method 1: Using Dockerfile (Recommended)

Create a new Space on Hugging Face:

- Go to https://huggingface.co/spaces
- Create a new Space
- Select "Docker" as the SDK

Upload your files:

- Upload all project files to your Space
- Ensure `Dockerfile` is in the root directory

Set environment variables:

- Go to Space Settings → Variables and secrets
- Add the following:
  - `QUIZ_SECRET`: Your secret key for authentication
  - `OPENAI_API_KEY`: Your OpenAI API key (optional)
  - `OPENROUTER_API_KEY`: Your OpenRouter key (e.g., GPT-5-nano)
  - `PORT`: 8000 (usually set automatically)

Deploy:

- Hugging Face builds and deploys your Docker container automatically
- The API will be available at `https://<your-username>-<space-name>.hf.space`
### Method 2: Using Docker Compose (Alternative)

If you need more control, you can use a `docker-compose.yml`:

```yaml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - QUIZ_SECRET=${QUIZ_SECRET}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```
## Environment Variables

| Variable | Description | Required | Default |
|---|---|---|---|
| `QUIZ_SECRET` | Secret key for API authentication | Yes | `default_secret_change_me` |
| `OPENAI_API_KEY` | OpenAI API key for LLM features | No | - |
| `OPENROUTER_API_KEY` | OpenRouter key (e.g., GPT-5-nano) | No | - |
| `OPENROUTER_MODEL` | Override the OpenRouter model | No | `gpt-5-nano` |
| `PORT` | Server port | No | `8000` |
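Reading this table at startup could look like the sketch below; `load_config` is a hypothetical helper (the variable names and defaults match the table, the function itself is not the project's code).

```python
import os

def load_config(env=None) -> dict:
    """Read the documented environment variables, applying their defaults."""
    env = os.environ if env is None else env
    return {
        "QUIZ_SECRET": env.get("QUIZ_SECRET", "default_secret_change_me"),
        "OPENAI_API_KEY": env.get("OPENAI_API_KEY"),          # optional
        "OPENROUTER_API_KEY": env.get("OPENROUTER_API_KEY"),  # optional
        "OPENROUTER_MODEL": env.get("OPENROUTER_MODEL", "gpt-5-nano"),
        "PORT": int(env.get("PORT", "8000")),
    }
```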
## Testing

Test with curl:

```bash
curl -X POST "https://tds-llm-analysis.s-anand.net/demo" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "test@example.com",
    "secret": "your_secret",
    "url": "https://example.com/quiz"
  }'
```

Test with Python:

```python
import requests

response = requests.post(
    "https://tds-llm-analysis.s-anand.net/demo",
    json={
        "email": "test@example.com",
        "secret": "your_secret",
        "url": "https://example.com/quiz",
    },
)
print(response.json())
```
## How It Works

- Request Validation: Validates the email, secret, and URL format
- Secret Authentication: Checks the secret against the expected value (403 if wrong)
- Page Loading: Uses Playwright to load and render the quiz page
- Content Extraction: Extracts all text, HTML, links, and images
- Submit URL Detection: Automatically finds the submit URL in the page content
- Question Solving: Extracts the question text, then tries multiple strategies:
  - Check whether the answer is already in the page
  - Download and process data files (CSV, JSON, PDF)
  - Use an LLM for complex reasoning
- Answer Submission: Submits the answer to the detected submit URL
- Recursive Solving: If the response contains a next URL, solves it recursively
- Response: Returns the final result
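The recursive-solving step above can be condensed into a small loop. `solve_one_quiz` is a hypothetical callable standing in for the logic in `app/solver.py`: it takes a quiz URL and returns `(answer, next_url_or_None)`. The depth cap mirrors the documented limit of 10 quizzes.

```python
MAX_DEPTH = 10  # documented recursion limit

def solve_chain(url, solve_one_quiz, max_depth=MAX_DEPTH):
    """Follow next-quiz URLs until none remain or the depth cap is hit."""
    answers = []
    while url is not None and len(answers) < max_depth:
        answer, url = solve_one_quiz(url)  # url becomes the next quiz, or None
        answers.append(answer)
    return answers
```

Implemented as a loop rather than literal recursion, this also guarantees the chain terminates even if a quiz points back at itself.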
## Solver Strategies

The solver tries these strategies in order:

- Direct answer extraction: checks whether the answer is already in the page
- Data file processing: downloads and processes CSV, JSON, and PDF files
- LLM reasoning: uses GPT-4o-mini (OpenAI) or GPT-5-nano (OpenRouter) for complex questions
- Fallback: returns a question analysis if all else fails
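The ordered fallthrough can be sketched as a first-non-None chain; the strategy callables here are placeholders for the handlers in `app/solver.py`, not the actual implementation.

```python
def solve_question(question, strategies):
    """Try each strategy in order; the first non-None answer wins."""
    for strategy in strategies:
        answer = strategy(question)
        if answer is not None:
            return answer
    # Fallback: return an analysis of the question instead of an answer
    return f"analysis: {question}"
```

Keeping cheap strategies (page extraction) ahead of expensive ones (LLM calls) means the API key is only used when simpler methods fail.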
## Error Handling

- Invalid JSON → 400 Bad Request
- Wrong secret → 403 Forbidden
- Page load errors → 500 with error details
- Timeout (>3 minutes) → 504 Gateway Timeout

All errors are logged for debugging.
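The 504 path can be sketched with `asyncio.wait_for`, assuming the solver runs as an async coroutine; `solve_with_timeout` is illustrative, not the project's actual handler.

```python
import asyncio

TIMEOUT_SECONDS = 180  # documented 3-minute limit

async def solve_with_timeout(coro, timeout=TIMEOUT_SECONDS):
    """Run the solver coroutine, mapping a timeout to a 504-style result."""
    try:
        result = await asyncio.wait_for(coro, timeout=timeout)
        return 200, result
    except asyncio.TimeoutError:
        return 504, "Request timeout"
```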
## Limitations

- Maximum recursion depth: 10 quizzes
- Timeout: 3 minutes per request
- Requires an internet connection for external URLs
- An OpenAI or OpenRouter API key is needed for LLM features (optional)
## License

MIT License - see the LICENSE file for details.

## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request

## Support

For issues and questions, please open an issue on the repository.