---
title: IITM LLM Quiz Solver
emoji: 🧠
colorFrom: green
colorTo: blue
sdk: docker
sdk_version: "0"
app_file: app/main.py
pinned: false
---

# IITM LLM Quiz Solver

A Python project built on FastAPI that exposes an API endpoint for automatically solving dynamic quiz tasks using a headless browser and optional LLM reasoning.

## Features

- 🚀 FastAPI-based REST API
- 🌐 Playwright for headless browser automation
- 🤖 OpenAI GPT integration for complex reasoning
- 📊 Data processing (CSV, JSON, PDF, etc.)
- 🔁 Recursive quiz solving
- ⚡ Async/await for performance
- 🐳 Docker support for easy deployment

## Project Structure

```
/app
- main.py       # FastAPI server
- solver.py     # All quiz-solving logic (browser, LLM, calculations, handlers - consolidated)
/Dockerfile
/requirements.txt
/README.md
/LICENSE
```

## Installation

### Local Development

1. Clone the repository:

```bash
git clone <repository-url>
cd IITMTdsPrj2
```

2. Install Python dependencies:

```bash
pip install -r requirements.txt
```

3. Install Playwright browsers:

```bash
playwright install chromium
```

4. Set environment variables:

**Quick Setup (Windows PowerShell):**

```powershell
.\setup_env.ps1
```

**Quick Setup (Linux/Mac):**

```bash
source setup_env.sh
```

**Manual Setup (choose whichever LLM provider you prefer):**

```bash
# Windows PowerShell
$env:QUIZ_SECRET = "your_secret_key"
$env:OPENAI_API_KEY = "sk-your-openai-api-key"      # Optional - OpenAI
$env:OPENROUTER_API_KEY = "sk-or-your-openrouter"   # Optional - OpenRouter GPT-5-nano

# Linux/Mac
export QUIZ_SECRET="your_secret_key"
export OPENAI_API_KEY="sk-your-openai-api-key"      # Optional
export OPENROUTER_API_KEY="sk-or-your-openrouter"   # Optional
```

**Or use a `.env` file:**

- Copy `.env.example` to `.env` (if available)
- Fill in your values
- The app will automatically load it

📖 **See [ENV_SETUP.md](ENV_SETUP.md) for detailed instructions**

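If you go the `.env` route, the file is plain `KEY=value` lines. A minimal sketch (values are placeholders, same as in the manual setup above):

```
QUIZ_SECRET=your_secret_key
OPENAI_API_KEY=sk-your-openai-api-key
OPENROUTER_API_KEY=sk-or-your-openrouter
```
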
5. Run the server:

```bash
python -m app.main
# or
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

## API Endpoints

### POST /solve

Main endpoint to solve a quiz.

**Request Body:**

```json
{
  "email": "user@example.com",
  "secret": "your_secret",
  "url": "https://example.com/quiz"
}
```

**Response:**

- `200 OK`: Quiz solved successfully
- `400 Bad Request`: Invalid request format
- `403 Forbidden`: Invalid secret
- `500 Internal Server Error`: Server error
- `504 Gateway Timeout`: Request timeout (>3 minutes)

### POST /demo

Demo endpoint for testing (same as `/solve`, but with more lenient error handling).

**Request Body:** Same as `/solve`

### GET /health

Health check endpoint.

**Response:**

```json
{
  "status": "healthy"
}
```

## Deployment on Hugging Face Spaces

### Method 1: Using Dockerfile (Recommended)

1. **Create a new Space on Hugging Face:**
   - Go to https://huggingface.co/spaces
   - Create a new Space
   - Select "Docker" as the SDK

2. **Upload your files:**
   - Upload all project files to your Space
   - Ensure `Dockerfile` is in the root directory

3. **Set Environment Variables:**
   - Go to Space Settings → Variables and secrets
   - Add the following:
     - `QUIZ_SECRET`: Your secret key for authentication
     - `OPENAI_API_KEY`: Your OpenAI API key (optional)
     - `OPENROUTER_API_KEY`: Your OpenRouter key (optional, e.g. for GPT-5-nano)
     - `PORT`: 8000 (usually set automatically)

4. **Deploy:**
   - Hugging Face will automatically build and deploy your Docker container
   - The API will be available at `https://<your-username>-<space-name>.hf.space`

### Method 2: Using Docker Compose (Alternative)

If you need more control, you can use `docker-compose.yml`:

```yaml
version: '3.8'
services:
  app:
    build: .
    ports:
      - "8000:8000"
    environment:
      - QUIZ_SECRET=${QUIZ_SECRET}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
```

## Environment Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| `QUIZ_SECRET` | Secret key for API authentication | Yes | `default_secret_change_me` |
| `OPENAI_API_KEY` | OpenAI API key for LLM features | No | - |
| `OPENROUTER_API_KEY` | OpenRouter key (e.g., GPT-5-nano) | No | - |
| `OPENROUTER_MODEL` | Override the OpenRouter model | No | `gpt-5-nano` |
| `PORT` | Server port | No | `8000` |

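The lookups in the table can be mirrored in Python. A sketch assuming plain environment variables; `load_config` is illustrative, not the actual function name in the codebase:

```python
import os

def load_config(env=os.environ):
    """Resolve settings, falling back to the defaults listed in the table."""
    return {
        "QUIZ_SECRET": env.get("QUIZ_SECRET", "default_secret_change_me"),
        "OPENAI_API_KEY": env.get("OPENAI_API_KEY"),          # optional
        "OPENROUTER_API_KEY": env.get("OPENROUTER_API_KEY"),  # optional
        "OPENROUTER_MODEL": env.get("OPENROUTER_MODEL", "gpt-5-nano"),
        "PORT": int(env.get("PORT", "8000")),                 # port is numeric
    }
```
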
## Testing

### Test with curl:

```bash
curl -X POST "https://tds-llm-analysis.s-anand.net/demo" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "test@example.com",
    "secret": "your_secret",
    "url": "https://example.com/quiz"
  }'
```

### Test with Python:

```python
import requests

response = requests.post(
    "https://tds-llm-analysis.s-anand.net/demo",
    json={
        "email": "test@example.com",
        "secret": "your_secret",
        "url": "https://example.com/quiz"
    }
)

print(response.json())
```

## How It Works

1. **Request Validation**: Validates email, secret, and URL format
2. **Secret Authentication**: Checks the secret against the expected value (403 if wrong)
3. **Page Loading**: Uses Playwright to load and render the quiz page
4. **Content Extraction**: Extracts all text, HTML, links, and images
5. **Submit URL Detection**: Automatically finds the submit URL in the page content
6. **Question Solving**:
   - Extracts the question text
   - Tries multiple strategies:
     - Check whether the answer is already on the page
     - Download and process data files (CSV, JSON, PDF)
     - Use an LLM for complex reasoning
7. **Answer Submission**: Submits the answer to the detected submit URL
8. **Recursive Solving**: If the response contains a next URL, solves it recursively
9. **Response**: Returns the final result

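Steps 3–8 boil down to a fetch/solve/submit loop that follows next URLs until the chain ends. A simplified synchronous sketch with stand-in helpers (the real code in `app/solver.py` uses Playwright and async I/O):

```python
MAX_DEPTH = 10  # matches the recursion limit noted under Limitations

def solve_chain(url, fetch_page, solve_question, submit_answer, depth=0):
    """Solve one quiz page, follow any 'next URL' returned on submission,
    and stop when the chain ends or the depth limit is hit."""
    if depth >= MAX_DEPTH:
        return {"status": "max_depth_reached", "url": url}
    page = fetch_page(url)                       # load & render the page
    answer = solve_question(page["question"])    # apply solver strategies
    result = submit_answer(page["submit_url"], answer)
    next_url = result.get("next_url")
    if next_url:                                 # recurse into the next quiz
        return solve_chain(next_url, fetch_page, solve_question,
                           submit_answer, depth + 1)
    return result
```

The helper names (`fetch_page`, `solve_question`, `submit_answer`) are illustrative; passing them in makes the control flow easy to exercise with stubs.
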
## Solver Strategies

The solver tries multiple strategies in order:

1. **Direct Answer Extraction**: Checks whether the answer is already on the page
2. **Data File Processing**: Downloads and processes CSV, JSON, and PDF files
3. **LLM Reasoning**: Uses GPT-4o-mini (OpenAI) or GPT-5-nano (OpenRouter) for complex questions
4. **Fallback**: Returns a question analysis if all else fails

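In code, that ordering amounts to a first-match chain, roughly as follows (a sketch with illustrative names, not the actual functions in `solver.py`):

```python
def solve(question, page_text, strategies):
    """Try each strategy in order; the first non-None answer wins."""
    for strategy in strategies:
        answer = strategy(question, page_text)
        if answer is not None:
            return answer
    # Fallback: no strategy produced an answer
    return {"fallback": f"could not solve: {question}"}
```
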
## Error Handling

- Invalid JSON → 400 Bad Request
- Wrong secret → 403 Forbidden
- Page load errors → 500 with error details
- Timeout (>3 min) → 504 Gateway Timeout
- All errors are logged for debugging

## Limitations

- Maximum recursion depth: 10 quizzes
- Timeout: 3 minutes per request
- Requires an internet connection for external URLs
- An OpenAI (or OpenRouter) API key is needed for LLM features (optional)

## License

MIT License - see LICENSE file for details.

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Submit a pull request

## Support

For issues and questions, please open an issue on the repository.