---
title: Topcoder Challenge Scout (Agentic MCP)
emoji: 🚀
colorFrom: indigo
colorTo: green
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
---
# Topcoder Challenge Scout (Agentic MCP)
An agentic AI app that discovers and prioritizes Topcoder challenges using the Topcoder MCP server and OpenAI for keyword extraction, intelligent scoring, and action planning. The model decides what to search (keywords) and how to rank results; you provide only high-level requirements.
## Use case
- Input a free-form requirement (e.g., "Looking for recent active LLM development challenges with web UI").
- The agent extracts minimal, broad keywords automatically (1–3 terms).
- Fetches challenges via MCP; defaults to last 90 days and active/open status.
- Uses OpenAI to score relevance, produce recommendation stars (1–5), and return a brief plan.
- Displays a compact table (title, tags, prize, deadline, stars, AI reason) plus a quick plan.
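For illustration, the keyword-extraction step can be approximated with a trivial heuristic. This is a hypothetical sketch only; the app itself asks the LLM to pick 1–3 broad terms, and the `STOPWORDS` set and function name below are assumptions, not the actual implementation:

```python
import re

# Filler words to drop when reducing a requirement to broad search terms
# (illustrative list; the real selection is done by the LLM).
STOPWORDS = {
    "looking", "for", "recent", "active", "with", "and", "the", "a", "an",
    "challenges", "challenge", "development", "in", "of", "to", "on",
}

def extract_keywords(requirement: str, max_terms: int = 3) -> list[str]:
    """Reduce a free-form requirement to at most `max_terms` broad keywords."""
    words = re.findall(r"[A-Za-z][A-Za-z0-9+-]*", requirement.lower())
    seen, terms = set(), []
    for word in words:
        if word in STOPWORDS or word in seen:
            continue
        seen.add(word)
        terms.append(word)
        if len(terms) >= max_terms:
            break
    return terms

print(extract_keywords(
    "Looking for recent active LLM development challenges with web UI"))
# → ['llm', 'web', 'ui']
```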
## Setup

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```
### 2. Configure OpenAI API

You need an OpenAI API key to enable agentic keyword extraction, scoring, and planning features.

**Quick Setup:**

```bash
python setup_env.py
```

**Manual Setup:**

- Get your API key from https://platform.openai.com/api-keys
- Set the environment variable:

```bash
# macOS/Linux
export OPENAI_API_KEY=your_api_key_here

# Windows (Command Prompt)
set OPENAI_API_KEY=your_api_key_here

# Windows (PowerShell)
$env:OPENAI_API_KEY="your_api_key_here"
```
**Optional Configuration:**

- `OPENAI_BASE_URL`: Custom API base URL (default: `https://api.openai.com/v1`)
- `OPENAI_MODEL`: Model to use (default: `gpt-4o-mini`)
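A minimal sketch of how these variables can be resolved with the documented defaults. The helper name is hypothetical; the app's actual config code may differ:

```python
import os

def get_openai_config(env=None) -> dict:
    """Resolve OpenAI settings; environment variables override the defaults."""
    env = os.environ if env is None else env
    return {
        "api_key": env.get("OPENAI_API_KEY"),  # required for agentic features
        "base_url": env.get("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": env.get("OPENAI_MODEL", "gpt-4o-mini"),
    }
```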
### 3. Run the App (Local)

```bash
python app.py
```

Open the printed local URL to use the UI.
UI inputs:

- **Requirements**: free-form text; the model extracts keywords automatically.
- **Use MCP (recommended)**: toggle live Topcoder data.
- **Debug mode**: show detailed MCP and model logs.
Defaults applied by the agent:
- Time window: within last 90 days
- Status: active/open (unknown statuses are kept; closed/cancelled are filtered out)
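The two defaults above can be expressed as a single predicate. This is a sketch under assumptions: the field names (`startDate`, `status`) and the set of closed statuses are illustrative, not the app's actual schema:

```python
from datetime import datetime, timedelta, timezone

# Assumed status strings that should be filtered out.
CLOSED_STATUSES = {"completed", "cancelled", "closed"}

def keep_challenge(challenge: dict, window_days: int = 90) -> bool:
    """Apply the agent's default filters: started within the window, not closed.

    Challenges with unknown status are kept, matching the behavior above.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=window_days)
    started = challenge.get("startDate")
    if started is not None and datetime.fromisoformat(started) < cutoff:
        return False
    status = (challenge.get("status") or "").lower()
    return status not in CLOSED_STATUSES
```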
## Features

### MCP Integration
- Uses the official MCP SDK via SSE/streamable HTTP
- Connects to `https://api.topcoder-dev.com/v6/mcp/mcp` and `https://api.topcoder-dev.com/v6/mcp/sse`
- Fetches real-time challenge data; includes robust normalization and fallbacks
- Built-in retries and detailed debug logging
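The retry behavior can be sketched as a small exponential-backoff wrapper around any fetch callable. This is illustrative only; the actual wrapper, attempt counts, and delays in `app.py` may differ:

```python
import time

def with_retries(fetch, attempts: int = 3, base_delay: float = 0.5):
    """Call `fetch()` up to `attempts` times with exponential backoff."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:  # in practice, catch transport errors only
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))
    raise last_error
```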
### AI-Powered Features
- Keyword Extraction: From free-form requirements to 1–3 broad search terms
- Smart Scoring: Relevance scoring with reasons (0–1), converted to 1–5 stars
- Action Planning: Generates concise next steps for top picks
- Context-Aware: Considers skills, tags, brief descriptions, and prize amounts
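One plausible mapping from the 0–1 relevance score to 1–5 stars is linear scaling with rounding; the app's exact rounding may differ:

```python
def score_to_stars(score: float) -> int:
    """Map a relevance score in [0, 1] to a 1-5 star rating."""
    score = min(max(score, 0.0), 1.0)  # clamp defensively
    return 1 + round(score * 4)

print(score_to_stars(0.0))  # → 1
print(score_to_stars(0.5))  # → 3
print(score_to_stars(1.0))  # → 5
```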
### Cost Information

Using `gpt-4o-mini` (recommended):
- ~100-200 tokens per scoring query
- ~200-300 tokens per plan generation
- Estimated cost: $0.02-0.10 per 1000 queries
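Rough arithmetic behind the estimate, assuming illustrative `gpt-4o-mini` input pricing of $0.15 per 1M tokens (verify current pricing before relying on this):

```python
def cost_per_1000_queries(tokens_per_query: float,
                          price_per_million_tokens: float) -> float:
    """Dollar cost of 1000 queries at a flat per-token price."""
    return tokens_per_query * 1000 * price_per_million_tokens / 1_000_000

# ~150 tokens (scoring only) up to ~500 (scoring + plan) per query:
low = cost_per_1000_queries(150, 0.15)
high = cost_per_1000_queries(500, 0.15)
print(f"${low:.4f} - ${high:.4f} per 1000 queries")
```

At these assumptions the result falls inside the $0.02–0.10 band quoted above.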
### Debug Mode
Enable "Debug mode" in the UI to view:
- MCP connection attempts and responses
- OpenAI API calls and responses
- Detailed scoring and ranking logs
- Error messages and troubleshooting info
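A sketch of how the Debug mode toggle could map to log verbosity; the function name and wiring are assumptions, not the app's actual code:

```python
import logging

def configure_logging(debug: bool) -> logging.Logger:
    """Return the app logger at DEBUG or INFO level based on the UI toggle."""
    level = logging.DEBUG if debug else logging.INFO
    logging.basicConfig(
        level=level,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    logger = logging.getLogger("scout")
    logger.setLevel(level)
    return logger
```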
## Deploy on Hugging Face Spaces

1. **Create a new Space**
   - Space type: Gradio
   - Hardware: CPU basic
   - Runtime: Python
2. **Add files**
   - Upload the repository contents (`app.py`, `requirements.txt`, `README.md`, etc.)
3. **Set secrets in the Space**
   - Settings → Secrets → Add new secret:
     - Name: `OPENAI_API_KEY`; Value: your OpenAI API key
     - (Optional) `OPENAI_BASE_URL` and `OPENAI_MODEL` if using a compatible provider
4. **Build & run**
   - The Space auto-installs from `requirements.txt` and runs `app.py`
   - Ensure outbound network access is allowed for the MCP and OpenAI endpoints
5. **Test**
   - Open the Space URL
   - Enter a natural-language requirement and click "Find challenges"
   - Verify you see recommendation stars and an AI plan
## Notes

- The app listens on Gradio’s default port; `demo.launch(server_name="0.0.0.0")` is set.
- No GPU required; designed for CPU basic.
## Deliverables Checklist (per Spec)
- Functional agent application using the Topcoder MCP server (SSE/HTTP) with a clear, user-friendly UI (Gradio)
- Runs on Hugging Face Spaces (CPU Basic) without GPU
- Clear use case and purpose documented (this README)
- Intelligent features: LLM-based keyword extraction, relevance scoring with reasons, star recommendations, and plan generation
- Robust behavior: retries, fallbacks, and debug logging
- Deployment/configuration instructions for Spaces (this README, plus `setup_env.py` for local setup)
- Optional: short demo video (3–5 minutes) showcasing the agent’s core workflow
## Submission Guidance
Provide either (or both):
- Public Hugging Face Space URL with the agent running; and/or
- A downloadable zip of this repository
Also include:
- A brief use case summary (can reference this README)
- Short notes on MCP integration (tools discovery and call flow)
- Any custom configuration (e.g., model/base URL overrides)
Recommended final checklist before submitting:
- `OPENAI_API_KEY` secret set in the Space
- Space loads and returns live results from MCP
- Table shows stars and AI reason; Plan section renders
- Debug mode produces logs without errors