fffffwl committed · Commit d8229c6 · 0 parents
Files changed (8)
  1. .gitignore +2 -0
  2. README.md +135 -0
  3. app.py +667 -0
  4. mcp_demo.py +28 -0
  5. requirements.txt +7 -0
  6. setup_env.py +155 -0
  7. spec.md +225 -0
  8. test_openai.py +211 -0
.gitignore ADDED
@@ -0,0 +1,2 @@
__pycache__/
.env
README.md ADDED
@@ -0,0 +1,135 @@
# Topcoder Challenge Scout (Agentic MCP)

An agentic AI app that discovers and prioritizes Topcoder challenges, using the Topcoder MCP server for data and OpenAI for keyword extraction, intelligent scoring, and action planning. The model decides what to search for (keywords) and how to rank the results; you provide only high-level requirements.

## Use case
- Input a free-form requirement (e.g., "Looking for recent active LLM development challenges with web UI").
- The agent automatically extracts minimal, broad keywords (1–3 terms).
- Fetches challenges via MCP; defaults to deadlines within the next 90 days and active/open status.
- Uses OpenAI to score relevance, produce recommendation stars (1–5), and return a brief plan.
- Displays a compact table (title, tags, prize, deadline, stars, AI reason) plus a quick plan.
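The keyword step normalizes whatever the model returns into at most three comma-separated terms. A minimal sketch of that post-processing (the deduplication is added here to match the prompt's instruction; `normalize_keywords` is an illustrative name, not a function in `app.py`):

```python
import re

def normalize_keywords(raw: str, limit: int = 3) -> str:
    """Collapse comma/newline separators, dedupe case-insensitively, keep at most `limit` terms."""
    text = re.sub(r"\s*[,\n]\s*", ", ", raw.strip())
    seen, parts = set(), []
    for p in (s.strip() for s in text.split(",")):
        if p and p.lower() not in seen:
            seen.add(p.lower())
            parts.append(p)
    return ", ".join(parts[:limit])

print(normalize_keywords("LLM\nPython, UI, Python, web"))  # → LLM, Python, UI
```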
## Setup

### 1. Install Dependencies
```bash
pip install -r requirements.txt
```

### 2. Configure OpenAI API
You need an OpenAI API key to enable the agentic keyword extraction, scoring, and planning features.

**Quick Setup:**
```bash
python setup_env.py
```

**Manual Setup:**
1. Get your API key from https://platform.openai.com/api-keys
2. Set the environment variable:
```bash
# macOS/Linux
export OPENAI_API_KEY=your_api_key_here

# Windows (Command Prompt)
set OPENAI_API_KEY=your_api_key_here

# Windows (PowerShell)
$env:OPENAI_API_KEY="your_api_key_here"
```

**Optional Configuration:**
- `OPENAI_BASE_URL`: Custom API base URL (default: https://api.openai.com/v1)
- `OPENAI_MODEL`: Model to use (default: gpt-4o-mini)
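The app resolves these variables from the environment at call time, falling back to the documented defaults. A sketch of that lookup (`openai_config` is an illustrative helper, not part of the codebase):

```python
import os

def openai_config() -> dict:
    """Resolve OpenAI settings from the environment, matching the documented defaults."""
    return {
        "api_key": os.getenv("OPENAI_API_KEY"),  # required; None disables LLM features
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
        "model": os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
    }

cfg = openai_config()
print(cfg["base_url"], cfg["model"])
```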
### 3. Run the App (Local)
```bash
python app.py
```

Open the printed local URL to use the UI.

UI inputs:
- `Requirements`: free-form text. The model extracts keywords automatically.
- `Use MCP (recommended)`: toggle live Topcoder data.
- `Debug mode`: shows detailed MCP and model logs.

Defaults applied by the agent:
- Time window: deadline within the next 90 days
- Status: active/open (unknown status kept; closed/cancelled filtered out)
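The 90-day default translates to a deadline-window check. A minimal sketch of that filter, mirroring the behavior described above (items with unparseable deadlines are kept rather than over-filtered):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

def within_window(deadline_iso: str, days_ahead: int = 90, now: Optional[datetime] = None) -> bool:
    """True if the deadline falls between now and now + days_ahead (unparseable dates pass)."""
    now = now or datetime.now(timezone.utc)
    try:
        dt = datetime.fromisoformat(deadline_iso.replace("Z", "+00:00"))
    except ValueError:
        return True  # unknown deadline: keep rather than over-filter
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)  # normalize so aware/naive comparison can't raise
    return now <= dt <= now + timedelta(days=days_ahead)

now = datetime(2025, 8, 1, tzinfo=timezone.utc)
print(within_window("2025-08-20T00:00:00Z", now=now))  # → True
print(within_window("2026-01-01T00:00:00Z", now=now))  # → False
```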
## Features

### MCP Integration
- Uses the official MCP SDK over SSE/streamable HTTP
- Connects to `https://api.topcoder-dev.com/v6/mcp/mcp` and `https://api.topcoder-dev.com/v6/mcp/sse`
- Fetches real-time challenge data; includes robust normalization and fallbacks
- Built-in retries and detailed debug logging

### AI-Powered Features
- **Keyword Extraction**: from free-form requirements to 1–3 broad search terms
- **Smart Scoring**: relevance scores with reasons (0–1), converted to 1–5 stars
- **Action Planning**: generates concise next steps for the top picks
- **Context-Aware**: considers skills, tags, brief descriptions, and prize amounts
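The 0–1 relevance scores map onto 1–5 stars for the results table. A sketch of the conversion (the same rounding-with-floor scheme `app.py` uses):

```python
def stars_for_score(score: float) -> str:
    """Map a 0-1 relevance score to a 1-5 star string (always at least one star)."""
    n = max(1, min(5, int(round(score * 5))))
    return "★" * n + "☆" * (5 - n)

print(stars_for_score(0.93))  # → ★★★★★
print(stars_for_score(0.42))  # → ★★☆☆☆
```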
### Cost Information
Using gpt-4o-mini (recommended):
- ~100–200 tokens per scoring query
- ~200–300 tokens per plan generation
- Estimated cost: $0.02–0.10 per 1,000 queries
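As a rough sanity check on the estimate above (the blended rate of roughly $0.20 per million tokens is an assumption for illustration; check current gpt-4o-mini pricing):

```python
def est_cost_usd(queries: int, tokens_per_query: int, usd_per_million: float) -> float:
    """Back-of-envelope token cost: queries * tokens each, at a flat per-million-token rate."""
    return queries * tokens_per_query * usd_per_million / 1_000_000

# ~500 tokens per query (scoring + plan), blended ~$0.20 per 1M tokens (assumed rate)
print(round(est_cost_usd(1000, 500, 0.20), 4))  # → 0.1
```

This lands inside the $0.02–0.10 per 1,000 queries range quoted above.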
## Debug Mode
Enable "Debug mode" in the UI to view:
- MCP connection attempts and responses
- OpenAI API calls and responses
- Detailed scoring and ranking logs
- Error messages and troubleshooting info

## Deploy on Hugging Face Spaces
1. Create a new Space
   - Space type: `Gradio`
   - Hardware: `CPU basic`
   - Runtime: `Python`
2. Add files
   - Upload the repository contents (`app.py`, `requirements.txt`, `README.md`, etc.)
3. Set secrets in the Space
   - Settings → Secrets → Add new secret:
     - Name: `OPENAI_API_KEY`
     - Value: your OpenAI API key
   - (Optional) `OPENAI_BASE_URL` and `OPENAI_MODEL` if using a compatible provider
4. Build & run
   - The Space auto-installs from `requirements.txt` and runs `app.py`
   - Ensure outbound network access is allowed for the MCP and OpenAI endpoints
5. Test
   - Open the Space URL
   - Enter a natural-language requirement and click "Find challenges"
   - Verify that you see recommendation stars and an AI plan

Notes
- The app binds `0.0.0.0` and reads its port from the `PORT` environment variable (default 7860).
- No GPU required; designed for CPU basic.

## Deliverables Checklist (per Spec)
- Functional agent application using the Topcoder MCP server (SSE/HTTP) with a clear, user-friendly UI (Gradio)
- Runs on Hugging Face Spaces (CPU Basic) without a GPU
- Clear use case and purpose documented (this README)
- Intelligent features: LLM-based keyword extraction, relevance scoring with reasons, star recommendations, and plan generation
- Robust behavior: retries, fallbacks, and debug logging
- Deployment/configuration instructions for Spaces (this README, plus `setup_env.py` for local setup)
- Optional: short demo video (3–5 minutes) showcasing the agent's core workflow

## Submission Guidance
Provide either (or both):
1. A public Hugging Face Space URL with the agent running; and/or
2. A downloadable zip of this repository

Also include:
- A brief use case summary (can reference this README)
- Short notes on the MCP integration (tool discovery and call flow)
- Any custom configuration (e.g., model/base URL overrides)

Recommended final checklist before submitting:
- `OPENAI_API_KEY` secret set in the Space
- Space loads and returns live results from MCP
- Table shows stars and AI reasons; Plan section renders
- Debug mode produces logs without errors
app.py ADDED
@@ -0,0 +1,667 @@
import json
import os
import re
import time
import uuid
import sys
import logging
import asyncio
import traceback
from datetime import datetime, timezone, timedelta
from dataclasses import dataclass
from typing import List, Optional, Tuple

import gradio as gr
import requests
from mcp import ClientSession
from mcp.client.sse import sse_client
from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()

MCP_SSE_URL = "https://api.topcoder-dev.com/v6/mcp/sse"
MCP_MESSAGES_URL = "https://api.topcoder-dev.com/v6/mcp/messages"
MCP_HTTP_BASE = "https://api.topcoder-dev.com/v6/mcp/mcp"


# Console logger for runtime diagnostics
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(levelname)s] %(message)s",
    stream=sys.stdout,
)
logger = logging.getLogger("challenge_scout")


@dataclass
class Challenge:
    id: str
    title: str
    prize: float
    deadline: str
    tags: List[str]
    description: str = ""


FALLBACK_DATA: List[Challenge] = [
    Challenge(id="123", title="Build a Minimal MCP Agent UI", prize=300.0, deadline="2025-08-20", tags=["mcp", "python", "gradio"]),
    Challenge(id="456", title="Topcoder Data Parsing Toolkit", prize=500.0, deadline="2025-08-15", tags=["data", "api", "json"]),
    Challenge(id="789", title="AI-Powered Web Application", prize=250.0, deadline="2025-08-18", tags=["ai", "web", "optimization"]),
]

def _open_mcp_session(
    connect_timeout: float = 5.0,
    read_window_seconds: float = 8.0,
    debug: bool = False,
    max_retries: int = 2,
) -> Tuple[Optional[str], str]:
    """Open SSE and extract a sessionId from the first data event.

    We keep the stream open for up to `read_window_seconds` to wait for a
    line like: "data: /v6/mcp/messages?sessionId=<uuid>".
    """
    logs: List[str] = []
    headers = {
        "Accept": "text/event-stream",
        "Cache-Control": "no-cache",
        "Connection": "keep-alive",
    }
    backoff = 0.6
    for attempt in range(1, max_retries + 2):
        if debug:
            logger.info(f"SSE dialing attempt={attempt} url={MCP_SSE_URL}")
        try:
            t0 = time.time()
            resp = requests.get(
                MCP_SSE_URL,
                headers=headers,
                timeout=(connect_timeout, read_window_seconds + 2.0),
                stream=True,
            )
            elapsed = time.time() - t0
            logs.append(f"SSE attempt {attempt}: status={resp.status_code}, elapsed={elapsed:.2f}s")
            if debug:
                logger.info(f"SSE attempt {attempt} status={resp.status_code} elapsed={elapsed:.2f}s")
            resp.raise_for_status()

            start = time.time()
            pattern = re.compile(r"^data:\s*/v6/mcp/messages\?sessionId=([a-f0-9-]+)\s*$", re.IGNORECASE)
            for line in resp.iter_lines(decode_unicode=True, chunk_size=1):
                if line is None:
                    if time.time() - start > read_window_seconds:
                        logs.append("SSE: timeout waiting for data line")
                        break
                    continue
                if not isinstance(line, str):
                    if time.time() - start > read_window_seconds:
                        logs.append("SSE: timeout (non-str line)")
                        break
                    continue
                if debug:
                    logs.append(f"SSE line: {line[:160]}")
                    logger.info(f"SSE line: {line[:160]}")
                m = pattern.match(line)
                if m:
                    sid = m.group(1)
                    logs.append(f"SSE: obtained sessionId={sid}")
                    if debug:
                        logger.info(f"SSE obtained sessionId={sid}")
                    return sid, "\n".join(logs)
                if time.time() - start > read_window_seconds:
                    logs.append("SSE: read window exceeded without sessionId")
                    if debug:
                        logger.info("SSE: read window exceeded without sessionId")
                    break
        except Exception as e:
            logs.append(f"SSE error on attempt {attempt}: {e}")
            if debug:
                logger.info(f"SSE error on attempt {attempt}: {e}")
        time.sleep(backoff)
        backoff *= 1.6
    return None, "\n".join(logs)

def _mcp_list_challenges(
    debug: bool = False,
    retries: int = 2,
) -> Tuple[List[Challenge], Optional[str], str]:
    """Call MCP via the official SDK (JSON-RPC over SSE) and return compact challenges.

    Returns (challenges, error, debug_logs).
    """
    logs: List[str] = []

    async def _do() -> Tuple[List[Challenge], Optional[str], str]:
        try:
            t0 = time.time()
            async with sse_client(MCP_SSE_URL) as (read, write):
                async with ClientSession(read, write) as session:
                    await session.initialize()
                    tools = await session.list_tools()
                    names = [t.name for t in tools.tools]
                    logs.append(f"SDK tools: {names}")
                    # Prefer a known tool name pattern
                    tool_name = None
                    for cand in [
                        "query-tc-challenges",
                        "query-tc-challenges-public",
                        "query-tc-challenges-private",
                    ]:
                        if cand in names:
                            tool_name = cand
                            break
                    if not tool_name and names:
                        tool_name = names[0]
                    if not tool_name:
                        return [], "No tools available", "\n".join(logs)
                    # Call with minimal args (the tool often supports pagination; start small)
                    args = {"page": 1, "pageSize": 20}
                    logs.append(f"Calling tool {tool_name} with args: {args}")
                    try:
                        result = await session.call_tool(tool_name, args)
                    except Exception as e:
                        logs.append(f"call_tool error: {e}")
                        return [], str(e), "\n".join(logs)

                    items: List[Challenge] = []
                    payload = getattr(result, "structuredContent", None)
                    if not payload:
                        # Fall back to joined text content and attempt to extract a JSON object with data[] or an array
                        combined_text = "\n".join(
                            getattr(c, "text", "") for c in getattr(result, "content", []) if hasattr(c, "text")
                        )
                        obj_match = re.search(r"\{[\s\S]*\}$", combined_text)
                        if obj_match:
                            try:
                                payload = json.loads(obj_match.group(0))
                            except Exception:
                                payload = None
                    # Normalize a list of challenge-like dicts
                    raw_list = []
                    if isinstance(payload, dict) and isinstance(payload.get("data"), list):
                        raw_list = payload.get("data")
                        logs.append(f"Received {len(raw_list)} items from data[]")
                    elif isinstance(payload, list):
                        raw_list = payload
                        logs.append(f"Received {len(raw_list)} items from top-level list")
                    else:
                        logs.append(f"Unexpected payload shape: {type(payload)}")

                    def to_ch(item: dict) -> Challenge:
                        title = (
                            str(item.get("name") or item.get("title") or item.get("challengeTitle") or "")
                        )
                        # prize from prizeSets: sum values if present
                        prize = 0.0
                        prize_sets = item.get("prizeSets")
                        if isinstance(prize_sets, list):
                            for s in prize_sets:
                                prizes = s.get("prizes") if isinstance(s, dict) else None
                                if isinstance(prizes, list):
                                    for p in prizes:
                                        val = None
                                        if isinstance(p, dict):
                                            val = p.get("value") or p.get("amount")
                                        try:
                                            prize += float(val or 0)
                                        except Exception:
                                            pass
                        else:
                            prize_val = item.get("totalPrizes") or item.get("prize") or item.get("totalPrize") or 0
                            try:
                                prize = float(prize_val or 0)
                            except Exception:
                                prize = 0.0
                        # deadline candidates
                        deadline = (
                            str(
                                item.get("endDate")
                                or item.get("submissionEndDate")
                                or item.get("registrationEndDate")
                                or item.get("startDate")
                                or ""
                            )
                        )
                        # tag candidates
                        tg = item.get("tags")
                        if not isinstance(tg, list):
                            tg = []
                        # add some fallback/contextual labels
                        for extra in [item.get("track"), item.get("type"), item.get("status")]:
                            if extra:
                                tg.append(extra)
                        # skills may be a list of dicts or strings
                        sk = item.get("skills")
                        if isinstance(sk, list):
                            for s in sk:
                                if isinstance(s, dict):
                                    name = s.get("name") or s.get("id")
                                    if name:
                                        tg.append(str(name))
                                else:
                                    tg.append(str(s))
                        tg = [str(x) for x in tg if x]
                        desc = str(item.get("description") or "")
                        return Challenge(
                            id=str(item.get("id", "")),
                            title=title,
                            prize=prize,
                            deadline=deadline,
                            tags=tg,
                            description=desc,
                        )

                    for it in raw_list:
                        if isinstance(it, dict):
                            items.append(to_ch(it))
                    logs.append(f"Normalized {len(items)} challenges")
                    return items, None, "\n".join(logs)
        except Exception as e:
            logs.append(f"SDK fatal: {e}")
            logs.append(traceback.format_exc())
            # Try to unwrap ExceptionGroup-like messages if any
            try:
                from exceptiongroup import ExceptionGroup  # type: ignore
            except Exception:
                ExceptionGroup = None  # type: ignore
            if ExceptionGroup and isinstance(e, ExceptionGroup):  # type: ignore[arg-type]
                for idx, sub in enumerate(e.exceptions):  # type: ignore[attr-defined]
                    logs.append(f" sub[{idx}]: {type(sub).__name__}: {sub}")
            return [], str(e), "\n".join(logs)

    # Run the async block
    return asyncio.run(_do())

def shortlist(challenge_list: List[Challenge], keyword: str, min_prize: float) -> List[Challenge]:
    keyword_lower = keyword.lower().strip()
    results = []
    for ch in challenge_list:
        if ch.prize < min_prize:
            continue
        hay = f"{ch.title} {' '.join(ch.tags)}".lower()
        if keyword_lower and keyword_lower not in hay:
            continue
        results.append(ch)
    # lightweight ranking: by prize, descending
    results.sort(key=lambda c: c.prize, reverse=True)
    return results


def _parse_deadline(dt_str: str) -> Optional[datetime]:
    if not dt_str:
        return None
    try:
        # Try ISO 8601 with a trailing Z
        dt = datetime.fromisoformat(dt_str.replace("Z", "+00:00"))
    except Exception:
        try:
            dt = datetime.strptime(dt_str, "%Y-%m-%d")
        except Exception:
            return None
    # Normalize to UTC so comparisons with timezone-aware datetimes don't raise TypeError
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

def _filter_by_days(items: List[Challenge], days_ahead: int) -> List[Challenge]:
    if days_ahead <= 0:
        return items
    now = datetime.now(timezone.utc)
    limit = now + timedelta(days=days_ahead)
    kept: List[Challenge] = []
    for ch in items:
        dt = _parse_deadline(ch.deadline)
        if dt is None:
            kept.append(ch)
            continue
        if now <= dt <= limit:
            kept.append(ch)
    return kept

def _generate_plan(items: List[Challenge], keyword: str, min_prize: float, days_ahead: int, debug: bool = False) -> Tuple[str, str]:
    # Keep the context tiny: only the top 8 rows of compact fields
    top = items[:8]
    compact = [
        {
            "title": c.title,
            "prize": c.prize,
            "deadline": c.deadline,
            "tags": c.tags[:5],
        }
        for c in top
    ]

    logs: List[str] = []
    api_key = os.getenv("OPENAI_API_KEY")
    base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")

    if not api_key:
        logs.append("Missing OPENAI_API_KEY")
        return "", "\n".join(logs)

    try:
        client = OpenAI(api_key=api_key, base_url=base_url)

        prompt = (
            "You are a concise challenge scout. Given compact challenge metadata, output:\n"
            "- Top 3 picks (title + brief reason)\n"
            "- Quick plan of action (3 bullets)\n"
            f"Constraints: keyword='{keyword}', min_prize>={min_prize}, within {days_ahead} days.\n"
            f"Data: {json.dumps(compact)}"
        )

        if debug:
            logs.append(f"PLAN OpenAI prompt: {prompt[:1500]}")

        response = client.chat.completions.create(
            model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
            messages=[
                {"role": "system", "content": "You are a helpful, terse assistant."},
                {"role": "user", "content": prompt},
            ],
            temperature=0.3,
            timeout=20,
        )

        text = (response.choices[0].message.content or "").strip()

        if debug:
            logs.append(f"PLAN OpenAI output: {text[:800]}")
            logger.info("\n".join(logs))

        return text, "\n".join(logs)

    except Exception as e:
        if debug:
            logs.append(f"PLAN OpenAI error: {e}")
            logger.info("\n".join(logs))
        return "", "\n".join(logs)

def _score_items(items: List[Challenge], keyword: str, debug: bool = False) -> Tuple[List[tuple[Challenge, float, str]], str]:
    """Score challenges using the OpenAI API and return (challenge, score, reason) tuples."""
    api_key = os.getenv("OPENAI_API_KEY")
    base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")

    results: List[tuple[Challenge, float, str]] = []
    slog: List[str] = []

    if not items:
        return results, "\n".join(slog)

    if not api_key:
        if debug:
            slog.append("Missing OPENAI_API_KEY")
        return results, "\n".join(slog)

    compact = [
        {
            "id": c.id,
            "title": c.title,
            "prize": c.prize,
            "deadline": c.deadline,
            "tags": c.tags[:6],
            "description": (c.description or "").replace("\n", " ")[:400],
        }
        for c in items[:8]
    ]

    scoring_prompt = (
        "You are an expert Topcoder challenge analyst. Analyze items and rate match to the query.\n"
        f"Query: {keyword}\n"
        "Items: " + json.dumps(compact) + "\n\n"
        "Instructions:\n"
        "- Consider skills, tags, and brief description.\n"
        "- Higher prize is slightly better all else equal.\n"
        "- Return ONLY JSON array of objects: [{id, score, reason}] where 0<=score<=1.\n"
        "- Do not include any extra text."
    )

    try:
        client = OpenAI(api_key=api_key, base_url=base_url)

        if debug:
            slog.append(f"OpenAI model: {os.getenv('OPENAI_MODEL', 'gpt-4o-mini')}")
            slog.append(f"OpenAI prompt: {scoring_prompt[:1500]}")

        response = client.chat.completions.create(
            model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
            messages=[
                {"role": "system", "content": "You are a helpful, terse assistant. Return JSON only."},
                {"role": "user", "content": scoring_prompt},
            ],
            temperature=0.2,
            timeout=40,
        )

        text = (response.choices[0].message.content or "").strip()

        if debug:
            slog.append(f"OpenAI raw output: {text[:800]}")

        # Extract the JSON array from the response
        m = re.search(r"\[\s*\{[\s\S]*\}\s*\]", text)
        if not m:
            if debug:
                slog.append("No valid JSON array found in response")
            return results, "\n".join(slog)

        try:
            arr = json.loads(m.group(0))
            score_map: dict[str, tuple[float, str]] = {}

            for obj in arr:
                if isinstance(obj, dict):
                    cid = str(obj.get("id", ""))
                    score = float(obj.get("score", 0))
                    reason = str(obj.get("reason", ""))
                    score_map[cid] = (max(0.0, min(1.0, score)), reason)

            # Build results with scores
            out: List[tuple[Challenge, float, str]] = []
            for c in items:
                if c.id in score_map:
                    s, r = score_map[c.id]
                    out.append((c, s, r))
                else:
                    # Fallback: light prize-based score for missing items
                    out.append((c, min(c.prize / 1000.0, 1.0) * 0.3, ""))

            if debug:
                slog.append(f"OpenAI parsed {len(score_map)} scores")
                logger.info("\n".join(slog))

            return out, "\n".join(slog)

        except json.JSONDecodeError as e:
            if debug:
                slog.append(f"JSON decode error: {e}")
            return results, "\n".join(slog)

    except Exception as e:
        if debug:
            slog.append(f"OpenAI API error: {e}")
            logger.info("\n".join(slog))
        return results, "\n".join(slog)

def _require_llm_config() -> Tuple[bool, str]:
    """Check whether the OpenAI API is configured."""
    if os.getenv("OPENAI_API_KEY"):
        return True, ""
    return False, "Set OPENAI_API_KEY for LLM scoring"


def _extract_keywords(requirements: str, debug: bool = False) -> Tuple[str, str]:
    """Use the LLM to extract 1-3 broad, concise keywords from free-form requirements."""
    logs: List[str] = []
    api_key = os.getenv("OPENAI_API_KEY")
    base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
    if not api_key:
        return "", "Missing OPENAI_API_KEY"
    try:
        client = OpenAI(api_key=api_key, base_url=base_url)
        prompt = (
            "You extract compact search keywords.\n"
            "Given a user's requirements, output a comma-separated list of up to 3 broad keywords (1-3 words each).\n"
            "Keep them minimal, generic, and deduplicated.\n"
            "Examples:\n"
            "- 'Need an LLM project with Python and simple UI' -> LLM, Python, UI\n"
            "- 'Data visualization challenges about dashboards' -> data visualization, dashboard\n"
            f"Requirements: {requirements.strip()}\n"
            "Return only the keywords, comma-separated."
        )
        if debug:
            logs.append(f"KW prompt: {prompt[:500]}")
        resp = client.chat.completions.create(
            model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
            messages=[
                {"role": "system", "content": "You are a terse assistant. Return only keywords."},
                {"role": "user", "content": prompt},
            ],
            temperature=0.2,
            timeout=20,
        )
        text = (resp.choices[0].message.content or "").strip()
        # normalize to a simple comma-separated list
        text = re.sub(r"\s*[,\n]\s*", ", ", text)
        # keep at most 3
        parts = [p.strip() for p in text.split(",") if p.strip()]
        keywords = ", ".join(parts[:3])
        if debug:
            logs.append(f"KW extracted: {keywords}")
        return keywords, "\n".join(logs)
    except Exception as e:
        if debug:
            logs.append(f"KW error: {e}")
        return "", "\n".join(logs)


def _filter_active(items: List[Challenge]) -> List[Challenge]:
    """Keep items that appear to be active/open based on tags or status hints."""
    kept: List[Challenge] = []
    for ch in items:
        tags_lower = [t.lower() for t in ch.tags]
        if any(t == "active" or t == "open" for t in tags_lower):
            kept.append(ch)
        else:
            # If the status is unknown, keep it (avoid over-filtering)
            if not any(t in ("completed", "closed", "cancelled", "draft") for t in tags_lower):
                kept.append(ch)
    return kept


def _stars_for_score(score: float) -> str:
    n = max(1, min(5, int(round(score * 5))))
    return "★" * n + "☆" * (5 - n)

def run_query(requirements: str, use_mcp: bool, debug_mode: bool):
    debug_text = ""
    # Defaults for the agent flow
    min_prize = 0.0
    days_ahead = 90

    # Extract compact keywords from the requirements
    keyword, kw_logs = _extract_keywords(requirements, debug=debug_mode)

    if debug_mode:
        # Logged after the defaults and keywords are defined, so the f-string can reference them
        logger.info(
            f"RUN start use_mcp={use_mcp} keyword={keyword!r} min_prize={min_prize} days_ahead={days_ahead}"
        )

    if use_mcp:
        items, err, dbg = _mcp_list_challenges(debug=debug_mode)
        debug_text = dbg or ""
        if kw_logs:
            debug_text = (debug_text + "\n\n" + kw_logs).strip()
        if err:
            items = FALLBACK_DATA
            status = f"MCP fallback: {err}"
            if debug_mode:
                logger.info(f"RUN fallback reason: {err}")
        else:
            status = "MCP OK"
            if debug_mode:
                logger.info("RUN MCP OK")
    else:
        items = FALLBACK_DATA
        status = "Local sample"

    # Apply the time window first, then keyword/prize filtering
    items = _filter_by_days(items, days_ahead)
    items = _filter_active(items)
    # Apply a light shortlist by prize only; rely on AI scoring for relevance
    filtered = shortlist(items, "", min_prize)
    # Enforce LLM config for scoring
    ok, cfg_msg = _require_llm_config()
    score_logs = ""
    scored: List[tuple[Challenge, float, str]] = []
    if not ok:
        status = f"{status} | LLM config required: {cfg_msg}"
    else:
        scored, score_logs = _score_items(filtered, keyword, debug=debug_mode)
        if scored:
            scored.sort(key=lambda t: t[1], reverse=True)
            filtered = [c for c, _, _ in scored]
    # If MCP returned items but filtering removed all of them (e.g., many items missing a prize), show them unfiltered
    if not filtered and items:
        filtered = items
        status = f"{status} (no matches; showing unfiltered)"
    plan_text, plan_logs = _generate_plan(filtered, keyword, min_prize, days_ahead, debug=debug_mode) if filtered else ("", "")

    # Build a map from id to (score, reason) for the star display (empty when scoring was skipped)
    id_to_score_reason: dict[str, tuple[float, str]] = {}
    for c, s, r in scored:
        id_to_score_reason[c.id] = (s, r)

    rows = []
    for c in filtered:
        s, r = id_to_score_reason.get(c.id, (0.0, ""))
        stars = _stars_for_score(s)
        rows.append([
            c.title,
            f"${c.prize:,.0f}",
            c.deadline,
            ", ".join(c.tags),
            stars,
            (r[:160] + ("…" if len(r) > 160 else "")) if r else "",
            c.id,
        ])
    # Append model logs to the debug output when enabled, and also log them to the console
    if debug_mode:
        merged_logs = (debug_text + "\n\n" + score_logs + "\n\n" + plan_logs).strip()
        debug_text = merged_logs
        if merged_logs:
            logger.info(merged_logs)
    return rows, status, debug_text, plan_text


with gr.Blocks(title="Topcoder Challenge Scout") as demo:
    gr.Markdown("**Topcoder Challenge Scout** — agent picks tools, you provide requirements")
    with gr.Row():
        requirements = gr.Textbox(label="Requirements", placeholder="e.g. Looking for recent active LLM development challenges with web UI", lines=3)
        use_mcp = gr.Checkbox(label="Use MCP (recommended)", value=True)
        debug_mode = gr.Checkbox(label="Debug mode", value=False)
    gr.Markdown("Default filters: deadline within the next 90 days, active status. The agent extracts minimal keywords automatically.")
    run_btn = gr.Button("Find challenges")
    status = gr.Textbox(label="Status", interactive=False)
    table = gr.Dataframe(headers=["Title", "Prize", "Deadline", "Tags", "Recommend", "AI Reason", "Id"], wrap=True)
    plan_md = gr.Markdown("", label="Plan")
    with gr.Accordion("Debug output", open=False):
        debug_out = gr.Textbox(label="Logs", interactive=False, lines=8)

    run_btn.click(
        fn=run_query,
        inputs=[requirements, use_mcp, debug_mode],
        outputs=[table, status, debug_out, plan_md],
    )


if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=int(os.getenv("PORT", "7860")))
mcp_demo.py ADDED
@@ -0,0 +1,28 @@
import asyncio
import os
from typing import Any

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_BASE = os.getenv("TOPCODER_MCP_BASE", "https://api.topcoder-dev.com/v6/mcp/mcp")


async def main() -> None:
    async with streamablehttp_client(MCP_BASE) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            names = [t.name for t in tools.tools]
            print("Tools:", names)

            # If a challenges tool exists, try a harmless list/describe call signature.
            # We only print availability here; extend this to actual tool calls once the schema is known.


def run() -> None:
    asyncio.run(main())


if __name__ == "__main__":
    run()
requirements.txt ADDED
@@ -0,0 +1,7 @@
gradio==4.44.0
requests==2.32.3
uvicorn==0.30.3
mcp==1.2.0
openai==1.55.3
python-dotenv==1.0.0
exceptiongroup==1.2.2 ; python_version < '3.11'
setup_env.py ADDED
@@ -0,0 +1,155 @@
1
+ #!/usr/bin/env python3
2
+ """
3
+ Environment setup script for OpenAI API configuration
4
+ """
5
+
6
+ import os
7
+ import sys
8
+
9
+
10
+ def check_openai_config():
11
+ """Check current OpenAI configuration"""
12
+ print("🔍 检查当前OpenAI配置")
13
+ print("-" * 40)
14
+
15
+ api_key = os.getenv("OPENAI_API_KEY")
16
+ base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
17
+ model = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
18
+
19
+ print(f"🔑 OPENAI_API_KEY: {'✅ 已设置' if api_key else '❌ 未设置'}")
20
+ print(f"🌐 OPENAI_BASE_URL: {base_url}")
21
+ print(f"🤖 OPENAI_MODEL: {model}")
22
+
23
+ if api_key:
24
+ # 隐藏API key的大部分内容,只显示前4位和后4位
25
+ masked_key = f"{api_key[:4]}...{api_key[-4:]}" if len(api_key) > 8 else "***"
26
+ print(f" (API Key: {masked_key})")
27
+ return True
28
+ else:
29
+ return False
30
+
31
+
32
+ def show_setup_guide():
33
+ """显示设置指南"""
34
+ print("\n🚀 OpenAI API 设置指南")
35
+ print("=" * 50)
36
+
37
+ print("1. 获取OpenAI API Key:")
38
+ print(" • 访问: https://platform.openai.com/api-keys")
39
+ print(" • 登录或注册OpenAI账户")
40
+ print(" • 点击 'Create new secret key'")
41
+ print(" • 复制生成的API key")
42
+
43
+ print("\n2. 设置环境变量:")
44
+ if sys.platform.startswith('win'):
45
+ print(" Windows (Command Prompt):")
46
+ print(" set OPENAI_API_KEY=your_api_key_here")
47
+ print(" Windows (PowerShell):")
48
+ print(" $env:OPENAI_API_KEY=\"your_api_key_here\"")
49
+ else:
50
+ print(" macOS/Linux:")
51
+ print(" export OPENAI_API_KEY=your_api_key_here")
52
+
53
+ print("\n3. 可选配置:")
54
+ print(" • 自定义API基础URL (如果使用代理或其他OpenAI兼容服务):")
55
+ if sys.platform.startswith('win'):
56
+ print(" set OPENAI_BASE_URL=https://your-proxy.com/v1")
57
+ else:
58
+ print(" export OPENAI_BASE_URL=https://your-proxy.com/v1")
59
+
60
+ print(" • 自定义模型 (默认: gpt-4o-mini):")
61
+ if sys.platform.startswith('win'):
62
+ print(" set OPENAI_MODEL=gpt-3.5-turbo")
63
+ else:
64
+ print(" export OPENAI_MODEL=gpt-3.5-turbo")
65
+
66
+ print("\n4. 重启应用:")
67
+ print(" python app.py")
68
+
69
+
70
+ def show_cost_info():
71
+ """显示费用信息"""
72
+ print("\n💰 OpenAI API 费用说明")
73
+ print("=" * 30)
74
+ print("• gpt-4o-mini (推荐):")
75
+ print(" - 输入: $0.15/1M tokens")
76
+ print(" - 输出: $0.60/1M tokens")
77
+ print("• gpt-3.5-turbo:")
78
+ print(" - 输入: $0.50/1M tokens")
79
+ print(" - 输出: $1.50/1M tokens")
80
+ print("• gpt-4:")
81
+ print(" - 输入: $30.00/1M tokens")
82
+ print(" - 输出: $60.00/1M tokens")
83
+
84
+ print("\n📊 预估使用量:")
85
+ print("• 每次challenge评分查询: ~100-200 tokens")
86
+ print("• 每次计划生成: ~200-300 tokens")
87
+ print("• 预计每1000次查询成本: $0.02-0.10 (使用gpt-4o-mini)")
88
+
89
+
90
+ def show_troubleshooting():
91
+ """显示故障排除指南"""
92
+ print("\n🔧 故障排除")
93
+ print("=" * 20)
94
+ print("如果遇到问题:")
95
+ print("1. 确认API key正确设置且有效")
96
+ print("2. 检查网络连接")
97
+ print("3. 确认OpenAI账户有足够余额")
98
+ print("4. 如果使用代理,确认OPENAI_BASE_URL正确")
99
+ print("5. 查看应用日志中的详细错误信息")
100
+
101
+
102
+ def create_env_file():
103
+ """创建.env文件示例"""
104
+ env_content = """# OpenAI API Configuration
105
+ # 在此处设置您的OpenAI API Key
106
+ OPENAI_API_KEY=your_api_key_here
107
+
108
+ # 可选: 自定义API基础URL (用于代理或其他兼容服务)
109
+ # OPENAI_BASE_URL=https://api.openai.com/v1
110
+
111
+ # 可选: 自定义模型 (默认: gpt-4o-mini)
112
+ # OPENAI_MODEL=gpt-4o-mini
113
+ """
114
+
115
+ try:
116
+ with open('.env.example', 'w', encoding='utf-8') as f:
117
+ f.write(env_content)
118
+ print("✅ 已创建 .env.example 文件")
119
+ print(" 请复制为 .env 并填入您的API key")
120
+ except Exception as e:
121
+ print(f"❌ 创建 .env.example 失败: {e}")
122
+
123
+
124
+ def main():
125
+ print("🤖 AI Agent - OpenAI API 配置助手")
126
+ print("=" * 50)
127
+
128
+ # 检查当前配置
129
+ has_api_key = check_openai_config()
130
+
131
+ if has_api_key:
132
+ print("\n✅ OpenAI API 已配置完成!")
133
+ print("您可以直接运行应用: python app.py")
134
+ else:
135
+ print("\n❌ OpenAI API 未配置")
136
+ show_setup_guide()
137
+
138
+ # 显示费用信息
139
+ show_cost_info()
140
+
141
+ # 创建环境文件示例
142
+ print("\n📝 创建配置文件示例")
143
+ print("-" * 30)
144
+ create_env_file()
145
+
146
+ # 显示故障排除
147
+ show_troubleshooting()
148
+
149
+ print("\n" + "=" * 50)
150
+ print("配置完成后,请运行: python app.py")
151
+ print("=" * 50)
152
+
153
+
154
+ if __name__ == "__main__":
155
+ main()
spec.md ADDED
@@ -0,0 +1,225 @@
+ Challenge Overview
+ Welcome to the Learn AI Challenge series! Your mission is to explore AI and the Model Context Protocol (MCP) by building an innovative AI Agent application that leverages the Topcoder MCP server.
+
+ The agent can take many forms—chatbot, automation tool, research assistant, code generator, data visualization tool, or something entirely unique.
+
+ You are free to use open-source agents or develop your own from scratch. The only requirement is that the agent must utilize the Topcoder MCP server and deliver a meaningful, user-friendly experience through the server's tools and resources.
+
+ This is your opportunity to learn, innovate, and demonstrate your creativity and engineering expertise using cutting-edge AI technologies—plus earn rewards and community recognition.
+
+ Challenge Incentives
+ Top 10 winners will receive cash prizes.
+ Winners and top submissions will be awarded an "AI Explorer" badge on their Topcoder profiles.
+ Selected high-quality submissions will be featured on the Topcoder Hugging Face organization page, providing recognition and exposure to the broader AI community.
+ Challenge Details
+ Technology Focus: Development of AI Agents using the Model Context Protocol (MCP)
+ Submission Duration: 21 days
+ Agent Type: Any form of intelligent application is allowed—such as chatbots, virtual assistants, research tools, code generators, or data visualization agents.
+ Creativity: Participants are encouraged to push boundaries and design original, inventive AI applications.
+ Functionality: The AI agent must address a clearly defined problem or purpose and deliver impactful, practical results.
+ Implementation Requirements:
+
+ The AI agent must be designed to run within a Hugging Face Space.
+ You can use any SDK or programming language, as long as it functions correctly in the Hugging Face Space environment.
+ The agent must run smoothly on the default free-tier hardware (CPU Basic) provided by Hugging Face—no GPU usage required.
+ It must connect to and interact with the Topcoder MCP server and deliver clear, purposeful results through an intuitive and user-friendly interface.
+ Submission Requirements
+ A fully functional AI Agent application, either as a downloadable codebase (zip file) or a publicly accessible Hugging Face Space link—or both.
+ The submission must be based on a clearly defined use case that addresses a specific problem. Participants should also include a simple, clear document (as part of the submission) that outlines the problem being solved, the rationale behind the chosen use case, and how the AI agent addresses it. This clarity should also be reflected in the agent’s implementation and user experience.
+ Accompanying documentation that clearly explains how to deploy, configure, and run the AI agent on Hugging Face Spaces.
+ An optional demo video (up to 5 minutes) showcasing the agent’s core functionality and how it addresses the defined use case.
+ What You Will Learn
+ By participating in this challenge, you will:
+
+ Gain a practical understanding of AI agent architecture, development workflows, and prompt engineering techniques.
+ Strengthen your hands-on skills using technologies such as TypeScript, Python, JSON, APIs, and Gradio in real-world scenarios.
+ Learn to build, integrate, and test AI applications using the Model Context Protocol (MCP) server provided by Topcoder.
+ Develop proficiency in working with data ingestion, backend logic, and designing clear and effective user experiences (UI/UX) for AI tools.
+ Learn how to prepare, optimize, and deploy your application to Hugging Face Spaces using their CPU Basic environment.
+ Improve your ability to define clear use cases, articulate the problem your solution addresses, and communicate that effectively through documentation and demonstration.
+ Engage with the broader Topcoder AI community to gain visibility, collect feedback, and earn recognition through badges, publications, or prizes.
+ Build portfolio-worthy work that demonstrates your ability to solve meaningful problems using modern AI tools.
+ Learning Path & Plan
+ This 3-week challenge is structured as a guided learning journey. Below is a comprehensive, curriculum-style blueprint divided into modules, each with a clear objective, tasks, and deliverables. This will help you progress from beginner to deployer by the end of the challenge.
+
+ Module 1: Foundations of AI Agents & MCP (Days 1–2)
+ Objective: Understand the core principles of AI agents and how the Model Context Protocol (MCP) enables real-world AI integration.
+
+ What are AI agents? (Examples: chatbots, automation tools, assistants, research agents)
+
+ Core capabilities of agents: statefulness, prompt context, responsiveness, and output generation
+
+ Overview of the Model Context Protocol (MCP) and how it facilitates structured communication between users and AI models
+
+ Learn about transports like SSE and Streamable HTTP used by MCP
+
+ Understand how MCP connects to Topcoder’s public APIs and what kind of tools/data you can access
+
+ Recommended Course: Model Context Protocol (MCP) Course by Hugging Face
+
+ Covers how to build MCP clients
+ Shows integration with Gradio
+ Includes code walkthroughs, examples, and deployment steps
+ Deliverable: Written summary explaining:
+
+ The type of AI agent you want to build
+ How MCP can be used in that context
+ Module 2: Development Environment Setup (Days 1–2)
+ Objective: Get your local environment ready to build, test, and run MCP-powered AI agents.
+
+ Install and configure a suitable IDE (e.g., VSCode, Cursor)
+ Set up the Topcoder MCP server endpoint (https://api.topcoder-dev.com/v6/mcp/mcp) in your IDE’s settings.json
+ Install relevant SDKs and dependencies (e.g., TypeScript, Python, Gradio, Node.js, etc.)
+ Use the MCP Inspector to validate server connection and test a basic query
+ Recommended Resources:
+
+ Gradio Getting Started Guide (with code examples and UI integrations)
+ VSCode Official Setup Instructions, Cursor Editor (AI-powered IDE)
+ Topcoder MCP Inspector GitHub
+ Deliverable:
+
+ A working development environment
+ Screenshot or confirmation of successful test interaction with MCP server using MCP Inspector
+ Module 3: Exploring MCP Server Capabilities (Days 1–2)
+ Objective: Learn how to explore the Topcoder MCP server, understand available tools and data, and identify what can power your agent.
+
+ Understand the difference between SSE and Streamable HTTP transports in MCP.
+ Launch the MCP Inspector and connect it to the Topcoder MCP endpoint (https://api.topcoder-dev.com/v6/mcp).
+ Explore the public tools and resources available via the server.
+ Review example requests and responses, and document the ones relevant to your intended use case.
+ Determine the input and output formats for at least one useful endpoint.
+ Recommended Resources:
+
+ Topcoder MCP Inspector GitHub
+ SSE and HTTP Streaming - MDN Reference
+ Deliverable:
+
+ A table listing at least 3 MCP tools or endpoints with their purpose, input, and expected output
+ Screenshots or logs of actual request/response payloads from the MCP Inspector
+ Module 4: Refine the Use Case & Design the Agent (Days 1–2)
+ Objective: Revisit your original agent idea after exploring the Topcoder MCP server and finalize a well-defined use case and experience flow.
+
+ Reflect on your initial idea from Module 1 in light of the MCP tools and data you've explored
+ Validate whether the original use case still makes sense or needs to be adapted
+ Clearly define:
+ The real-world problem you're solving
+ Why it's meaningful
+ How your agent will solve it using one or more MCP tools
+ Create a user journey flow, wireframe, or basic mockup to visualize how users will interact with your agent
+ Deliverable:
+
+ Finalized use case document with:
+ Problem definition
+ Target user or audience
+ Description of solution and value proposition
+ MCP tools or endpoints your agent will use
+ UX mockup or user journey diagram that reflects the agent's intended experience. You can use v0.dev to quickly generate visually appealing mockups with the help of AI.
+ Module 5: Building Agent Logic & MCP Integration (Days 2–3)
+ Objective: Implement the core logic of your agent to communicate effectively with the Topcoder MCP server, retrieve data, and generate intelligent outputs.
+
+ Set up your project structure using a preferred language (Python, TypeScript, etc.)
+ Choose your connection method (SSE or Streamable HTTP) and implement a request handler for MCP
+ Integrate the MCP endpoint(s) you selected in Module 3
+ Process incoming data and prepare meaningful responses
+ Apply prompt engineering principles to enhance agent output quality
+ Ensure your logic runs efficiently on Hugging Face CPU Basic (no GPU dependencies)
+ Recommended Resources:
+
+ Prompt Engineering Guide
+ Example backend agent using Python with SSE + Hugging Face: MCP Agent Boilerplate (Python)
+ Demonstrates how to:
+ Connect to Topcoder MCP using SSE
+ Build an async backend agent using FastAPI
+ Deploy to Hugging Face with CPU basic config
+ This is ideal if you're building a backend-only agent with no frontend SDK involved.
+ EventSource API Reference: MDN Docs
+ Example SSE integration (React or Node): MDN EventSource Docs
+ Deliverable:
+
+ A backend agent logic module that:
+ Successfully connects to and queries the MCP server
+ Returns usable responses to the frontend
+ Demonstrates use of at least one real MCP tool or endpoint
+ A brief explanation (in your README) of how the MCP connection is implemented and which tools it uses
+ Module 6: UI/UX Development (Days 2–3)
+ Objective: Create an accessible, intuitive interface to showcase your AI agent’s capabilities.
+
+ Choose a simple UI framework such as Gradio, Streamlit, or a minimal HTML/JS frontend (Gradio recommended for ease of Hugging Face integration)
+ Link the UI to your backend logic (built in Module 5)
+ Focus on making input fields and output displays clear and accessible
+ Provide contextual labels, input placeholders, and output formatting where helpful
+ Consider adding a loading indicator or progress feedback if MCP requests take time
+ Deliverable: A lightweight, working UI prototype that allows users to interact with your agent and see live responses from the MCP server
+
+ Module 7: End-to-End Testing & Debugging (Days 1–2)
+ Objective: Ensure your agent is production-ready through comprehensive testing and optimization.
+
+ Perform full end-to-end testing: UI → backend → MCP → backend → UI
+ Simulate real user interactions and input variations
+ Identify and handle edge cases, unexpected input, and communication errors (e.g., timeouts, SSE disconnects)
+ Add fallback behavior or helpful messages for errors
+ Profile and optimize response time, agent feedback, and loading states
+ Deliverable: A stable, polished, and bug-free agent that reliably handles diverse inputs and edge scenarios
+
+ Module 8: Deployment on Hugging Face Spaces (Days 1–2)
+ Objective: Deploy your agent to a live environment and ensure it's accessible and stable.
+
+ Prepare requirements.txt and README.md with necessary dependencies and setup instructions
+ Push your project to a new or existing Hugging Face Space
+ Confirm successful deployment on the default CPU Basic hardware (no GPU)
+ Perform smoke testing to validate end-to-end functionality
+ Deliverable: A fully deployed Hugging Face Space (public link) or a repository that’s ready to be deployed without additional changes
+
+ Module 9: Documentation & Demo (Days 1–2)
+ Objective: Ensure your project is easy to understand, use, and present.
+
+ Create a well-structured README that includes setup instructions, usage examples, and technical details
+ Clearly articulate the use case and the problem your agent addresses
+ (Optional) Record a 3–5 minute demo video walking through how your agent works
+ Deliverable: Comprehensive documentation with an optional demo video
+
+ Module 10: Final Packaging & Submission (Day 1)
+ Objective: Finalize and submit your agent project with all required materials.
+
+ Package your project as a zip file or provide a live public link (e.g., Hugging Face Space)
+ Carefully review the challenge submission checklist to ensure everything is included
+ Submit your entry via the challenge platform before the deadline
+ Deliverable: A complete submission package that meets all criteria and is ready for judging
+
+ The Topcoder MCP Server
+ The Model Context Protocol (MCP) server provided by Topcoder is hosted at: https://api.topcoder-dev.com/v6/mcp.
+
+ It supports two transport mechanisms for agent communication:
+
+ Server-Sent Events (SSE): https://api.topcoder-dev.com/v6/mcp/sse
+ Streamable HTTP: https://api.topcoder-dev.com/v6/mcp/mcp
+ Your agent must use one of these transports to interact with the MCP server.
+ No authentication is currently required—your agent can connect and send requests without needing a token.
+ The server exposes publicly available data and tools sourced from the Topcoder APIs. In future updates, we may introduce authenticated features and enhanced tools that require login or user tokens.
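Both transports deliver newline-delimited `field: value` frames. As a minimal sketch of the SSE framing (assuming the standard Server-Sent Events format; the helper name is illustrative, not part of the challenge code):

```python
def parse_sse_line(line: str) -> tuple[str, str]:
    """Split one Server-Sent Events line into (field, value).

    SSE frames are lines like "event: message" or "data: {...}";
    a single leading space after the colon is stripped per the spec.
    """
    field, sep, value = line.partition(":")
    if not sep:  # a line with no colon is a field name with an empty value
        return line.strip(), ""
    if value.startswith(" "):
        value = value[1:]
    return field, value


# Example: a typical data frame carried over the SSE transport
print(parse_sse_line('data: {"jsonrpc": "2.0"}'))
```

In practice the `mcp` client library handles this framing for you; the sketch only shows what travels over the wire.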
+
+ Review & Evaluation
+ We will review submissions on an iterative basis. After completing each module, please upload your work (e.g., a document, a link to a GitHub repo or Hugging Face Space, or zipped files) and notify the challenge copilot by email; the copilot will review it and share feedback over email.
+
+ You will typically receive feedback within 1–2 days, helping you improve and refine your work before progressing.
+
+ This optional collaborative review process ensures you stay on track and gives you the best chance of submitting a high-quality agent by the end.
+
+ Final Evaluation Criteria
+ Your final submission will be judged based on the following:
+
+ Functionality & Stability (30%) – Does the agent work as intended across different inputs and edge cases?
+ Use Case Relevance (20%) – How clearly defined and meaningful is the problem the agent addresses?
+ MCP Integration (20%) – How effectively does the agent leverage the Topcoder MCP tools and transports?
+ User Experience & Interface (15%) – Is the UI intuitive, responsive, and polished?
+ Documentation & Demo (10%) – Is the README clear? Does the optional video explain the agent effectively?
+ Originality & Creativity (5%) – Is the idea novel or uniquely executed?
+ High-scoring submissions typically demonstrate smooth functionality, a clear purpose, practical use of MCP tools, and thoughtful design.
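The six weights above sum to 100%, so an overall score is a straight weighted average of per-criterion marks. A quick sketch (the category keys and sample scores are illustrative, not taken from the challenge):

```python
# Evaluation weights from the criteria above, as fractions of 1.0
WEIGHTS = {
    "functionality": 0.30,
    "use_case": 0.20,
    "mcp_integration": 0.20,
    "ux": 0.15,
    "documentation": 0.10,
    "originality": 0.05,
}


def overall(scores: dict) -> float:
    """Weighted average of per-criterion scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)


# A hypothetical scorecard
print(overall({"functionality": 90, "use_case": 80, "mcp_integration": 85,
               "ux": 70, "documentation": 60, "originality": 50}))
```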
test_openai.py ADDED
@@ -0,0 +1,211 @@
+ #!/usr/bin/env python3
+ """
+ Test OpenAI integration
+ """
+
+ import os
+ import json
+ from openai import OpenAI
+
+
+ def test_openai_connection():
+     """Test basic OpenAI connection"""
+     api_key = os.getenv("OPENAI_API_KEY")
+     base_url = os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
+
+     if not api_key:
+         print("❌ Please set the OPENAI_API_KEY environment variable first")
+         print("   export OPENAI_API_KEY=your_api_key_here")
+         return False
+
+     try:
+         client = OpenAI(
+             api_key=api_key,
+             base_url=base_url
+         )
+
+         print(f"🔗 Connecting to: {base_url}")
+         print(f"🔑 API Key: {api_key[:4]}...{api_key[-4:]}")
+
+         # Test a simple chat completion
+         response = client.chat.completions.create(
+             model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
+             messages=[
+                 {"role": "user", "content": "Hello! Please respond with just 'OK' to confirm the connection."}
+             ],
+             max_tokens=10,
+             temperature=0
+         )
+
+         result = response.choices[0].message.content.strip()
+         print(f"✅ Connection successful! Response: {result}")
+
+         return True
+
+     except Exception as e:
+         print(f"❌ Connection failed: {e}")
+         return False
+
+
+ def test_scoring_function():
+     """Test the scoring functionality similar to what's used in app.py"""
+     api_key = os.getenv("OPENAI_API_KEY")
+
+     if not api_key:
+         print("❌ Please set the OPENAI_API_KEY environment variable first")
+         return False
+
+     try:
+         client = OpenAI(
+             api_key=api_key,
+             base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
+         )
+
+         # Test data, in the same shape app.py uses
+         test_challenges = [
+             {
+                 "id": "test1",
+                 "title": "Build a Python Web API",
+                 "prize": 500.0,
+                 "deadline": "2025-02-15",
+                 "tags": ["python", "api", "web"],
+                 "description": "Create a REST API using Python and FastAPI framework"
+             },
+             {
+                 "id": "test2",
+                 "title": "React Frontend Development",
+                 "prize": 300.0,
+                 "deadline": "2025-02-20",
+                 "tags": ["react", "frontend", "javascript"],
+                 "description": "Build a modern React application with responsive design"
+             }
+         ]
+
+         scoring_prompt = (
+             "You are an expert Topcoder challenge analyst. Analyze items and rate match to the query.\n"
+             f"Query: python web development\n"
+             "Items: " + json.dumps(test_challenges) + "\n\n"
+             "Instructions:\n"
+             "- Consider skills, tags, and brief description.\n"
+             "- Higher prize is slightly better all else equal.\n"
+             "- Return ONLY JSON array of objects: [{id, score, reason}] where 0<=score<=1.\n"
+             "- Do not include any extra text."
+         )
+
+         print("🧪 Testing intelligent scoring...")
+
+         response = client.chat.completions.create(
+             model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
+             messages=[
+                 {"role": "system", "content": "You are a helpful assistant. Return JSON only."},
+                 {"role": "user", "content": scoring_prompt}
+             ],
+             temperature=0.2,
+             timeout=30
+         )
+
+         result = response.choices[0].message.content.strip()
+         print(f"📊 Scoring result: {result}")
+
+         # Try to parse the JSON
+         try:
+             scores = json.loads(result)
+             print("✅ JSON parsed successfully!")
+             for item in scores:
+                 print(f"  - {item.get('id')}: score={item.get('score')}, reason={item.get('reason')}")
+             return True
+         except json.JSONDecodeError as e:
+             print(f"❌ JSON parsing failed: {e}")
+             return False
+
+     except Exception as e:
+         print(f"❌ Scoring test failed: {e}")
+         return False
+
+
+ def test_planning_function():
+     """Test the planning functionality"""
+     api_key = os.getenv("OPENAI_API_KEY")
+
+     if not api_key:
+         print("❌ Please set the OPENAI_API_KEY environment variable first")
+         return False
+
+     try:
+         client = OpenAI(
+             api_key=api_key,
+             base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1")
+         )
+
+         # Test data
+         test_challenges = [
+             {
+                 "title": "Build a Python Web API",
+                 "prize": 500.0,
+                 "deadline": "2025-02-15",
+                 "tags": ["python", "api", "web"]
+             }
+         ]
+
+         prompt = (
+             "You are a concise challenge scout. Given compact challenge metadata, output:\n"
+             "- Top 3 picks (title + brief reason)\n"
+             "- Quick plan of action (3 bullets)\n"
+             f"Constraints: keyword='python', min_prize>=100, within 30 days.\n"
+             f"Data: {json.dumps(test_challenges)}"
+         )
+
+         print("📋 Testing plan generation...")
+
+         response = client.chat.completions.create(
+             model=os.getenv("OPENAI_MODEL", "gpt-4o-mini"),
+             messages=[
+                 {"role": "system", "content": "You are a helpful, terse assistant."},
+                 {"role": "user", "content": prompt}
+             ],
+             temperature=0.3,
+             timeout=30
+         )
+
+         result = response.choices[0].message.content.strip()
+         print(f"📝 Plan result:\n{result}")
+         print("✅ Plan generated successfully!")
+
+         return True
+
+     except Exception as e:
+         print(f"❌ Planning test failed: {e}")
+         return False
+
+
+ def main():
+     print("🤖 OpenAI API integration test")
+     print("=" * 40)
+
+     # Test the basic connection
+     if not test_openai_connection():
+         print("\n❌ Basic connection test failed; please check your configuration")
+         return
+
+     print("\n" + "-" * 40)
+
+     # Test the scoring functionality
+     if not test_scoring_function():
+         print("❌ Scoring test failed")
+         return
+
+     print("\n" + "-" * 40)
+
+     # Test the planning functionality
+     if not test_planning_function():
+         print("❌ Planning test failed")
+         return
+
+     print("\n" + "=" * 40)
+     print("🎉 All tests passed! The OpenAI integration is working")
+     print("Run the main app: python app.py")
+     print("=" * 40)
+
+
+ if __name__ == "__main__":
+     main()
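The scoring test above requires the model to return bare JSON, but models sometimes wrap their output in a markdown code fence despite the "JSON only" instruction. A defensive extraction helper (a sketch; `extract_json` is hypothetical and not part of the files above) keeps the parse step from failing on fenced responses:

```python
import json
import re


def extract_json(text: str):
    """Parse JSON from model output, tolerating ```json ... ``` fences."""
    text = text.strip()
    # Strip a surrounding markdown code fence, with or without a language tag
    fenced = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fenced:
        text = fenced.group(1)
    return json.loads(text)


print(extract_json('```json\n[{"id": "test1", "score": 0.9}]\n```'))
```

Swapping `json.loads(result)` for a helper like this in `test_scoring_function` would make the JSON-parsing branch more robust without changing the prompt.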