# Module 5: Building Agent Logic & MCP Integration

## Overview
This module focuses on implementing the core logic of the Topcoder Challenge Steward Agent to communicate effectively with the Topcoder MCP server, retrieve challenge data, and generate intelligent, personalized outputs for users.

## Project Structure

### Core Architecture
```
tc-agent/
├── app.py                     # FastAPI main application
├── config/
│   └── settings.py            # Configuration management
├── services/
│   ├── mcp_service.py         # MCP client management
│   ├── challenge_service.py   # Challenge processing logic
│   ├── ai_service.py          # AI agent orchestration
│   ├── challenge_agent.py     # Agent service for notifications
│   └── scheduler.py           # Daily notification scheduler
├── prompts/
│   └── challenge_prompts.py   # Prompt engineering templates
├── tools/
│   └── email_tool.py          # Email notification tool
└── utils/
    ├── json_parser.py         # Response parsing utilities
    ├── error_handler.py       # Error handling
    └── logger.py              # Logging configuration
```

|
| ## MCP Integration Implementation |
|
|
| ### 1. Connection Method: Server-Sent Events (SSE) |
| We chose SSE over Streamable HTTP for real-time, persistent connections to the Topcoder MCP server. |
|
|
| **Key Implementation (`services/mcp_service.py`):** |
```python
import logging

from strands.tools.mcp import MCPClient
from mcp.client.sse import sse_client

from config.settings import settings  # module-level settings instance (config/settings.py)

logger = logging.getLogger(__name__)


class MCPService:
    def create_topcoder_client(self) -> MCPClient:
        """Create a Topcoder MCP client instance"""
        logger.info(f"Creating Topcoder MCP client for endpoint: {settings.TOPCODER_MCP_ENDPOINT}")
        return MCPClient(lambda: sse_client(settings.TOPCODER_MCP_ENDPOINT))
```

**Configuration:**
- **Endpoint**: `https://api.topcoder-dev.com/v6/mcp/sse`
- **Transport**: Server-Sent Events for real-time data streaming
- **Authentication**: None required (public API)

### 2. MCP Tool Integration
The agent dynamically discovers and utilizes available MCP tools from the Topcoder server:

```python
from typing import Any, List

# Method of MCPService, continued from above
def get_topcoder_tools(self, client: MCPClient) -> List[Any]:
    """Get available Topcoder MCP tools from the client"""
    try:
        tools = client.list_tools_sync()
        tool_names = [getattr(tool, 'name', str(tool)) for tool in tools]
        logger.info(f"Available Topcoder MCP tools: {tool_names}")
        return tools
    except Exception as e:
        logger.error(f"Error getting Topcoder MCP tools: {e}")
        return []
```
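
Downstream code sometimes needs only a subset of the discovered tools. A small hypothetical helper (not part of the project's shown code) that filters a tool list by name keywords, reading the `name` attribute the same way the logging code above does; stub objects stand in for real MCP tool handles:

```python
from types import SimpleNamespace
from typing import Any, List


def filter_tools_by_keyword(tools: List[Any], keywords: List[str]) -> List[Any]:
    """Keep only tools whose name contains one of the given keywords."""
    return [
        tool for tool in tools
        if any(k.lower() in getattr(tool, "name", str(tool)).lower() for k in keywords)
    ]


# Stub tool objects with illustrative names (the real tool names may differ)
stub_tools = [SimpleNamespace(name="query_tc_challenges"),
              SimpleNamespace(name="query_tc_skills")]
challenge_tools = filter_tools_by_keyword(stub_tools, ["challenge"])
```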

### 3. Challenge Processing Logic
The core challenge processing is handled in `services/challenge_service.py`:

#### Dry Run Mode (Interactive Testing)
```python
def get_challenges_dry_run(self, preferences: str) -> Dict:
    """Get challenges for a dry run (immediate response, no emails sent)"""
    try:
        # Create an MCP client
        topcoder_client = self.mcp_service.create_topcoder_client()

        with topcoder_client:
            # Get available tools
            mcp_tools = self.mcp_service.get_topcoder_tools(topcoder_client)

            # Create an AI agent with the tools
            system_prompt = ChallengePrompts.get_dry_run_prompt(preferences)
            agent = self.ai_service.create_agent(mcp_tools, system_prompt)

            # Process the user query (wording here is illustrative)
            user_query = f"Find active challenges matching: {preferences}"
            response = agent(user_query)

            # Parse and normalize the response
            parsed_data = self.json_parser.extract_json_from_response(str(response))
            challenges = self.json_parser.normalize_challenges(parsed_data["challenges"])

            return StandardAPIResponse.success(data={"challenges": challenges})
    except Exception as e:
        return handle_service_error(e, "get challenges for dry run", preferences)
```

#### Automated Processing Mode (Email Notifications)
```python
def process_challenges_for_user(self, user_id: str, email: str, preferences: str) -> Dict:
    """Process challenges for the automated agent (email notifications)"""
    try:
        topcoder_client = self.mcp_service.create_topcoder_client()

        with topcoder_client:
            mcp_tools = self.mcp_service.get_topcoder_tools(topcoder_client)

            # Combine MCP tools with the email tool
            all_tools = [send_email] + mcp_tools

            # Create an agent with a personalized prompt
            system_prompt = ChallengePrompts.get_agent_prompt(preferences, email)
            agent = self.ai_service.create_agent(all_tools, system_prompt)

            # Execute the automated workflow; the agent decides whether to email
            user_query = f"Find active challenges matching: {preferences}"
            response = agent(user_query)

            return StandardAPIResponse.success(data={"response": str(response)})
    except Exception as e:
        return handle_service_error(e, "process challenges for user", user_id)
```
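
The `send_email` tool itself is not shown in this module. A minimal sketch of what `tools/email_tool.py` might contain, using only the standard library; the function signature, sender address, and SMTP host are assumptions, and in the real project the function would additionally be registered as an agent tool (e.g. via a tool decorator):

```python
import smtplib
from email.mime.text import MIMEText


def send_email(to_address: str, subject: str, body: str,
               from_address: str = "steward@example.com") -> MIMEText:
    """Build (and, when uncommented, send) a plain-text notification email."""
    message = MIMEText(body)
    message["Subject"] = subject
    message["From"] = from_address
    message["To"] = to_address
    # Delivery is left commented out so the sketch has no side effects:
    # with smtplib.SMTP("localhost", 25) as smtp:
    #     smtp.send_message(message)
    return message


draft = send_email("dev@example.com", "New Topcoder challenges", "3 matches found.")
```

Returning the built message makes the tool easy to unit-test without a mail server.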

## Prompt Engineering

### 1. Dry Run Prompt Template
For interactive testing and immediate responses:

```python
def get_dry_run_prompt(preferences: str) -> str:
    return f"""
You are a Topcoder Challenge Steward Agent specializing in finding relevant programming challenges.

User Preferences: {preferences}

Your tasks:
1. Query active Topcoder challenges using available MCP tools
2. Filter challenges based on user preferences (technologies, prize amounts, difficulty)
3. Return results in structured JSON format

Response Format:
{{
    "challenges": [
        {{
            "id": "challenge_id",
            "title": "Challenge Title",
            "prize": "$X,XXX",
            "technologies": ["tech1", "tech2"],
            "registrationEndDate": "YYYY-MM-DD",
            "submissionEndDate": "YYYY-MM-DD",
            "url": "challenge_url",
            "description": "brief description"
        }}
    ]
}}

Focus on challenges that best match the user's stated preferences.
"""
```

### 2. Agent Prompt Template
For automated email notifications:

```python
def get_agent_prompt(preferences: str, email: str) -> str:
    return f"""
You are a professional Topcoder Challenge Steward Agent that helps developers find relevant programming challenges.

User: {email}
Preferences: {preferences}

Your workflow:
1. Query active Topcoder challenges using MCP tools
2. Intelligently filter based on user preferences
3. If relevant challenges are found (2-5 high-quality matches), send a professional email
4. Include challenge details, prize information, and registration deadlines

Email Guidelines:
- Professional, concise tone
- Highlight why each challenge matches their preferences
- Include prize amounts and key deadlines
- Provide direct registration links
- Only send if genuinely relevant challenges exist
"""
```

## Data Processing & Response Generation

### 1. JSON Response Parsing
Robust parsing handles the various forms an agent response can take (plain JSON, markdown-fenced JSON, JSON surrounded by prose):

```python
import json
import re
from typing import Any, Dict, List, Optional


class JSONParser:
    REQUIRED_FIELDS = ("id", "title", "prize", "technologies",
                       "registrationEndDate", "submissionEndDate", "url", "description")

    def extract_json_from_response(self, response: str) -> Optional[Dict]:
        """Extract JSON data from an agent response, handling various formats"""
        # Strip markdown code fences, then isolate the outermost JSON object
        cleaned = re.sub(r"```(?:json)?", "", response).strip()
        start, end = cleaned.find("{"), cleaned.rfind("}")
        if start == -1 or end <= start:
            return None
        try:
            return json.loads(cleaned[start:end + 1])
        except json.JSONDecodeError:
            return None

    def normalize_challenges(self, challenges_data: Any) -> List[Dict]:
        """Normalize challenge data to a consistent format"""
        if not isinstance(challenges_data, list):
            return []
        # Ensure required fields are present, defaulting missing ones to ""
        return [{field: challenge.get(field, "") for field in self.REQUIRED_FIELDS}
                for challenge in challenges_data if isinstance(challenge, dict)]
```

### 2. Error Handling & Resilience
Comprehensive error handling ensures reliable operation:

```python
from typing import Dict, Optional

def handle_service_error(error: Exception, operation: str, context: Optional[str] = None) -> Dict:
    """Standardized error handling for service operations"""
    return StandardAPIResponse.error(
        message=f"Failed to {operation}",
        error_code="SERVICE_ERROR",
        context={"operation": operation, "context": context}
    )
```
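
`StandardAPIResponse` is used throughout the services but not defined in this module. A plausible minimal sketch; the exact envelope fields (`success`, `data`, `message`, `error_code`, `context`) are assumptions:

```python
from typing import Any, Dict, Optional


class StandardAPIResponse:
    """Uniform envelope for all service responses (field names assumed)."""

    @staticmethod
    def success(data: Any = None) -> Dict:
        return {"success": True, "data": data}

    @staticmethod
    def error(message: str, error_code: str = "SERVICE_ERROR",
              context: Optional[Dict] = None) -> Dict:
        return {"success": False, "message": message,
                "error_code": error_code, "context": context or {}}


ok = StandardAPIResponse.success(data={"challenges": []})
fail = StandardAPIResponse.error("Failed to get challenges for dry run")
```

A fixed envelope like this lets the frontend branch on `success` without inspecting operation-specific payloads.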

## Performance Optimization for Hugging Face CPU Basic

### 1. Efficient Resource Usage
- **Asynchronous Operations**: All database operations use async/await
- **Connection Pooling**: MCP clients are created on demand and properly closed
- **Memory Management**: Responses are processed and cleaned up promptly

### 2. CPU-Optimized Processing
- No GPU dependencies
- Lightweight JSON parsing
- Efficient string processing
- Minimal memory footprint for challenge data

### 3. Scalable Architecture
- **Singleton MCP Service**: Reuses client configurations
- **Centralized Error Handling**: Consistent error responses
- **Modular Design**: Easy to extend and maintain

## Key Features Implemented

1. **Real-time Challenge Discovery**: Uses MCP SSE connection for live data
2. **Intelligent Filtering**: AI-powered preference matching
3. **Dual Operation Modes**: Interactive (dry-run) and automated (scheduled)
4. **Robust Error Handling**: Graceful degradation and informative error messages
5. **Structured Responses**: Consistent JSON format for frontend consumption
6. **Email Integration**: Automated notifications for relevant challenges
7. **Performance Optimized**: Designed for Hugging Face CPU Basic environment

## Testing & Validation

### Local Testing
```bash
# Start the application
python app.py

# Test the dry-run endpoint
curl -X POST "http://localhost:8000/api/topcoder-dry-run" \
  -H "Content-Type: application/json" \
  -d '{"preferences": "Python, machine learning, $1000+ prize"}'
```
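
The same request can be scripted from Python. A standard-library sketch that only builds the request; the endpoint path and payload are taken from the curl example above, and the commented call would actually send it against a running instance:

```python
import json
import urllib.request

# Serialize the same payload the curl example sends
payload = json.dumps(
    {"preferences": "Python, machine learning, $1000+ prize"}
).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:8000/api/topcoder-dry-run",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(request) as resp:  # requires the server running
#     print(resp.read().decode("utf-8"))
```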

### MCP Connection Verification
The agent automatically validates MCP connectivity on startup and logs available tools for debugging.

## Next Steps
Module 5 successfully implements the core agent logic with robust MCP integration. The next module will focus on creating an intuitive user interface to interact with these backend services.