---
title: AI Agent Workspace
colorFrom: purple
colorTo: blue
sdk: docker
app_port: 7860
---

# 🤖 AI Agent Workspace

A customizable autonomous AI agent for research, coding, and ML workflows.


## What This Is

A fork of the ML Intern architecture: an autonomous AI agent that researches, writes code, and ships projects using the Hugging Face ecosystem.

Key capabilities:

- 🔬 **Research**: search papers, datasets, models, and documentation
- 💻 **Code**: write, test, and execute Python scripts in sandboxes
- 🚀 **Deploy**: launch training jobs, manage repos, push to the Hub
- 📚 **Learn**: read documentation and GitHub repos to understand APIs

## Architecture

```text
User (Web UI / CLI)
    ↕
FastAPI Backend (SSE events)
    ↕
Agent Loop (LiteLLM + tool router)
    ↕
Tools: HF docs | Papers | Datasets | GitHub | Jobs | Sandbox | MCP
```

Stack:

- **Frontend**: React + Vite + TypeScript
- **Backend**: FastAPI + SSE streaming
- **Agent**: LiteLLM (multi-provider) + custom tool system
- **Tools**: Hugging Face APIs, GitHub, sandbox execution, MCP servers
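The backend pushes agent events to the UI as a Server-Sent Events stream. As a rough illustration of that layer (the event name and payload below are hypothetical, not the project's actual wire format), each event can be framed like this:

```python
import json

def format_sse(event: str, data: dict) -> str:
    """Serialize one agent event as a Server-Sent Events message.

    SSE frames are plain text: an optional `event:` line, one or more
    `data:` lines, terminated by a blank line.
    """
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Example: a hypothetical tool-call event emitted by the agent loop
msg = format_sse("tool_call", {"tool": "search_papers", "query": "LoRA"})
print(msg)
```

The browser's `EventSource` API parses these frames natively, which is why SSE is a common fit for one-way agent-to-UI streaming.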

## Quick Start

### Local CLI

```bash
git clone https://huggingface.co/spaces/ScottzillaSystems/ml-intern-custom
cd ml-intern-custom

# Install dependencies (requires Python 3.11+ and uv)
uv sync

# Run interactive mode
python -m agent.main

# Or run headless with a one-off task
python -m agent.main "your task here"
```

### Environment Variables

Create a `.env` file:

```bash
# Required
HF_TOKEN=hf_...                    # Hugging Face token

# Optional - for non-HF-router models
ANTHROPIC_API_KEY=sk-ant-...       # Claude models
OPENAI_API_KEY=sk-...              # OpenAI models
GITHUB_TOKEN=ghp_...               # GitHub code search
```
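These variables need to end up in the process environment. If you are not using a loader like `python-dotenv`, a minimal stdlib sketch for reading a `.env` file (a simplified parser, assuming plain `KEY=VALUE` lines with `#` comments and no quoting) looks like:

```python
import os

def load_dotenv(path: str = ".env") -> None:
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quoting."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()  # strip inline comments
            if "=" in line:
                key, _, value = line.partition("=")
                # don't clobber variables already set in the environment
                os.environ.setdefault(key.strip(), value.strip())
```

Real `.env` parsers also handle quoting, `export` prefixes, and multi-line values; this is only enough for the simple file shown above.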

### Web UI

The Space runs the web UI automatically. For local development, run the frontend and backend in separate terminals:

```bash
cd frontend && npm install && npm run dev   # Vite dev server
cd backend && python main.py                # FastAPI backend
```

## Customization Guide

### 1. Change the Model

Edit `configs/frontend_agent_config.json` or `configs/cli_agent_config.json`:

```json
{
  "model_name": "moonshotai/Kimi-K2.6",
  "mcpServers": { ... }
}
```

Available models via HF Router (free tier):

- `moonshotai/Kimi-K2.6`
- `MiniMaxAI/MiniMax-M2.7`
- `zai-org/GLM-5.1`

Or any LiteLLM-compatible model:

- `anthropic/claude-sonnet-4-5-20250929`
- `openai/gpt-4o`
- `google/gemini-2.5-pro`
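LiteLLM routes on the provider prefix in the model string (`anthropic/...`, `openai/...`), so swapping models is just editing `"model_name"` in the config. As a hedged sketch of how that key might flow into a completion call (`build_completion_kwargs` is a hypothetical helper, not the project's actual code; only the `"model_name"` key comes from the config above):

```python
import json

def build_completion_kwargs(config_path: str, user_message: str) -> dict:
    """Turn an agent config file into kwargs for a LiteLLM-style call."""
    with open(config_path) as f:
        config = json.load(f)
    return {
        "model": config["model_name"],
        "messages": [{"role": "user", "content": user_message}],
    }

# Usage (network call commented out; requires the matching API key):
# import litellm
# response = litellm.completion(**build_completion_kwargs(
#     "configs/cli_agent_config.json", "Summarize this paper"))
```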

### 2. Add/Remove Tools

Edit `agent/core/tools.py`:

```python
def create_builtin_tools(local_mode=False):
    tools = [
        # Your custom tools here
        ToolSpec(name="my_tool", ...),
        # ... existing tools
    ]
    return tools
```
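The real `ToolSpec` fields live in `agent/core/tools.py`; as a self-contained, hypothetical sketch of what defining and registering a custom tool might look like (all field names here are assumptions, not taken from the repo):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ToolSpec:
    # Hypothetical shape; check agent/core/tools.py for the real fields.
    name: str
    description: str
    handler: Callable[..., str]
    parameters: dict = field(default_factory=dict)

def word_count(text: str) -> str:
    """A trivial example tool: count the words in a piece of text."""
    return str(len(text.split()))

my_tool = ToolSpec(
    name="word_count",
    description="Count the words in a piece of text.",
    handler=word_count,
    parameters={"text": {"type": "string"}},
)

# The agent loop would dispatch a model-issued tool call roughly like:
result = my_tool.handler("an autonomous AI agent")  # -> "4"
```

Whatever the real field names are, the pattern is the same: a declarative spec the model sees (name, description, parameter schema) paired with a Python handler the loop invokes.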

### 3. Add MCP Servers

Edit `configs/frontend_agent_config.json`:

```json
{
  "mcpServers": {
    "my-server": {
      "transport": "http",
      "url": "https://my-mcp-server.com/mcp",
      "headers": { "Authorization": "Bearer ${MY_TOKEN}" }
    }
  }
}
```
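The `${MY_TOKEN}` placeholder is presumably expanded from the environment before the header is sent, so secrets stay out of the config file. A minimal stdlib sketch of that substitution (assuming simple `${VAR}` syntax; the project's actual resolver may differ):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from the environment.

    Unset variables are left as-is so a misconfigured secret stays
    visible instead of silently becoming an empty string.
    """
    return re.sub(
        r"\$\{(\w+)\}",
        lambda m: os.environ.get(m.group(1), m.group(0)),
        value,
    )

os.environ["MY_TOKEN"] = "abc123"  # demo value only
print(expand_env("Bearer ${MY_TOKEN}"))  # -> Bearer abc123
```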

### 4. Customize System Prompt

Edit `agent/prompts/system_prompt.yaml` to change the agent's personality and instructions.

### 5. Change Branding

- **Frontend**: edit `frontend/src/theme.ts` for colors
- **Frontend**: edit `frontend/src/components/WelcomeScreen/WelcomeScreen.tsx` for welcome text
- **Backend**: edit `backend/main.py` for API metadata

## Project Structure

```text
agent/
  core/           # Agent loop, session, context manager
  tools/          # Tool implementations (HF, GitHub, sandbox, etc.)
  prompts/        # System prompts
  main.py         # CLI entry point
backend/
  main.py         # FastAPI app
  routes/         # API endpoints
  session_manager.py
frontend/
  src/            # React components
configs/
  frontend_agent_config.json
  cli_agent_config.json
```

## Original

Based on `smolagents/ml-intern` by Hugging Face.