
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## MLOps Platform Architecture

This repository contains an MLflow-based MLOps platform for tracking machine learning experiments and managing model artifacts.

### Core Components

**MLflow Tracking Server**: SQLite-backed tracking store (`mlflow.db`) holding experiment metadata, runs, parameters, metrics, and tags.

**Backend Storage**: Primary MLflow database at `backend/mlflow.db` containing experiment and run data.

**Artifact Storage**: Directory tree at `artifacts/` for model artifacts, files, and experiment outputs.

### Database Structure

The MLflow SQLite database contains tables for:

- Experiment tracking and metadata
- Run parameters, metrics, and tags
- Model registry and versioning
- Artifact location references
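The table layout can be inspected directly with Python's built-in `sqlite3` module. A read-only sketch; the exact set of tables depends on the MLflow version that created the database:

```python
from contextlib import closing
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the table names in an SQLite database such as backend/mlflow.db."""
    with closing(sqlite3.connect(db_path)) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

# Against the tracking database, expect names like 'experiments', 'runs',
# 'metrics', 'params', and 'registered_models':
# print(list_tables("backend/mlflow.db"))
```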

## Development Environment

### Infrastructure Dependencies

- **MLflow 3.3.1**: machine learning lifecycle management platform
- **Python 3.12.3**: primary development language
- **SQLite**: database backend for MLflow tracking

### Key Configuration

- Tracking databases: `mlflow.db` (repository root) and `backend/mlflow.db` (backend)
- Default artifact location: `artifacts/` directory
- MLflow server configured for local development

## Operational Commands

### MLflow Server Operations

```bash
# Start the MLflow tracking server with the backend database
mlflow server --backend-store-uri sqlite:///backend/mlflow.db --default-artifact-root ./artifacts/

# Start the server with the root database
mlflow server --backend-store-uri sqlite:///mlflow.db --default-artifact-root ./artifacts/

# Start the server on a specific host and port
mlflow server --host 0.0.0.0 --port 5000 --backend-store-uri sqlite:///backend/mlflow.db
```

### MLflow CLI Operations

```bash
# List experiments (MLflow 2+ replaced `experiments list` with `experiments search`)
mlflow experiments search

# List runs in a specific experiment
mlflow runs list --experiment-id 1

# Search runs with specific parameters (run search is a Python API, not a CLI command)
python -c "import mlflow; print(mlflow.search_runs(experiment_names=['Default'], filter_string=\"params.learning_rate = '0.01'\"))"

# Execute a project entry point as a new tracked run
mlflow run . -e main --experiment-name "New Experiment"
```

### Database Management

```bash
# Check database integrity (requires sqlite3)
sqlite3 backend/mlflow.db "PRAGMA integrity_check;"

# Backup database
cp backend/mlflow.db backend/mlflow.db.backup_$(date +%Y%m%d_%H%M%S)

# Monitor database growth
du -h backend/mlflow.db
```

## Development Workflows

### Experiment Tracking

1. Configure the MLflow tracking URI: `export MLFLOW_TRACKING_URI=http://localhost:5000`
2. Start the MLflow server with the appropriate database backend
3. Run experiments with MLflow autologging or manual tracking
4. Monitor results through the MLflow UI or CLI

### Model Management

1. Log models using `mlflow.<framework>.log_model()`
2. Register models in the MLflow model registry
3. Deploy registered models for serving
4. Track model versions and performance metrics

### Artifact Management

1. Store experiment artifacts in the `artifacts/` directory
2. Use MLflow artifact logging for model files, plots, and datasets
3. Maintain an organized directory structure within `artifacts/`

## Important Notes

**Database Consistency**: Two tracking databases exist (`mlflow.db` at the root and `backend/mlflow.db`). Pick one as the primary store and point the server and all clients at it, to avoid divergent experiment histories.

**Artifact Storage**: Ensure the MLflow server has write permissions on the `artifacts/` directory.

**Backup Strategy**: Back up the MLflow databases regularly to prevent data loss.

**Server Configuration**: Choose the host binding (`--host`) appropriately: `127.0.0.1` for local development, `0.0.0.0` only when the server must be reachable from other machines.

**Performance**: For production use, consider migrating from SQLite to PostgreSQL or MySQL for better concurrency and scalability.