# Repository Guidelines
## Project Structure & Module Organization
- `etl/`: Core pipelines (ingestion, transformation, delivery). Key subfolders: `corpus-pipeline/`, `xet-upload/`, `bleeding-edge/`, `config/`, `team/`.
- `mlops/`: MLOps workflows, deployment and model lifecycle assets.
- `training/`: Training utilities and scripts.
- `experiments/`: Prototypes and one-off investigations.
- `models/`: Model artifacts/weights (large files; avoid committing new binaries).
- `07_documentation/`: Internal docs and process notes.
## Build, Test, and Development Commands
- **Environment**: Use Python 3.10+ in a virtualenv; create `.env` from the example file or set the required keys directly.
- **Install deps (per module)**:
- `pip install -r etl/corpus-pipeline/requirements-scrub.txt`
- **Run pipelines**:
- `python etl/master_pipeline.py`
- `python etl/corpus-pipeline/etl_pipeline.py`
- **Run tests (pytest)**:
- `python -m pytest -q etl/`
- Example: `python -m pytest -q etl/corpus-pipeline/test_full_integration.py`
## Coding Style & Naming Conventions
- **Python (PEP 8)**: 4‑space indent, `snake_case` for files/functions, `PascalCase` for classes, `UPPER_SNAKE_CASE` for constants.
- **Type hints** for new/modified functions; include docstrings for public modules.
- **Logging over print**; prefer structured logs where feasible.
- **Small modules**; place shared helpers in `etl/.../utils` when appropriate.
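The conventions above can be illustrated with a short sketch (the module, class, and function names here are hypothetical, not taken from the repo):

```python
import logging

logger = logging.getLogger(__name__)

MAX_RETRIES = 3  # UPPER_SNAKE_CASE for constants


class RecordCleaner:  # PascalCase for classes
    """Normalize raw records before downstream transforms."""

    def clean_text(self, raw: str) -> str:  # snake_case + type hints
        """Strip leading/trailing whitespace and collapse internal runs."""
        cleaned = " ".join(raw.split())
        logger.debug("cleaned %d -> %d chars", len(raw), len(cleaned))
        return cleaned
```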
## Testing Guidelines
- **Framework**: `pytest` with `test_*.py` naming.
- **Scope**: Unit tests for transforms and IO boundaries; integration tests for end‑to‑end paths (e.g., `etl/corpus-pipeline/test_full_integration.py`).
- **Data**: Use minimal fixtures; do not read/write under `etl/corpus-data/` in unit tests.
- **Target**: Aim for meaningful coverage on critical paths; add regression tests for fixes.
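As a sketch of the expected test shape (the `drop_empty_rows` transform below is a hypothetical example, not an existing repo function):

```python
# test_transforms.py — pytest discovers test_*.py files automatically.
from typing import Iterable


def drop_empty_rows(rows: Iterable[dict]) -> list[dict]:
    """Hypothetical transform: remove rows whose values are all empty."""
    return [r for r in rows if any(v not in (None, "") for v in r.values())]


def test_drop_empty_rows_filters_blanks() -> None:
    rows = [{"text": "keep"}, {"text": ""}, {"text": None}]
    assert drop_empty_rows(rows) == [{"text": "keep"}]
```

Unit tests like this use small inline fixtures rather than touching `etl/corpus-data/`.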
## Commit & Pull Request Guidelines
- **Commits**: Use Conventional Commits, e.g., `feat(etl): add scrub step`, `fix(corpus-pipeline): handle empty rows`.
- **PRs**: Include purpose, scope, and risks; link related issues; add before/after notes (logs, sample outputs). Checklist: tests pass, docs updated, no secrets committed.
## Security & Configuration Tips
- **Secrets**: Load via `.env` and/or `etl/config/etl_config.yaml`; never commit credentials.
- **Data**: Large artifacts live under `etl/corpus-data/` and `models/`; avoid adding bulky files to PRs.
- **Validation**: Sanitize external inputs; respect robots.txt and rate limits in crawlers.
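For the crawler guidance, a minimal robots.txt check with the standard library might look like this (the user agent and the rules fed to the parser are illustrative; a real crawler would fetch the site's actual robots.txt via `parser.read()`):

```python
from urllib.robotparser import RobotFileParser

# Parse illustrative rules directly; in production, set the URL with
# parser.set_url("https://example.com/robots.txt") and call parser.read().
parser = RobotFileParser()
parser.parse([
    "User-agent: *",
    "Disallow: /private/",
    "Crawl-delay: 1",
])

# Check each URL before fetching it.
print(parser.can_fetch("adapt-etl-bot", "https://example.com/public/page"))
print(parser.can_fetch("adapt-etl-bot", "https://example.com/private/page"))
```

Honoring `parser.crawl_delay(agent)` between requests covers the rate-limit guidance as well.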
## Architecture Overview
```mermaid
graph TD
A["Ingestion<br/>(etl/ingestion + crawlers)"] --> B["Transformation<br/>(clean, enrich, dedupe)"]
B --> C["Storage & Delivery<br/>(JSONL/Parquet, cloud loaders)"]
C --> D["MLOps/Training<br/>(mlops/, training/)"]
D -- feedback --> B
```
Key flows: `etl/corpus-pipeline/*` orchestrates raw → processed; delivery targets live under `etl/xet-upload/` and cloud sinks.
## CI & Badges
Add a simple test workflow at `.github/workflows/tests.yml` that installs deps and runs pytest against `etl/`.
Badge (once the workflow exists):
`![Tests](https://github.com/OWNER/REPO/actions/workflows/tests.yml/badge.svg)`
Minimal job example:
```yaml
name: Tests
on: [push, pull_request]
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.10' }
      - run: pip install -r etl/corpus-pipeline/requirements-scrub.txt pytest
      - run: python -m pytest -q etl/
```