# Repository Guidelines
## Project Structure & Module Organization
- `etl/`: Core pipelines (ingestion, transformation, delivery). Key subfolders: `corpus-pipeline/`, `xet-upload/`, `bleeding-edge/`, `config/`, `team/`.
- `mlops/`: MLOps workflows, deployment, and model lifecycle assets.
- `training/`: Training utilities and scripts.
- `experiments/`: Prototypes and one-off investigations.
- `models/`: Model artifacts/weights (large files; avoid committing new binaries).
- `07_documentation/`: Internal docs and process notes.
## Build, Test, and Development Commands
- Environment: Use Python 3.10+ in a virtualenv; copy `.env` from the example or set required keys.
- Install deps (per module): `pip install -r etl/corpus-pipeline/requirements-scrub.txt`
- Run pipelines: `python etl/master_pipeline.py` or `python etl/corpus-pipeline/etl_pipeline.py`
- Run tests (pytest): `python -m pytest -q etl/`
  - Example: `python -m pytest -q etl/corpus-pipeline/test_full_integration.py`
## Coding Style & Naming Conventions
- Python (PEP 8): 4-space indent, `snake_case` for files/functions, `PascalCase` for classes, `UPPER_SNAKE_CASE` for constants.
- Type hints for new/modified functions; include docstrings for public modules.
- Logging over print; prefer structured logs where feasible.
- Small modules; place shared helpers in `etl/.../utils` when appropriate.
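A minimal sketch of these conventions in one place (all names here are hypothetical, not taken from the codebase):

```python
"""Example module illustrating the style conventions above."""

import logging

logger = logging.getLogger(__name__)

MAX_BATCH_SIZE = 500  # UPPER_SNAKE_CASE for constants


class RecordCleaner:  # PascalCase for classes
    """Normalize raw text fields before transformation."""

    def clean_text(self, raw_text: str) -> str:  # snake_case, type hints
        """Strip and collapse whitespace in a raw field."""
        cleaned = " ".join(raw_text.split())
        logger.debug("cleaned field down to %d chars", len(cleaned))  # logging over print
        return cleaned
```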
## Testing Guidelines
- Framework: `pytest` with `test_*.py` naming.
- Scope: Unit tests for transforms and IO boundaries; integration tests for end-to-end paths (e.g., `etl/corpus-pipeline/test_full_integration.py`).
- Data: Use minimal fixtures; do not read/write under `etl/corpus-data/` in unit tests.
- Target: Aim for meaningful coverage on critical paths; add regression tests for fixes.
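A sketch of a unit test in this shape, using an inline fixture rather than corpus data (`clean_rows` is a hypothetical transform, not an actual module in the repo):

```python
# test_clean_rows.py -- unit-test sketch with a minimal inline fixture.
from typing import Iterable


def clean_rows(rows: Iterable[dict]) -> list[dict]:
    """Stand-in transform: strip string fields and drop rows left empty."""
    cleaned = []
    for row in rows:
        stripped = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        if any(stripped.values()):  # keep only rows with some content
            cleaned.append(stripped)
    return cleaned


def test_clean_rows_strips_and_drops_empty():
    rows = [{"text": "  hello "}, {"text": ""}]  # minimal fixture, no disk IO
    assert clean_rows(rows) == [{"text": "hello"}]
```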
## Commit & Pull Request Guidelines
- Commits: Use Conventional Commits, e.g., `feat(etl): add scrub step`, `fix(corpus-pipeline): handle empty rows`.
- PRs: Include purpose, scope, and risks; link issues; add before/after notes (logs, sample outputs). Checklist: tests passing, docs updated, no secrets.
## Security & Configuration Tips
- Secrets: Load via `.env` and/or `etl/config/etl_config.yaml`; never commit credentials.
- Data: Large artifacts live under `etl/corpus-data/` and `models/`; avoid adding bulky files to PRs.
- Validation: Sanitize external inputs; respect robots.txt and rate limits in crawlers.
## Architecture Overview
```mermaid
graph TD
  A[Ingestion\n(etl/ingestion + crawlers)] --> B[Transformation\n(clean, enrich, dedupe)]
  B --> C[Storage & Delivery\n(JSONL/Parquet, cloud loaders)]
  C --> D[MLOps/Training\n(mlops/, training/)]
  D -- feedback --> B
```
Key flows: `etl/corpus-pipeline/*` orchestrates raw → processed; delivery targets live under `etl/xet-upload/` and cloud sinks.
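The raw → processed flow can be sketched as three stages over JSONL records (function names and the dedupe-by-`id` rule are illustrative assumptions, not the pipeline's actual API):

```python
import json
from typing import Iterable


def ingest(raw_lines: Iterable[str]) -> list[dict]:
    """Parse raw JSONL lines into records (stand-in for crawlers/loaders)."""
    return [json.loads(line) for line in raw_lines if line.strip()]


def transform(records: list[dict]) -> list[dict]:
    """Clean/enrich/dedupe stage -- here, just dedupe by 'id'."""
    seen: set = set()
    out = []
    for rec in records:
        if rec.get("id") not in seen:
            seen.add(rec.get("id"))
            out.append(rec)
    return out


def deliver(records: list[dict]) -> str:
    """Serialize processed records back to JSONL for storage/delivery."""
    return "\n".join(json.dumps(r) for r in records)
```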
## CI & Badges
Add a simple test workflow at `.github/workflows/tests.yml` that installs deps and runs pytest against `etl/`.
Badge (once the workflow exists):

Minimal job example:

```yaml
name: Tests
on: [push, pull_request]
jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: '3.10' }
      - run: pip install -r etl/corpus-pipeline/requirements-scrub.txt pytest
      - run: python -m pytest -q etl/
```