# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Repository Overview

Multi-project AI research repository for single-cell biology and video understanding. All project code lives under `transfer/code/`, data under `transfer/data/`, and the shared Python 3.13 venv in `stack_env/`.

### Primary Projects

1. **Stack** (`transfer/code/stack/`) - Large-scale encoder-decoder foundation model for single-cell biology (in-context learning on 150M cells). Package: `arc-stack`.
2. **cell-eval** (`transfer/code/cell-eval/`) - Evaluation metrics suite for single-cell perturbation prediction models. Package: `cell-eval`.
3. **FOCUS** (`transfer/code/FOCUS/`) - Training-free keyframe selection for long video understanding using multi-armed bandits (ICLR 2026).

### Secondary Projects

4. **scGPT** (`transfer/code/scGPT/`) - Foundation model for single-cell multi-omics. Uses Poetry.
5. **scDFM** (`transfer/code/scDFM/`) - Distributional flow matching for single-cell perturbation prediction (ICLR 2026). Uses Conda.
6. **ori_scDFM** (`transfer/code/ori_scDFM/`) - Original upstream scDFM (cloned from AI4Science-WestlakeU/scDFM). Uses dedicated venv `ori_scDFM_env/` (Python 3.11).
7. **LatentForcing** (`transfer/code/LatentForcing/`) - Image generation with reordered diffusion trajectories (arXiv 2602.11401).
8. **CCFM** (`transfer/code/CCFM/`) - Cascaded Conditioned Flow Matching: hybrid of scDFM + LatentForcing + scGPT for guided perturbation prediction. In development.
9. **adaptive_prompt_selection** (`transfer/code/adaptive_prompt_selection/`) - Bandit-based prompt selection for Stack in-context learning.
10. **prompt_selection** (`transfer/code/prompt_selection/`) - Evaluation framework and baselines for prompt selection experiments.

## HPC Computing Rules (GENKAI Supercomputer)

- **NEVER run ML/DL/LLM model inference or training on the login node.** Always submit to compute nodes via `pjsub`.
- The login node (genkai0002) is only for editing code, lightweight file operations, `pip install`, job submission, and checking results.
- Lightweight evaluation scripts (e.g., cell-eval metrics, statistical analysis) are acceptable on the login node.

### Job Submission (PJM)

```bash
pjsub script.sh                                                 # Batch job
pjsub --interact -L rscgrp=b-inter -L gpu=1 -L elapse=1:00:00   # Interactive GPU (1 GPU, 1h)
pjstat                                                          # Check status
pjdel <jobid>                                                   # Cancel job
```

See `transfer/gpu_batch.sh` and `transfer/gpu_interactive.sh` for job script templates.
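
As a quick reference, below is a minimal PJM batch script sketch. It is not the actual template: the resource group name `b-batch` is an assumption, and the GPU/walltime values simply mirror numbers quoted elsewhere in this file. Copy from `transfer/gpu_batch.sh` for real runs.

```bash
#!/bin/bash
#PJM -L rscgrp=b-batch      # ASSUMPTION: use the resource group from transfer/gpu_batch.sh
#PJM -L gpu=1               # 1 GPU (matches the batch template)
#PJM -L elapse=3:00:00      # 3h walltime (matches the batch template)
#PJM -j                     # merge stderr into stdout

# Activate the shared venv and set the allocator hint Stack batch jobs use
# (see GPU Job Notes below).
source /home/hp250092/ku50001222/qian/aivc/lfj/stack_env/bin/activate
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256

cd /home/hp250092/ku50001222/qian/aivc/lfj/transfer/code/stack
stack-train --config configs/training/bc_large.yaml
```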

## Environment & Installation

Most projects share `stack_env/` (Python 3.13). Always activate it before working:

```bash
source /home/hp250092/ku50001222/qian/aivc/lfj/stack_env/bin/activate
```

**ori_scDFM** uses its own dedicated venv `ori_scDFM_env/` (Python 3.11):

```bash
source /home/hp250092/ku50001222/qian/aivc/lfj/ori_scDFM_env/bin/activate
```
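
To confirm which venv is active, the Python minor version is the quickest tell:

```bash
which python    # should resolve inside stack_env/ or ori_scDFM_env/
python -V       # 3.13.x for stack_env, 3.11.x for ori_scDFM_env
```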

### Installing Projects

```bash
cd transfer/code/stack && pip install -e .       # Entry points: stack-train, stack-finetune, stack-embedding, stack-generation
cd transfer/code/cell-eval && pip install -e .   # Entry point: cell-eval (subcommands: prep, run, baseline, score)
cd transfer/code/FOCUS && pip install -r requirements.txt
cd transfer/code/scGPT && poetry install
cd transfer/code/LatentForcing && pip install -r requirements.txt
# scDFM uses its own conda env: conda env create -f transfer/code/scDFM/environment.yml
# ori_scDFM uses ori_scDFM_env/ (already installed from environment.yml pip deps)
# CCFM bootstraps from scDFM: cd transfer/code/CCFM && python _bootstrap_scdfm.py
```

Note: `uv` is available for fast dependency management (used by cell-eval CI).
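
For example, to mirror the CI locally (a sketch: `uv sync` and `uv run` are standard uv commands, but the exact invocation lives in the cell-eval workflow file):

```bash
cd transfer/code/cell-eval
uv sync            # create/refresh the project environment from the lockfile
uv run pytest -v   # run the test suite inside that environment
```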

## Common Commands

### Stack

```bash
stack-train --config configs/training/bc_large.yaml
stack-finetune --config configs/finetuning/ft_parsecg.yaml
stack-embedding --checkpoint <ckpt> --adata <h5ad> --genelist <pkl> --output <out.h5ad>
stack-generation --checkpoint <ckpt> --base-adata <h5ad> --test-adata <h5ad> --genelist <pkl> --output-dir <dir>
```

### cell-eval

```bash
cell-eval run -ap pred.h5ad -ar real.h5ad --num-threads 64 --profile full
cell-eval score --user-input agg_results.csv --base-input base_agg_results.csv
```

### FOCUS

```bash
cd transfer/code/FOCUS
python select_keyframe.py --dataset_name longvideobench --dataset_path <path> --output_dir <dir> --num_keyframes 64
```
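
Like all model inference, this must run on a compute node. A typical interactive session, reusing the `pjsub` invocation from the HPC rules above, might look like:

```bash
# Request an interactive GPU session first.
pjsub --interact -L rscgrp=b-inter -L gpu=1 -L elapse=1:00:00

# Then, on the compute node:
source /home/hp250092/ku50001222/qian/aivc/lfj/stack_env/bin/activate
cd transfer/code/FOCUS
python select_keyframe.py --dataset_name longvideobench --dataset_path <path> --output_dir <dir> --num_keyframes 64
```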

### scDFM

```bash
# Uses its own conda env
cd transfer/code/scDFM
python src/script/run.py --data norman --batch_size 48 --model_type origin --d_model 128
```

### LatentForcing

```bash
cd transfer/code/LatentForcing
torchrun --nproc_per_node=8 main_jit.py   # Multi-GPU training
```

### Testing & Linting

```bash
# Stack (from transfer/code/stack/)
pytest tests/
pytest tests/test_model_core.py -k "test_name"   # Single test

# cell-eval (from transfer/code/cell-eval/)
pytest tests/
ruff check .            # Linting (rules: E, F, ERA; max-line-length=120)
ruff format --check .   # Format check (used in CI)
```
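
If `ruff format --check` or `ruff check` fails in CI, the standard ruff auto-fix commands (generic ruff flags, not project-specific config) usually resolve it locally:

```bash
ruff format .        # rewrite files to the expected format
ruff check --fix .   # apply auto-fixable lint fixes
```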

## Architecture

### Stack (`transfer/code/stack/src/stack/`)

Core model uses **Tabular Attention** - alternating cell-wise and gene-wise attention on cell-by-gene matrix chunks:

- `models/core/base.py` - `StateICLModelBase`: gene reduction -> positional embedding -> N x `TabularAttentionLayer` -> output MLP. Losses: reconstruction + Sliced Wasserstein distance.
- `models/core/` - `StateICLModel` (alias: `scShiftAttentionModel`) wraps the base with masking and loss computation.
- `models/finetune/` - `ICL_FinetunedModel` with frozen-teacher distillation via `LightningFinetunedModel`.
- `modules/attention.py` - `MultiHeadAttention`, `TabularAttentionLayer` (cell-attn + gene-attn per layer).
- `modules/regularizers.py` - `SlicedWassersteinDistance`.
- `training/` - `LightningGeneModel` (PyTorch Lightning wrapper), `DataModule`, scheduler utils.
- `finetune/` - `LightningFinetunedModel` (student-teacher EMA), finetuning `DataModule`.
- `data/` - Dataset configs, HVG computation, H5 data management, sparse matrix loading.
- `cli/` - Entry points: `launch_training`, `launch_finetuning`, `embedding`, `generation`.
- Config: YAML files in `configs/training/` and `configs/finetuning/`. CLI args override config values.
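
A CLI override might look like the following; the `--max-epochs` flag name is hypothetical, used purely for illustration, so check `stack-train --help` for the actual option names:

```bash
# Hypothetical override flag, for illustration only (see `stack-train --help`).
stack-train --config configs/training/bc_large.yaml --max-epochs 10
```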

### cell-eval (`transfer/code/cell-eval/src/cell_eval/`)

Uses the **registry pattern** for metrics:

- `_evaluator.py` - `MetricsEvaluator`: takes predicted/real AnnData, runs DE (via `pdex`), computes metrics.
- `metrics/_registry.py` - `MetricRegistry` with `register()` / `compute()`. Metrics are AnnData-based or DE-based.
- `metrics/_impl.py`, `_de.py`, `_anndata.py` - Metric implementations.
- `_pipeline/` - `MetricPipeline` with named profiles (e.g., `full`).
- `_score.py` - Baseline-normalized scoring.
- `_cli/` - Subcommands: `prep`, `run`, `baseline`, `score`.
- `_types/` - Typed containers: `PerturbationAnndataPair`, `DEComparison`.

### FOCUS (`transfer/code/FOCUS/`)

Two-file architecture:
- `focus.py` - `FOCUS` class: pure CPE bandit algorithm (no I/O). Coarse exploration -> fine exploitation using Bernstein confidence bounds.
- `select_keyframe.py` - Data pipeline: video loading (decord), BLIP similarity scoring (LAVIS), result output.

### scDFM (`transfer/code/scDFM/src/`)

Flow matching for perturbation prediction: `flow_matching/` (algorithm), `models/` (networks), `tokenizer/` (cell/gene tokens), `loss/` (custom losses), `data_process/` (loading).

### LatentForcing (`transfer/code/LatentForcing/`)

Diffusion with reordered trajectories. Model variants: `model_jit.py`, `model_cot.py`, `model_repa.py`. Training engine: `engine_jit.py`. Entry: `main_jit.py`.

### CCFM (`transfer/code/CCFM/`)

Cascaded flow matching combining scDFM + LatentForcing denoiser + scGPT embeddings. Entry: `scripts/run_cascaded.py`. Config: `config/config_cascaded.py`. Imports scDFM modules via `_scdfm_imports.py` bridge.
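
Putting the bootstrap and entry point together (command names are from this file; any runtime flags are unknown, so check `scripts/run_cascaded.py` itself):

```bash
cd transfer/code/CCFM
python _bootstrap_scdfm.py       # pull in the scDFM modules (see Installing Projects)
python scripts/run_cascaded.py   # entry point; configuration lives in config/config_cascaded.py
```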

### adaptive_prompt_selection (`transfer/code/adaptive_prompt_selection/`)

Bandit-based selection of in-context examples for Stack. `adaptive_prompt.py` (orchestrator), `cell_bandit.py` (bandit algorithm). Run via `run_experiment.py`.
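
A minimal invocation sketch (no flags are documented here, so check the script's argparse/header for options):

```bash
cd transfer/code/adaptive_prompt_selection
python run_experiment.py   # options undocumented here; check the script for arguments
```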

## Dataset Details

Main dataset (`transfer/data/stack_train/20260203_Parse_10M_PBMC_cytokines.h5ad`): 9.7M cells x 40K genes, 12 donors, 91 cytokines, 18 cell types. Key obs columns: `donor`, `cytokine`, `treatment`, `cell_type`, `sample`. Stack dataset config format: `"human:parse_bio:sample:cell_type:false"`.

All single-cell projects use **AnnData** (`.h5ad`) as the standard data format.
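
A lightweight sanity check of the dataset (a backed read, so it is acceptable on the login node; assumes `anndata` is installed in `stack_env`):

```bash
python -c "
import anndata as ad
a = ad.read_h5ad('transfer/data/stack_train/20260203_Parse_10M_PBMC_cytokines.h5ad', backed='r')
print(a.shape)  # expect roughly 9.7M cells x 40K genes
print(a.obs[['donor', 'cytokine', 'treatment', 'cell_type', 'sample']].head())
"
```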

## CI/CD

- **cell-eval**: GitHub Actions CI (`uv sync`, `ruff format --check`, `pytest -v`, CLI smoke test) on push/PR. Python 3.12.
- **Stack**: GitHub Actions publishes to PyPI on release (trusted publishing via setuptools build).
- **scGPT**: Claude Code integration workflow for PR review.

## GPU Job Notes

- Stack batch jobs set `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:256` to reduce CUDA memory fragmentation and avoid OOMs.
- Job templates: `transfer/gpu_batch.sh` (batch, 1 GPU, 3h), `transfer/gpu_interactive.sh` (interactive, configurable hours/GPUs).
- The login node and compute nodes share the same filesystem path: `/home/hp250092/ku50001222/qian/aivc/lfj/`.