# Local Setup Guide (Laptop)

This model is part of the DevaFlow project (a custom D3PM, not a native `transformers.AutoModel` format).

## 1) Environment

```bash
python3.11 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt
```

## 2) Quick Inference

```python
from inference_api import predict

print(predict("dharmo rakṣati rakṣitaḥ"))
```

## 3) Transformer-Style Use

```python
import torch

from config import CONFIG
from inference import load_model, _build_tokenizers

cfg = CONFIG
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, cfg = load_model("best_model.pt", cfg, device)
src_tok, tgt_tok = _build_tokenizers(cfg)

text = "yadā mano nivarteta viṣayebhyaḥ svabhāvataḥ"
input_ids = torch.tensor([src_tok.encode(text)], dtype=torch.long, device=device)

out = model.generate(
    input_ids,
    num_steps=cfg["inference"]["num_steps"],
    temperature=cfg["inference"]["temperature"],
    top_k=cfg["inference"]["top_k"],
    repetition_penalty=cfg["inference"]["repetition_penalty"],
    diversity_penalty=cfg["inference"]["diversity_penalty"],
)

# Drop reserved special-token ids (0-4) before decoding
ids = [x for x in out[0].tolist() if x > 4]
print(tgt_tok.decode(ids).strip())
```

## 4) Full Project Execution

For training, the UI, Tasks 1–5, the ablation workflow, and HF deployment, use the full project repository and run:

- `python train.py`
- `python inference.py`
- `python app.py`
- `python analysis/run_analysis.py --task <1|2|3|4|5|all>`

Task 4 note:

- run with `--phase generate_configs` first
- train the ablation checkpoints
- then run with `--phase analyze`
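As an aside on the decode step in section 3: the `x > 4` filter assumes ids 0–4 are reserved special tokens (pad/BOS/EOS and similar) that should not reach the detokenizer. A minimal, self-contained sketch of that pattern with a toy vocabulary (all names and the exact id assignments here are illustrative assumptions, not the project's actual tokenizer):

```python
# Toy stand-in for the target tokenizer. Ids 0-4 are treated as reserved
# specials, mirroring the `if x > 4` filter in the real inference script;
# the specific symbol assignments are an assumption for illustration.
VOCAB = {5: "yadā", 6: "mano", 7: "nivarteta"}

def decode(ids):
    # Keep only real-token ids, exactly like `[x for x in out[0].tolist() if x > 4]`,
    # then join the surface forms with spaces.
    kept = [i for i in ids if i > 4]
    return " ".join(VOCAB[i] for i in kept)

# A generated sequence typically carries BOS (1), EOS (2), and pad (0) ids;
# the filter strips them before detokenizing.
print(decode([1, 5, 6, 7, 2, 0, 0]))  # → yadā mano nivarteta
```

If the real tokenizer reserves a different number of special ids, the threshold in the filter must change to match.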