# 2_training — ML model training for diamond

Two models are trained: a **Hamiltonian** model (DeepH-E3) and a **force-field** model (NequIP).
All input data comes from `../1_data_prepare/data/`.

---

## Hamiltonian model (`hamiltonian/`)

### Data

Training data is the AO-basis Hamiltonian H(R) computed by HPRO for each displaced supercell:

```
1_data_prepare/data/disp-{01..50}/reconstruction/aohamiltonian/hamiltonians.h5
```

`train_ham.py` builds symlinks from those 50 directories into `hamiltonian/dataset/00/` … `49/`
before the graph is constructed. This is necessary to avoid accidentally picking up the
`bands/*/reconstruction/aohamiltonian/` structures that also live under `1_data_prepare/data/`.
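
The symlink step can be sketched as follows. This is a minimal illustration, not the actual code in `train_ham.py`; the helper name `build_dataset_links` is hypothetical, while the directory layout is the one described above:

```python
from pathlib import Path

def build_dataset_links(src_root: str, dst_root: str, n_disp: int = 50) -> None:
    """Link disp-01..disp-50 AO-Hamiltonian dirs into dst_root/00..49."""
    dst = Path(dst_root)
    dst.mkdir(parents=True, exist_ok=True)
    for i in range(n_disp):
        src = Path(src_root) / f"disp-{i + 1:02d}" / "reconstruction" / "aohamiltonian"
        link = dst / f"{i:02d}"
        if not link.exists():
            link.symlink_to(src.resolve())
```

Linking only the `disp-*` trees (rather than pointing DeepH-E3 at `1_data_prepare/data/` directly) is what keeps the `bands/` structures out of the training set.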
The DeepH-E3 graph is then built from `hamiltonian/dataset/` and saved to `hamiltonian/graph/`.
On subsequent runs the cached graph is reused automatically.

### Training

```bash
cd hamiltonian/
conda activate deeph
python train_ham.py   # uses 1_data_prepare/params.json
```

`train_ham.py` will:
1. Build `dataset/` symlinks (skipped if the graph is already cached).
2. Write `train.ini` and `_launcher.py` from `params.json`.
3. Launch training and monitor val-loss against reference milestones every 300 epochs.
4. Terminate early if val-loss rises to more than 5× the reference.

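The milestone check in steps 3–4 reduces to a simple rule. A sketch under assumptions: the milestone table lives inside `train_ham.py` and its exact format is not shown here, so `should_terminate` and its `milestones` dict are illustrative:

```python
from typing import Dict

def should_terminate(epoch: int, val_loss: float,
                     milestones: Dict[int, float], factor: float = 5.0) -> bool:
    """True if val_loss at a milestone epoch exceeds factor x the reference.

    `milestones` maps checkpoint epochs (every 300) to reference val losses;
    non-milestone epochs are never grounds for termination.
    """
    ref = milestones.get(epoch)
    return ref is not None and val_loss > factor * ref
```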
Key hyperparameters (from `params.json → hamiltonian`):

| param | value |
|---|---|
| train / val / test | 30 / 10 / 10 |
| num_epoch | 3000 |
| cutoff_radius | 7.4 Å |
| lmax | 4 |
| irreps_mid | `64x0e+32x1o+16x2e+8x3o+8x4e` |

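For orientation, the corresponding `params.json` block might look like the fragment below. This is illustrative only: the key names `num_epoch`, `cutoff_radius`, `lmax`, and `irreps_mid` come from the table above, but the surrounding JSON structure and the names of the train/val/test split keys are assumptions:

```json
{
  "hamiltonian": {
    "train_size": 30,
    "val_size": 10,
    "test_size": 10,
    "num_epoch": 3000,
    "cutoff_radius": 7.4,
    "lmax": 4,
    "irreps_mid": "64x0e+32x1o+16x2e+8x3o+8x4e"
  }
}
```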
Reference best: epoch 2017, val_loss = 1.98 × 10⁻⁶.

The trained model is saved under `hamiltonian/results/<timestamp>_diamond_e3/best_model.pkl`.

### Inference and band comparison

```bash
cd hamiltonian/
conda activate deeph
python compare_bands.py   # uses 1_data_prepare/params.json
```

`compare_bands.py` will:
1. Build inference datasets for the pristine unit cell (UC) and the 2×2×2 supercell (SC)
   from `1_data_prepare/data/bands/{uc,sc}/reconstruction/aohamiltonian/`.
2. Write `infer_uc/eval.ini` and `infer_sc/eval.ini` pointing to the latest model in `results/`.
3. Run DeepH-E3 inference; predicted Hamiltonians are written to `infer_{uc,sc}/output/00/hamiltonians_pred.h5`.
4. Diagonalize the QE, HPRO, and ML Hamiltonians along the Γ→X→W→K→Γ→L k-path.
5. Save `band_compare_uc_ml.png` and `band_compare_sc_ml.png`.

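Step 4 amounts to Fourier-transforming each real-space Hamiltonian to H(k) and diagonalizing it at every point on the k-path. A minimal numpy sketch; the on-disk layout of the `.h5` files is not reproduced here, so `h_r` is an assumed in-memory representation:

```python
import numpy as np

def bloch_hamiltonian(h_r: dict, k: np.ndarray) -> np.ndarray:
    """Assemble H(k) = sum_R exp(2*pi*i k.R) H(R) from real-space blocks.

    `h_r` maps integer lattice vectors R (3-tuples) to (n x n) matrices;
    `k` is given in fractional (reduced) coordinates.
    """
    n = next(iter(h_r.values())).shape[0]
    hk = np.zeros((n, n), dtype=complex)
    for R, block in h_r.items():
        hk += np.exp(2j * np.pi * np.dot(k, R)) * block
    return hk

def band_energies(h_r: dict, k: np.ndarray) -> np.ndarray:
    """Eigenvalues at k, ascending; H(k) is Hermitian when H(-R) = H(R)^dagger."""
    return np.linalg.eigvalsh(bloch_hamiltonian(h_r, k))
```

Repeating `band_energies` over the sampled k-points of the Γ→X→W→K→Γ→L path for the QE, HPRO, and predicted Hamiltonians gives the three band structures that are plotted against each other.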
Predictions are cached: if `hamiltonians_pred.h5` already exists in `infer_{uc,sc}/output/00/`,
inference is skipped and only the band plots are regenerated.
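
The cache check is a plain existence test on that path; schematically (the helper name is hypothetical, the path is the one quoted above):

```python
from pathlib import Path
from typing import Optional

def cached_prediction(infer_dir: str) -> Optional[Path]:
    """Return the existing hamiltonians_pred.h5, or None if inference must run."""
    cached = Path(infer_dir) / "output" / "00" / "hamiltonians_pred.h5"
    return cached if cached.is_file() else None
```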
---

## Force-field model (`forces/`)

### Data

Forces are read from the QE SCF outputs for each displaced configuration:

```
1_data_prepare/data/disp-{01..50}/scf/pw.out
```

The combined dataset is written to `forces/dataset.xyz` (ASE extended XYZ, 50 frames × ~16 atoms).
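
Building that file amounts to scanning each `pw.out` for its force block. A sketch of such a parser, assuming the standard `pw.x` force-block format printed when `tprnfor=.true.` (the function name is illustrative):

```python
import re

# QE pw.x force lines look like:
#   atom    1 type  1   force =     0.00100000    0.00000000   -0.00200000
FORCE_LINE = re.compile(
    r"atom\s+\d+\s+type\s+\d+\s+force\s*=\s*"
    r"([-+.\dEe]+)\s+([-+.\dEe]+)\s+([-+.\dEe]+)")

def read_qe_forces(pw_out_text: str):
    """Return per-atom (fx, fy, fz) forces in Ry/bohr parsed from a pw.x output."""
    return [tuple(float(g) for g in m.groups())
            for m in FORCE_LINE.finditer(pw_out_text)]
```

ASE's extended-XYZ convention stores forces in eV/Å, so a Ry/bohr → eV/Å conversion (≈ 25.711) is applied before the frames are written out.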
### Training

```bash
cd forces/
conda activate deeph
python train_force.py                  # uses 1_data_prepare/params.json
python train_force.py --skip-training  # reuse existing checkpoint, replot phonons
```

`train_force.py` will:
1. Build `forces/dataset.xyz` from the QE outputs (re-running SCF with `tprnfor=.true.` for any
   displacement that lacks force output).
2. Run DFPT (`ph.x`) on the pristine UC at q=Γ to get the reference phonon frequencies.
3. Train NequIP (`nequip-train forces/train.yaml`) in the `deeph` conda env.
4. Compile the model to `forces/diamond_ase.nequip.pth`.
5. Run phonopy with the NequIP calculator to get the ML Γ-point frequencies.
6. Save `forces/phonon_comparison.png` and print the DFPT vs ML comparison.
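
The printed comparison in step 6 can be sketched as a mode-by-mode table of relative errors. The function below is illustrative (the real script's output format is not reproduced); the frequencies in the example are placeholders, not actual results:

```python
import numpy as np

def compare_frequencies(dfpt_thz, ml_thz, skip_below=1e-3):
    """Pair sorted Γ-point frequencies and compute relative errors in percent.

    Acoustic modes at Γ sit at ~0 THz, so modes below `skip_below` are
    excluded from the relative-error column to avoid division by zero.
    """
    ref = np.sort(np.asarray(dfpt_thz, dtype=float))
    pred = np.sort(np.asarray(ml_thz, dtype=float))
    rows = []
    for w_ref, w_ml in zip(ref, pred):
        if abs(w_ref) < skip_below:
            continue
        rows.append((w_ref, w_ml, 100.0 * abs(w_ml - w_ref) / abs(w_ref)))
    return rows
```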
The compiled model (`diamond_ase.nequip.pth`) is used downstream by the EPC pipeline.