2_training — ML model training for diamond
Two models are trained: a Hamiltonian model (DeepH-E3) and a force-field model (NequIP).
All input data comes from ../1_data_prepare/data/.
Hamiltonian model (hamiltonian/)
Data
Training data is the AO-basis Hamiltonian H(R) computed by HPRO for each displaced supercell:
1_data_prepare/data/disp-{01..50}/reconstruction/aohamiltonian/hamiltonians.h5
train_ham.py builds symlinks from those 50 directories into hamiltonian/dataset/00/ … 49/
before the graph is constructed. This is necessary to avoid accidentally picking up the
bands/*/reconstruction/aohamiltonian/ structures that also live under 1_data_prepare/data/.
The DeepH-E3 graph is then built from hamiltonian/dataset/ and saved to hamiltonian/graph/.
On subsequent runs the cached graph is reused automatically.
Training
cd hamiltonian/
conda activate deeph
python train_ham.py # uses 1_data_prepare/params.json
train_ham.py will:
- Build `dataset/` symlinks (skipped if the graph is already cached).
- Write `train.ini` and `_launcher.py` from `params.json`.
- Launch training and monitor val-loss against reference milestones every 300 epochs.
- Terminate early if val-loss is more than 5× above the reference.
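The milestone check described above can be sketched like this; the 300-epoch interval and the 5× factor come from the text, while the function name and the reference-curve layout are assumptions.

```python
def should_stop(epoch: int, val_loss: float, reference: dict[int, float],
                factor: float = 5.0, interval: int = 300) -> bool:
    """Return True if training should terminate early.

    Every `interval` epochs, compare the current val-loss against the
    reference milestone for that epoch; stop once it exceeds factor * ref.
    """
    if epoch % interval != 0 or epoch not in reference:
        return False
    return val_loss > factor * reference[epoch]
```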
Key hyperparameters (from params.json → hamiltonian):
| param | value |
|---|---|
| train / val / test | 30 / 10 / 10 |
| num_epoch | 3000 |
| cutoff_radius | 7.4 Å |
| lmax | 4 |
| irreps_mid | 64x0e+32x1o+16x2e+8x3o+8x4e |
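In `params.json` the table above would correspond to a block along these lines. This is a hypothetical sketch: the parameter names and values are taken from the table, but the surrounding structure (and the key for the train/val/test split, omitted here) is an assumption.

```json
{
  "hamiltonian": {
    "num_epoch": 3000,
    "cutoff_radius": 7.4,
    "lmax": 4,
    "irreps_mid": "64x0e+32x1o+16x2e+8x3o+8x4e"
  }
}
```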
Reference best: epoch 2017, val_loss = 1.98 × 10⁻⁶.
The trained model is saved under hamiltonian/results/<timestamp>_diamond_e3/best_model.pkl.
Inference and band comparison
cd hamiltonian/
conda activate deeph
python compare_bands.py # uses 1_data_prepare/params.json
compare_bands.py will:
- Build inference datasets for the pristine unit cell (UC) and 2×2×2 supercell (SC) from `1_data_prepare/data/bands/{uc,sc}/reconstruction/aohamiltonian/`.
- Write `infer_uc/eval.ini` and `infer_sc/eval.ini` pointing to the latest model in `results/`.
- Run DeepH-E3 inference; predicted Hamiltonians are written to `infer_{uc,sc}/output/00/hamiltonians_pred.h5`.
- Diagonalize the QE, HPRO, and ML Hamiltonians along the Γ→X→W→K→Γ→L k-path.
- Save `band_compare_uc_ml.png` and `band_compare_sc_ml.png`.
Predictions are cached: if hamiltonians_pred.h5 already exists in infer_{uc,sc}/output/00/,
inference is skipped and only the band plot is regenerated.
Force-field model (forces/)
Data
Forces are read from the QE SCF outputs for each displaced configuration:
1_data_prepare/data/disp-{01..50}/scf/pw.out
The combined dataset is written to forces/dataset.xyz (ASE extended XYZ, 50 frames × ~16 atoms).
Training
cd forces/
conda activate deeph
python train_force.py # uses 1_data_prepare/params.json
python train_force.py --skip-training # reuse existing checkpoint, replot phonons
train_force.py will:
- Build `forces/dataset.xyz` from QE outputs (re-runs SCF with `tprnfor=.true.` for any displacement that lacks force output).
- Run DFPT (`ph.x`) on the pristine UC at q=Γ to get the reference phonon frequencies.
- Train NequIP (`nequip-train forces/train.yaml`) in the `deeph` conda env.
- Compile the model to `forces/diamond_ase.nequip.pth`.
- Run phonopy with the NequIP calculator to get the ML Γ-point frequencies.
- Save `forces/phonon_comparison.png` and print the DFPT vs ML comparison.
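The final DFPT-vs-ML comparison could be as simple as the sketch below; the function name, units, and report format are assumptions, not the script's actual output.

```python
def compare_frequencies(dfpt, ml, unit="THz"):
    """Print per-mode DFPT vs ML Gamma-point frequencies; return max |diff|."""
    worst = 0.0
    for i, (a, b) in enumerate(zip(dfpt, ml)):
        worst = max(worst, abs(a - b))
        print(f"mode {i:2d}: DFPT {a:8.3f} {unit}  ML {b:8.3f} {unit}  "
              f"diff {b - a:+.3f}")
    print(f"max |diff| = {worst:.3f} {unit}")
    return worst
```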
The compiled model (diamond_ase.nequip.pth) is used downstream by the EPC pipeline.