siyuanliuseed committed
Commit 8e3a928 · 1 Parent(s): 731e53a

fix minor issues in README and evaluation script

Files changed (2):
  1. README.md (+19 -4)
  2. evaluate_scf_gpu.py (+8 -8)
README.md CHANGED
````diff
@@ -70,7 +70,7 @@ The repo currently contains the following contents:
 
 # Dataset Usage
 
-The sample dataset contains the `main` dataset (the dataset for training, validation and in-distribution testing) and the `ood-test` dataset.
+The `dataset` folder of the repo contains the `main` dataset (the dataset for training, validation and in-distribution testing) and the `ood-test` dataset.
 
 Each dataset contains several `parts`, each of which corresponds to a specific piece of information. The parts are:
 
@@ -81,7 +81,7 @@ Each dataset contains several `parts`, each of which corresponds to a specific p
 * `auxdensity.denfit.etb2.0`: the density coefficients on the ETB basis of def2-svp with $\beta=2.0$.
 * `auxdensity.denfit.etb1.5`: the density coefficients on the ETB basis of def2-svp with $\beta=1.5$.
 
-Example:
+Example usage:
 
 ```python
 from dataset import SCFBenchDataset
@@ -102,7 +102,22 @@ dataset.dataset[0].keys()
 
 # Evaluation
 
-To evaluate
+The full evaluation code is provided in `evaluate_scf_gpu.py`. To evaluate the provided NequIP model checkpoint `nequip_L_jfit.ckpt`, run:
+
+```bash
+# evaluate on the test split of the main dataset (ID setting); note that you can also specify --num-shards and --shard-index to only run the evaluation on a subset
+python evaluate_scf_gpu.py --ckpt nequip_L_jfit.ckpt --data-root dataset/main --split test --output id_test.csv
+
+# evaluate on the ood-test dataset
+python evaluate_scf_gpu.py --ckpt nequip_L_jfit.ckpt --data-root dataset/ood-test --split no --output ood_test.csv
+
+# evaluate on the ood-test dataset, with an XC/basis transfer setting
+python evaluate_scf_gpu.py --ckpt nequip_L_jfit.ckpt --data-root dataset/ood-test --split no --xc blyp --transfer-basis def2-tzvp --output ood_test_transfer.csv
+```
+
+# Known Issues
+
+* The species-wise linear readout layer may cause unstable model forward time on GPU in some software environments. We recommend implementing the prediction using a uniform padded basis for all species, just like how Hamiltonian prediction models (e.g., QHNet) predict the Hamiltonian matrix. We have tested this method internally but would like to keep the code in this repo consistent with our paper for reproducibility.
 
 
 # Citing SCFbench
@@ -130,4 +145,4 @@ Our modified version, the SCFbench dataset, is also licensed under [CC BY-SA 3.0
 
 ## About [ByteDance Seed Team](https://seed.bytedance.com/)
 
-Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
+Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
````
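The "Known Issues" note added above suggests replacing the species-wise linear readout with a single readout over a uniform padded basis. A minimal sketch of that idea is below; it is not the repo's code, and all names, dimensions, and basis sizes here are illustrative assumptions — the point is that every atom goes through one shared matmul and a per-species mask zeroes the padded coefficients:

```python
import torch

# Hypothetical per-species auxiliary-basis sizes (assumed values): H, C, O.
basis_size = {1: 5, 6: 14, 8: 14}
max_basis = max(basis_size.values())
species = sorted(basis_size)
z_to_idx = {z: i for i, z in enumerate(species)}

feat_dim = 16
# One uniform head for all species, instead of one nn.Linear per species.
readout = torch.nn.Linear(feat_dim, max_basis)

# Precompute a (num_species, max_basis) validity mask: 1 for real basis
# functions of that species, 0 for padding.
mask = torch.zeros(len(species), max_basis)
for z, n in basis_size.items():
    mask[z_to_idx[z], :n] = 1.0

def predict(features: torch.Tensor, atom_z: list) -> torch.Tensor:
    """features: (num_atoms, feat_dim); returns (num_atoms, max_basis) padded coefficients."""
    idx = torch.tensor([z_to_idx[z] for z in atom_z])
    # Single batched matmul for all atoms, then mask out the padded entries.
    return readout(features) * mask[idx]

coeffs = predict(torch.randn(3, feat_dim), [8, 1, 1])  # e.g. a water molecule
```

Because the forward pass no longer dispatches a different linear layer per atom type, the GPU kernel sequence is the same for every batch, which is the stability property the note is after.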
evaluate_scf_gpu.py CHANGED
````diff
@@ -341,17 +341,17 @@ def main():
     parser.add_argument('--normalize-nelec', action='store_true', help='Normalize the model prediction by number of electrons.')
     args = parser.parse_args()
 
-    if not args.gt_mode:
-        assert args.ckpt is not None, 'Checkpoint path is required in non-gt mode.'
-
+    if args.ckpt is not None:
+        # if a ckpt is provided, use the data config in the ckpt
         ckpt = torch.load(args.ckpt, map_location='cpu', weights_only=True)
-
         config = ckpt['config']
-        model = nequip_simple_builder(**config['model'])
-        model.load_state_dict(ckpt['state_dict'])
-        model = model.eval().cuda()
         data_config = config['data']
+        if not args.gt_mode:
+            model = nequip_simple_builder(**config['model'])
+            model.load_state_dict(ckpt['state_dict'])
+            model = model.eval().cuda()
     else:
+        assert args.gt_mode, 'Checkpoint path is required in non-gt mode.'
         gt_mode_part = 'auxdensity.denfit' if args.model_type == 'auxdensity' else args.model_type
         data_config = {
             'r_max': 5.0,
@@ -371,7 +371,7 @@ def main():
     else:
         dataset_indices = list(range(len(dataset)))
 
-    dataset_subset_indices = [i for i in dataset_indices if i % args.num_shards == args.shard_index]
+    dataset_subset_indices = [j for i, j in enumerate(dataset_indices) if i % args.num_shards == args.shard_index]
     dataloader = DataLoader(
         Subset(dataset, dataset_subset_indices),
         batch_size=1,
````
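The sharding change in the second hunk is worth spelling out: the old comprehension filtered on the index *values* (`i % num_shards`), so when `dataset_indices` is a pre-filtered, non-contiguous list, shards could be badly unbalanced or even empty; the new one filters on *positions* in the list. A toy illustration with assumed values (not the repo's data):

```python
# Suppose a filter kept only every other sample, so the surviving dataset
# indices are all even.
dataset_indices = [0, 2, 4, 6, 8]
num_shards, shard_index = 2, 1

# Old behaviour: shard by index value. Every value here is even, so shard 1
# gets no work at all while shard 0 gets everything.
old = [i for i in dataset_indices if i % num_shards == shard_index]

# New behaviour: shard by position in the list. Work splits evenly no matter
# what the index values are.
new = [j for i, j in enumerate(dataset_indices) if i % num_shards == shard_index]

print(old)  # []
print(new)  # [2, 6]
```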