Commit 3445208 (verified) by ZacharyyyK · Parent: 94ee705

Update README.md

Files changed (1): README.md (+120, -3)
README.md CHANGED
@@ -1,13 +1,130 @@
 ---
 license: cc-by-4.0
 ---
-
 # PubChemQCR
 
-### The paper, code, and more details will be released soon!
 
 ### Description
-PubChemQCR dataset contains the DFT relaxation trajectory of ~3.5 million small molecules, which can facilitate the development of ML force field models.
 
 ### License
 This dataset is a processed version prepared by the TAMU DIVE Lab, based on the raw DFT trajectory data originally created by Maho Nakata from RIKEN.
---
license: cc-by-4.0
---

# PubChemQCR

<p align="center">
<img src="images/dataset_new_v2_png.png" width="450">
</p>

### Description
The PubChemQCR dataset contains DFT relaxation trajectories for ~3.5 million small molecules, which can facilitate the development of machine-learning force-field models. The dataset is split into two portions: a subset and the full set. Both share the same test set but have distinct training and validation sets. The data loader exposes a flag that must be set to `True` when the subset is desired.

### Data Loading
#### Flags
- `root`: path to the directory containing the LMDB files
- `stage`: which DFT stage to load
  - `"1st"`: load the DFT 1st data
  - `"1st_smash"`: load only the DFT 1st data calculated with SMASH
  - `"2nd"`: load the DFT 2nd data
  - `"mixing"`: load both the DFT 1st and DFT 2nd data
  - `"pm3"`: load the PM3 data
  - `"hf"`: load the HF data
- `total_traj`: if `True`, the entire trajectory of a molecule is loaded
- `SubsetOnly`: if `True`, only the subset is loaded

#### Dataset Loading
```python
from data import LMDBDataLoader, _STD_ENERGY, _STD_FORCE_SCALE

root = '/path/to/lmdb/dir'
batch_size = 128
num_workers = 16
stage = '1st'
total_traj = True
SubsetOnly = True

loader = LMDBDataLoader(root=root, batch_size=batch_size, num_workers=num_workers,
                        stage=stage, total_traj=total_traj, SubsetOnly=SubsetOnly)

train_set = loader.train_loader()
val_set = loader.val_loader()
test_set = loader.test_loader()
```
### Training Procedure
#### Important
- Training on the full dataset requires a few pieces of model functionality to work; see the example models for in-depth usage.
- If the cutoff is too small, some molecules end up with atoms that have no edges. These isolated nodes must be removed with `torch_geometric.utils.remove_isolated_nodes`. Example usage:
```python
from torch_geometric.utils import remove_isolated_nodes

edge_index, _, mask = remove_isolated_nodes(edge_index, num_nodes=data.num_nodes)
pos = data.pos[mask]
z = data.x[mask]
batch = data.batch[mask]
```
- The `num_nodes` argument is required: without it the function may infer a smaller number of atoms, which causes an error.
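The `num_nodes` pitfall can be seen without any graph library. This plain-Python sketch (an illustration, not the `torch_geometric` API, with a made-up toy graph) shows how inferring the atom count from `edge_index` undercounts trailing isolated atoms, producing a mask too short to index the per-atom tensors:

```python
# A molecule with 5 atoms where only atoms 0-2 appear in any edge;
# atoms 3 and 4 are isolated (e.g. beyond the radius cutoff).
edge_index = [(0, 1), (1, 0), (1, 2), (2, 1)]
num_nodes = 5  # true atom count, known from the data object

# Inferring the node count from the edges misses the isolated atoms.
inferred = max(max(src, dst) for src, dst in edge_index) + 1
print(inferred)  # 3

# A keep-mask built from the inferred count has length 3, so it cannot
# index per-atom arrays of length 5 (pos, z, batch) -- hence the error.
connected = {node for edge in edge_index for node in edge}
mask_inferred = [n in connected for n in range(inferred)]
mask_correct = [n in connected for n in range(num_nodes)]
print(len(mask_inferred), len(mask_correct))  # 3 5
```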
- The original batch size needs to be saved before node removal and passed to all scatter operations. In rare instances an entire molecule is removed; passing the original batch size into the scatter function ensures that molecule still receives the value 0. Without it you will get errors. Example usage:
```python
batch_size = data.batch.max().item() + 1
```
```python
out = scatter(h, batch, dim=0, dim_size=batch_size, reduce='sum').squeeze()
```
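The shape problem is easiest to see with a toy scatter-sum. The function below is a plain-Python stand-in for `scatter` (illustration only, not the real API): without `dim_size` the output length is inferred as `max(index) + 1`, so if the last molecule in a batch loses all of its atoms, the output has one row too few to compare against the per-molecule targets:

```python
def scatter_sum(src, index, dim_size=None):
    # Stand-in for scatter(..., reduce='sum'): without dim_size the
    # output length is inferred from the largest index present.
    n = dim_size if dim_size is not None else max(index) + 1
    out = [0.0] * n
    for value, i in zip(src, index):
        out[i] += value
    return out

# The batch originally held 3 molecules, but node removal deleted every
# atom of molecule 2, so it no longer appears in the batch vector.
h = [1.0, 2.0, 5.0]   # one feature per surviving atom
batch = [0, 0, 1]     # molecule index per surviving atom
batch_size = 3        # saved BEFORE node removal

print(scatter_sum(h, batch))                       # [3.0, 5.0] -- 2 rows, shape mismatch
print(scatter_sum(h, batch, dim_size=batch_size))  # [3.0, 5.0, 0.0] -- molecule 2 gets 0
```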
- Some models require gradient-norm clipping to prevent loss explosions on certain samples. Clipping the norm to 1.0 was sufficient in our experiments, but other clipping values were not explored thoroughly.
- The log for some epochs may show high losses in the force-training phase. These spikes occur when a single conformer produces a very high loss; gradient clipping prevents the model from overreacting to such outliers. This only affects the training force loss, and validation should remain normal.
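The clipping itself is what `torch.nn.utils.clip_grad_norm_` performs; here is a plain-Python sketch of the idea, with illustrative made-up gradient values:

```python
import math

def clip_grad_norm(grads, max_norm):
    # Rescale the gradient vector so its L2 norm is at most max_norm,
    # preserving its direction -- the same idea as clip_grad_norm_.
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        grads = [g * (max_norm / total_norm) for g in grads]
    return grads, total_norm

# An outlier conformer produces a huge gradient...
clipped, norm = clip_grad_norm([30.0, 40.0], max_norm=1.0)
print(norm)     # 50.0 before clipping
print(clipped)  # ~[0.6, 0.8]: direction preserved, norm rescaled to 1.0

# ...while a well-behaved gradient passes through untouched.
untouched, _ = clip_grad_norm([0.3, 0.4], max_norm=1.0)
print(untouched)  # [0.3, 0.4]
```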


### Main Procedure
```python
import torch
import torch.nn as nn
from torch.optim import Adam
from models.schnet import SchNet
from utils import train, evaluate, ForceRMSELoss
from data import LMDBDataLoader, _STD_ENERGY, _STD_FORCE_SCALE

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

root = '/path/to/lmdb/dir'
batch_size = 128
num_workers = 16
stage = '1st'
total_traj = True
SubsetOnly = True

loader = LMDBDataLoader(root=root, batch_size=batch_size, num_workers=num_workers,
                        stage=stage, total_traj=total_traj, SubsetOnly=SubsetOnly)

train_set = loader.train_loader()
val_set = loader.val_loader()
test_set = loader.test_loader()

hidden_channels = 128
num_gaussians = 128
num_filters = 128
num_interactions = 4
cutoff = 4.5

model = SchNet(num_gaussians=num_gaussians, num_filters=num_filters,
               hidden_channels=hidden_channels, num_interactions=num_interactions,
               cutoff=cutoff)
model = model.to(device)

max_epochs = 100

params = [param for _, param in model.named_parameters() if param.requires_grad]
lr = 5e-4
weight_decay = 0.0

optimizer = Adam([{'params': params}], lr=lr, weight_decay=weight_decay)
criterion_energy = nn.L1Loss()
criterion_force = ForceRMSELoss()

for epoch in range(max_epochs):
    train_energy_loss, train_force_loss = train(model, device, train_set, optimizer,
                                                criterion_energy, criterion_force)
    val_energy_loss, val_force_loss = evaluate(model, device, val_set,
                                               criterion_energy, criterion_force)
    print(f"#IN#Epoch {epoch + 1}, "
          f"Train Energy Loss: {train_energy_loss * _STD_ENERGY:.5f}, "
          f"Val Energy Loss: {val_energy_loss * _STD_ENERGY:.5f}, "
          f"Train Force Loss: {train_force_loss * _STD_FORCE_SCALE:.5f}, "
          f"Val Force Loss: {val_force_loss * _STD_FORCE_SCALE:.5f}")

test_energy_loss, test_force_loss = evaluate(model, device, test_set,
                                             criterion_energy, criterion_force)
print(f"Test Energy Loss: {test_energy_loss * _STD_ENERGY:.5f}, "
      f"Test Force Loss: {test_force_loss * _STD_FORCE_SCALE:.5f}")
```
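A note on the reported losses: they are multiplied by `_STD_ENERGY` and `_STD_FORCE_SCALE` before printing. Assuming these constants are the standard deviations used to standardize the regression targets (an assumption here; the actual values live in the `data` module), the multiplication converts a loss computed in standardized units back to physical units, as this plain-Python sketch shows:

```python
# std_energy stands in for _STD_ENERGY; 0.5 is an arbitrary made-up value,
# and the target/prediction numbers are purely illustrative.
std_energy = 0.5

targets = [-1.0, 0.2, 0.8]  # energies in physical units
preds = [-0.9, 0.4, 0.5]    # model predictions in physical units

# Standardize both by the same constant (the mean cancels in the error).
t_std = [t / std_energy for t in targets]
p_std = [p / std_energy for p in preds]

mae_std = sum(abs(p - t) for p, t in zip(p_std, t_std)) / len(targets)
mae_phys = sum(abs(p - t) for p, t in zip(preds, targets)) / len(targets)

# Multiplying the standardized MAE by the std recovers the physical MAE.
print(mae_std * std_energy, mae_phys)
```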

### License
This dataset is a processed version prepared by the TAMU DIVE Lab, based on the raw DFT trajectory data originally created by Maho Nakata from RIKEN.