Hani Park committed on
Commit a587d61 · 1 Parent(s): 11184f3

Initial upload
.gitattributes CHANGED
@@ -58,3 +58,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ *.csv filter=lfs diff=lfs merge=lfs -text
+ *.tsv filter=lfs diff=lfs merge=lfs -text
+ *.pdb filter=lfs diff=lfs merge=lfs -text
+ *.cif filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,104 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ license: cc-by-4.0
+ language: en
+ tags:
+ - Biology
+ - Antibody
+ - RosettaCommons
+ pretty_name: Antibody dataset
+ repo:
+ dataset_summary: >-
+
+ citation_bibtex: |-
+   @article{Huang2025,
+     title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
+     volume = {46},
+     ISSN = {1745-7254},
+     url = {http://dx.doi.org/10.1038/s41401-025-01608-5},
+     DOI = {10.1038/s41401-025-01608-5},
+     number = {12},
+     journal = {Acta Pharmacologica Sinica},
+     publisher = {Springer Science and Business Media LLC},
+     author = {Huang, Xiaoqiang and Zhou, Jun and Chen, Shuang and Xia, Xiaofeng and Chen, Y. Eugene and Xu, Jie},
+     year = {2025},
+     month = jun,
+     pages = {3365–3375}
+   }
+ ---
+
+ # SAAINTDB
+
+ ## Dataset Splits
+ The dataset was split at the PDB level into train, validation, and test sets (70/15/15).
+ To maintain balanced distributions, the split was stratified on the HL label (heavy/light chain availability).
+ To prevent data leakage, all entries originating from the same PDB ID were assigned to the same split.
+
+ The resulting splits are provided as CSV files in the `data/` directory.
+ The corresponding PDB structures for each split are provided in the `PDB/` directory.
+
+ - `train.csv`: 15033 rows | 7649 unique PDB IDs
+ - `validation.csv`: 3179 rows | 1639 unique PDB IDs
+ - `test.csv`: 3188 rows | 1639 unique PDB IDs
+
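The grouped, stratified split described above can be sketched as follows. This is a minimal standard-library illustration, not the authors' actual script; the column names `PDB_ID` and `hl_label` are taken from this README, and the per-PDB label choice is an assumption:

```python
import random
from collections import defaultdict

def grouped_stratified_split(rows, seed=0, fractions=(0.70, 0.15, 0.15)):
    """Assign every PDB ID (group) to train/validation/test, stratifying
    on the group's hl_label so the label mix stays balanced per split.
    All rows sharing a PDB_ID land in the same split (no leakage)."""
    # One label per PDB ID (assumption: take the first entry's label).
    pdb_label = {}
    for row in rows:
        pdb_label.setdefault(row["PDB_ID"], row["hl_label"])

    # Bucket the PDB IDs by label.
    groups = defaultdict(list)
    for pdb_id, label in pdb_label.items():
        groups[label].append(pdb_id)

    rng = random.Random(seed)
    assignment = {}  # PDB_ID -> split name
    for label, ids in groups.items():
        ids = sorted(ids)
        rng.shuffle(ids)
        n_train = round(len(ids) * fractions[0])
        n_val = round(len(ids) * fractions[1])
        for pdb_id in ids[:n_train]:
            assignment[pdb_id] = "train"
        for pdb_id in ids[n_train:n_train + n_val]:
            assignment[pdb_id] = "validation"
        for pdb_id in ids[n_train + n_val:]:
            assignment[pdb_id] = "test"
    return assignment
```

Because assignment happens at the PDB level, every entry derived from the same structure inherits the same split by construction.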
+ ## Dataset Processing
+ The following preprocessing steps were performed to construct the dataset:
+ 1. Added a `PDB_ID_chain` column to serve as a unique identifier for each antibody entry (PDB ID + chain).
+
+ 2. Added an `hl_label` column indicating chain availability:
+    - `HL`: both heavy and light chains present
+    - `H_only`: only the heavy chain present
+    - `L_only`: only the light chain present
+
+    This label was later used for the stratified dataset split.
+
+ 3. Some PDB entries referenced in the dataset were missing structure files.
+    We identified the missing entries and downloaded 111 mmCIF files from the RCSB Protein Data Bank (PDB),
+    updating the dataset to reflect the structures available as of February 2026.
+
+ 4. FASTA files corresponding to the downloaded CIF structures were missing and were generated and added.
+
+ 5. The dataset was split into **train, validation, and test sets (70/15/15)**.
+
+ 6. A `split` column was added to the dataset to indicate the assigned subset (`train`, `validation`, or `test`).
+
+ 7. The processed dataset is organized as follows:
+    - `data/`: three CSV files (`train.csv`, `validation.csv`, `test.csv`)
+    - `PDB/`: PDB structure files organized into `train/`, `validation/`, and `test/` directories
+
+ 8. For reproducibility, the file `saaintdb_raw_data_20260226.csv` corresponds to the original dataset file `saaintdb_20260226_all.tsv`,
+    which was downloaded from the official SAAINTDB GitHub repository and converted to CSV for easier processing.
+
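Steps 1–2 can be sketched roughly as below. This is an illustration only: the helper names and the input field names (`chain`, `Hchain`, `Lchain`) are assumptions, not the columns of the actual SAAINTDB script:

```python
def hl_label(heavy_chain, light_chain):
    """Classify chain availability for one antibody entry.
    Empty or None chain IDs mean the chain is absent."""
    if heavy_chain and light_chain:
        return "HL"
    if heavy_chain:
        return "H_only"
    if light_chain:
        return "L_only"
    raise ValueError("entry has neither a heavy nor a light chain")

def add_identifier_columns(row):
    """Add PDB_ID_chain (unique identifier) and hl_label to one record."""
    row["PDB_ID_chain"] = f"{row['PDB_ID']}_{row['chain']}"
    row["hl_label"] = hl_label(row.get("Hchain"), row.get("Lchain"))
    return row
```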
+ ## Quickstart Usage
+ ### Install the Hugging Face Datasets package
+ Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.
+ First, install the `datasets` library from the command line:
+
+     $ pip install datasets
+
+ Then, from within Python, import the library:
+
+     >>> import datasets
+
+ ### Load dataset
+ Load the `RosettaCommons/SAAINTDB` dataset.
+
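For example, the splits can be loaded with `datasets.load_dataset`; the split names below are assumed to follow the `data/` CSV files described in this README:

    >>> from datasets import load_dataset
    >>> dataset = load_dataset("RosettaCommons/SAAINTDB")
    >>> train = dataset["train"]
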
+ ## Citation
+
+     @article{Huang2025,
+       title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
+       volume = {46},
+       ISSN = {1745-7254},
+       url = {http://dx.doi.org/10.1038/s41401-025-01608-5},
+       DOI = {10.1038/s41401-025-01608-5},
+       number = {12},
+       journal = {Acta Pharmacologica Sinica},
+       publisher = {Springer Science and Business Media LLC},
+       author = {Huang, Xiaoqiang and Zhou, Jun and Chen, Shuang and Xia, Xiaofeng and Chen, Y. Eugene and Xu, Jie},
+       year = {2025},
+       month = jun,
+       pages = {3365–3375}
+     }
data/test.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:420ba6e49369c4a89081c3d366c76892051e97f7dd9ffc180d24109e43f6c58f
+ size 3817000
data/train.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:806d9f14aec09af6b34190da4438a4649448f5836895fa8b652e0e0d573640ef
+ size 17958978
data/validation.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2fe5ba9ae100ca239a704c30860622e91fa2005d1cf9e3820d4ea9657e244263
+ size 3819755
saaintdb_raw_data_20260226.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bebe4542b26b288f0b85b3d32aad785fe0d638f61ad564883ced3b37ed7f2a1
+ size 23256234
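The CSV files above are stored via Git LFS, so the diffs show only LFS pointer files (spec version, SHA-256 object ID, and size in bytes) rather than the CSV contents. Such a pointer can be inspected with a few lines of Python (a minimal sketch, not part of this repository):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:420ba6e49369c4a89081c3d366c76892051e97f7dd9ffc180d24109e43f6c58f
size 3817000
"""
info = parse_lfs_pointer(pointer)
```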