---
license: cc-by-4.0
language: en
tags:
  - biology
  - antibody
  - protein-structure
  - mmcif
  - pdb
  - rosettacommons
pretty_name: Antibody dataset
repo: https://github.com/tommyhuangthu/SAAINT
dataset_summary: >-
  This dataset is a curated and processed version of the antibody dataset
  originally introduced in the SAAINT-DB paper. We converted the original
  dataset into a structured dataset compatible with the Hugging Face Datasets
  library. The dataset contains 21,400 antibody entries derived from 11,304
  PDB structures.
citation_bibtex: |-
  @article{Huang2025,
    title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
    volume = {46},
    ISSN = {1745-7254},
    url = {http://dx.doi.org/10.1038/s41401-025-01608-5},
    DOI = {10.1038/s41401-025-01608-5},
    number = {12},
    journal = {Acta Pharmacologica Sinica},
    publisher = {Springer Science and Business Media LLC},
    author = {Huang, Xiaoqiang and Zhou, Jun and Chen, Shuang and Xia, Xiaofeng and Chen, Y. Eugene and Xu, Jie},
    year = {2025},
    month = jun,
    pages = {3365–3375}
  }
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train.csv
      - split: validation
        path: data/validation.csv
      - split: test
        path: data/test.csv
---

# SAAINTDB

This dataset is a curated version of [SAAINT-DB](https://www.nature.com/articles/s41401-025-01608-5), converted into a format compatible with the Hugging Face Datasets library for machine learning applications. The dataset contains 21,400 antibody entries derived from 11,304 PDB structures, reflecting the structures available as of February 2026. Each entry corresponds to an antibody chain and is uniquely identified by the `PDB_ID_chain` field (PDB ID + chain ID).

## Dataset Splits

The dataset was split at the PDB level into train, validation, and test sets (70/15/15). To maintain balanced distributions, the split was stratified on the `HL` label (heavy/light chain availability).
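The PDB-level, label-stratified split described above can be sketched as follows. This is a minimal illustration in plain Python, not the authors' actual script from `src/`; the function name and record format are assumptions for the example.

```python
import random
from collections import defaultdict

def split_by_pdb(records, seed=0, fracs=(0.70, 0.15, 0.15)):
    """Assign a 'split' to every record (a dict with 'PDB_ID' and 'hl_label').

    Stratified by hl_label and grouped at the PDB level, so all chains from
    one PDB structure land in the same split (no leakage across splits).
    Sketch only -- not the authors' exact procedure.
    """
    by_label = defaultdict(list)   # hl_label -> list of unique PDB IDs
    label_of = {}
    for r in records:
        if r["PDB_ID"] not in label_of:
            label_of[r["PDB_ID"]] = r["hl_label"]
            by_label[r["hl_label"]].append(r["PDB_ID"])

    rng = random.Random(seed)
    assign = {}
    for label, pdbs in by_label.items():
        rng.shuffle(pdbs)
        n = len(pdbs)
        n_train = round(fracs[0] * n)
        n_val = round(fracs[1] * n)
        for p in pdbs[:n_train]:
            assign[p] = "train"
        for p in pdbs[n_train:n_train + n_val]:
            assign[p] = "validation"
        for p in pdbs[n_train + n_val:]:
            assign[p] = "test"

    # Every chain inherits the split of its parent PDB structure.
    return [{**r, "split": assign[r["PDB_ID"]]} for r in records]
```

Because assignment happens per PDB ID before fanning back out to chains, a structure containing multiple antibody chains can never straddle two splits.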
To prevent data leakage, all entries originating from the same PDB ID were assigned to the same split. Note that some PDB structures contain multiple antibodies, so the number of PDB files is smaller than the number of data entries. The resulting splits are provided as CSV files in the `data/` directory, and the corresponding PDB structures for each split are provided in the `PDB/` directory.

- `train.csv` rows: 15033 | unique PDB_IDs in train split: 7649
- `validation.csv` rows: 3179 | unique PDB_IDs in validation split: 1639
- `test.csv` rows: 3188 | unique PDB_IDs in test split: 1639

## Dataset Processing

Processing scripts are provided in the `src/` directory. The following preprocessing steps were performed to construct the dataset:

1. Added a `PDB_ID_chain` column to uniquely identify each antibody entry by concatenating the PDB ID with the corresponding antibody chain identifier. This ensures that multiple antibody chains originating from the same PDB structure can be distinguished and treated as separate entries.
2. Added an `hl_label` column indicating chain availability:
   - `HL`: both heavy and light chains present
   - `H_only`: only heavy chain present
   - `L_only`: only light chain present

   This label was later used for balanced dataset splitting.
3. Some PDB entries referenced in the dataset were missing structure files. We identified the missing entries and downloaded 111 mmCIF files from the RCSB Protein Data Bank (PDB), updating the dataset to reflect the structures available as of February 2026. This may have happened because the up-to-date SAAINT-DB dataset was generated in February 2026, while the PDB files were uploaded in January 2026.
4. FASTA files corresponding to the downloaded mmCIF structures were missing and were subsequently generated and added.
5. The dataset was split into train, validation, and test sets (70/15/15).
6. A `split` column was added to the dataset to indicate the assigned subset (`train`, `validation`, or `test`).
7.
The processed dataset is organized as follows:
   - `data/`: three CSV files (`train.csv`, `validation.csv`, `test.csv`)
   - `PDB/`: PDB structure files organized into `train/`, `validation/`, and `test/` directories
8. For reproducibility, the file `saaintdb_raw_data_20260226.csv` corresponds to the original dataset file `saaintdb_20260226_all.tsv`, which was downloaded from the official SAAINT-DB GitHub repository and converted to CSV format for easier processing.

## Quickstart Usage

### Install the Hugging Face Datasets package

Each split can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library. First, install the `datasets` library from the command line:

```
$ pip install datasets
```

then, from within Python, import it:

```python
>>> import datasets
```

### Load Dataset

Load the SAAINTDB dataset:

```python
>>> SAAINTDB = datasets.load_dataset('RosettaCommons/SAAINTDB')
```

The dataset is loaded as a `datasets.DatasetDict` containing one `Dataset` per split:

```python
>>> SAAINTDB
DatasetDict({
    train: Dataset({
        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
        num_rows: 15033
    })
    validation: Dataset({
        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
        num_rows: 3179
    })
    test: Dataset({
        features: ['PDB_ID_chain', 'PDB_ID', 'Title', 'Mutation(s)', 'Classification', 'Deposit_date', 'Release_date', 'Method', 'Resolution', 'R_free', 'R_work', 'PMID', 'DOI', 'Model_index', 'Asym_ID_type', 'Ab_type', 'H_subgroup', 'L_subgroup', 'H_chain_ID', 'L_chain_ID', 'H_fas_seq', 'L_fas_seq', 'H_filled_pdb_seq', 'L_filled_pdb_seq', 'H_mean_radius', 'L_mean_radius', 'H_fas_seq_len', 'L_fas_seq_len', 'H_pdb_seq_len', 'L_pdb_seq_len', 'H_filled_seq_len', 'L_filled_seq_len', 'HL_inf_res_num', 'H_mol_name', 'L_mol_name', 'H_species', 'L_species', 'Ag_chain_ID(s)', 'Ag_type(s)', 'Ag_mol_name(s)', 'Ag_species', 'Ab_ag_inf_res_num', 'CDR_inf_res_num', 'CDR_inf_res_ratio', 'hl_label', 'split'],
        num_rows: 3188
    })
})
```

Each split is a column-oriented Arrow table whose columns can be accessed directly, or converted into a `pandas.DataFrame` or a Parquet file. Note that these operations apply to an individual split, not the `DatasetDict` itself, e.g.:

```python
>>> SAAINTDB['train'].data.column('PDB_ID')
>>> SAAINTDB['train'].to_pandas()
>>> SAAINTDB['train'].to_parquet("train.parquet")
```

## Uses

This dataset is intended for training and evaluating machine learning models on antibody structural data, particularly for tasks involving antibody chain characterization and antibody–antigen interaction analysis. Because the dataset provides paired structural files (PDB), sequences (FASTA), and tabular metadata, it is suitable for workflows that integrate structural bioinformatics with machine learning.

Note: When the CSV files are downloaded and opened in Google Sheets or Microsoft Excel, the `PDB_ID` column may be automatically converted to scientific notation (e.g., the ID `6e10` may appear as `6.00E10`).

## Dataset Card Author

Haneul Park (hahaneul@umich.edu)

## Citation

```bibtex
@article{Huang2025,
  title = {SAAINT-DB: a comprehensive structural antibody database for antibody modeling and design},
  volume = {46},
  ISSN = {1745-7254},
  url = {http://dx.doi.org/10.1038/s41401-025-01608-5},
  DOI = {10.1038/s41401-025-01608-5},
  number = {12},
  journal = {Acta Pharmacologica Sinica},
  publisher = {Springer Science and Business Media LLC},
  author = {Huang, Xiaoqiang and Zhou, Jun and Chen, Shuang and Xia, Xiaofeng and Chen, Y. Eugene and Xu, Jie},
  year = {2025},
  month = jun,
  pages = {3365–3375}
}
```