---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: predicted_R
    dtype: bool
  - name: predicted_L
    dtype: bool
  - name: apo_R
    dtype: bool
  - name: apo_L
    dtype: bool
  - name: holo_R
    dtype: bool
  - name: holo_L
    dtype: bool
  - name: receptor_sequence
    dtype: string
  - name: ligand_sequence
    dtype: string
  splits:
  - name: train
    num_bytes: 846155321.5351156
    num_examples: 1472076
  - name: valid
    num_bytes: 1086182.8819621871
    num_examples: 1950
  - name: test
    num_bytes: 1144303.6905370844
    num_examples: 1945
  download_size: 99418558
  dataset_size: 848385808.1076149
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---


# PINDER PPI dataset

[PINDER: The Protein INteraction Dataset and Evaluation Resource](https://github.com/pinder-org/pinder) is a high-quality compilation of positive protein-protein interactions.
Of particular note, the train, valid, and test splits are deduplicated and heavily trimmed based on sequence and structure similarity.

For more information on the original dataset compilation, please read the [paper](https://doi.org/10.1101/2024.07.17.603980), [GitHub repository](https://github.com/pinder-org/pinder), or [documentation](https://pinder-org.github.io/pinder/readme.html).

## Differences between this version and the official version
We further processed the dataset into a sequence-only version via the script below.
Importantly, any entry containing a sequence involved in a [NEGATOME](https://huggingface.co/datasets/Synthyra/NEGATOME) pair was removed.
This removed a few thousand entries from the train split and one entry from the valid split.
Entries in the `invalid` split were dropped, and boolean columns were added indicating whether each chain has holo (bound) and apo (unbound) structures and whether it is computationally predicted.
We also limited the combined pair length to 2044 residues (2048 with special tokens), removed entries with sequences shorter than 20 amino acids, and removed entries containing the ambiguous `X` amino acid character.

```python
import pandas as pd
from datasets import Dataset, DatasetDict, load_dataset
from pinder.core import get_index
from pinder.core.index.utils import get_sequence_database

# --- Load the index and sequence database ---
index = get_index()
seq_db = get_sequence_database()

# --- Merge the index and sequence database ---
df = (
    pd.merge(
        index[["id", "split", "holo_R_pdb", "holo_L_pdb", "predicted_R", "predicted_L", "apo_R", "apo_L", "holo_R", "holo_L"]],
        seq_db[["pdb", "sequence"]].rename(
            columns={"pdb": "holo_R_pdb", "sequence": "receptor_sequence"}
        ),
        on="holo_R_pdb",
        how="left",
    )
    .merge(
        seq_db[["pdb", "sequence"]].rename(
            columns={"pdb": "holo_L_pdb", "sequence": "ligand_sequence"}
        ),
        on="holo_L_pdb",
        how="left",
    )
    .drop(columns=["holo_R_pdb", "holo_L_pdb"])
)

# --- Load and process the negatome sequences ---
negatome = load_dataset('Synthyra/NEGATOME')
negatome_seqs = set()
for name, split in negatome.items():
    for sample in split:
        negatome_seqs.add(sample['SeqA'])
        negatome_seqs.add(sample['SeqB'])

# --- Filter for valid split entries (only 'test', 'val', and 'train') ---
allowed_splits = ['test', 'val', 'train']
df = df[df['split'].isin(allowed_splits)].copy()

# --- Rename the splits: 'val' -> 'valid' ---
df['split'] = df['split'].replace({'val': 'valid'})

# --- Remove entries with sequences found in the negatome ---
df = df[~(df['receptor_sequence'].isin(negatome_seqs) | df['ligand_sequence'].isin(negatome_seqs))].copy()

# --- Create the Huggingface DatasetDict with the desired splits ---
split_datasets = {}
for split in ['train', 'valid', 'test']:
    # Select the subset for the current split and reset the index.
    split_df = df[df['split'] == split].reset_index(drop=True)
    split_df = split_df.drop(columns='split')
    split_datasets[split] = Dataset.from_pandas(split_df)

hf_dataset = DatasetDict(split_datasets)

# --- Filter by combined pair length, minimum chain length, and unknown residues ---
hf_dataset = hf_dataset.filter(
    lambda x: (len(x['receptor_sequence']) + len(x['ligand_sequence']) <= 2044)
    and len(x['receptor_sequence']) >= 20
    and len(x['ligand_sequence']) >= 20
    and 'X' not in x['receptor_sequence']
    and 'X' not in x['ligand_sequence']
)

# --- Push the dataset to the hub ---
hf_dataset.push_to_hub('Synthyra/PINDER')
```
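For reference, the filtering criteria applied above can be expressed as a standalone predicate. This is a minimal sketch (the helper name `keep_pair` is our own, not part of the script), useful for checking whether an arbitrary receptor/ligand pair would have survived the filters:

```python
def keep_pair(receptor: str, ligand: str) -> bool:
    """True if a receptor/ligand pair passes the dataset filters:
    combined length <= 2044 (2048 with special tokens), each chain
    at least 20 amino acids, and no 'X' (unknown residue) characters."""
    return (
        len(receptor) + len(ligand) <= 2044
        and len(receptor) >= 20
        and len(ligand) >= 20
        and "X" not in receptor
        and "X" not in ligand
    )

print(keep_pair("M" * 1000, "A" * 1000))  # within all limits -> True
print(keep_pair("M" * 19, "A" * 100))     # receptor too short -> False
print(keep_pair("M" * 1500, "A" * 600))   # combined length over 2044 -> False
```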

## Please cite

If you use this dataset in your work, please cite their paper.

```bibtex
@article {Kovtun2024.07.17.603980,
	author = {Kovtun, Daniel and Akdel, Mehmet and Goncearenco, Alexander and Zhou, Guoqing and Holt, Graham and Baugher, David and Lin, Dejun and Adeshina, Yusuf and Castiglione, Thomas and Wang, Xiaoyun and Marquet, C{\'e}line and McPartlon, Matt and Geffner, Tomas and Rossi, Emanuele and Corso, Gabriele and St{\"a}rk, Hannes and Carpenter, Zachary and Kucukbenli, Emine and Bronstein, Michael and Naef, Luca},
	title = {PINDER: The protein interaction dataset and evaluation resource},
	elocation-id = {2024.07.17.603980},
	year = {2024},
	doi = {10.1101/2024.07.17.603980},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Protein-protein interactions (PPIs) are fundamental to understanding biological processes and play a key role in therapeutic advancements. As deep-learning docking methods for PPIs gain traction, benchmarking protocols and datasets tailored for effective training and evaluation of their generalization capabilities and performance across real-world scenarios become imperative. Aiming to overcome limitations of existing approaches, we introduce PINDER, a comprehensive annotated dataset that uses structural clustering to derive non-redundant interface-based data splits and includes holo (bound), apo (unbound), and computationally predicted structures. PINDER consists of 2,319,564 dimeric PPI systems (and up to 25 million augmented PPIs) and 1,955 high-quality test PPIs with interface data leakage removed. Additionally, PINDER provides a test subset with 180 dimers for comparison to AlphaFold-Multimer without any interface leakage with respect to its training set. Unsurprisingly, the PINDER benchmark reveals that the performance of existing docking models is highly overestimated when evaluated on leaky test sets. Most importantly, by retraining DiffDock-PP on PINDER interface-clustered splits, we show that interface cluster-based sampling of the training split, along with the diverse and less leaky validation split, leads to strong generalization improvements.Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2024/08/13/2024.07.17.603980},
	eprint = {https://www.biorxiv.org/content/early/2024/08/13/2024.07.17.603980.full.pdf},
	journal = {bioRxiv}
}
```