---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: predicted_R
    dtype: bool
  - name: predicted_L
    dtype: bool
  - name: apo_R
    dtype: bool
  - name: apo_L
    dtype: bool
  - name: holo_R
    dtype: bool
  - name: holo_L
    dtype: bool
  - name: receptor_sequence
    dtype: string
  - name: ligand_sequence
    dtype: string
  - name: probability
    dtype: float16
  - name: link_density
    dtype: float16
  - name: planarity
    dtype: float16
  - name: n_residue_pairs
    dtype: int16
  - name: n_residues
    dtype: int16
  - name: buried_sasa
    dtype: float16
  - name: intermolecular_contacts
    dtype: int16
  - name: charged_charged_contacts
    dtype: int16
  - name: charged_polar_contacts
    dtype: int16
  - name: charged_apolar_contacts
    dtype: int16
  - name: polar_polar_contacts
    dtype: int16
  - name: apolar_polar_contacts
    dtype: int16
  - name: apolar_apolar_contacts
    dtype: int16
  splits:
  - name: train
    num_bytes: 892318698.5524988
    num_examples: 1488417
  - name: valid
    num_bytes: 1137131.0832482125
    num_examples: 1951
  - name: test
    num_bytes: 1194873.6905370844
    num_examples: 1945
  download_size: 128904738
  dataset_size: 894650703.3262842
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: valid
    path: data/valid-*
  - split: test
    path: data/test-*
---


# PINDER PPI dataset

The [PINDER: The Protein INteraction Dataset and Evaluation Resource](https://github.com/pinder-org/pinder) is a high-quality compilation of positive protein-protein interactions.
Of particular note, the train, valid, and test splits are deduplicated and heavily trimmed based on sequence and structure similarity.

For more information on the original dataset compilation, please read their [paper](https://www.biorxiv.org/content/early/2024/08/13/2024.07.17.603980), [GitHub](https://github.com/pinder-org/pinder), or [docs](https://pinder-org.github.io/pinder/readme.html).

## Differences between this version and the official version
We further processed the dataset via the script below.
Entries from the `invalid` split were removed, and boolean flags were added indicating whether each entry has holo (bound) and apo (unbound) structures and whether it is computationally predicted.
We also limited the combined pair length to 2,044 residues (2,048 with special tokens), removed entries with sequences shorter than 20 amino acids, and removed entries containing `X` amino acid characters.
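These filter criteria can be expressed as a standalone predicate; a minimal sketch using only the thresholds stated above (the function name is illustrative):

```python
def keep_pair(receptor: str, ligand: str) -> bool:
    """Return True if a receptor/ligand pair passes the dataset filters.

    The 2,044-residue cap on the combined length leaves room for
    special tokens in a 2,048-token context window.
    """
    return (
        len(receptor) + len(ligand) <= 2044  # combined length cap
        and len(receptor) >= 20              # minimum sequence length
        and len(ligand) >= 20
        and "X" not in receptor              # no unknown residues
        and "X" not in ligand
    )

print(keep_pair("M" * 100, "A" * 100))   # True: passes all filters
print(keep_pair("M" * 10, "A" * 100))    # False: receptor too short
print(keep_pair("MXA" * 50, "A" * 100))  # False: contains 'X'
print(keep_pair("M" * 2000, "A" * 100))  # False: pair too long
```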

```python
import pandas as pd
from datasets import Dataset, DatasetDict
from pinder.core import get_index, get_metadata
from pinder.core.index.utils import get_sequence_database


# --- Load the data ---
index = get_index()
metadata = get_metadata()
seq_db = get_sequence_database()

annotations = [
    "id",
    "probability",
    "link_density",
    "planarity",
    "n_residue_pairs",
    "n_residues",
    "buried_sasa",
    "intermolecular_contacts",
    "charged_charged_contacts",
    "charged_polar_contacts",
    "charged_apolar_contacts",
    "polar_polar_contacts",
    "apolar_polar_contacts",
    "apolar_apolar_contacts",
]

# --- Merge the data ---
df = (
    pd.merge(
        index[[
            "id",
            "split",
            "holo_R_pdb",
            "holo_L_pdb",
            "predicted_R",
            "predicted_L",
            "apo_R",
            "apo_L",
            "holo_R",
            "holo_L",
        ]],
        seq_db[["pdb", "sequence"]].rename(
            columns={"pdb": "holo_R_pdb", "sequence": "receptor_sequence"}
        ),
        on="holo_R_pdb",
        how="left",
    )
    .merge(
        seq_db[["pdb", "sequence"]].rename(
            columns={"pdb": "holo_L_pdb", "sequence": "ligand_sequence"}
        ),
        on="holo_L_pdb",
        how="left",
    )
    .merge(
        metadata[annotations],
        on="id",
        how="left",
    )
    .drop(columns=["holo_R_pdb", "holo_L_pdb"])
)

print(df.head())


# --- Filter for valid split entries (only 'test', 'val', and 'train') ---
allowed_splits = ['test', 'val', 'train']
df = df[df['split'].isin(allowed_splits)].copy()

# --- Rename the splits: 'val' -> 'valid' ---
df['split'] = df['split'].replace({'val': 'valid'})

# --- Create the Huggingface DatasetDict with the desired splits ---
split_datasets = {}
for split in ['train', 'valid', 'test']:
    # Select the subset for the current split and reset the index.
    split_df = df[df['split'] == split].reset_index(drop=True)
    split_df = split_df.drop(columns='split')
    split_datasets[split] = Dataset.from_pandas(split_df)

hf_dataset = DatasetDict(split_datasets)

hf_dataset = hf_dataset.filter(lambda x: (len(x['receptor_sequence']) + len(x['ligand_sequence']) <= 2044) 
                          and len(x['receptor_sequence']) >= 20 
                          and len(x['ligand_sequence']) >= 20
                          and 'X' not in x['receptor_sequence'] 
                          and 'X' not in x['ligand_sequence'])

# --- Push the dataset to the hub ---
hf_dataset.push_to_hub('Synthyra/PINDER')

```
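After filtering, the splits contain 1,488,417 / 1,951 / 1,945 examples (train/valid/test, per the `dataset_info` metadata above); the split proportions are easy to check:

```python
# Example counts taken from the dataset_info metadata above.
splits = {"train": 1_488_417, "valid": 1_951, "test": 1_945}
total = sum(splits.values())

for name, n in splits.items():
    # valid and test each hold roughly 0.13% of the examples
    print(f"{name}: {n:,} examples ({100 * n / total:.2f}% of the dataset)")
```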

## Please cite

If you use this dataset in your work, please cite their paper.

```
@article {Kovtun2024.07.17.603980,
	author = {Kovtun, Daniel and Akdel, Mehmet and Goncearenco, Alexander and Zhou, Guoqing and Holt, Graham and Baugher, David and Lin, Dejun and Adeshina, Yusuf and Castiglione, Thomas and Wang, Xiaoyun and Marquet, C{\'e}line and McPartlon, Matt and Geffner, Tomas and Rossi, Emanuele and Corso, Gabriele and St{\"a}rk, Hannes and Carpenter, Zachary and Kucukbenli, Emine and Bronstein, Michael and Naef, Luca},
	title = {PINDER: The protein interaction dataset and evaluation resource},
	elocation-id = {2024.07.17.603980},
	year = {2024},
	doi = {10.1101/2024.07.17.603980},
	publisher = {Cold Spring Harbor Laboratory},
	abstract = {Protein-protein interactions (PPIs) are fundamental to understanding biological processes and play a key role in therapeutic advancements. As deep-learning docking methods for PPIs gain traction, benchmarking protocols and datasets tailored for effective training and evaluation of their generalization capabilities and performance across real-world scenarios become imperative. Aiming to overcome limitations of existing approaches, we introduce PINDER, a comprehensive annotated dataset that uses structural clustering to derive non-redundant interface-based data splits and includes holo (bound), apo (unbound), and computationally predicted structures. PINDER consists of 2,319,564 dimeric PPI systems (and up to 25 million augmented PPIs) and 1,955 high-quality test PPIs with interface data leakage removed. Additionally, PINDER provides a test subset with 180 dimers for comparison to AlphaFold-Multimer without any interface leakage with respect to its training set. Unsurprisingly, the PINDER benchmark reveals that the performance of existing docking models is highly overestimated when evaluated on leaky test sets. Most importantly, by retraining DiffDock-PP on PINDER interface-clustered splits, we show that interface cluster-based sampling of the training split, along with the diverse and less leaky validation split, leads to strong generalization improvements.Competing Interest StatementThe authors have declared no competing interest.},
	URL = {https://www.biorxiv.org/content/early/2024/08/13/2024.07.17.603980},
	eprint = {https://www.biorxiv.org/content/early/2024/08/13/2024.07.17.603980.full.pdf},
	journal = {bioRxiv}
}
```