Update README.md

BiCoA-Net predicts protein-ligand dissociation rate constants (k_off) using bidirectional co-attention.
- Bidirectional co-attention fusion mechanism
- Trained on curated KineticX datasets

## Quick Start

```python
from huggingface_hub import hf_hub_download
import torch

# Download model weights
model_path = hf_hub_download(
    repo_id="Daisyli95/BiCoA-Net",
    filename="pytorch_model.pt"
)

# Load model (FP16 format)
state_dict = torch.load(model_path, map_location='cpu')

# Convert to FP32 for inference (recommended)
state_dict_fp32 = {k: v.float() if v.dtype == torch.float16 else v
                   for k, v in state_dict.items()}

# Load into your model architecture
model.load_state_dict(state_dict_fp32)
model.eval()
```

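The checkpoint ships in FP16 (~0.96 GB for ~960M parameters, per Model Details below). Before converting, you can sanity-check a state dict's parameter count, on-disk size, and dtypes directly; `checkpoint_summary` and the small dummy checkpoint are illustrative helpers, not part of this repo:

```python
import torch

def checkpoint_summary(state_dict):
    """Tally parameter count, byte size, and dtypes across a state_dict."""
    n_params = sum(v.numel() for v in state_dict.values())
    n_bytes = sum(v.numel() * v.element_size() for v in state_dict.values())
    dtypes = {str(v.dtype) for v in state_dict.values()}
    return n_params, n_bytes, dtypes

# Dummy FP16 checkpoint for illustration (the real one has ~960M params)
dummy = {"w": torch.zeros(1000, 1000, dtype=torch.float16),
         "b": torch.zeros(1000, dtype=torch.float16)}
params, size, dtypes = checkpoint_summary(dummy)
print(params, size, dtypes)  # 1001000 2002000 {'torch.float16'}
```

At 2 bytes per FP16 element, ~960M parameters comes out near the stated 0.96 GB file size.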
## Model Details

- **Architecture**: ESM-2 (650M) + MolFormer + Bidirectional Co-Attention
- **Training Data**: PDBbind v2020 + custom kinetics data
- **Format**: PyTorch FP16 (0.96 GB)
- **Parameters**: ~960M
- **Input**:
  - Protein sequence (FASTA)
  - Ligand SMILES string
- **Output**: Predicted log(k_off) value

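The co-attention fusion itself is not spelled out in this card. As a rough illustration only, a bidirectional co-attention block typically runs cross-attention in both directions (protein tokens query ligand tokens and vice versa), pools both context streams, and feeds a regression head. The class below is a minimal sketch with made-up dimensions, not the actual BiCoA-Net implementation:

```python
import torch
import torch.nn as nn

class BidirectionalCoAttention(nn.Module):
    """Sketch of bidirectional co-attention fusion (illustrative dimensions)."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.prot_to_lig = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.lig_to_prot = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(2 * d_model, 1)  # regress a scalar log(k_off)

    def forward(self, prot, lig):
        # prot: (B, Lp, d) and lig: (B, Ll, d) token embeddings,
        # which would come from the ESM-2 and MolFormer encoders
        p_ctx, _ = self.prot_to_lig(prot, lig, lig)   # protein queries ligand
        l_ctx, _ = self.lig_to_prot(lig, prot, prot)  # ligand queries protein
        fused = torch.cat([p_ctx.mean(dim=1), l_ctx.mean(dim=1)], dim=-1)
        return self.head(fused).squeeze(-1)

# Shape check with random embeddings
model = BidirectionalCoAttention()
out = model(torch.randn(2, 50, 256), torch.randn(2, 30, 256))
print(out.shape)  # torch.Size([2])
```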
## Performance

- Concordance Index (C-index): [Add your metrics]
- Pearson Correlation: [Add your metrics]
- Test on held-out GPCR targets: [Add your metrics]

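For reference, both headline metrics can be computed from held-out predictions as below; `concordance_index` is a naive O(n²) pairwise implementation written for illustration, and the toy numbers are made up:

```python
import numpy as np

def concordance_index(y_true, y_pred):
    """Fraction of comparable pairs ranked in the same order (ties in truth skipped)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    concordant = comparable = 0
    for i in range(len(y_true)):
        for j in range(i + 1, len(y_true)):
            if y_true[i] == y_true[j]:
                continue  # not a comparable pair
            comparable += 1
            if (y_true[i] - y_true[j]) * (y_pred[i] - y_pred[j]) > 0:
                concordant += 1
            elif y_pred[i] == y_pred[j]:
                concordant += 0.5  # tied prediction counts half
    return concordant / comparable

# Toy log(k_off) values: predictions preserve the true ranking here
y_true = [-3.1, -2.0, -4.5, -1.2]
y_pred = [-2.8, -2.2, -4.0, -1.0]
cindex = concordance_index(y_true, y_pred)
pearson = np.corrcoef(y_true, y_pred)[0, 1]
print(cindex)  # 1.0 on this toy set
```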
## Usage Example

```python
import torch

# Assuming you have the BiCoA-Net model class defined
from your_model import BiCoANet

# Initialize model
model = BiCoANet()

# Load pretrained weights (model_path from the Quick Start download)
state_dict = torch.load(model_path, map_location='cpu')
state_dict = {k: v.float() for k, v in state_dict.items()}
model.load_state_dict(state_dict)
model.eval()

# Predict
protein_seq = "MSLQKEVQKL..."
ligand_smiles = "CC(C)Cc1ccc(cc1)C(C)C(O)=O"

with torch.no_grad():
    prediction = model(protein_seq, ligand_smiles)

print(f"Predicted log(k_off): {prediction.item()}")
```

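Since the model outputs log(k_off), a common downstream quantity is the residence time τ = 1/k_off. The helper below assumes a base-10 log and k_off in s⁻¹; check the training data's convention before relying on it:

```python
def residence_time(log10_koff):
    """Convert predicted log(k_off) to residence time tau = 1 / k_off.
    Assumes base-10 log and k_off in s^-1 (verify against the dataset's units)."""
    koff = 10.0 ** log10_koff
    return 1.0 / koff

tau = residence_time(-3.0)  # k_off = 1e-3 s^-1
print(tau)  # about 1000 s
```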
## Training Details