# Molecule GAT - Contrastive Learning

This model was trained for the ALTEGRAD Challenge 2025: molecule-to-text retrieval.
## Model Description
- Architecture: GAT (Graph Attention Network)
- Task: Learn molecular graph embeddings for text retrieval
- Training: Contrastive learning (CLIP-style InfoNCE loss)
- Embedding Dimension: 768
## Architecture Details

```text
GAT(
    input_dim=N/A,   # depends on the node feature dimension of the dataset
    hidden_dim=512,
    out_dim=768,
    num_layers=3,
    num_heads=4,
    dropout=0.1,
    pooling='mean'
)
```
## Training Details
- Loss: InfoNCE (Contrastive Loss)
- Temperature: 0.07
- Best Validation MRR: N/A
- Dataset: ALTEGRAD Challenge 2025
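The card states a CLIP-style InfoNCE loss with temperature 0.07 but does not show the formula. Below is a minimal sketch of such a symmetric contrastive loss in PyTorch; the function name and shapes are illustrative assumptions, not the repository's actual training code:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric CLIP-style InfoNCE loss for a batch of matched pairs.

    graph_emb, text_emb: (B, D) tensors where row i of each is a matching pair.
    """
    # L2-normalize so the dot product is cosine similarity
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)

    # (B, B) similarity matrix, scaled by the temperature
    logits = g @ t.T / temperature

    # Matching pairs sit on the diagonal
    labels = torch.arange(g.size(0))

    # Average the graph->text and text->graph cross-entropy terms
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```

A lower temperature (such as the 0.07 listed above) sharpens the softmax over the batch, penalizing hard negatives more strongly.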
## Usage

```python
import torch
from torch_geometric.data import Data
from huggingface_hub import hf_hub_download

# Download the checkpoint from the Hub
checkpoint_path = hf_hub_download(
    repo_id="NicolasNoya/molecule-gat-contrastive",
    filename="model.pt"
)

# Load the checkpoint on CPU
checkpoint = torch.load(checkpoint_path, map_location='cpu')

# Reconstruct the model (the MolGAT class definition is in
# model_architecture.py in this repo)
model = MolGAT(
    input_dim=...,   # set to the node feature dimension of your data (not listed in this card)
    hidden_dim=512,
    out_dim=768,
    num_layers=3,
    num_heads=4,
    dropout=0.1,
    pooling='mean'
)
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()

# Compute a graph embedding
with torch.no_grad():
    # graph_data should be a PyTorch Geometric Data object
    embedding = model(graph_data)
```
## Model Performance

Evaluated on the ALTEGRAD Challenge 2025 validation set:

- MRR (Mean Reciprocal Rank): N/A
- Recall@1: N/A
- Recall@5: N/A
- Recall@10: N/A
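For reference, the retrieval metrics listed above can be computed from a similarity matrix between text and graph embeddings. The sketch below assumes text i and graph i are the true pairs; the function name and signature are illustrative, not part of the challenge's evaluation code:

```python
import torch

def retrieval_metrics(graph_emb, text_emb, ks=(1, 5, 10)):
    """MRR and Recall@k for text->graph retrieval over matched pairs.

    graph_emb, text_emb: (N, D) tensors where row i of each is a true pair.
    """
    # Cosine similarity: each text queries the whole graph corpus
    g = torch.nn.functional.normalize(graph_emb, dim=-1)
    t = torch.nn.functional.normalize(text_emb, dim=-1)
    sim = t @ g.T                                      # (N, N)

    # 1-based rank of the true graph i in text i's sorted candidate list
    order = sim.argsort(dim=-1, descending=True)
    target = torch.arange(sim.size(0)).unsqueeze(1)
    ranks = (order == target).nonzero()[:, 1] + 1

    metrics = {"MRR": (1.0 / ranks.float()).mean().item()}
    for k in ks:
        metrics[f"Recall@{k}"] = (ranks <= k).float().mean().item()
    return metrics
```

MRR averages the reciprocal rank of the true graph over all queries, so a perfect retriever scores 1.0 on every metric.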
## Citation

If you use this model, please cite:

```bibtex
@misc{molecule_gat_2025,
  author       = {NicolasNoya},
  title        = {Molecule GAT for Contrastive Learning},
  year         = {2025},
  publisher    = {HuggingFace},
  howpublished = {\url{https://huggingface.co/NicolasNoya/molecule-gat-contrastive}}
}
```
## License
MIT License
## Contact
For questions or issues, please open an issue on the repository.