# 🧠 Graph Neural Networks: A Comprehensive Implementation and Comparison

[![PyTorch](https://img.shields.io/badge/PyTorch-EE4C2C?style=for-the-badge&logo=pytorch&logoColor=white)](https://pytorch.org/)
[![Python](https://img.shields.io/badge/Python-3776AB?style=for-the-badge&logo=python&logoColor=white)](https://python.org/)
[![MIT License](https://img.shields.io/badge/License-MIT-green.svg?style=for-the-badge)](https://choosealicense.com/licenses/mit/)

A complete implementation and comparison of three state-of-the-art Graph Neural Network architectures: **GCN**, **GraphSAGE**, and **GAT** on the Cora citation network dataset.

## 🎯 **Project Overview**

This project implements and compares Graph Neural Networks for node classification. Using the Cora citation network dataset, we train and evaluate three different GNN architectures to understand their strengths and performance characteristics.

### **Key Results**

- **🥇 GAT (Graph Attention Networks)**: 81.9% test accuracy
- **🥈 GCN (Graph Convolutional Networks)**: 79.3% test accuracy
- **🥉 GraphSAGE**: 76.8% test accuracy

## 📊 **Dataset: Cora Citation Network**

- **2,708 nodes** (machine learning papers)
- **10,556 edges** (citation relationships)
- **1,433 features** per node (bag-of-words from abstracts)
- **7 classes** (research areas: Neural Networks, Rule Learning, etc.)
- **Semi-supervised setup**: 140 training, 500 validation, 1,000 test nodes

![Graph Visualization](graph_visualization.png)
*Cora citation network structure with nodes colored by research area*

## 🏗️ **Architecture Comparison**

| Model | Parameters | Key Innovation | Convergence | Test Accuracy |
|-------|------------|----------------|-------------|---------------|
| **GCN** | 46,119 | Spectral graph convolution | 90 epochs | 79.3% |
| **GraphSAGE** | 92,199 | Sampling and aggregation | 187 epochs | 76.8% |
| **GAT** | 369,429 | Multi-head attention | 46 epochs | **81.9%** |

## 📈 **Training Results**

![Training Curves](training_curves.png)
*Loss and accuracy curves showing training progression for all three models*

### **Key Training Insights**

- **GAT**: Fastest convergence (46 epochs) with the highest final accuracy
- **GCN**: Steady, reliable convergence with good performance
- **GraphSAGE**: Slower start and the longest convergence time (187 epochs), with solid final performance

## 🔍 **Learned Representations**

![Embeddings Visualization](embeddings_tsne.png)
*t-SNE visualization of learned node embeddings showing class separation quality*

The embeddings visualization reveals:

- **GAT**: Best class separation with clear clustering
- **GCN**: Good separation with some overlap
- **GraphSAGE**: Decent clustering with more mixed regions

## 🚀 **Quick Start**

### **Installation**

```bash
# Clone the repository
git clone https://github.com/GruheshKurra/GraphNeuralNetworks-GNN-.git
cd GraphNeuralNetworks-GNN-

# Install dependencies
pip install torch torchvision torchaudio
pip install torch-geometric torch-scatter torch-sparse torch-cluster torch-spline-conv
pip install matplotlib seaborn pandas numpy scikit-learn networkx
```

### **For Apple Silicon Macs (M1/M2/M3/M4)**

```bash
# The code automatically detects and uses MPS acceleration
pip install torch torchvision torchaudio
pip install torch-geometric torch-scatter torch-sparse torch-cluster torch-spline-conv
pip install matplotlib seaborn pandas numpy scikit-learn networkx
```
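The automatic device selection mentioned above is standard PyTorch behavior. As a minimal, illustrative sketch (not necessarily the exact logic in `gnn_comparison.py`), the MPS/CUDA/CPU fallback might look like this:

```python
import torch

def pick_device() -> torch.device:
    """Prefer Apple Silicon MPS, then NVIDIA CUDA, then fall back to CPU."""
    if torch.backends.mps.is_available():   # Apple Silicon (M1/M2/M3/M4)
        return torch.device("mps")
    if torch.cuda.is_available():           # NVIDIA GPUs
        return torch.device("cuda")
    return torch.device("cpu")              # universal fallback

device = pick_device()
print(f"Using device: {device}")
# Models and data are then moved with .to(device) before training.
```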
### **For Google Colab**

```python
!pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
!pip install torch-geometric torch-scatter torch-sparse torch-cluster torch-spline-conv -f https://data.pyg.org/whl/torch-2.0.0+cpu.html
!pip install matplotlib seaborn pandas numpy scikit-learn networkx
```

### **Run the Training**

```bash
python gnn_comparison.py
```

## 📈 **Key Features**

- **🎯 Three GNN Architectures**: Complete implementations of GCN, GraphSAGE, and GAT
- **📊 Comprehensive Evaluation**: Accuracy, precision, recall, F1-score, confusion matrices
- **📈 Visualization**: Training curves, t-SNE embeddings, network structure
- **🛡️ Robust Training**: Early stopping, model checkpointing, cross-platform compatibility
- **📝 Detailed Logging**: Complete training logs rather than inline code comments
- **💾 Artifact Saving**: Models, results, and visualizations saved automatically

## 🗂️ **Project Structure**

```
├── gnn_comparison.py          # Main training script
├── best_*_model.pth           # Best model checkpoints
├── *_full_model.pkl           # Complete model objects
├── training_curves.png        # Loss and accuracy visualizations
├── embeddings_tsne.png        # t-SNE embedding visualizations
├── graph_visualization.png    # Network structure visualization
├── results_summary.json       # Comprehensive metrics
├── gnn_training.log           # Complete training logs
└── README.md                  # This file
```

## 🧪 **Methodology**

### **Model Architectures**

1. **Graph Convolutional Networks (GCN)**
   - Spectral approach to graph convolutions
   - Simple and effective baseline
   - Fast convergence with good performance

2. **GraphSAGE (Sample and Aggregate)**
   - Sampling-based approach for scalability
   - Inductive learning capability
   - Handles large graphs efficiently

3. **Graph Attention Networks (GAT)**
   - Multi-head attention mechanism
   - Dynamic neighbor weighting
   - Best performance but highest complexity

### **Training Configuration**

```python
config = {
    'hidden_dim': 32,         # Compact representation
    'num_layers': 2,          # Avoids over-smoothing
    'dropout': 0.5,           # Strong regularization
    'learning_rate': 0.001,   # Conservative learning
    'weight_decay': 5e-4,     # L2 regularization
    'epochs': 200,            # Maximum training
    'patience': 20,           # Early stopping
    'attention_heads': 8      # Multi-head attention (GAT)
}
```

## 📊 **Results Analysis**

### **Performance Metrics**

| Model | Test Acc | Precision | Recall | F1-Score | Parameters |
|-------|----------|-----------|--------|----------|------------|
| GCN | 79.3% | 0.791 | 0.793 | 0.792 | 46K |
| GraphSAGE | 76.8% | 0.765 | 0.768 | 0.766 | 92K |
| GAT | **81.9%** | **0.819** | **0.819** | **0.819** | 369K |

### **Key Insights**

1. **GAT's superior performance**: The attention mechanism provides a significant advantage
2. **Efficiency vs. performance**: GCN offers good performance with far fewer parameters
3. **Convergence speed**: GAT converges fastest despite its higher complexity
4. **Regularization impact**: Strong dropout (0.5) is crucial for the small training set

## 🎨 **Visualizations Generated**

The project automatically generates comprehensive visualizations:

### 1. **Network Structure Visualization**

![Graph Structure](graph_visualization.png)

Shows the Cora citation network with:
- Nodes colored by research area (7 classes)
- Spring layout for optimal visualization
- Clear community structure visible
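The actual plotting code lives in `gnn_comparison.py`; as a rough sketch of how such a figure can be produced with PyTorch Geometric and NetworkX (the layout seed and figure settings here are illustrative assumptions), consider:

```python
import matplotlib.pyplot as plt
import networkx as nx
from torch_geometric.datasets import Planetoid
from torch_geometric.utils import to_networkx

# Load Cora and convert it to a NetworkX graph for plotting
dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]
G = to_networkx(data, to_undirected=True)

# Spring layout; node colors follow the 7 research-area labels in data.y
plt.figure(figsize=(8, 8))
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx_nodes(G, pos, node_size=10, node_color=data.y.tolist(), cmap="tab10")
nx.draw_networkx_edges(G, pos, alpha=0.1)
plt.axis("off")
plt.savefig("graph_visualization.png", dpi=300, bbox_inches="tight")
```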
### 2. **Training Progress Monitoring**

![Training Curves](training_curves.png)

Displays for each model:
- **Loss curves**: Training and validation loss progression
- **Accuracy curves**: Training and validation accuracy trends
- **Overfitting analysis**: Gap between train/validation performance

### 3. **Learned Representation Quality**

![Node Embeddings](embeddings_tsne.png)

t-SNE visualization showing:
- **Class separation**: How well models distinguish between research areas
- **Embedding quality**: Clustering strength in learned representations
- **Model comparison**: Visual comparison of representation learning

## 🛠️ **Technical Details**

### **Device Compatibility**

- **Apple Silicon MPS**: Automatic detection and acceleration
- **NVIDIA CUDA**: GPU acceleration support
- **CPU Fallback**: Universal compatibility

### **Best Practices Implemented**

- Early stopping to prevent overfitting
- Model checkpointing for reproducibility
- Comprehensive logging for debugging
- Cross-platform compatibility
- Memory-efficient implementations

## 📚 **Learning Outcomes**

This implementation demonstrates:

1. **Graph Neural Network Fundamentals**
   - Message passing framework
   - Neighborhood aggregation
   - Semi-supervised node classification

2. **Architecture Comparison**
   - Spectral vs. spatial approaches
   - Attention mechanisms in graphs
   - Scalability considerations

3. **Best Practices**
   - Hyperparameter selection for graphs
   - Regularization techniques
   - Evaluation methodologies

## 🔧 **Reproducibility**

All experiments are fully reproducible:

- Fixed random seeds for consistent results
- Complete configuration saved in `results_summary.json`
- Model checkpoints saved at best validation performance
- Comprehensive logging of all training steps

## 🤝 **Contributing**

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

### **Development Setup**

```bash
git clone https://github.com/GruheshKurra/GraphNeuralNetworks-GNN-.git
cd GraphNeuralNetworks-GNN-
pip install -r requirements.txt
```

## 📄 **License**

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🙏 **Acknowledgments**

- **PyTorch Geometric** team for the excellent graph learning library
- **Cora dataset** creators for the benchmark citation network
- **Graph Neural Network** researchers for their foundational work

## 📞 **Contact**

For questions or collaborations, please open an issue or reach out through GitHub.

---

⭐ **If you find this project helpful, please consider giving it a star!** ⭐