prompts | metrics_response |
|---|---|
What metrics were used to measure the GraphSAGE model in the Inductive Representation Learning on Large Graphs paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the MLP-2 model in the Revisiting Heterophily For Graph Neural Networks paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the SGC-1 model in the Simplifying Graph Convolutional Networks paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the SGC-2 model in the Simplifying Graph Convolutional Networks paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GPRGNN model in the Adaptive Universal Generalized PageRank Graph Neural Network paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the APPNP model in the Predict then Propagate: Graph Neural Networks meet Personalized PageRank paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GAT model in the Graph Attention Networks paper on the PubMed (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the RR-GCN-PPV model in the R-GCN: The R Could Stand for Random paper on the DMG777K dataset? | Accuracy |
What metrics were used to measure the R-GCN model in the R-GCN: The R Could Stand for Random paper on the DMG777K dataset? | Accuracy |
What metrics were used to measure the ACM-GCN++ model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACMII-GCN++ model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACMII-Snowball-3 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACMII-GCN+ model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACMII-Snowball-2 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-Snowball-3 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACMII-GCN model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-GCN+ model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-Snowball-2 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-GCN model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-GCNII model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-GCNII* model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-SGC-2 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the MLP-2 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GPRGNN model in the Adaptive Universal Generalized PageRank Graph Neural Network paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the ACM-SGC-1 model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the APPNP model in the Predict then Propagate: Graph Neural Networks meet Personalized PageRank paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the FAGCN model in the Beyond Low-frequency Information in Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GCNII* model in the Simple and Deep Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the H2GCN model in the Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the HH-GraphSAGE model in the Half-Hop: A graph upsampling approach for slowing down message passing paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the HH-GAT model in the Half-Hop: A graph upsampling approach for slowing down message passing paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GCNII model in the Simple and Deep Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the HH-GCN model in the Half-Hop: A graph upsampling approach for slowing down message passing paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the MixHop model in the MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GCN model in the Semi-Supervised Classification with Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the Snowball-2 model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the SGC-2 model in the Simplifying Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GAT model in the Graph Attention Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the SGC-1 model in the Simplifying Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GAT+JK model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the Snowball-3 model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GraphSAGE model in the Inductive Representation Learning on Large Graphs paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the Geom-GCN* model in the Geom-GCN: Geometric Graph Convolutional Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GCN+JK model in the Revisiting Heterophily For Graph Neural Networks paper on the Wisconsin (60%/20%/20% random splits) dataset? | 1:1 Accuracy |
What metrics were used to measure the GraphMix (GCN) model in the GraphMix: Improved Training of GNNs for Semi-Supervised Learning paper on the CiteSeer with Public Split: fixed 5 nodes per class dataset? | Accuracy |
What metrics were used to measure the LINE model in the GraphVite: A High-Performance CPU-GPU Hybrid System for Node Embedding paper on the YouTube dataset? | Macro-F1@2%, Micro-F1@2%, runtime (s) |
What metrics were used to measure the Cleora model in the Cleora: A Simple, Strong and Scalable Graph Embedding Scheme paper on the YouTube dataset? | Macro-F1@2%, Micro-F1@2%, runtime (s) |
What metrics were used to measure the CoLinkDist model in the Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages paper on the Cora Full dataset? | Accuracy |
What metrics were used to measure the LinkDist model in the Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages paper on the Cora Full dataset? | Accuracy |
What metrics were used to measure the CoLinkDistMLP model in the Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages paper on the Cora Full dataset? | Accuracy |
What metrics were used to measure the LinkDistMLP model in the Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages paper on the Cora Full dataset? | Accuracy |
What metrics were used to measure the Truncated Krylov model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the VCHN model in the View-Consistent Heterogeneous Network on Graphs With Few Labeled Nodes paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the Snowball (linear + tanh) model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the Snowball (tanh) model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the Snowball (linear) model in the Break the Ceiling: Stronger Multi-scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the LanczosNet model in the LanczosNet: Multi-Scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the DCNN model in the Diffusion-Convolutional Neural Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the MT-GCN model in the Mutual Teaching for Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the AdaLanczosNet model in the LanczosNet: Multi-Scale Deep Graph Convolutional Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the GGNN model in the Gated Graph Sequence Neural Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the GCN-FP model in the Convolutional Networks on Graphs for Learning Molecular Fingerprints paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the GraphSAGE model in the Inductive Representation Learning on Large Graphs paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the GAT model in the Graph Attention Networks paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the ChebyNet model in the Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering paper on the PubMed (0.1%) dataset? | Accuracy |
What metrics were used to measure the Dir-GNN model in the Edge Directionality Improves Learning on Heterophilic Graphs paper on the snap-patents dataset? | Accuracy |
What metrics were used to measure the Dual-Net GNN model in the Feature Selection: Key to Enhance Node Classification with Graph Neural Networks paper on the snap-patents dataset? | Accuracy |
What metrics were used to measure the G^2-GraphSAGE model in the Gradient Gating for Deep Multi-Rate Learning on Graphs paper on the snap-patents dataset? | Accuracy |
What metrics were used to measure the LINKX model in the Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods paper on the snap-patents dataset? | Accuracy |
What metrics were used to measure the GESN model in the Addressing Heterophily in Node Classification with Graph Echo State Networks paper on the Chameleon (48%/32%/20% fixed splits) dataset? | 1:1 Accuracy, Accuracy |
What metrics were used to measure the GREAD-BS model in the GREAD: Graph Neural Reaction-Diffusion Networks paper on the Chameleon (48%/32%/20% fixed splits) dataset? | 1:1 Accuracy, Accuracy |
What metrics were used to measure the GraphMix (GCN) model in the GraphMix: Improved Training of GNNs for Semi-Supervised Learning paper on the Pubmed random partition dataset? | Accuracy |
What metrics were used to measure the GraphSAGE model in the A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs paper on the Placenta dataset? | Accuracy (%) |
What metrics were used to measure the SIGN model in the A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs paper on the Placenta dataset? | Accuracy (%) |
What metrics were used to measure the ClusterGCN model in the A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs paper on the Placenta dataset? | Accuracy (%) |
What metrics were used to measure the GraphSAINT model in the A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs paper on the Placenta dataset? | Accuracy (%) |
What metrics were used to measure the ShaDow model in the A New Graph Node Classification Benchmark: Learning Structure from Histology Cell Graphs paper on the Placenta dataset? | Accuracy (%) |
What metrics were used to measure the DANMF model in the Deep Autoencoder-like Nonnegative Matrix Factorization for Community Detection paper on the Wiki dataset? | AUC, Macro F1, Micro F1 |
What metrics were used to measure the DAOR model in the Bridging the Gap between Community and Node Representations: Graph Embedding via Community Detection paper on the Wiki dataset? | AUC, Macro F1, Micro F1 |
What metrics were used to measure the GEMSEC 2 model in the GEMSEC: Graph Embedding with Self Clustering paper on the Deezer Croatia dataset? | Micro-F1 |
What metrics were used to measure the R-GCN model in the Modeling Relational Data with Graph Convolutional Networks paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the RR-GCN-PPV-CUT model in the R-GCN: The R Could Stand for Random paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the SCENE model in the SCENE: Reasoning about Traffic Scenes using Heterogeneous Graph Neural Networks paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the Path Tree model in the Inducing a Decision Tree with Discriminative Paths to Classify Entities in a Knowledge Graph paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the RDF2Vec+SVM model in the RDF2Vec: RDF Graph Embeddings and Their Applications paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the RR-GCN-PPV model in the R-GCN: The R Could Stand for Random paper on the AIFB dataset? | Accuracy |
What metrics were used to measure the Dual-Net GNN model in the Feature Selection: Key to Enhance Node Classification with Graph Neural Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the ACM-GCN++ model in the Revisiting Heterophily For Graph Neural Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the ACMII-GCN++ model in the Revisiting Heterophily For Graph Neural Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GloGNN++ model in the Finding Global Homophily in Graph Neural Networks When Meeting Heterophily paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GloGNN model in the Finding Global Homophily in Graph Neural Networks When Meeting Heterophily paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the ACM-GCN+ model in the Revisiting Heterophily For Graph Neural Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the ACMII-GCN+ model in the Revisiting Heterophily For Graph Neural Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the DJ-GNN model in the Diffusion-Jump GNNs: Homophiliation via Learnable Metric Filters paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the LINKX model in the Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the MixHop model in the MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GCNII model in the Simple and Deep Graph Convolutional Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GCN model in the Semi-Supervised Classification with Graph Convolutional Networks paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GCNJK model in the New Benchmarks for Learning on Non-Homophilous Graphs paper on the Penn94 dataset? | Accuracy |
What metrics were used to measure the GAT model in the Graph Attention Networks paper on the Penn94 dataset? | Accuracy |