prompts | metrics_response |
|---|---|
What metrics were used to measure the DynAE model in the Deep Clustering with a Dynamic Autoencoder: From Reconstruction towards Centroids Construction paper on the Fashion-MNIST dataset? | Accuracy, NMI |
What metrics were used to measure the JULE-RC model in the Joint Unsupervised Learning of Deep Representations and Image Clusters paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the Tree-SNE model in the Tree-SNE: Hierarchical Clustering and Visualization Using t-SNE paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the AGDL model in the Graph Degree Linkage: Agglomerative Clustering on a Directed Graph paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the DBC model in the Discriminatively Boosted Image Clustering with Fully Convolutional Auto-Encoders paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the GDL-U model in the Graph Degree Linkage: Agglomerative Clustering on a Directed Graph paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the GDL model in the Graph Degree Linkage: Agglomerative Clustering on a Directed Graph paper on the Coil-20 dataset? | NMI, Accuracy |
What metrics were used to measure the N2D (UMAP) model in the N2D: (Not Too) Deep Clustering via Clustering the Local Manifold of an Autoencoded Embedding paper on the HAR dataset? | Accuracy, NMI |
What metrics were used to measure the Selective HAR Clustering model in the Efficient Deep Clustering of Human Activities and How to Improve Evaluation paper on the HAR dataset? | Accuracy, NMI |
What metrics were used to measure the DMSC model in the Deep Multimodal Subspace Clustering Networks paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the DSCN model in the Self-Supervised Convolutional Subspace Clustering Network paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the DSC-2 model in the Deep Subspace Clustering Networks paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the J-DSSC model in the Doubly Stochastic Subspace Clustering paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the A-DSSC model in the Doubly Stochastic Subspace Clustering paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the SSC-OMP model in the Scalable Sparse Subspace Clustering by Orthogonal Matching Pursuit paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the SSC model in the Sparse Subspace Clustering: Algorithm, Theory, and Applications paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the GDL-U model in the Graph Degree Linkage: Agglomerative Clustering on a Directed Graph paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the AGDL model in the Graph Degree Linkage: Agglomerative Clustering on a Directed Graph paper on the Extended Yale-B dataset? | Accuracy, NMI |
What metrics were used to measure the FineGAN model in the FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery paper on the Stanford Dogs dataset? | Accuracy, NMI |
What metrics were used to measure the DEPICT-Large model in the Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization paper on the Stanford Dogs dataset? | Accuracy, NMI |
What metrics were used to measure the DEPICT model in the Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization paper on the Stanford Dogs dataset? | Accuracy, NMI |
What metrics were used to measure the JULE model in the Joint Unsupervised Learning of Deep Representations and Image Clusters paper on the Stanford Dogs dataset? | Accuracy, NMI |
What metrics were used to measure the FineGAN model in the FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery paper on the Stanford Cars dataset? | Accuracy, NMI |
What metrics were used to measure the DEPICT model in the Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization paper on the Stanford Cars dataset? | Accuracy, NMI |
What metrics were used to measure the DEPICT-Large model in the Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization paper on the Stanford Cars dataset? | Accuracy, NMI |
What metrics were used to measure the JULE model in the Joint Unsupervised Learning of Deep Representations and Image Clusters paper on the Stanford Cars dataset? | Accuracy, NMI |
What metrics were used to measure the DMSC model in the Deep Multimodal Subspace Clustering Networks paper on the ARL Polarimetric Thermal Face Dataset dataset? | Accuracy |
What metrics were used to measure the JULE-RC model in the Joint Unsupervised Learning of Deep Representations and Image Clusters paper on the YouTube Faces DB dataset? | NMI, Accuracy |
What metrics were used to measure the SR-K-means model in the Deep clustering: On the link between discriminative models and K-means paper on the YouTube Faces DB dataset? | NMI, Accuracy |
What metrics were used to measure the DEPICT model in the Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization paper on the YouTube Faces DB dataset? | NMI, Accuracy |
What metrics were used to measure the DEC (KL based) model in the Unsupervised Deep Embedding for Clustering Analysis paper on the YouTube Faces DB dataset? | NMI, Accuracy |
What metrics were used to measure the N2D (UMAP) model in the N2D: (Not Too) Deep Clustering via Clustering the Local Manifold of an Autoencoded Embedding paper on the pendigits dataset? | Accuracy, NMI |
What metrics were used to measure the DnC-SC model in the Divide-and-conquer based Large-Scale Spectral Clustering paper on the pendigits dataset? | Accuracy, NMI |
What metrics were used to measure the Q2L-CvT(resolution 384, ImageNet-21K pretrained) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the MLD-TResNet-L-AAM[448x448] model in the Combining Metric Learning and Attention Heads For Accurate and Efficient Multilabel Image Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the Q2L-TResL(resolution 448) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the TResNet-L (resolution 448) model in the Asymmetric Loss For Multi-Label Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the Q2L-R101(resolution 448) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the SRN model in the Learning Spatial Regularization with Image-level Supervisions for Multi-label Image Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the MSRN model in the Multi-layered Semantic Representation Network for Multi-label Image Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the MS-CMA model in the Cross-Modality Attention with Semantic Graph Embedding for Multi-Label Classification paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the S-CLs model in the Multi-Label Image Classification via Knowledge Distillation from Weakly-Supervised Detection paper on the NUS-WIDE dataset? | mAP |
What metrics were used to measure the TResNet-L model in the Multi-label Classification with Partial Annotations using Class-aware Selective Loss paper on the OpenImages-v6 dataset? | mAP |
What metrics were used to measure the TResNet-M model in the ML-Decoder: Scalable and Versatile Classification Head paper on the OpenImages-v6 dataset? | mAP |
What metrics were used to measure the TResNet-M model in the Multi-label Classification with Partial Annotations using Class-aware Selective Loss paper on the OpenImages-v6 dataset? | mAP |
What metrics were used to measure the TResNet-L model in the Asymmetric Loss For Multi-Label Classification paper on the OpenImages-v6 dataset? | mAP |
What metrics were used to measure the ADDS(ViT-L-336, resolution 1344) model in the Open Vocabulary Multi-Label Classification with Dual-Modal Decoder on Aligned Visual-Textual Features paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ADDS(ViT-L-336, resolution 640) model in the Open Vocabulary Multi-Label Classification with Dual-Modal Decoder on Aligned Visual-Textual Features paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ADDS(ViT-L-336, resolution 336) model in the Open Vocabulary Multi-Label Classification with Dual-Modal Decoder on Aligned Visual-Textual Features paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ML-Decoder(TResNet-XL, resolution 640) model in the ML-Decoder: Scalable and Versatile Classification Head paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the Q2L-CvT(ImageNet-21K pretraining, resolution 384) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MLD-TResNet-L-AAM[640x640] model in the Combining Metric Learning and Attention Heads For Accurate and Efficient Multilabel Image Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ML-Decoder(TResNet-L, resolution 640) model in the ML-Decoder: Scalable and Versatile Classification Head paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the Q2L-SwinL(ImageNet-21K pretraining, resolution 384) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the Q2L-TResL(ImageNet-21K pretraining, resolution 640) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the IDA-SwinL model in the Causality Compensated Attention for Contextual Biased Visual Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the CCD-SwinL model in the Contextual Debiasing for Visual Recognition With Causal Mechanisms paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MlTr-XL(ImageNet-21K pretraining, resolution 384) model in the MlTr: Multi-label Classification with Transformer paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TResNet-L-V2 (ImageNet-21K-P pretraining, resolution 640) model in the ImageNet-21K Pretraining for the Masses paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MlTr-L(ImageNet-21K pretraining, resolution 384) model in the MlTr: Multi-label Classification with Transformer paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TResNet-XL (resolution 640) model in the Asymmetric Loss For Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TResNet-L-V2 (ImageNet-21K-P pretraining, resolution 448) model in the ImageNet-21K Pretraining for the Masses paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the M3TR(ImageNet-21K-P pretraining, resolution 448) model in the M3TR: Multi-modal Multi-label Recognition with Transformer paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TResNet-L (resolution 448) model in the Asymmetric Loss For Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the IDA-R101 model in the Causality Compensated Attention for Contextual Biased Visual Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TDRG-R101(576×576) model in the Transformer-based Dual Relation Graph for Multi-label Image Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the CCD-R101 model in the Contextual Debiasing for Visual Recognition With Causal Mechanisms paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the Q2L-R101(resolution 448) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the TDRG-R101(448×448) model in the Transformer-based Dual Relation Graph for Multi-label Image Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MCAR (ResNet101, 576x576) model in the Learning to Discover Multi-Class Attentional Regions for Multi-Label Image Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MS-CMA model in the Cross-Modality Attention with Semantic Graph Embedding for Multi-Label Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MCAR (ResNet101, 448x448) model in the Learning to Discover Multi-Class Attentional Regions for Multi-Label Image Recognition paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the KSSNet model in the Multi-Label Classification with Label Graph Superimposing paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the MSRN model in the Multi-layered Semantic Representation Network for Multi-label Image Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ML-GCN model in the Multi-Label Graph Convolutional Network Representation Learning paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the ResNet-SRN model in the Learning Spatial Regularization with Image-level Supervisions for Multi-label Image Classification paper on the MS-COCO dataset? | mAP |
What metrics were used to measure the Q2L-CvT(ImageNet-21K pretrained, resolution 384) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the Q2L-TResL(ImageNet-21K pretrained, resolution 448) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the MLD-TResNetL-AAM (resolution 448, pretrain from OpenImages V6) model in the Combining Metric Learning and Attention Heads For Accurate and Efficient Multilabel Image Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the M3TR(448×448) model in the M3TR: Multi-modal Multi-label Recognition with Transformer paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the Q2L-TResL(resolution 448) model in the Query2Label: A Simple Transformer Way to Multi-Label Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the MSRN(pretrain from MS-COCO) model in the Multi-layered Semantic Representation Network for Multi-label Image Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the TResNet-L (resolution 448, pretrain from MS-COCO) model in the Asymmetric Loss For Multi-Label Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the SSGRL (pretrain from MS-COCO) model in the Learning Semantic-Specific Graph Representation for Multi-Label Image Recognition paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the TDRG-R101(448×448) model in the Transformer-based Dual Relation Graph for Multi-label Image Recognition paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the MCAR (ResNet101, 448x448) model in the Learning to Discover Multi-Class Attentional Regions for Multi-Label Image Recognition paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the TResNet-L (resolution 448, pretrain from ImageNet) model in the Asymmetric Loss For Multi-Label Classification paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the ML-GCN (pretrain from ImageNet) model in the Multi-Label Image Recognition with Graph Convolutional Networks paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the SSGRL (pretrain from ImageNet) model in the Learning Semantic-Specific Graph Representation for Multi-Label Image Recognition paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the Ours PF-DLDL model in the Deep Label Distribution Learning with Label Ambiguity paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the ViT-B-16 (ImageNet-21K pretrained) model in the ImageNet-21K Pretraining for the Masses paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the FeV+LV (pretrain from ImageNet) model in the Exploit Bounding Box Annotations for Multi-label Object Recognition paper on the PASCAL VOC 2007 dataset? | mAP |
What metrics were used to measure the DenseNet121 model in the CheXclusion: Fairness gaps in deep chest X-ray classifiers paper on the ChestX-ray14 dataset? | Average AUC on 14 labels |
What metrics were used to measure the DenseNet121 model in the CheXclusion: Fairness gaps in deep chest X-ray classifiers paper on the MIMIC-CXR dataset? | Average AUC on 14 labels |
What metrics were used to measure the ResNet50 (fine-tuning) model in the Do we still need ImageNet pre-training in remote sensing scene classification? paper on the MLRSNet dataset? | F1-score |
What metrics were used to measure the ResNet50 (scratch) model in the Do we still need ImageNet pre-training in remote sensing scene classification? paper on the MLRSNet dataset? | F1-score |
What metrics were used to measure the DeepAUC-v1 model in the Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification paper on the CheXpert dataset? | Average AUC on 14 labels, Num Rads Below Curve |
What metrics were used to measure the Hierarchical-Learning-V1 (ensemble) model in the Interpreting chest X-rays via CNNs that exploit hierarchical disease dependencies and uncertainty labels paper on the CheXpert dataset? | Average AUC on 14 labels, Num Rads Below Curve |
What metrics were used to measure the YWW(ensemble) model in the paper on the CheXpert dataset? | Average AUC on 14 labels, Num Rads Below Curve |
What metrics were used to measure the Conditional-Training-LSR model in the paper on the CheXpert dataset? | Average AUC on 14 labels, Num Rads Below Curve |