prompts: string, lengths 81–413
metrics_response: string, lengths 0–371
What metrics were used to measure the ResNet50 (128) + MIC model in the MIC: Mining Interclass Characteristics for Improved Metric Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the BN-Inception + Group Loss model in the paper The Group Loss for Deep Metric Learning on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the BN-Inception + SoftTriple model in the SoftTriple Loss: Deep Metric Learning Without Triplet Sampling paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the EPSHN(512) model in the Improved Embeddings with Easy Positive Triplet Mining paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the Gradient Surgery model in the Dissecting the impact of different loss functions with gradient surgery paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the ResNet-50 + Margin model in the Sampling Matters in Deep Embedding Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the HDC model in the Hard-Aware Deeply Cascaded Embedding paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the ABE-8-512 model in the Attention-based Ensemble for Deep Metric Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the PDDM Quadruplet model in the Local Similarity-Aware Deep Feature Embedding paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the SCT(64) model in the Hard negative examples are hard, but useful paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the EPSHN(64) model in the Improved Embeddings with Easy Positive Triplet Mining paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the GoogLeNet + HDML model in the Hardness-Aware Deep Metric Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the HAPPIER model in the Hierarchical Average Precision Training for Pertinent Image Retrieval paper on the DyML-Vehicle dataset?
Average-mAP
What metrics were used to measure the CSL model in the Dynamic Metric Learning: Towards a Scalable Metric Space to Accommodate Multiple Semantic Scales paper on the DyML-Vehicle dataset?
Average-mAP
What metrics were used to measure the Unicom+ViT-L@336px model in the Unicom: Universal and Compact Representation Learning for Image Retrieval paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the STIR model in the STIR: Siamese Transformer for Image Retrieval Postprocessing paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Recall@k Surrogate Loss (ViT-B/16) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ViT-Triplet model in the STIR: Siamese Transformer for Image Retrieval Postprocessing paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ROADMAP (DeiT-S) model in the Robust and Decomposable Average Precision for Image Retrieval paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Hyp-ViT model in the Hyperbolic Vision Transformers: Combining Improvements in Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Hyp-DINO model in the Hyperbolic Vision Transformers: Combining Improvements in Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Recall@k Surrogate Loss (ViT-B/32) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ROADMAP (ResNet-50) model in the Robust and Decomposable Average Precision for Image Retrieval paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the CCL (ResNet-50) model in the Center Contrastive Loss for Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Recall@k Surrogate Loss (ResNet-50) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Gradient Surgery model in the Dissecting the impact of different loss functions with gradient surgery paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the HAPPIER_F model in the Hierarchical Average Precision Training for Pertinent Image Retrieval paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the SCT(512) model in the Hard negative examples are hard, but useful paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet-50 + Metrix model in the It Takes Two to Tango: Mixup for Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 + Language model in the Integrating Language Guidance into Vision-based Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the NED model in the Calibrated neighborhood aware confidence measure for deep metric learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet-50 + Cross-Entropy model in the paper A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 + S2SD model in the S2SD: Simultaneous Similarity-based Self-Distillation for Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the HAPPIER model in the Hierarchical Average Precision Training for Pertinent Image Retrieval paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet-50 + ProxyNCA++ model in the ProxyNCA++: Revisiting and Revitalizing Proxy Neighborhood Component Analysis paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 + NIR model in the Non-isotropy Regularization for Proxy-based Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the MS + DAS (K=8) model in the DAS: Densely-Anchored Sampling for Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the BN-Inception + Proxy-Anchor model in the Proxy Anchor Loss for Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 + DiVA model in the DiVA: Diverse Visual Feature Aggregation for Deep Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 + AVSL model in the Attributable Visual Similarity Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Margin + DIML model in the Towards Interpretable Deep Metric Learning with Structural Matching paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Circle Loss model in the Circle Loss: A Unified Perspective of Pair Similarity Optimization paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the EPSHN(512) model in the Improved Embeddings with Easy Positive Triplet Mining paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the QB-Norm+RDML model in the Cross Modal Retrieval with Querybank Normalisation paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 (128) + MIC model in the MIC: Mining Interclass Characteristics for Improved Metric Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the ResNet50 (128) + PADS model in the PADS: Policy-Adapted Sampling for Visual Similarity Learning paper on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Group Loss model in the paper The Group Loss for Deep Metric Learning on the Stanford Online Products dataset?
R@1
What metrics were used to measure the Hyp-DINO model in the Hyperbolic Vision Transformers: Combining Improvements in Metric Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the MS + DAS (K=8) model in the DAS: Densely-Anchored Sampling for Deep Metric Learning paper on the CUB-200-2011 dataset?
R@1
What metrics were used to measure the InvPT model in the InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding paper on the PASCAL Context dataset?
Mean Angle Error
What metrics were used to measure the PolyMaX(ConvNeXt-L) model in the PolyMaX: General Dense Prediction with Mask Transformer paper on the NYU Depth v2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error, RMSE
What metrics were used to measure the iDisc model in the iDisc: Internal Discretization for Monocular Depth Estimation paper on the NYU Depth v2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error, RMSE
What metrics were used to measure the Bae et al. model in the Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation paper on the NYU Depth v2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error, RMSE
What metrics were used to measure the Floors are Flat model in the Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction paper on the NYU Depth v2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error, RMSE
What metrics were used to measure the X-TC (Cross-Task Consistency) model in the Robust Learning Through Cross-Task Consistency paper on the Taskonomy dataset?
L1 error
What metrics were used to measure the Bae et al. model in the Estimating and Exploiting the Aleatoric Uncertainty in Surface Normal Estimation paper on the ScanNetV2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error
What metrics were used to measure the Floors are Flat model in the Floors are Flat: Leveraging Semantics for Real-Time Surface Normal Prediction paper on the ScanNetV2 dataset?
% < 11.25, % < 22.5, % < 30, Mean Angle Error
What metrics were used to measure the MSECNet model in the MSECNet: Accurate and Robust Normal Estimation for 3D Point Clouds by Multi-Scale Edge Conditioning paper on the PCPNet dataset?
RMSE
What metrics were used to measure the Hsurf model in the HSurf-Net: Normal Estimation for 3D Point Clouds by Learning Hyper Surfaces paper on the PCPNet dataset?
RMSE
What metrics were used to measure the NeAF model in the NeAF: Learning Neural Angle Fields for Point Normal Estimation paper on the PCPNet dataset?
RMSE
What metrics were used to measure the GraphFit model in the GraphFit: Learning Multi-scale Graph-Convolutional Representation for Point Cloud Normal Estimation paper on the PCPNet dataset?
RMSE
What metrics were used to measure the AdaFit model in the AdaFit: Rethinking Learning-based Normal Estimation on Point Clouds paper on the PCPNet dataset?
RMSE
What metrics were used to measure the DeepFit model in the DeepFit: 3D Surface Fitting via Neural Network Weighted Least Squares paper on the PCPNet dataset?
RMSE
What metrics were used to measure the Iter-Net model in the Deep Iterative Surface Normal Estimation paper on the PCPNet dataset?
RMSE
What metrics were used to measure the Nesti-Net model in the Nesti-Net: Normal Estimation for Unstructured 3D Point Clouds using Convolutional Neural Networks paper on the PCPNet dataset?
RMSE
What metrics were used to measure the DSN model in the On Deep Learning Techniques to Boost Monocular Depth Estimation for Autonomous Navigation paper on the NYU-Depth V2 Surface Normals dataset?
RMSE
What metrics were used to measure the DiT-L (Cascade) model in the DiT: Self-supervised Pre-training for Document Image Transformer paper on the cTDaR dataset?
Weighted Average F1-score
What metrics were used to measure the DiT-B (Cascade) model in the DiT: Self-supervised Pre-training for Document Image Transformer paper on the cTDaR dataset?
Weighted Average F1-score
What metrics were used to measure the CascadeTabNet model in the CascadeTabNet: An approach for end to end table detection and structure recognition from image-based documents paper on the ICDAR2013 dataset?
Avg F1
What metrics were used to measure the CDeC-Net model in the CDeC-Net: Composite Deformable Cascade Network for Table Detection in Document Images paper on the ICDAR2013 dataset?
Avg F1
What metrics were used to measure the TableNet model in the TableNet: Deep Learning model for end-to-end Table detection and Tabular data extraction from Scanned Document Images paper on the ICDAR2013 dataset?
Avg F1
What metrics were used to measure the RetinaNet model in the Table Detection in the Wild: A Novel Diverse Table Detection Dataset and Method paper on the STDW dataset?
IoU, AP
What metrics were used to measure the Selective Search model in the Table Detection in the Wild: A Novel Diverse Table Detection Dataset and Method paper on the STDW dataset?
IoU, AP
What metrics were used to measure the Ensemble multilingual BERT model in the The Inception Team at NSURL-2019 Task 8: Semantic Question Similarity in Arabic paper on the Q2Q Arabic Benchmark dataset?
F1 score
What metrics were used to measure the Tha3aroon model in the Tha3aroon at NSURL-2019 Task 8: Semantic Question Similarity in Arabic paper on the Q2Q Arabic Benchmark dataset?
F1 score
What metrics were used to measure the mBERT model in the Deep Learning Models for Multilingual Hate Speech Detection paper on the Q2Q Arabic Benchmark dataset?
F1 score
What metrics were used to measure the Ours model in the Multi-Source Fusion and Automatic Predictor Selection for Zero-Shot Video Object Segmentation paper on the YouTube dataset?
Average
What metrics were used to measure the XMem (BL30K, MS) model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS-2017 (test-dev) dataset?
Mean Jaccard & F-Measure, Jaccard, F-measure
What metrics were used to measure the XMem model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS-2017 (test-dev) dataset?
Mean Jaccard & F-Measure, Jaccard, F-measure
What metrics were used to measure the XMem (BL30K, MS) model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the XMem model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the BATMAN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the AOT model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the RPCMVOS model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the MobileVOS model in the MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the STCN model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the CFBI+ model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the SST model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the CFBI model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the YouTube-VOS 2019 dataset?
Mean Jaccard & F-Measure, Jaccard (Seen), Jaccard (Unseen), F-Measure (Seen), F-Measure (Unseen)
What metrics were used to measure the ISVOS (BL30K, MS) model in the Look Before You Match: Instance Understanding Matters in Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the XMem (BL30K, MS) model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the BATMAN (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the STCN (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the XMem model in the XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the MobileVOS (val) model in the MobileVOS: Real-Time Video Object Segmentation Contrastive Learning meets Knowledge Distillation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the AOT (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the LCM (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the RPCMVOS (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the KMN (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU
What metrics were used to measure the TransVOS (val) model in the BATMAN: Bilateral Attention Transformer in Motion-Appearance Neighboring Space for Video Object Segmentation paper on the DAVIS 2016 dataset?
J&F, F-Score, Jaccard (Mean), mIoU