| prompts | metrics_response |
|---|---|
What metrics were used to measure the TFA(w/cos) model in the Frustratingly Simple Few-Shot Object Detection paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the TFA(w/fc) model in the Frustratingly Simple Few-Shot Object Detection paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the MPSR model in the Multi-Scale Positive Sample Refinement for Few-Shot Object Detection paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the Meta R-CNN model in the Meta R-CNN: Towards General Solver for Instance-Level Low-Shot Learning paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the MetaDet model in the Meta-Learning to Detect Rare Objects paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the MetaYOLO model in the Few-shot Object Detection via Feature Reweighting paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the LSTD (YOLO) model in the LSTD: A Low-Shot Transfer Detector for Object Detection paper on the MS-COCO (10-shot) dataset? | AP |
What metrics were used to measure the DE-ViT model in the Detect Every Thing with Few Examples paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the RISF (SWIN-Large) model in the Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the imTED+ViT-B model in the Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the DETReg-ft-full DDETR model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the hANMCL model in the Hierarchical Attention Network for Few-Shot Object Detection via Meta-Contrastive Learning paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the RISF (Resnet-101) model in the Re-Scoring Using Image-Language Similarity for Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the Meta-DETR (Multi-Scale Feature) model in the Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the DCFS model in the Decoupling Classifier for Boosting Few-shot Object Detection and Instance Segmentation paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the DeFRCN model in the DeFRCN: Decoupled Faster R-CNN for Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the Meta-DETR (Single-Scale Feature) model in the Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the imTED+ViT-S model in the Integrally Migrating Pre-trained Transformer Encoder-decoders for Visual Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the FsDetView + PSP model in the Few-Shot Object Detection by Attending to Per-Sample-Prototype paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the FSCE model in the FSCE: Few-Shot Object Detection via Contrastive Proposal Encoding paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the FsDetView model in the Few-Shot Object Detection and Viewpoint Estimation for Objects in the Wild paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the SSR-FSD model in the Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the MPSR model in the Multi-Scale Positive Sample Refinement for Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the TFA w/ cos model in the Frustratingly Simple Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the TFA w/ fc model in the Frustratingly Simple Few-Shot Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the Meta R-CNN model in the Meta R-CNN : Towards General Solver for Instance-level Few-shot Learning paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the MetaDet model in the Meta-Learning to Detect Rare Objects paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the FeatReweight model in the Few-shot Object Detection via Feature Reweighting paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the LSTD (YOLO) model in the LSTD: A Low-Shot Transfer Detector for Object Detection paper on the MS-COCO (30-shot) dataset? | AP |
What metrics were used to measure the TestConsistency model in the paper on the LVIS v1.0 test-dev dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the ps4 model in the paper on the LVIS v1.0 test-dev dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the Asynchronous SSL model in the paper on the LVIS v1.0 test-dev dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the CenterNet2 model in the paper on the LVIS v1.0 test-dev dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the Organizer Provided Baseline model in the paper on the LVIS v1.0 test-dev dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the DETReg (ours) model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the COCO 2017 dataset? | AP |
What metrics were used to measure the best_single_model_val model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the htc model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the Organizer Provided Baseline model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the null model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the Forest R-CNN model in the Forest R-CNN: Large-Vocabulary Long-Tailed Object Detection and Instance Segmentation paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the person model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the test balloon 6 model in the paper on the LVIS v1.0 val dataset? | AP, AP50, AP75, APr, APc, APf |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VeRi-Wild Large dataset? | Rank1, Rank5, mAP |
What metrics were used to measure the Baseline Model model in the paper on the VRAI test dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the Juice Lee model in the paper on the VRAI test dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the nanggg model in the paper on the VRAI test dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the MBR-4B (without RK) model in the Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification paper on the VeRi-Wild Small dataset? | Rank1, Rank5, mAP |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VeRi-Wild Small dataset? | Rank1, Rank5, mAP |
What metrics were used to measure the Recall@k Surrogate loss (ViT-B/16) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Recall@k Surrogate loss (ResNet-50) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the PNP Loss model in the Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the RPTM model in the Relation Preserving Triplet Mining for Stabilising the Triplet Loss in Re-identification Systems paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Smooth-AP model in the Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the vehiclenet model in the VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the MSINet (2.3M w/o RK) model in the MSINet: Twins Contrastive Search of Multi-Scale Interaction for Object ReID paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the QD-DLF model in the Vehicle Re-identification Using Quadruple Directional Deep Learning Features paper on the VehicleID Large dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Baseline Model model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the abu_0916 model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the Juice Lee model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the nanggg model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the asdf model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the 92 model in the paper on the VRAI test-dev dataset? | MAP, CMC1, CMC5, CMC10 |
What metrics were used to measure the vehiclenet model in the VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification paper on the VeRi dataset? | mAP, Rank-1 |
What metrics were used to measure the VKD (ResVKD-50) model in the Robust Re-Identification by Multiple Views Knowledge Distillation paper on the VeRi dataset? | mAP, Rank-1 |
What metrics were used to measure the A Strong Baseline model in the A Strong Baseline for Vehicle Re-Identification paper on the CityFlow dataset? | mAP |
What metrics were used to measure the MBR4B-LAI (w/ RK) model in the Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the RPTM model in the Relation Preserving Triplet Mining for Stabilising the Triplet Loss in Re-identification Systems paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the A Strong Baseline model in the A Strong Baseline for Vehicle Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the MBR4B-LAI (without re-ranking) model in the Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the MBR4B (without re-ranking) model in the Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the CLIP-ReID (without re-ranking) model in the CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the VehicleNet model in the VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the TransReID model in the TransReID: Transformer-based Object Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the GiT model in the GiT: Graph Interactive Transformer for Vehicle Re-identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the HPGN model in the Exploring Spatial Significance via Hybrid Pyramidal Graph Network for Vehicle Re-identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the MSINet (2.3M w/o RK) model in the MSINet: Twins Contrastive Search of Multi-Scale Interaction for Object ReID paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the QD-DLF model in the Vehicle Re-identification Using Quadruple Directional Deep Learning Features paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the Cluster Contrast model in the Cluster Contrast for Unsupervised Person Re-Identification paper on the VeRi-776 dataset? | mAP, Rank-1, Rank1, Rank5, Rank-10 |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VeRi-Wild Medium dataset? | Rank1, Rank5, mAP |
What metrics were used to measure the VehicleNet model in the VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification paper on the VehicleID dataset? | Rank1 |
What metrics were used to measure the Recall@k Surrogate loss (ViT-B/16) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Recall@k Surrogate loss (ResNet-50) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the PNP Loss model in the Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the RPTM model in the Relation Preserving Triplet Mining for Stabilising the Triplet Loss in Re-identification Systems paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Smooth-AP model in the Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the MBR-4B (without RK) model in the Strength in Diversity: Multi-Branch Representation Learning for Vehicle Re-Identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the ANet model in the AttributeNet: Attribute Enhanced Vehicle Re-Identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the CLIP-ReID (without re-ranking) model in the CLIP-ReID: Exploiting Vision-Language Model for Image Re-Identification without Concrete Text Labels paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the vehiclenet model in the VehicleNet: Learning Robust Feature Representation for Vehicle Re-identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the CAL model in the Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the GiT model in the GiT: Graph Interactive Transformer for Vehicle Re-identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the HPGN model in the Exploring Spatial Significance via Hybrid Pyramidal Graph Network for Vehicle Re-identification paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the QD-DLF model in the Vehicle Re-identification Using Quadruple Directional Deep Learning Features paper on the VehicleID Small dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Recall@k Surrogate loss (ViT-B/16) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the Recall@k Surrogate loss (ResNet-50) model in the Recall@k Surrogate Loss with Large Batches and Similarity Mixup paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |
What metrics were used to measure the PNP Loss model in the Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough paper on the VehicleID Medium dataset? | Rank-1, Rank-5, Rank1, Rank5, mAP |