| prompts | metrics_response |
|---|---|
What metrics were used to measure the Polishing Teacher model in the Mind the Gap: Polishing Pseudo labels for Accurate Semi-supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the VC model in the Semi-supervised Object Detection via Virtual Category Learning paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher v2 model in the Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Adaptive Class-Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Soft Teacher + Swin-L(HTC++, multi-scale) model in the End-to-End Semi-Supervised Object Detection with Soft Teacher paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the SSOD with OCL and RUPL model in the Semi-Supervised Object Detection with Object-wise Contrastive Learning and Regression Uncertainty paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the ASTOD model in the Adaptive Self-Training for Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Omni-DETR model in the Omni-DETR: Omni-Supervised Object Detection with Transformers paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the RPL model in the Rethinking Pseudo Labels for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Instant Teaching model in the Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the DETReg model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the STAC model in the A Simple Semi-Supervised Learning Framework for Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Consistent-Teacher model in the Consistent-Teacher: Towards Reducing Inconsistent Pseudo-targets in Semi-supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FRCNN model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the ARSL model in the Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Efficient Teacher model in the Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Adaptive Class-Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher v2 model in the Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FCOS model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the PseCo model in the PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the VC model in the Semi-supervised Object Detection via Virtual Category Learning paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the ASTOD model in the Adaptive Self-Training for Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the RPL model in the Rethinking Pseudo Labels for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Omni-DETR model in the Omni-DETR: Omni-Supervised Object Detection with Transformers paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Instant Teaching model in the Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the DETReg model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the STAC model in the A Simple Semi-Supervised Learning Framework for Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the CSD model in the Consistency-based Semi-supervised Learning for Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 2% labeled data dataset? | mAP |
What metrics were used to measure the IncRes-v2-FTCDW model in the Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets paper on the Kaggle EyePACS dataset? | AUC, Sensitivity, Specificity |
What metrics were used to measure the InceptionV3 Ensemble model in the Reproduction study using public data of: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs paper on the Kaggle EyePACS dataset? | AUC, Sensitivity, Specificity |
What metrics were used to measure the BiLSTM model in the Learning meters of Arabic and English poems with Recurrent Neural Networks: a step forward for language understanding and synthesis paper on the PCD dataset? | Accuracy |
What metrics were used to measure the EfficientNet+U-Net++ model in the Optic Disc, Cup and Fovea Detection from Retinal Images Using U-Net++ with EfficientNet Encoder paper on the REFUGE Challenge dataset? | Dice |
What metrics were used to measure the Segtran (EfficientNet-B4) model in the Medical Image Segmentation Using Squeeze-and-Expansion Transformers paper on the REFUGE Challenge dataset? | Dice |
What metrics were used to measure the EfficientNet+U-Net++ model in the Optic Disc, Cup and Fovea Detection from Retinal Images Using U-Net++ with EfficientNet Encoder paper on the REFUGE Challenge dataset? | IoU |
What metrics were used to measure the EfficientNet+U-Net++ model in the Optic Disc, Cup and Fovea Detection from Retinal Images Using U-Net++ with EfficientNet Encoder paper on the REFUGE Challenge dataset? | IoU |
What metrics were used to measure the HBA-U-Net model in the U-Net with Hierarchical Bottleneck Attention for Landmark Detection in Fundus Images of the Degenerated Retina paper on the IDRiD dataset? | Euclidean Distance (ED) |
What metrics were used to measure the EfficientNet+U-Net++ model in the Optic Disc, Cup and Fovea Detection from Retinal Images Using U-Net++ with EfficientNet Encoder paper on the REFUGE Challenge dataset? | Euclidean Distance (ED) |
What metrics were used to measure the HBA-U-Net model in the U-Net with Hierarchical Bottleneck Attention for Landmark Detection in Fundus Images of the Degenerated Retina paper on the IDRiD dataset? | Euclidean Distance (ED) |
What metrics were used to measure the HBA-U-Net model in the U-Net with Hierarchical Bottleneck Attention for Landmark Detection in Fundus Images of the Degenerated Retina paper on the REFUGE dataset? | Euclidean Distance (ED) |
What metrics were used to measure the HBA-U-Net model in the U-Net with Hierarchical Bottleneck Attention for Landmark Detection in Fundus Images of the Degenerated Retina paper on the ADAM dataset? | Euclidean Distance (ED) |
What metrics were used to measure the Oblique decision tree model in the Evolutionary learning of interpretable decision trees paper on the LunarLander-v2 dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the LunarLander-v2 dataset? | Average Return |
What metrics were used to measure the Orthogonal decision tree model in the Evolutionary learning of interpretable decision trees paper on the Mountain Car dataset? | Average Return |
What metrics were used to measure the Oblique decision tree model in the Evolutionary learning of interpretable decision trees paper on the Mountain Car dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the Walker2d-v2 dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the Ant-v2 dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the HalfCheetah-v2 dataset? | Average Return |
What metrics were used to measure the Oblique decision tree model in the Evolutionary learning of interpretable decision trees paper on the Cart Pole (OpenAI Gym) dataset? | Average Return |
What metrics were used to measure the Orthogonal decision tree model in the Evolutionary learning of interpretable decision trees paper on the Cart Pole (OpenAI Gym) dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the Hopper-v2 dataset? | Average Return |
What metrics were used to measure the AWR model in the Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning paper on the Humanoid-v2 dataset? | Average Return |
What metrics were used to measure the Orthogonal decision tree model in the Evolutionary learning of interpretable decision trees paper on the CartPole-v1 dataset? | Average Return |
What metrics were used to measure the Oblique decision tree model in the Evolutionary learning of interpretable decision trees paper on the CartPole-v1 dataset? | Average Return |
What metrics were used to measure the Nash-MTL model in the Multi-Task Learning as a Bargaining Game paper on the NYUv2 dataset? | Mean IoU |
What metrics were used to measure the LETR model in the Line Segment Detection Using Transformers without Edges paper on the Wireframe dataset? | FH, sAP10, sAP15 |
What metrics were used to measure the Nash-MTL model in the Multi-Task Learning as a Bargaining Game paper on the Cityscapes test dataset? | mIoU |
What metrics were used to measure the MultiObjectiveOptimization model in the Multi-Task Learning as Multi-Objective Optimization paper on the Cityscapes test dataset? | mIoU |
What metrics were used to measure the Nash-MTL model in the Multi-Task Learning as a Bargaining Game paper on the QM9 dataset? | ∆m% |
What metrics were used to measure the IMTL-G model in the Towards Impartial Multi-task Learning paper on the QM9 dataset? | ∆m% |
What metrics were used to measure the CAGrad model in the Conflict-Averse Gradient Descent for Multi-task Learning paper on the QM9 dataset? | ∆m% |
What metrics were used to measure the PCGrad model in the Gradient Surgery for Multi-Task Learning paper on the QM9 dataset? | ∆m% |
What metrics were used to measure the Gumbel-Matrix Routing model in the Flexible Multi-task Networks by Learning Parameter Allocation paper on the OMNIGLOT dataset? | Average Accuracy |
What metrics were used to measure the Mixture-of-Experts model in the Diversity and Depth in Per-Example Routing Models paper on the OMNIGLOT dataset? | Average Accuracy |
What metrics were used to measure the MGDA-UB model in the Multi-Task Learning as Multi-Objective Optimization paper on the CelebA dataset? | Error |
What metrics were used to measure the MCDA model in the Class Overwhelms: Mutual Conditional Blended-Target Domain Adaptation paper on the Office-31 dataset? | Accuracy |
What metrics were used to measure the DCGCT model in the Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation paper on the Office-31 dataset? | Accuracy |
What metrics were used to measure the MT-MTDA model in the Unsupervised Multi-Target Domain Adaptation Through Knowledge Distillation paper on the Office-31 dataset? | Accuracy |
What metrics were used to measure the AMEAN model in the Blending-target Domain Adaptation by Adversarial Meta-Adaptation Networks paper on the Office-31 dataset? | Accuracy |
What metrics were used to measure the RevGrad model in the Unsupervised Domain Adaptation by Backpropagation paper on the Office-31 dataset? | Accuracy |
What metrics were used to measure the MCDA model in the Class Overwhelms: Mutual Conditional Blended-Target Domain Adaptation paper on the DomainNet dataset? | Accuracy |
What metrics were used to measure the DCGCT model in the Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation paper on the DomainNet dataset? | Accuracy |
What metrics were used to measure the MCC model in the Minimum Class Confusion for Versatile Domain Adaptation paper on the DomainNet dataset? | Accuracy |
What metrics were used to measure the DADA model in the Domain Agnostic Learning with Disentangled Representations paper on the DomainNet dataset? | Accuracy |
What metrics were used to measure the STMDA-RetinaNet model in the A Multi Camera Unsupervised Domain Adaptation Pipeline for Object Detection in Cultural Sites through Adversarial Learning and Self-Training paper on the OBJ-MDA dataset? | mAP@0.5 |
What metrics were used to measure the MCDA model in the Class Overwhelms: Mutual Conditional Blended-Target Domain Adaptation paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the DCGCT model in the Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the AMEAN model in the Blending-target Domain Adaptation by Adversarial Meta-Adaptation Networks paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the RevGrad model in the Unsupervised Domain Adaptation by Backpropagation paper on the Office-Home dataset? | Accuracy |
What metrics were used to measure the LED(Q,F) model in the ArgSciChat: A Dataset for Argumentative Dialogues on Scientific Papers paper on the ArgSciChat dataset? | Message-F1, BScore, Mover |
What metrics were used to measure the LED(Q,P,H) model in the ArgSciChat: A Dataset for Argumentative Dialogues on Scientific Papers paper on the ArgSciChat dataset? | Message-F1, BScore, Mover |
What metrics were used to measure the LED(Q,P) model in the ArgSciChat: A Dataset for Argumentative Dialogues on Scientific Papers paper on the ArgSciChat dataset? | Message-F1, BScore, Mover |
What metrics were used to measure the PaCE model in the PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts paper on the MMConv dataset? | BLEU, Comb., Inform, Success |
What metrics were used to measure the SimpleTOD model in the A Simple Language Model for Task-Oriented Dialogue paper on the MMConv dataset? | BLEU, Comb., Inform, Success |
What metrics were used to measure the PaCE model in the PaCE: Unified Multi-modal Dialogue Pre-training with Progressive and Compositional Experts paper on the SIMMC2.0 dataset? | BLEU |
What metrics were used to measure the BART-large model in the Learning to Embed Multi-Modal Contexts for Situated Conversational Agents paper on the SIMMC2.0 dataset? | BLEU |
What metrics were used to measure the BART-base model in the Learning to Embed Multi-Modal Contexts for Situated Conversational Agents paper on the SIMMC2.0 dataset? | BLEU |
What metrics were used to measure the MTN model in the Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue Systems paper on the SIMMC2.0 dataset? | BLEU |
What metrics were used to measure the GPT-2 model in the Language Models are Unsupervised Multitask Learners paper on the SIMMC2.0 dataset? | BLEU |
What metrics were used to measure the ST-SED-SEP model in the Zero-shot Audio Source Separation through Query-based Learning from Weakly-labeled Data paper on the AudioSet dataset? | SDR, SAR, SIR |
What metrics were used to measure the Co-Separation model in the Co-Separating Sounds of Visual Objects paper on the AudioSet dataset? | SDR, SAR, SIR |
What metrics were used to measure the Co-Separation model in the Co-Separating Sounds of Visual Objects paper on the MUSIC (multi-source) dataset? | SAR, SIR |
What metrics were used to measure the MvCLN model in the Partially View-aligned Representation Learning with Noise-robust Contrastive Loss paper on the n-MNIST dataset? | NMI |
What metrics were used to measure the MvCLN model in the Partially View-aligned Representation Learning with Noise-robust Contrastive Loss paper on the Caltech101 dataset? | NMI |
What metrics were used to measure the MvCLN model in the Partially View-aligned Representation Learning with Noise-robust Contrastive Loss paper on the Reuters En-Fr dataset? | NMI |
What metrics were used to measure the MvCLN model in the Partially View-aligned Representation Learning with Noise-robust Contrastive Loss paper on the Scene-15 dataset? | NMI |
What metrics were used to measure the 2DCNN+TRN model in the Win-Fail Action Recognition paper on the Win-Fail Action Understanding dataset? | 2-Class Accuracy |