| prompts | metrics_response |
|---|---|
What metrics were used to measure the Transformer + ASR Pretrain model in the NeurST: Neural Speech Translation Toolkit paper on the MuST-C EN->FR dataset? | Case-sensitive sacreBLEU |
What metrics were used to measure the DAL model in the Dynamic Anchor Learning for Arbitrary-Oriented Object Detection paper on the ICDAR2015 dataset? | F-Measure |
What metrics were used to measure the Multi-hop Dense Passage Retriever (MDR) model in the Reasoning over Public and Private Data in Retrieval-Based Systems paper on the ConcurrentQA dataset? | Answer F1 |
What metrics were used to measure the Beam Retrieval model in the Beam Retrieval: General End-to-End Retrieval for Multi-Hop Question Answering paper on the MuSiQue-Ans dataset? | An, Sp |
What metrics were used to measure the Lip2Wav model in the Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis paper on the GRID corpus (mixed-speech) dataset? | WER |
What metrics were used to measure the Lip2Wav model in the Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis paper on the LRW dataset? | WER |
What metrics were used to measure the Lip2Wav model in the Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis paper on the TCD-TIMIT corpus (mixed-speech) dataset? | WER |
What metrics were used to measure the CORe model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the BioBERT Base model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the BERT Base model in the Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration paper on the Clinical Admission Notes from MIMIC-III dataset? | AUROC |
What metrics were used to measure the MDD-Eval model in the MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the Lin-Reg (all) model in the Proxy Indicators for the Quality of Open-domain Dialogues paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - DR (x = c) model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - MLM model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - DR (x = f) model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-TopicalChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the Lin-Reg (all) model in the Proxy Indicators for the Quality of Open-domain Dialogues paper on the USR-PersonaChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - DR (x = c) model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-PersonaChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-PersonaChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - MLM model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-PersonaChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the USR - DR (x = f) model in the USR: An Unsupervised and Reference Free Evaluation Metric for Dialog Generation paper on the USR-PersonaChat dataset? | Spearman Correlation, Pearson Correlation |
What metrics were used to measure the Text2Mesh model in the Text2Mesh: Text-Driven Neural Stylization for Meshes paper on the Meshes dataset? | Mean Opinion Score (Q1: Overall), Mean Opinion Score (Q2: Content), Mean Opinion Score (Q3: Style) |
What metrics were used to measure the VQGAN model in the Text2Mesh: Text-Driven Neural Stylization for Meshes paper on the Meshes dataset? | Mean Opinion Score (Q1: Overall), Mean Opinion Score (Q2: Content), Mean Opinion Score (Q3: Style) |
What metrics were used to measure the Trove model in the Ontology-driven weak supervision for clinical entity classification in electronic health records paper on the THYME-2016 dataset? | F1 |
What metrics were used to measure the Trove model in the Ontology-driven weak supervision for clinical entity classification in electronic health records paper on the ShARe/CLEF 2014: Task 2 Disorders dataset? | F1 |
What metrics were used to measure the VGGIN-Net model in the VGGIN-Net: Deep Transfer Network for Imbalanced Breast Cancer Dataset paper on the BreakHis dataset? | Accuracy (%), 1:1 Accuracy, Accuracy (Inter-Patient) |
What metrics were used to measure the WaveMixLite-224/10 model in the Magnification Invariant Medical Image Analysis: A Comparison of Convolutional Networks, Vision Transformers, and Token Mixers paper on the BreakHis dataset? | Accuracy (%), 1:1 Accuracy, Accuracy (Inter-Patient) |
What metrics were used to measure the EfficientNet-b2 model in the Magnification Prior: A Self-Supervised Method for Learning Representations on Breast Cancer Histopathological Images paper on the BreakHis dataset? | Accuracy (%), 1:1 Accuracy, Accuracy (Inter-Patient) |
What metrics were used to measure the Semi-DETR model in the Semi-DETR: Semi-Supervised Object Detection with Detection Transformers paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Consistent-Teacher model in the Consistent-Teacher: Towards Reducing Inconsistent Pseudo-targets in Semi-supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Dense Teacher model in the Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the PseCo model in the PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Soft Teacher model in the End-to-End Semi-Supervised Object Detection with Soft Teacher paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Revisiting Class Imbalance model in the Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the RPL model in the Rethinking Pseudo Labels for Semi-Supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Adaptive Class-Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the UNBIASED TEACHER model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Instant-Teaching model in the Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the STAC model in the A Simple Semi-Supervised Learning Framework for Object Detection paper on the COCO 100% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher v2 model in the Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors paper on the COCO 0.5% labeled data dataset? | mAP |
What metrics were used to measure the Adaptive Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 0.5% labeled data dataset? | mAP |
What metrics were used to measure the VC model in the Semi-supervised Object Detection via Virtual Category Learning paper on the COCO 0.5% labeled data dataset? | mAP |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 0.5% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 0.5% labeled data dataset? | mAP |
What metrics were used to measure the Semi-DETR model in the Semi-DETR: Semi-Supervised Object Detection with Detection Transformers paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Consistent-Teacher model in the Consistent-Teacher: Towards Reducing Inconsistent Pseudo-targets in Semi-supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the ARSL model in the Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Efficient Teacher model in the Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Revisiting Class Imbalance model in the Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Dense Teacher model in the Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the MixTeacher-FCOS model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the MixTeacher-FRCNN model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the PseCo model in the PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Polishing Teacher model in the Mind the Gap: Polishing Pseudo labels for Accurate Semi-supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Unbiased Teacher v2 model in the Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Adaptive Class-Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the VC model in the Semi-supervised Object Detection via Virtual Category Learning paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the ASTOD model in the Adaptive Self-Training for Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Omni-DETR model in the Omni-DETR: Omni-Supervised Object Detection with Transformers paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Soft Teacher model in the End-to-End Semi-Supervised Object Detection with Soft Teacher paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the SSOD with OCL and RUPL model in the Semi-Supervised Object Detection with Object-wise Contrastive Learning and Regression Uncertainty paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the RPL model in the Rethinking Pseudo Labels for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Il-net (resnet-50) model in the Improving Localization for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Humble teacher model in the Humble Teachers Teach Better Students for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Unbiased Teacher model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Instant Teaching model in the Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the DETReg model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the STAC model in the A Simple Semi-Supervised Learning Framework for Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Semi-DETR model in the Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection paper on the COCO 10% labeled data dataset? | mAP, detector |
What metrics were used to measure the Semi-DETR model in the Semi-DETR: Semi-Supervised Object Detection with Detection Transformers paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Adaptive Class-Rebalancing model in the Semi-Supervised Object Detection with Adaptive Class-Rebalancing Self-Training paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher v2 model in the Unbiased Teacher v2: Semi-supervised Object Detection for Anchor-free and Anchor-based Detectors paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Consistent-Teacher model in the Consistent-Teacher: Towards Reducing Inconsistent Pseudo-targets in Semi-supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the ARSL model in the Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FRCNN model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the VC model in the Semi-supervised Object Detection via Virtual Category Learning paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FCOS model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Efficient Teacher model in the Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Polishing Teacher model in the Mind the Gap: Polishing Pseudo labels for Accurate Semi-supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the PseCo model in the PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the MUM model in the MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the SSOD with OCL and RUPL model in the Semi-Supervised Object Detection with Object-wise Contrastive Learning and Regression Uncertainty paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Unbiased Teacher model in the Unbiased Teacher for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Soft Teacher + Swin-L(HTC++, multi-scale) model in the End-to-End Semi-Supervised Object Detection with Soft Teacher paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the ASTOD model in the Adaptive Self-Training for Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the RPL model in the Rethinking Pseudo Labels for Semi-Supervised Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Omni-DETR model in the Omni-DETR: Omni-Supervised Object Detection with Transformers paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Instant Teaching model in the Instant-Teaching: An End-to-End Semi-Supervised Object Detection Framework paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the DETReg model in the DETReg: Unsupervised Pretraining with Region Priors for Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the STAC model in the A Simple Semi-Supervised Learning Framework for Object Detection paper on the COCO 1% labeled data dataset? | mAP |
What metrics were used to measure the Semi-DETR model in the Semi-DETR: Semi-Supervised Object Detection with Detection Transformers paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the ARSL model in the Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Efficient Teacher model in the Efficient Teacher: Semi-Supervised Object Detection for YOLOv5 paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FRCNN model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the MixTeacher-FCOS model in the MixTeacher: Mining Promising Labels with Mixed Scale Teacher for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Dense Teacher model in the Dense Teacher: Dense Pseudo-Labels for Semi-supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the PseCo model in the PseCo: Pseudo Labeling and Consistency Training for Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
What metrics were used to measure the Revisiting Class Imbalance model in the Revisiting Class Imbalance for End-to-end Semi-Supervised Object Detection paper on the COCO 5% labeled data dataset? | mAP |
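Rows in this table follow a fixed shape: a natural-language prompt, a ` | ` separator, and a comma-separated metrics string. A minimal sketch of turning such rows into (prompt, metrics) pairs is shown below; `parse_rows` is a hypothetical helper, not part of the dataset itself, and it assumes the last ` | ` on a line always separates prompt from metrics, as in the rows above.

```python
def parse_rows(lines):
    """Split pipe-delimited table rows into (prompt, metrics) tuples."""
    pairs = []
    for line in lines:
        # Drop surrounding whitespace and any leading/trailing pipes.
        line = line.strip().strip("|").strip()
        # Skip blank lines and the |---|---| header separator.
        if not line or set(line) <= {"-", "|", " "}:
            continue
        # The last " | " separates the prompt from the metrics cell.
        prompt, _, metrics = line.rpartition(" | ")
        pairs.append((prompt.strip(), metrics.strip()))
    return pairs

rows = [
    "|---|---|",
    "What metrics were used to measure the STAC model ... ? | mAP",
]
print(parse_rows(rows))  # [('What metrics were used to measure the STAC model ... ?', 'mAP')]
```

Splitting on the *last* ` | ` (rather than the first) keeps prompts intact even if a paper title were to contain a pipe character.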