prompts | metrics_response |
|---|---|
What metrics were used to measure the vlt5 - baseline model in the PlotQA: Reasoning over Scientific Plots paper on the RealCQA dataset? | 1:1 Accuracy |
What metrics were used to measure the crct - 11th ep FineTune model in the RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic paper on the RealCQA dataset? | 1:1 Accuracy |
What metrics were used to measure the Matcha-chartQA model in the MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering paper on the RealCQA dataset? | 1:1 Accuracy |
What metrics were used to measure the vlt5 - 11th ep FineTune model in the RealCQA: Scientific Chart Question Answering as a Test-bed for First-Order Logic paper on the RealCQA dataset? | 1:1 Accuracy |
What metrics were used to measure the MatCha model in the MatCha: Enhancing Visual Language Pretraining with Math Reasoning and Chart Derendering paper on the PlotQA dataset? | 1:1 Accuracy |
What metrics were used to measure the DePlot+FlanPaLM+Codex (PoT Self-Consistency) model in the DePlot: One-shot visual language reasoning by plot-to-table translation paper on the PlotQA dataset? | 1:1 Accuracy |
What metrics were used to measure the VL-T5-OCR model in the ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning paper on the PlotQA dataset? | 1:1 Accuracy |
What metrics were used to measure the CRCT model in the Classification-Regression for Chart Comprehension paper on the PlotQA dataset? | 1:1 Accuracy |
What metrics were used to measure the VisionTapas-OCR model in the ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning paper on the PlotQA dataset? | 1:1 Accuracy |
What metrics were used to measure the MID model in the Mutual Information Divergence: A Unified Metric for Multimodal Generative Models paper on the Flickr8k-Expert dataset? | Kendall's Tau-c |
What metrics were used to measure the RefCLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Flickr8k-Expert dataset? | Kendall's Tau-c |
What metrics were used to measure the CLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Flickr8k-Expert dataset? | Kendall's Tau-c |
What metrics were used to measure the MID model in the Mutual Information Divergence: A Unified Metric for Multimodal Generative Models paper on the Flickr8k-CF dataset? | Kendall's Tau-b |
What metrics were used to measure the RefCLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Flickr8k-CF dataset? | Kendall's Tau-b |
What metrics were used to measure the CLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Flickr8k-CF dataset? | Kendall's Tau-b |
What metrics were used to measure the MID model in the Mutual Information Divergence: A Unified Metric for Multimodal Generative Models paper on the Pascal-50S dataset? | Mean Accuracy |
What metrics were used to measure the RefCLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Pascal-50S dataset? | Mean Accuracy |
What metrics were used to measure the CLIP-S model in the CLIPScore: A Reference-free Evaluation Metric for Image Captioning paper on the Pascal-50S dataset? | Mean Accuracy |
What metrics were used to measure the Rule-based model in the Chemical identification and indexing in PubMed full-text articles using deep learning and heuristics paper on the BC7 NLM-Chem dataset? | F1-score (strict) |
What metrics were used to measure the Rule-based model in the Chemical detection and indexing in PubMed full text articles using deep learning and rule-based methods paper on the BC7 NLM-Chem dataset? | F1-score (strict) |
What metrics were used to measure the MedFuse (optimal) model in the MedFuse: Multi-modal fusion with clinical time-series data and chest X-ray images paper on the MIMIC-CXR, MIMIC-IV dataset? | AUROC |
What metrics were used to measure the AutoNovel model in the AutoNovel: Automatically Discovering and Learning Novel Visual Categories paper on the cifar10 dataset? | Clustering Accuracy |
What metrics were used to measure the AutoNovel model in the AutoNovel: Automatically Discovering and Learning Novel Visual Categories paper on the SVHN dataset? | Clustering Accuracy |
What metrics were used to measure the AutoNovel model in the AutoNovel: Automatically Discovering and Learning Novel Visual Categories paper on the cifar100 dataset? | Clustering Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13) model in the Triple Generative Adversarial Networks paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13, no aug) model in the Triple Generative Adversarial Networks paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the Dual Student model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the FCE model in the Flow Contrastive Estimation of Energy-Based Models paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the SVHN, 500 Labels dataset? | Accuracy |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the UPS (CNN-13) model in the In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (ResNet-26) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the Dual Student (600) model in the Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the ICT (CNN-13) model in the Interpolation Consistency Training for Semi-Supervised Learning paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the SESEMI SSL (ConvNet) model in the Exploring Self-Supervised Regularization for Supervised and Semi-Supervised Learning paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the Triple-GAN-V2 (CNN-13, no aug) model in the Triple Generative Adversarial Networks paper on the CIFAR-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the FreeMatch model in the FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the RelationMatch model in the RelationMatch: Matching In-batch Relationships for Semi-supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the DebiasPL (w/ FixMatch) model in the Debiased Learning from Naturally Imbalanced Pseudo-Labels paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the PCL model in the Probabilistic Contrastive Learning for Domain Adaptation paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the DP-SSL model in the DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the SelfMatch model in the SelfMatch: Combining Contrastive Self-Supervision and Consistency for Semi-Supervised Learning paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the CoMatch (w. SimCLR) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the UL-Hopfield (ULH) model in the Unsupervised Learning using Pretrained CNN and Associative Memory Bank paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the CIFAR-10, 40 Labels dataset? | Percentage error |
What metrics were used to measure the SimCLR-kmediods-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the EuroSAT, 20 Labels dataset? | Percentage error |
What metrics were used to measure the MutexMatch model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the Mini-ImageNet, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the SemCo (μ=3) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the SemCo (μ=7) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the SVHN, 40 Labels dataset? | Percentage error |
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the SVHN, 40 Labels dataset? | Percentage error |
What metrics were used to measure the MutexMatch (k=0.6C) model in the MutexMatch: Semi-Supervised Learning with Mutex-Based Consistency Regularization paper on the SVHN, 40 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the SVHN, 40 Labels dataset? | Percentage error |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the SVHN, 40 Labels dataset? | Percentage error |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the STL-10, 5000 Labels dataset? | Accuracy |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the Semi-MMDC model in the Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the EnAET model in the EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the MixMatch model in the MixMatch: A Holistic Approach to Semi-Supervised Learning paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the CC-GAN² model in the Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the SimCLR (CoMatch) model in the CoMatch: Semi-supervised Learning with Contrastive Graph Regularization paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the SWWAE model in the Stacked What-Where Auto-encoders paper on the STL-10, 1000 Labels dataset? | Accuracy |
What metrics were used to measure the UL-Hopfield (ULH) model in the Unsupervised Learning using Pretrained CNN and Associative Memory Bank paper on the Caltech-101 dataset? | Accuracy |
What metrics were used to measure the SemiReward model in the SemiReward: A General Reward Model for Semi-supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the USB: A Unified Semi-supervised Learning Benchmark for Classification paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the ShrinkMatch model in the Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the PCL (Flexmatch) model in the Probabilistic Contrastive Learning for Domain Adaptation paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the SimMatch model in the SimMatch: Semi-supervised Learning with Similarity Matching paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the FreeMatch model in the FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the NP-Match model in the NP-Match: When Neural Processes meet Semi-Supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the CCSSL(FixMatch) model in the Class-Aware Contrastive Semi-Supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the FlexMatch model in the FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+DM model in the Harnessing Hard Mixed Samples with Decoupled Regularizer paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the DoubleMatch model in the DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the PCL (Fixmatch) model in the Probabilistic Contrastive Learning for Domain Adaptation paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the DP-SSL model in the DP-SSL: Towards Robust Semi-supervised Learning with A Few Labeled Samples paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the ReMixMatch model in the ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the Dash (RA, WRN-28-8) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the Dash (CTA, WRN-28-8) model in the Dash: Semi-Supervised Learning with Dynamic Thresholding paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch+CR model in the Contrastive Regularization for Semi-Supervised Learning paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the FixMatch (CTA) model in the FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper on the CIFAR-100, 400 Labels dataset? | Percentage error |
What metrics were used to measure the SimCLR-kmediods-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the CIFAR-10, 100 Labels dataset? | Percentage error |
What metrics were used to measure the SimCLR-kmediods-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the Imagenette, 20 Labels dataset? | Percentage error |
What metrics were used to measure the DPT model in the Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels paper on the ImageNet - 2 labeled data per class dataset? | Top 1 Accuracy |
What metrics were used to measure the CCSSL(FixMatch) model in the Class-Aware Contrastive Semi-Supervised Learning paper on the CIFAR-100, 2500 Labels dataset? | Percentage error |
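
Every prompt in this table instantiates one fixed template, so its three fields (model, paper, dataset) can be recovered mechanically alongside the `metrics_response` cell. Below is a minimal Python sketch of one way to parse a single row; the regex and variable names are assumptions inferred from the rows above, not anything shipped with the dataset.

```python
import re

# The prompts all follow one template; this regex (an assumption
# inferred from the rows above) recovers its three fields. Greedy
# groups are safe here because each delimiter phrase occurs once.
PROMPT_RE = re.compile(
    r"What metrics were used to measure the (?P<model>.+) model "
    r"in the (?P<paper>.+) paper on the (?P<dataset>.+) dataset\?"
)

# A sample row copied verbatim from the table above.
row = (
    "What metrics were used to measure the MixMatch model in the "
    "MixMatch: A Holistic Approach to Semi-Supervised Learning paper "
    "on the SVHN, 500 Labels dataset? | Accuracy |"
)

# Split the two table cells on the pipe delimiter, dropping the
# trailing pipe and surrounding whitespace. (Safe for these rows,
# since no prompt or response contains a literal "|".)
prompt, metrics_response = [c.strip() for c in row.strip("| ").split("|")]

m = PROMPT_RE.fullmatch(prompt)
if m:
    print(m.group("model"))    # MixMatch
    print(m.group("paper"))    # MixMatch: A Holistic Approach to Semi-Supervised Learning
    print(m.group("dataset"))  # SVHN, 500 Labels
    print(metrics_response)    # Accuracy
```

The same loop over all rows would yield structured (model, paper, dataset, metric) tuples, e.g. for grouping rows by benchmark or by metric.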