Dataset columns: prompts (string, 81-413 chars); metrics_response (string, 0-371 chars)
What metrics were used to measure the Pyramid Adversarial Training Improves ViT (Im21k) model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-Sketch dataset?
Top-1 accuracy
What metrics were used to measure the SEER (RegNet10B) model in the Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision paper on the ImageNet-Sketch dataset?
Top-1 accuracy
What metrics were used to measure the DrViT model in the Discrete Representations Strengthen Vision Transformer Robustness paper on the ImageNet-Sketch dataset?
Top-1 accuracy
What metrics were used to measure the Pyramid Adversarial Training Improves ViT model in the Pyramid Adversarial Training Improves ViT Performance paper on the ImageNet-Sketch dataset?
Top-1 accuracy
What metrics were used to measure the Sequencer2D-L model in the Sequencer: Deep LSTM for Image Classification paper on the ImageNet-Sketch dataset?
Top-1 accuracy
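For reference, the Top-1 accuracy reported throughout these entries (and the plain "Accuracy" figures below) is the fraction of samples whose highest-scoring class prediction matches the ground-truth label. A minimal NumPy sketch; the function and variable names are illustrative, not drawn from any of the cited papers:

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose argmax prediction equals the label.

    logits: (N, num_classes) array of class scores.
    labels: (N,) array of integer class indices.
    """
    preds = logits.argmax(axis=1)
    return float((preds == labels).mean())

# Toy check: 2 of 3 predictions are correct -> 0.667
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([1, 0, 0])
print(top1_accuracy(logits, labels))
```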
What metrics were used to measure the MatchDG model in the Domain Generalization using Causal Matching paper on the Rotated Fashion-MNIST dataset?
Accuracy
What metrics were used to measure the CSD model in the Efficient Domain Generalization via Common-Specific Low-Rank Decomposition paper on the Rotated Fashion-MNIST dataset?
Accuracy
What metrics were used to measure the MAE+DAT (ViT-H) model in the Enhance the Visual Representation via Discrete Adversarial Training paper on the Stylized-ImageNet dataset?
Top 1 Accuracy
What metrics were used to measure the VOLO-D5+HAT model in the Improving Vision Transformers by Revisiting High-frequency Components paper on the Stylized-ImageNet dataset?
Top 1 Accuracy
What metrics were used to measure the DiscreteViT model in the Discrete Representations Strengthen Vision Transformer Robustness paper on the Stylized-ImageNet dataset?
Top 1 Accuracy
What metrics were used to measure the IJEEL-KVL model in the Incorporating Joint Embeddings into Goal-Oriented Dialogues with Multi-Task Learning paper on the Kvret dataset?
BLEU, Embedding Average, Greedy Matching, Vector Extrema
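The embedding-based response metrics above (Embedding Average, Greedy Matching, Vector Extrema) all reduce to cosine similarity over word vectors; BLEU is normally taken from a standard implementation such as NLTK or sacreBLEU. A minimal sketch of the three embedding metrics, assuming each sentence has already been mapped to a (tokens x dim) array of word embeddings (all names illustrative):

```python
import numpy as np

def _cos(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def embedding_average(cand_vecs: np.ndarray, ref_vecs: np.ndarray) -> float:
    # Cosine similarity between the mean word vectors of the two sentences.
    return _cos(cand_vecs.mean(axis=0), ref_vecs.mean(axis=0))

def greedy_matching(cand_vecs: np.ndarray, ref_vecs: np.ndarray) -> float:
    # Match each token to its most similar token on the other side,
    # average the best scores, and symmetrize over the two directions.
    sims = np.array([[_cos(c, r) for r in ref_vecs] for c in cand_vecs])
    return 0.5 * (sims.max(axis=1).mean() + sims.max(axis=0).mean())

def vector_extrema(cand_vecs: np.ndarray, ref_vecs: np.ndarray) -> float:
    # Keep, per dimension, the value with the largest magnitude across
    # tokens, then compare the two extrema vectors by cosine similarity.
    def extrema(v: np.ndarray) -> np.ndarray:
        idx = np.abs(v).argmax(axis=0)
        return v[idx, np.arange(v.shape[1])]
    return _cos(extrema(cand_vecs), extrema(ref_vecs))
```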
What metrics were used to measure the SMPLify-X model in the Multi-Modal Emotion recognition on IEMOCAP Dataset using Deep Learning paper on the Expressive hands and faces (EHF) dataset?
v2v error
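The v2v (vertex-to-vertex) error above is conventionally the mean Euclidean distance between corresponding vertices of the predicted and ground-truth body meshes after alignment. A minimal sketch (names illustrative):

```python
import numpy as np

def v2v_error(pred_verts: np.ndarray, gt_verts: np.ndarray) -> float:
    """Mean per-vertex Euclidean distance between two aligned meshes.

    pred_verts, gt_verts: (V, 3) arrays of corresponding vertex positions.
    """
    return float(np.linalg.norm(pred_verts - gt_verts, axis=1).mean())
```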
What metrics were used to measure the Audio + Text (Stage III) model in the HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition paper on the MELD dataset?
F1
What metrics were used to measure the COGMEN model in the COGMEN: COntextualized GNN based Multimodal Emotion recognitioN paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the PATHOSnet v2 (English) model in the Combining deep and unsupervised features for multilingual speech emotion recognition paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the Self-attention weight correction (A+T) model in the Speech Emotion Recognition Based on Self-Attention Weight Correction for Acoustic and Text Features paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the CHFusion (A+T+V) model in the Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the CHFusion (A+T) model in the Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the CHFusion (T+V) model in the Multimodal Sentiment Analysis using Hierarchical Fusion with Context Modeling paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the Audio + Text (Stage III) model in the HCAM -- Hierarchical Cross Attention Model for Multi-modal Emotion Recognition paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
What metrics were used to measure the bc-LSTM model in the Context-Dependent Sentiment Analysis in User-Generated Videos paper on the IEMOCAP dataset?
F1, Unweighted Accuracy (UA), Weighted Accuracy (WA)
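In the emotion-recognition entries above, Weighted Accuracy (WA) is usually overall accuracy, Unweighted Accuracy (UA) the mean of per-class recalls, and F1 most often the class-frequency-weighted F1, though conventions differ between papers. A scikit-learn sketch under those assumptions (the toy labels are illustrative):

```python
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]

wa = accuracy_score(y_true, y_pred)                # WA: overall hit rate
ua = balanced_accuracy_score(y_true, y_pred)       # UA: mean per-class recall
f1 = f1_score(y_true, y_pred, average="weighted")  # class-frequency-weighted F1

print(f"WA={wa:.3f}  UA={ua:.3f}  F1={f1:.3f}")
```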
What metrics were used to measure the Flor model in the KOHTD: Kazakh Offline Handwritten Text Dataset paper on the KOHTD dataset?
CER
What metrics were used to measure the Puigcerver model in the KOHTD: Kazakh Offline Handwritten Text Dataset paper on the KOHTD dataset?
CER
What metrics were used to measure the Abdallah model in the KOHTD: Kazakh Offline Handwritten Text Dataset paper on the KOHTD dataset?
CER
What metrics were used to measure the Bluche model in the KOHTD: Kazakh Offline Handwritten Text Dataset paper on the KOHTD dataset?
CER
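CER (Character Error Rate), used for all the KOHTD entries, is the character-level Levenshtein distance between hypothesis and reference divided by the reference length. A self-contained sketch:

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance normalized by reference length."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))  # distances against the empty reference prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return prev[n] / max(m, 1)

print(cer("kazakh", "kozakh"))  # one substitution over 6 chars -> ~0.167
```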
What metrics were used to measure the AKHCRNet model in the AKHCRNet: Bengali Handwritten Character Recognition Using Deep Learning paper on the BanglaLekha Isolated dataset?
Accuracy, Cross Entropy Loss, Epochs
What metrics were used to measure the KHCR model in the Kurdish Handwritten Character Recognition using Deep Learning Techniques paper on the "An extensive dataset of handwritten central Kurdish isolated characters" dataset?
1:1 Accuracy
What metrics were used to measure the MiVOLO-D1 model in the MiVOLO: Multi-input Transformer for Age and Gender Estimation paper on the LAGENDA gender dataset?
Accuracy
What metrics were used to measure the MiVOLO-D1 model in the MiVOLO: Multi-input Transformer for Age and Gender Estimation paper on the LAGENDA age dataset?
CS@5, MAE
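For the age-estimation entry, MAE is the mean absolute error in years and CS@5 is the cumulative score at a 5-year threshold, i.e. the share of predictions within 5 years of the true age. A minimal sketch (the toy ages are illustrative):

```python
import numpy as np

def age_metrics(pred_ages, true_ages, threshold: int = 5):
    err = np.abs(np.asarray(pred_ages) - np.asarray(true_ages))
    mae = float(err.mean())
    cs = float((err <= threshold).mean())  # CS@5: fraction within 5 years
    return mae, cs

mae, cs5 = age_metrics([23, 31, 58], [25, 30, 50])
print(f"MAE={mae:.2f}  CS@5={cs5:.2%}")  # MAE=3.67  CS@5=66.67%
```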
What metrics were used to measure the Relational model in the Learning from Label Relationships in Human Affect paper on the AMIGOS dataset?
CCC (Arousal), CCC (Valence), PCC (Arousal), PCC (Valence)
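PCC here is the Pearson correlation between predicted and ground-truth arousal/valence values; CCC (Lin's concordance correlation coefficient) additionally penalizes shifts in mean and scale. A minimal sketch:

```python
import numpy as np

def pcc(x, y) -> float:
    # Pearson correlation coefficient.
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

def ccc(x, y) -> float:
    # Concordance correlation: 2*cov / (var_x + var_y + (mean_x - mean_y)^2).
    x, y = np.asarray(x, float), np.asarray(y, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

x, y = [0.1, 0.4, 0.6, 0.9], [0.2, 0.4, 0.5, 1.0]
print(f"PCC={pcc(x, y):.3f}  CCC={ccc(x, y):.3f}")
```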
What metrics were used to measure the GVP-large model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.3 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the ESM-IF model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.3 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the Knowledge-Design model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the PiFold model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the ProteinMPNN model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the GVP model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the GCA model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the AlphaDesign model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the StructGNN model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
What metrics were used to measure the GraphTrans model in the Knowledge-Design: Pushing the Limit of Protein Design via Knowledge Refinement paper on the CATH 4.2 dataset?
Perplexity, Sequence Recovery %(All)
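In these protein-design entries, perplexity is the exponential of the mean negative log-likelihood the model assigns to the native residues, and sequence recovery is the fraction of designed positions that reproduce the native amino acid. A minimal sketch (names illustrative):

```python
import numpy as np

def perplexity(native_log_probs: np.ndarray) -> float:
    """exp of the mean negative log-likelihood of the native residues.

    native_log_probs: (L,) log-probabilities the model assigns to the
    native amino acid at each of the L positions.
    """
    return float(np.exp(-native_log_probs.mean()))

def sequence_recovery(designed: str, native: str) -> float:
    """Fraction of positions where the designed residue matches the native."""
    assert len(designed) == len(native)
    return sum(d == n for d, n in zip(designed, native)) / len(native)

print(sequence_recovery("MKTAY", "MKTVY"))  # 4/5 = 0.8
```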
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the PaenSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the CASPSimSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 120B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 30B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 6.7B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 1.3B model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset?
ROUGE-L
What metrics were used to measure the GAL 125M model in the Galactica: A Large Language Model for Science paper on the UniProtSeq dataset?
ROUGE-L
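ROUGE-L, used for all the Galactica protein-description entries above, scores the longest common subsequence (LCS) shared by candidate and reference. A sketch of the F-measure variant over whitespace tokens, which simplifies the tokenization used by standard implementations:

```python
def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1: LCS-based precision/recall over whitespace tokens."""
    c, r = candidate.split(), reference.split()
    # Longest common subsequence via dynamic programming.
    dp = [[0] * (len(r) + 1) for _ in range(len(c) + 1)]
    for i, ct in enumerate(c, 1):
        for j, rt in enumerate(r, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ct == rt else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[len(c)][len(r)]
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

print(rouge_l("the protein binds atp", "this protein binds atp strongly"))  # ~0.667
```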
What metrics were used to measure the NAPReg model in the NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings paper on the MSCOCO-1k dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the Dual-path CNN model in the Dual-Path Convolutional Image-Text Embeddings with Instance Loss paper on the MSCOCO-1k dataset?
Image-to-text R@1, Text-to-image R@1
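The Recall@K figures in these retrieval entries are the fraction of queries whose ground-truth match appears among the top-K ranked gallery items. A minimal sketch assuming one correct match per query on the diagonal of a similarity matrix (real benchmarks such as MSCOCO pair each image with several captions, so the indexing is slightly more involved):

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int = 1) -> float:
    """sim[i, j]: similarity of query i to gallery item j; the correct
    match for query i is assumed to be gallery item i."""
    ranks = (-sim).argsort(axis=1)  # gallery indices, best match first
    hits = (ranks[:, :k] == np.arange(len(sim))[:, None]).any(axis=1)
    return float(hits.mean())

sim = np.array([[0.9, 0.2, 0.1],
                [0.3, 0.1, 0.8],
                [0.2, 0.7, 0.6]])
print(recall_at_k(sim, k=1))    # image-to-text R@1 on a toy 3x3 matrix
print(recall_at_k(sim.T, k=1))  # text-to-image R@1 uses the transpose
```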
What metrics were used to measure the VLPCook (R1M+) model in the Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the VLPCook model in the Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the T-Food (CLIP) model in the Transformer Decoders with MultiModal Regularization for Cross-Modal Food Retrieval paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the T-Food model in the Transformer Decoders with MultiModal Regularization for Cross-Modal Food Retrieval paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the X-MRS model in the Cross-Modal Retrieval and Synthesis (X-MRS): Closing the Modality Gap in Shared Representation Learning paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the H-T model in the Revamping Cross-Modal Recipe Retrieval with Hierarchical Transformers and Self-supervised Learning paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the SCAN model in the Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the ACME model in the Learning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the AdaMine model in the Cross-Modal Retrieval in the Cooking Context: Learning Semantic Text-Image Embeddings paper on the Recipe1M dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the NAPReg model in the NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings paper on the MS-COCO-2014 dataset?
Text-to-image R@1
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the OmniVL (14M) model in the OmniVL:One Foundation Model for Image-Language and Video-Language Tasks paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the ERNIE-ViL 2.0 model in the ERNIE-ViL 2.0: Multi-view Contrastive Learning for Image-Text Pre-training paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the Aurora (ours, r=128) model in the Parameter-efficient Tuning of Large-scale Multimodal Foundation Model paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the VSE-Gradient model in the Dissecting Deep Metric Learning Losses for Image-Text Retrieval paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the ALIGN model in the Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the ViSTA model in the ViSTA: Vision and Scene Text Aggregation for Cross-Modal Retrieval paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the IAIS model in the Learning Relation Alignment for Calibrated Cross-modal Retrieval paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the ViLT-B/32 model in the ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the RCAR model in the Plug-and-Play Regulators for Image-Text Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the NAPReg model in the NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the SGRAF model in the Similarity Reasoning and Filtration for Image-Text Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the GSMN model in the Graph Structured Network for Image-Text Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the Pearl model in the paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the IMRAM model in the IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the SCAN model in the Stacked Cross Attention for Image-Text Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the Dual-Path (ResNet) model in the Dual-Path Convolutional Image-Text Embeddings with Instance Loss paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the SCO (ResNet) model in the Learning Semantic Concepts and Order for Image and Sentence Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the VSE++ (ResNet) model in the VSE++: Improving Visual-Semantic Embeddings with Hard Negatives paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the CMPL (ResNet) model in the Deep Cross-Modal Projection Learning for Image-Text Matching paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the Flickr30k dataset?
Image-to-text R@1, Image-to-text R@5, Image-to-text R@10, Text-to-image R@1, Text-to-image R@5, Text-to-image R@10
What metrics were used to measure the VLPCook model in the Vision and Structured-Language Pretraining for Cross-Modal Food Retrieval paper on the Recipe1M+ dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the Marin et al. model in the Recipe1M+: A Dataset for Learning Cross-Modal Embeddings for Cooking Recipes and Food Images paper on the Recipe1M+ dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the NAPReg model in the NAPReg: Nouns As Proxies Regularization for Semantically Aware Cross-Modal Embeddings paper on the Flickr-8k dataset?
Image-to-text R@1, Text-to-image R@1
What metrics were used to measure the OURS-COMBINED-VAL model in the Emphasizing Complementary Samples for Non-literal Cross-modal Retrieval paper on the COCO 2014 dataset?
Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10
What metrics were used to measure the VAST model in the VAST: A Vision-Audio-Subtitle-Text Omni-Modality Foundation Model and Dataset paper on the COCO 2014 dataset?
Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the COCO 2014 dataset?
Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the COCO 2014 dataset?
Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the COCO 2014 dataset?
Text-to-image R@1, Text-to-image R@5, Text-to-image R@10, Image-to-text R@1, Image-to-text R@5, Image-to-text R@10