prompts (string, length 81–413)
metrics_response (string, length 0–371)
What metrics were used to measure the LLaMA 7B model in the LLaMA: Open and Efficient Foundation Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the Text-davinci-002-175B (zero-shot) model in the Large Language Models are Zero-Shot Reasoners paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GPT-Neo-125M + Self-Sampling model in the Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the PaLM 8B (8-shot) model in the Solving Quantitative Reasoning Problems with Language Models paper on the GSM8K dataset?
Accuracy, Parameters (Billion)
What metrics were used to measure the GemNet-OC model in the GemNet-OC: Developing Graph Neural Networks for Large and Diverse Molecular Simulation Datasets paper on the OC20 dataset?
Energy MAE
What metrics were used to measure the GemNet-XL model in the Towards Training Billion Parameter Graph Neural Networks for Atomic Simulations paper on the OC20 dataset?
Energy MAE
What metrics were used to measure the SpinConv model in the Rotation Invariant Graph Neural Networks using Spin Convolutions paper on the OC20 dataset?
Energy MAE
What metrics were used to measure the Noisy Nodes model in the Simple GNN Regularisation for 3D Molecular Property Prediction & Beyond paper on the OC20 dataset?
Energy MAE
What metrics were used to measure the BERT$_{ssenet}^{c}$ model in the CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI paper on the CPED dataset?
Accuracy (%), Macro-F1, Accuracy of Neuroticism, Accuracy of Extraversion, Accuracy of Openness, Accuracy of Agreeableness, Accuracy of Conscientiousness
What metrics were used to measure the BERT$^{s}$ model in the CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI paper on the CPED dataset?
Accuracy (%), Macro-F1, Accuracy of Neuroticism, Accuracy of Extraversion, Accuracy of Openness, Accuracy of Agreeableness, Accuracy of Conscientiousness
What metrics were used to measure the BERT$^{c}$ model in the CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI paper on the CPED dataset?
Accuracy (%), Macro-F1, Accuracy of Neuroticism, Accuracy of Extraversion, Accuracy of Openness, Accuracy of Agreeableness, Accuracy of Conscientiousness
What metrics were used to measure the BERT$_{senet}^{c}$ model in the CPED: A Large-Scale Chinese Personalized and Emotional Dialogue Dataset for Conversational AI paper on the CPED dataset?
Accuracy (%), Macro-F1, Accuracy of Neuroticism, Accuracy of Extraversion, Accuracy of Openness, Accuracy of Agreeableness, Accuracy of Conscientiousness
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the ImageNet dataset?
Harmonic mean
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the ImageNet dataset?
Harmonic mean
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet dataset?
Harmonic mean
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the ImageNet-S dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the ImageNet-S dataset?
Top-1 accuracy %
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the ImageNet-S dataset?
Top-1 accuracy %
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the ImageNet-S dataset?
Top-1 accuracy %
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet-S dataset?
Top-1 accuracy %
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the ImageNet-21k dataset?
Accuracy
What metrics were used to measure the VPT model in the Visual Prompt Tuning paper on the ImageNet-21k dataset?
Accuracy
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the EuroSAT dataset?
Harmonic mean
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the EuroSAT dataset?
Harmonic mean
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the EuroSAT dataset?
Harmonic mean
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the FGVC-Aircraft dataset?
Harmonic mean
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the FGVC-Aircraft dataset?
Harmonic mean
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the FGVC-Aircraft dataset?
Harmonic mean
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the ImageNet-R dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the ImageNet-R dataset?
Top-1 accuracy %
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the ImageNet-R dataset?
Top-1 accuracy %
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the ImageNet-R dataset?
Top-1 accuracy %
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet-R dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the ImageNet V2 dataset?
Top-1 accuracy %
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the ImageNet V2 dataset?
Top-1 accuracy %
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the ImageNet V2 dataset?
Top-1 accuracy %
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the ImageNet V2 dataset?
Top-1 accuracy %
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet V2 dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the UCF101 dataset?
Harmonic mean
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the UCF101 dataset?
Harmonic mean
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the UCF101 dataset?
Harmonic mean
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the UCF101 dataset?
Harmonic mean
What metrics were used to measure the POMP model in the Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the ImageNet-A dataset?
Top-1 accuracy %
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the SUN397 dataset?
Harmonic mean
What metrics were used to measure the MaPLe model in the MaPLe: Multi-modal Prompt Learning paper on the SUN397 dataset?
Harmonic mean
What metrics were used to measure the CoCoOp model in the Conditional Prompt Learning for Vision-Language Models paper on the SUN397 dataset?
Harmonic mean
What metrics were used to measure the CLIP model in the Learning Transferable Visual Models From Natural Language Supervision paper on the SUN397 dataset?
Harmonic mean
What metrics were used to measure the PromptSRC model in the Self-regulating Prompts: Foundational Model Adaptation without Forgetting paper on the FGVC-Aircraft dataset?
Harmonic mean
What metrics were used to measure the AV (cor+eng+box) model in the Egocentric Deep Multi-Channel Audio-Visual Active Speaker Localization paper on the EasyCom dataset?
ASL mAP
What metrics were used to measure the EfficientNet-B0 model in the EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the ResNeXt-50-32x4d model in the ResNet strikes back: An improved training procedure in timm paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the RegNetY-3.2GF model in the RegNet: Self-Regulated Network for Image Classification paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the ResNet-50 model in the Deep Residual Learning for Image Recognition paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the DenseNet-169 model in the Densely Connected Convolutional Networks paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the Res2Net-50 model in the Res2Net: A New Multi-scale Backbone Architecture paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the ResNet-18 model in the Deep Residual Learning for Image Recognition paper on the NCT-CRC-HE-100K dataset?
Accuracy (%), F1-Score, Precision, Specificity
What metrics were used to measure the Snapshot Ensemble model in the Malaria Parasite Detection using Efficient Neural Ensembles paper on the Malaria dataset?
F1 score
What metrics were used to measure the PTRN model in the Image Projective Transformation Rectification with Synthetic Data for Smartphone-captured Chest X-ray Photos Classification paper on the CheXphoto dataset?
Mean AUC
What metrics were used to measure the CMX model in the CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers paper on the FLIR dataset?
mAP50
What metrics were used to measure the CFT model in the Cross-Modality Fusion Transformer for Multispectral Object Detection paper on the FLIR dataset?
mAP50
What metrics were used to measure the YOLOv5 (T) model in the Cross-Modality Fusion Transformer for Multispectral Object Detection paper on the FLIR dataset?
mAP50
What metrics were used to measure the GAFF (ResNet18) model in the Guided Attentive Feature Fusion for Multispectral Pedestrian Detection paper on the FLIR dataset?
mAP50
What metrics were used to measure the GAFF (VGG16) model in the Guided Attentive Feature Fusion for Multispectral Pedestrian Detection paper on the FLIR dataset?
mAP50
What metrics were used to measure the CFR_3 (VGG16) model in the Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks paper on the FLIR dataset?
mAP50
What metrics were used to measure the Halfway Fusion (VGG16) model in the Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks paper on the FLIR dataset?
mAP50
What metrics were used to measure the YOLOv5 (RGB) model in the Cross-Modality Fusion Transformer for Multispectral Object Detection paper on the FLIR dataset?
mAP50
What metrics were used to measure the CFR model in the Multispectral Fusion for Object Detection with Cyclic Fuse-and-Refine Blocks paper on the KAIST Multispectral Pedestrian Detection Benchmark dataset?
Reasonable Miss Rate
What metrics were used to measure the GAFF model in the Guided Attentive Feature Fusion for Multispectral Pedestrian Detection paper on the KAIST Multispectral Pedestrian Detection Benchmark dataset?
Reasonable Miss Rate
What metrics were used to measure the MLPD model in the MLPD: Multi-Label Pedestrian Detector in Multispectral Domain paper on the KAIST Multispectral Pedestrian Detection Benchmark dataset?
Reasonable Miss Rate
What metrics were used to measure the CFT model in the Cross-Modality Fusion Transformer for Multispectral Object Detection paper on the LLVIP dataset?
mAP50
What metrics were used to measure the MolCA, Galac1.3B model in the MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the BioT5 model in the BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the Text+Chem T5-augm-Base model in the Unifying Molecular and Textual Representations via Multi-task Language Modelling paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolCA, Galac125M model in the MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolReGPT (GPT-4-0314) model in the Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MoMu+MolT5-Large model in the A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolT5-Large model in the Translation between Molecules and Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolXPT model in the MolXPT: Wrapping Molecules with Text for Generative Pre-training paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolFM-Base model in the MolFM: A Multimodal Molecular Foundation Model paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the Text+Chem T5-Base model in the Unifying Molecular and Textual Representations via Multi-task Language Modelling paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolReGPT (GPT-3.5-turbo) model in the Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the Text+Chem T5-augm-Small model in the Unifying Molecular and Textual Representations via Multi-task Language Modelling paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the Text+Chem T5-Small model in the Unifying Molecular and Textual Representations via Multi-task Language Modelling paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MoMu+MolT5-Base model in the A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolFM-Small model in the MolFM: A Multimodal Molecular Foundation Model paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolT5-Base model in the Translation between Molecules and Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MoMu+MolT5-Small model in the A Molecular Multimodal Foundation Model Associating Molecule Graphs with Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the MolT5-Small model in the Translation between Molecules and Natural Language paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the GIT-Mol-(graph+SMILES) model in the GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the GIT-Mol-graph model in the GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the GIT-Mol-SMILES model in the GIT-Mol: A Multi-modal Large Language Model for Molecular Science with Graph, Image, and Text paper on the ChEBI-20 dataset?
BLEU-2, BLEU-4, METEOR, ROUGE-1, ROUGE-2, ROUGE-L, Text2Mol
What metrics were used to measure the AdaBoost Classifier model in the Is Synthetic Voice Detection Research Going Into the Right Direction? paper on the ASVspoof 2019 - LA dataset?
Accuracy (%)
What metrics were used to measure the SCL model in the Discovering Human-Object Interaction Concepts via Self-Compositional Learning paper on the HICO-DET(Unknown Concepts) dataset?
COCO-Val2017, Obj365, HICO, Novel Classes
What metrics were used to measure the ATL model in the Affordance Transfer Learning for Human-Object Interaction Detection paper on the HICO-DET(Unknown Concepts) dataset?
COCO-Val2017, Obj365, HICO, Novel Classes
What metrics were used to measure the VCL model in the Visual Compositional Learning for Human-Object Interaction Detection paper on the HICO-DET(Unknown Concepts) dataset?
COCO-Val2017, Obj365, HICO, Novel Classes
What metrics were used to measure the SCL model in the Discovering Human-Object Interaction Concepts via Self-Compositional Learning paper on the HICO-DET dataset?
COCO-Val2017, Obj365, HICO, Novel Classes