Columns: prompts (string, lengths 81–413) · metrics_response (string, lengths 0–371)
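Each row below pairs a question in the prompts column with the reported metrics in the metrics_response column. As a minimal loading sketch, assuming these pairs are published as a two-column Hugging Face dataset (the repo id username/paper-metrics-qa is a hypothetical placeholder, not a real dataset path):

from datasets import load_dataset

# Hypothetical repo id -- substitute the actual dataset path.
ds = load_dataset("username/paper-metrics-qa", split="train")

# Each record asks which metrics a paper reported for a model on a
# benchmark; the answer is a comma-separated list of metric names.
row = ds[0]
print(row["prompts"])           # "What metrics were used to measure ..."
print(row["metrics_response"])  # "Declarative m_vIoU, Declarative vIoU@0.3, ..."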
What metrics were used to measure the TubeDETR model in the TubeDETR: Spatio-Temporal Video Grounding with Transformers paper on the VidSTG dataset?
Declarative m_vIoU, Declarative vIoU@0.3, Declarative vIoU@0.5, Interrogative m_vIoU, Interrogative vIoU@0.3, Interrogative vIoU@0.5
What metrics were used to measure the CZ-Det model in the Cascaded Zoom-in Detector for High Resolution Aerial Images paper on the VisDrone dataset?
AP50
What metrics were used to measure the TransMind model in the Open-TransMind: A New Baseline and Benchmark for 1st Foundation Model Challenge of Intelligent Transportation paper on the CeyMo dataset?
mAP
What metrics were used to measure the YOLOv7 model in the YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors paper on the CeyMo dataset?
mAP
What metrics were used to measure the TOOD model in the TOOD: Task-aligned One-stage Object Detection paper on the CeyMo dataset?
mAP
What metrics were used to measure the YOLOX model in the YOLOX: Exceeding YOLO Series in 2021 paper on the CeyMo dataset?
mAP
What metrics were used to measure the Sparse R-CNN model in the Sparse R-CNN: End-to-End Object Detection with Learnable Proposals paper on the CeyMo dataset?
mAP
What metrics were used to measure the InternImage-H model in the InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions paper on the BDD100K val dataset?
mAP
What metrics were used to measure the PP-YOLOE model in the PP-YOLOE: An evolved version of YOLO paper on the BDD100K val dataset?
mAP
What metrics were used to measure the HRFuser-T model in the HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection paper on the Dense Fog dataset?
dense fog hard (AP), light fog hard (AP), snow/rain hard (AP)
What metrics were used to measure the Deep Entropy Fusion model in the Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather paper on the Dense Fog dataset?
dense fog hard (AP), light fog hard (AP), snow/rain hard (AP)
What metrics were used to measure the HRFuser-T model in the HRFuser: A Multi-resolution Sensor Fusion Architecture for 2D Object Detection paper on the Clear Weather dataset?
clear hard (AP)
What metrics were used to measure the Deep Entropy Fusion model in the Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather paper on the Clear Weather dataset?
clear hard (AP)
What metrics were used to measure the YOLOv8x (640x640) model in the FishEye8K: A Benchmark and Dataset for Fisheye Camera Object Detection paper on the FishEye8K dataset?
mAP
What metrics were used to measure the rotated-retinanet-rbox-r360_r50_fpn_6x.py model in the TRR360D: A dataset for 360 degree rotated rectangular box table detection paper on the TRR360D dataset?
AP50(T<90), AP90(T<90)
What metrics were used to measure the GLIP model in the Grounded Language-Image Pre-training paper on the RF100 dataset?
Average mAP
What metrics were used to measure the MAET model in the Multitask AET with Orthogonal Tangent Regularity for Dark Object Detection paper on the ExDark dataset?
mAP
What metrics were used to measure the TempoRadar model in the Exploiting Temporal Relations on Radar Perception for Autonomous Driving paper on the RADIATE dataset?
mAP@0.3
What metrics were used to measure the GPT-4 (few-shot, k=10) model in the GPT-4 Technical Report paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the TheBloke/llama-2-70b-Guanaco-QLoRA-fp16 (10-shot) model in the QLoRA: Efficient Finetuning of Quantized LLMs paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the MUPPET Roberta Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA-65B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the GPT-3.5 (few-shot, k=10) model in the GPT-4 Technical Report paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA-30B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 2 70B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM-540B (Few-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM-540B (One-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PaLM-540B (Zero-Shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 2 34B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Megatron-Turing NLG 530B (Few-Shot) model in the Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA-13B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Chinchilla (Zero-Shot) model in the Training Compute-Optimal Large Language Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 2 13B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Megatron-Turing NLG 530B (One-Shot) model in the Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (Few-Shot) model in the Language Models are Few-Shot Learners paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Gopher (zero-shot) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the GPT-3 (zero-shot) model in the Language Models are Few-Shot Learners paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 2 7B (zero-shot) model in the Llama 2: Open Foundation and Fine-Tuned Chat Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Sheared-LLaMA-2.7B (50B) model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Open-LLaMA-3B-v2 model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the Sheared-LLaMA-1.3B (50B) model in the Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the FLAN 137B (zero-shot) model in the Finetuned Language Models Are Zero-Shot Learners paper on the HellaSwag dataset?
Accuracy
What metrics were used to measure the PJ-X model in the CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations paper on the CLEVR-X dataset?
B4 (BLEU-4), M (METEOR), RL (ROUGE-L), C (CIDEr), Acc (Accuracy)
What metrics were used to measure the FM model in the CLEVR-X: A Visual Reasoning Dataset for Natural Language Explanations paper on the CLEVR-X dataset?
B4 (BLEU-4), M (METEOR), RL (ROUGE-L), C (CIDEr), Acc (Accuracy)
What metrics were used to measure the OFA-X model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VQA-X dataset?
Human Explanation Rating
What metrics were used to measure the OFA-X-MT model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VQA-X dataset?
Human Explanation Rating
What metrics were used to measure the OFA-X-MT model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VCR dataset?
Human Explanation Rating
What metrics were used to measure the OFA-X model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the VCR dataset?
Human Explanation Rating
What metrics were used to measure the Ground-truth Caption -> GPT3 (Oracle) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Human (%)
What metrics were used to measure the Predicted Caption -> GPT3 model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Human (%)
What metrics were used to measure the BLIP2 FlanT5-XXL (Fine-tuned) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Human (%)
What metrics were used to measure the BLIP2 FlanT5-XL (Fine-tuned) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Human (%)
What metrics were used to measure the BLIP2 FlanT5-XXL (Zero-shot) model in the Breaking Common Sense: WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images paper on the WHOOPS! A Vision-and-Language Benchmark of Synthetic and Compositional Images dataset?
Human (%)
What metrics were used to measure the OFA-X model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the e-SNLI-VE dataset?
Human Explanation Rating
What metrics were used to measure the OFA-X-MT model in the Harnessing the Power of Multi-Task Pretraining for Ground-Truth Level Natural Language Explanations paper on the e-SNLI-VE dataset?
Human Explanation Rating
What metrics were used to measure the APOLLO model in the APOLLO: An Optimized Training Approach for Long-form Numerical Reasoning paper on the ConvFinQA dataset?
Execution Accuracy, Program Accuracy
What metrics were used to measure the FinQANet (RoBERTa-large) model in the ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering paper on the ConvFinQA dataset?
Execution Accuracy, Program Accuracy
What metrics were used to measure the T5 model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the SGD dataset?
METEOR
What metrics were used to measure the BART model in the The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics paper on the SGD dataset?
METEOR
What metrics were used to measure the DF-Net model in the Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog paper on the Kvret dataset?
Entity F1
What metrics were used to measure the T5-3b (UnifiedSKG) model in the UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the COMET model in the Contextualize Knowledge Bases with Transformer for End-to-end Task-Oriented Dialogue Systems paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the DF-Net model in the Dynamic Fusion Network for Multi-Domain End-to-end Task-Oriented Dialog paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the GLMP model in the Global-to-local Memory Pointer Networks for Task-Oriented Dialogue paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the TTOS model in the Amalgamating Knowledge from Two Teachers for Task-oriented Dialogue System with Adversarial Training paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the KB-retriever model in the Entity-Consistent End-to-end Task-Oriented Dialogue System with KB Retriever paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the DSR model in the Sequence-to-Sequence Learning for Task-oriented Dialogue with Dialogue State Representation paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the KV Retrieval Net model in the Key-Value Retrieval Networks for Task-Oriented Dialogue paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the THPN model in the A Template-guided Hybrid Pointer Network for Knowledge-based Task-oriented Dialogue Systems paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the Mem2Seq model in the Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems paper on the KVRET dataset?
Entity F1, BLEU
What metrics were used to measure the BART (TextBox 2.0) model in the TextBox 2.0: A Text Generation Library with Pre-trained Language Models paper on the MULTIWOZ 2.0 dataset?
BLEU-4, Score
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the SemEval-2016 Task 5 Subtask 1 (Dutch) dataset?
F1
What metrics were used to measure the M-BERT+Flair+Word+Char model in the Structure-Level Knowledge Distillation For Multilingual Sequence Labeling paper on the SemEval-2016 Task 5 Subtask 1 (Dutch) dataset?
F1
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the SemEval-2016 Task 5 Subtask 1 (Russian) dataset?
F1
What metrics were used to measure the M-BERT+Word+Char model in the Structure-Level Knowledge Distillation For Multilingual Sequence Labeling paper on the SemEval-2016 Task 5 Subtask 1 (Russian) dataset?
F1
What metrics were used to measure the RACL - Laptops model in the YASO: A Targeted Sentiment Analysis Evaluation Dataset for Open-Domain Reviews paper on the YASO - YELP dataset?
F1
What metrics were used to measure the DE-CNN model in the Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper on the SemEval 2016 Task 5 Sub Task 1 Slot 2 dataset?
Restaurant (F1)
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the SemEval-2016 Task 5 Subtask 1 dataset?
F1
What metrics were used to measure the Wei et al. (2020) model in the Don't Eclipse Your Arts Due to Small Discrepancies: Boundary Repositioning with a Pointer Network for Aspect Extraction paper on the SemEval-2016 Task 5 Subtask 1 dataset?
F1
What metrics were used to measure the M-BERT+Flair+Word+Char model in the Structure-Level Knowledge Distillation For Multilingual Sequence Labeling paper on the SemEval-2016 Task 5 Subtask 1 dataset?
F1
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the SemEval-2016 Task 5 Subtask 1 (Spanish) dataset?
F1
What metrics were used to measure the M-BERT+Flair+Word+Char model in the Structure-Level Knowledge Distillation For Multilingual Sequence Labeling paper on the SemEval-2016 Task 5 Subtask 1 (Spanish) dataset?
F1
What metrics were used to measure the DE-CNN model in the Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper on the SemEval 2015 Task 12 dataset?
Restaurant (F1)
What metrics were used to measure the InstructABSA model in the InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis paper on the SemEval 2014 Task 4 Sub Task 1 dataset?
Laptop (F1)
What metrics were used to measure the DE-CNN model in the Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction paper on the SemEval 2014 Task 4 Sub Task 1 dataset?
Laptop (F1)
What metrics were used to measure the InstructABSA model in the InstructABSA: Instruction Learning for Aspect Based Sentiment Analysis paper on the SemEval 2014 Task 4 Sub Task 2 dataset?
Laptop (F1), Restaurant (F1), Mean F1 (Laptop + Restaurant)
What metrics were used to measure the ACE model in the Automated Concatenation of Embeddings for Structured Prediction paper on the SemEval 2014 Task 4 Sub Task 2 dataset?
Laptop (F1), Restaurant (F1), Mean F1 (Laptop + Restaurant)
What metrics were used to measure the PH-SUM model in the Improving BERT Performance for Aspect-Based Sentiment Analysis paper on the SemEval 2014 Task 4 Sub Task 2 dataset?
Laptop (F1), Restaurant (F1), Mean F1 (Laptop + Restaurant)
What metrics were used to measure the BAT model in the Adversarial Training for Aspect-Based Sentiment Analysis with BERT paper on the SemEval 2014 Task 4 Sub Task 2 dataset?
Laptop (F1), Restaurant (F1), Mean F1 (Laptop + Restaurant)