prompts | metrics_response |
|---|---|
What metrics were used to measure the Seq2Seq (Zhong et al., 2017) model in the Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning paper on the WikiSQL dataset? | Execution Accuracy, Exact Match Accuracy |
What metrics were used to measure the AlphaCode 41B + clustering model in the Competition-Level Code Generation with AlphaCode paper on the CodeContests dataset? | Test Set 10@100k |
What metrics were used to measure the CodeBERT model in the Can We Generate Shellcodes via Natural Language? An Empirical Study paper on the Shellcode_IA32 dataset? | BLEU-4, Exact Match Accuracy |
What metrics were used to measure the Seq2Seq with Attention model in the Can We Generate Shellcodes via Natural Language? An Empirical Study paper on the Shellcode_IA32 dataset? | BLEU-4, Exact Match Accuracy |
What metrics were used to measure the LSTM-based Sequence to Sequence model in the Shellcode_IA32: A Dataset for Automatic Shellcode Generation paper on the Shellcode_IA32 dataset? | BLEU-4, Exact Match Accuracy |
What metrics were used to measure the Boutaleb et al. model in the Multi-stage RGB-based Transfer Learning Pipeline for Hand Activity Recognition paper on the First-Person Hand Action Benchmark dataset? | 1:1 Accuracy |
What metrics were used to measure the Structured Keypoint Pooling model in the Unified Keypoint-based Action Recognition Framework via Structured Keypoint Pooling paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the Semi-Supervised Hard Attention (SSHA); pretrained on Deepmind Kinetics dataset model in the Video Violence Recognition and Localization Using a Semi-Supervised Hard Attention Model paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the Human Skeletons + Change Detection model in the Human skeletons and change detection for efficient violence detection in surveillance videos paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the Separable Convolutional LSTM model in the Efficient Two-Stream Network for Violence Detection Using Separable Convolutional LSTM paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the SPIL Convolution model in the Human Interaction Learning on 3D Skeleton Point Clouds for Video Violence Recognition paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the Flow Gated Network model in the RWF-2000: An Open Large Scale Video Database for Violence Detection paper on the RWF-2000 dataset? | Accuracy |
What metrics were used to measure the all-landmark-model model in the Classification of Abnormal Hand Movement for Aiding in Autism Detection: Machine Learning Study paper on the Self-Stimulatory Behavior Dataset dataset? | Activity Recognition |
What metrics were used to measure the CTP A model in the Learning Reasoning Strategies in End-to-End Differentiable Proving paper on the CLUTRR (k=3) dataset? | 4 Hops, 5 Hops, 6 Hops, 7 Hops, 8 Hops, 9 Hops, 10 Hops |
What metrics were used to measure the SAINT+ model in the SAINT+: Integrating Temporal Features for EdNet Correctness Prediction paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the SAINT model in the Towards an Appropriate Query, Key, and Value Computation for Knowledge Tracing paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the PEBG+DKT model in the Improving Knowledge Tracing via Pre-training Question Embeddings paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the PEBG+DKVMN model in the Improving Knowledge Tracing via Pre-training Question Embeddings paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the DKVMN model in the SAINT+: Integrating Temporal Features for EdNet Correctness Prediction paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the DKT model in the SAINT+: Integrating Temporal Features for EdNet Correctness Prediction paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the GIKT model in the GIKT: A Graph-based Interaction Model for Knowledge Tracing paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the SAKT model in the SAINT+: Integrating Temporal Features for EdNet Correctness Prediction paper on the EdNet dataset? | AUC, Acc |
What metrics were used to measure the DKT model in the Deep Knowledge Tracing paper on the Assistments dataset? | AUC |
What metrics were used to measure the BKT model in the Deep Knowledge Tracing paper on the Assistments dataset? | AUC |
What metrics were used to measure the MarT_MKGformer model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the MKGformer model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the MarT_FLAVA model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the ViLBERT model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the IKRL (ANALOGY) model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the IKRL model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the ViLT model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the TransAE model in the Multimodal Analogical Reasoning over Knowledge Graphs paper on the MARS (Multimodal Analogical Reasoning dataSet) dataset? | MRR |
What metrics were used to measure the HHolE model in the Augmenting Compositional Models for Knowledge Base Completion Using Gradient Representations paper on the FB15k dataset? | MRR |
What metrics were used to measure the COMPLEX model in the Knowledge Graph Completion via Complex Tensor Factorization paper on the FB15k dataset? | MRR |
What metrics were used to measure the TransE-Concat model in the OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs paper on the WikiKG90M-LSC dataset? | Test MRR, Validation MRR |
What metrics were used to measure the ComplEx-Concat model in the OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs paper on the WikiKG90M-LSC dataset? | Test MRR, Validation MRR |
What metrics were used to measure the ComplEx-RoBERTa model in the OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs paper on the WikiKG90M-LSC dataset? | Test MRR, Validation MRR |
What metrics were used to measure the TransE-RoBERTa model in the OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs paper on the WikiKG90M-LSC dataset? | Test MRR, Validation MRR |
What metrics were used to measure the NCC Next model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the SK model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the Hutom model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the MediCIS model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the MedAIR model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the MMLAB model in the PEg TRAnsfer Workflow recognition challenge report: Does multi-modal data improve recognition? paper on the PETRAW dataset? | Average AD-Accuracy |
What metrics were used to measure the Human model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the SNAIL model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the InstructBLIP + GPT-4 model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the BLIP-2 + ChatGPT (Fine-tuned) model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the InstructBLIP + ChatGPT + Neuro-Symbolic model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the ChatCaptioner + ChatGPT model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the Otter model in the Bongard-OpenWorld: Few-Shot Reasoning for Free-form Visual Concepts in the Real World paper on the Bongard-OpenWorld dataset? | 2-Class Accuracy |
What metrics were used to measure the AI Core model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the redherring model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the VRDP model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the Fighttttt model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the neural model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the NERV model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the DCL model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the troublesolver model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the v0.1 model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the First_test model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the TS_NS_IMPERIAL model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the rnn_dyn model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the epoch 9 pgd_25_0.1_eps model in the paper on the CLEVRER dataset? | Average-per ques., Descriptive, Explanatory-per opt., Explanatory-per ques., Predictive-per opt., Predictive-per ques., Counterfactual-per opt., Counterfactual-per ques. |
What metrics were used to measure the Humans model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the ViLT (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the X-VLM (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the CLIP-ViT-B/32 (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the CLIP-ViT-L/14 (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the CLIP-RN50x64/14 (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the CLIP-RN50 (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the CLIP-ViL (Zero-Shot) model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset? | Jaccard Index |
What metrics were used to measure the LXMERT model in the Visual Spatial Reasoning paper on the VSR dataset? | accuracy |
What metrics were used to measure the ViLT model in the Visual Spatial Reasoning paper on the VSR dataset? | accuracy |
What metrics were used to measure the CLIP (finetuned) model in the Visual Spatial Reasoning paper on the VSR dataset? | accuracy |
What metrics were used to measure the CLIP (frozen) model in the Visual Spatial Reasoning paper on the VSR dataset? | accuracy |
What metrics were used to measure the VisualBERT model in the Visual Spatial Reasoning paper on the VSR dataset? | accuracy |
What metrics were used to measure the RPIN model in the Learning Long-term Visual Dynamics with Region Proposal Interaction Networks paper on the PHYRE-1B-Cross dataset? | AUCCESS |
What metrics were used to measure the Dec[Joint]1f model in the Forward Prediction for Physical Reasoning paper on the PHYRE-1B-Cross dataset? | AUCCESS |
What metrics were used to measure the Dynamics-Aware DQN model in the Physical Reasoning Using Dynamics-Aware Models paper on the PHYRE-1B-Cross dataset? | AUCCESS |
What metrics were used to measure the DQN model in the PHYRE: A New Benchmark for Physical Reasoning paper on the PHYRE-1B-Cross dataset? | AUCCESS |
What metrics were used to measure the BEiT-3 model in the Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the X2-VLM (large) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the XFM (base) model in the Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the X2-VLM (base) model in the X$^2$-VLM: All-In-One Pre-trained Model For Vision-Language Tasks paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the CoCa model in the CoCa: Contrastive Captioners are Image-Text Foundation Models paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the VLMo model in the VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the SimVLM model in the SimVLM: Simple Visual Language Model Pretraining with Weak Supervision paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the X-VLM (base) model in the Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the VK-OOD model in the Differentiable Outlier Detection Enable Robust Deep Multimodal Analysis paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the ALBEF (14M) model in the Align before Fuse: Vision and Language Representation Learning with Momentum Distillation paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the SOHO model in the Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the ViLT-B/32 model in the ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the LXMERT (Pre-train + scratch) model in the LXMERT: Learning Cross-Modality Encoder Representations from Transformers paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the VisualBERT model in the VisualBERT: A Simple and Performant Baseline for Vision and Language paper on the NLVR2 Dev dataset? | Accuracy |
What metrics were used to measure the Swin model in the VASR: Visual Analogies of Situation Recognition paper on the VASR dataset? | 1:1 Accuracy |
What metrics were used to measure the ConvNeXt model in the VASR: Visual Analogies of Situation Recognition paper on the VASR dataset? | 1:1 Accuracy |
What metrics were used to measure the ViT model in the VASR: Visual Analogies of Situation Recognition paper on the VASR dataset? | 1:1 Accuracy |
What metrics were used to measure the DEiT model in the VASR: Visual Analogies of Situation Recognition paper on the VASR dataset? | 1:1 Accuracy |
What metrics were used to measure the VQ2 model in the What You See is What You Read? Improving Text-Image Alignment Evaluation paper on the Winoground dataset? | Text Score, Image Score, Group Score |
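The metrics_response column names standard evaluation metrics. As a minimal sketch of two metrics that recur in this table — Jaccard Index (WinoGAViL) and MRR (the knowledge-graph rows) — the following reference implementations may help; the function names and signatures here are illustrative, not taken from any of the listed papers' code:

```python
def jaccard_index(predicted, gold):
    """Intersection-over-union of a predicted label set and a gold label set."""
    p, g = set(predicted), set(gold)
    if not p and not g:
        # Both sets empty: treat as a perfect match.
        return 1.0
    return len(p & g) / len(p | g)


def mean_reciprocal_rank(ranks):
    """MRR over a list of 1-based ranks of the correct answer per query."""
    return sum(1.0 / r for r in ranks) / len(ranks)
```

For example, a prediction sharing one of three total distinct labels with the gold set scores a Jaccard Index of 1/3, and ranks of 1, 2, and 4 across three queries give an MRR of (1 + 0.5 + 0.25) / 3.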