Columns: prompts (string, 81–413 characters), metrics_response (string, 0–371 characters)
What metrics were used to measure the Separate And Diffuse model in the Separate And Diffuse: Using a Pretrained Diffusion Model for Improving Source Separation paper on the WSJ0-3mix dataset?
SI-SDRi
What metrics were used to measure the MossFormer (M) + DM model in the MossFormer: Pushing the Performance Limit of Monaural Speech Separation using Gated Single-Head Transformer with Convolution-Augmented Joint Self-Attentions paper on the WSJ0-3mix dataset?
SI-SDRi
What metrics were used to measure the SepIt model in the SepIt: Approaching a Single Channel Speech Separation Bound paper on the WSJ0-3mix dataset?
SI-SDRi
What metrics were used to measure the SepFormer model in the Attention is All You Need in Speech Separation paper on the WSJ0-3mix dataset?
SI-SDRi
What metrics were used to measure the Sandglasset model in the Sandglasset: A Light Multi-Granularity Self-attentive Network For Time-Domain Speech Separation paper on the WSJ0-3mix dataset?
SI-SDRi
What metrics were used to measure the Gated DualPathRNN model in the Voice Separation with an Unknown Number of Multiple Speakers paper on the WSJ0-3mix dataset?
SI-SDRi
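SI-SDRi is the improvement in scale-invariant signal-to-distortion ratio (SI-SDR) of a separated source over the unprocessed mixture, in dB; it is the standard metric on the WSJ0-mix and LibriMix separation benchmarks. A minimal NumPy sketch (function names are illustrative, not taken from any of the papers above):

```python
import numpy as np

def si_sdr(estimate, reference):
    """Scale-invariant SDR in dB: project the estimate onto the
    (mean-removed) reference and compare target vs. residual energy."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    noise = estimate - target
    return 10 * np.log10(np.sum(target**2) / np.sum(noise**2))

def si_sdri(estimate, mixture, reference):
    """Improvement (dB) of the separated estimate over the raw mixture."""
    return si_sdr(estimate, reference) - si_sdr(mixture, reference)
```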
What metrics were used to measure the U-Net model in the Singing Voice Separation with Deep U-Net Convolutional Networks paper on the iKala dataset?
NSDR
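NSDR (normalized SDR) scores a singing-voice estimate by how much its SDR exceeds that of the raw mixture. The sketch below uses a plain energy-ratio SDR for brevity; the iKala evaluation conventionally computes SDR with the BSS Eval toolkit:

```python
import numpy as np

def sdr(estimate, reference):
    """Plain signal-to-distortion ratio in dB (stand-in for BSS Eval's SDR)."""
    noise = estimate - reference
    return 10 * np.log10(np.sum(reference**2) / np.sum(noise**2))

def nsdr(estimate, mixture, reference):
    """NSDR: SDR gain of the separated signal over the unprocessed mixture."""
    return sdr(estimate, reference) - sdr(mixture, reference)
```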
What metrics were used to measure the Separate And Diffuse model in the Separate And Diffuse: Using a Pretrained Diffusion Model for Improving Source Separation paper on the Libri10Mix dataset?
SI-SDRi
What metrics were used to measure the SepIt model in the SepIt: Approaching a Single Channel Speech Separation Bound paper on the Libri10Mix dataset?
SI-SDRi
What metrics were used to measure the Hungarian PIT model in the Many-Speakers Single Channel Speech Separation with Optimal Permutation Training paper on the Libri10Mix dataset?
SI-SDRi
What metrics were used to measure the ConvNet model in the Event2Mind: Commonsense Inference on Events, Intents, and Reactions paper on the Event2Mind dev dataset?
Average Cross-Entropy
What metrics were used to measure the BiRNN 100d model in the Event2Mind: Commonsense Inference on Events, Intents, and Reactions paper on the Event2Mind dev dataset?
Average Cross-Entropy
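Event2Mind evaluates generated intent/reaction descriptions by the cross-entropy the model assigns to the reference sequences, averaged over the corpus (lower is better). A rough sketch, assuming per-token reference probabilities are available; the per-sequence-then-per-corpus averaging here is an assumption, not the paper's exact recipe:

```python
import numpy as np

def average_cross_entropy(token_probs):
    """Mean negative log-likelihood of the reference tokens.
    token_probs: list of per-sequence arrays of the probabilities the
    model assigned to each gold token. Lower is better."""
    per_sequence = [-np.mean(np.log(p)) for p in token_probs]
    return float(np.mean(per_sequence))
```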
What metrics were used to measure the ViLT model in the WinoGAViL: Gamified Association Benchmark to Challenge Vision-and-Language Models paper on the WinoGAViL dataset?
Jaccard Index
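WinoGAViL scores a model's predicted set of associated images against the set chosen by human annotators with the Jaccard index: intersection size over union size.

```python
def jaccard_index(predicted, gold):
    """|A ∩ B| / |A ∪ B| between predicted and gold association sets."""
    predicted, gold = set(predicted), set(gold)
    if not predicted and not gold:
        return 1.0
    return len(predicted & gold) / len(predicted | gold)
```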
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
What metrics were used to measure the BloombergGPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Causal Judgment) dataset?
Accuracy
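Accuracy here, as in the ARC (Challenge) and CommonsenseQA entries below, is simply the fraction of multiple-choice questions answered correctly:

```python
def accuracy(predictions, labels):
    """Fraction of examples where the predicted answer matches the gold label."""
    assert len(predictions) == len(labels)
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)
```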
What metrics were used to measure the GPT-4 (few-shot, k=25) model in the GPT-4 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 2 (few-shot, CoT, SC) model in the PaLM 2 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (Self Improvement, Self Consistency) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (Self Consistency) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (Self Improvement, CoT Prompting) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (Self Improvement, Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (Standard-Prompting) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the ST-MoE-32B model in the ST-MoE: Designing Stable and Transferable Sparse Expert Models paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GPT-3.5 (few-shot, k=25) model in the GPT-4 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 540B (CoT Prompting) model in the Large Language Models Can Self-Improve paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GAL 120B (zero-shot) model in the Galactica: A Large Language Model for Science paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (1 shot) model in the Language Models are Few-Shot Learners paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GPT-3 175B (0 shot) model in the Language Models are Few-Shot Learners paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GPT-3 (zero-shot) model in the Galactica: A Large Language Model for Science paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GLaM 64B/64E (0 shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GLaM 64B/64E (1 shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the phi-1.5-web 1.3B (zero-shot) model in the Textbooks Are All You Need II: phi-1.5 technical report paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the BLOOM (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the OPT (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Challenge) dataset?
Accuracy
What metrics were used to measure the DeBERTa-1.5B model in the DeBERTa: Decoding-enhanced BERT with Disentangled Attention paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the PaLM 540B (finetuned) model in the PaLM: Scaling Language Modeling with Pathways paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the T5-11B model in the Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GESA model in the Integrating a Heterogeneous Graph with Entity-aware Self-attention using Relative Position Labels for Reading Comprehension Model paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the LUKE-Graph model in the LUKE-Graph: A Transformer-based Approach with Gated Relational Graph Attention for Cloze-style Reading Comprehension paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the LUKE (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the KELM (finetuning RoBERTa-large based single model) model in the KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the XLNet + MTL + Verifier (ensemble) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the CSRLM (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the XLNet + MTL + Verifier (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the SKG-NET (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the KELM (finetuning BERT-large based single model) model in the KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the SKG-BERT (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the KT-NET (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the DCReader+BERT (single model) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GraphBert (single) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GraphBert-WordNet (single) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GraphBert-NELL (single) model in the paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the BERT-Base (single model) model in the BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the N-Grammer model in the N-Grammer: Augmenting Transformers with latent n-grams paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GPT-3 (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the AlexaTM 20B model in the AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the BloombergGPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the ReCoRD dataset?
EM, F1
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ReCoRD dataset?
EM, F1
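ReCoRD is scored like SQuAD: exact match (EM) after answer normalization, plus a token-overlap F1. A sketch, assuming SQuAD-style normalization (lowercasing, stripping punctuation and articles); the exact normalization rules are an assumption:

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation and English articles, collapse whitespace."""
    text = re.sub(r"[^\w\s]", " ", text.lower())
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    pred, ref = normalize(prediction).split(), normalize(gold).split()
    common = Counter(pred) & Counter(ref)   # per-token overlap counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```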
What metrics were used to measure the DeBERTaV3-large+KEAR model in the Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the PaLM 2 (few-shot, CoT, SC) model in the PaLM 2 Technical Report paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the KEAR model in the Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the DEKCOR model in the Fusing Context Into Knowledge Graph for Commonsense Question Answering paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the MUPPET Roberta Large model in the Muppet: Massive Multi-task Representations with Pre-Finetuning paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the UnifiedQA* Khashabi et al. (2020) model in the UnifiedQA: Crossing Format Boundaries With a Single QA System paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the DRAGON model in the Deep Bidirectional Language-Knowledge Graph Pretraining paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the ALBERT Lan et al. (2020) (ensemble) model in the ALBERT: A Lite BERT for Self-supervised Learning of Language Representations paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the QA-GNN model in the QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the XLNet+GraphReason model in the Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the GrapeQA: PEGA model in the GrapeQA: GRaph Augmentation and Pruning to Enhance Question-Answering paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the RoBERTa+HyKAS Ma et al. (2019) model in the Towards Generalizable Neuro-Symbolic Systems for Commonsense Question Answering paper on the CommonsenseQA dataset?
Accuracy
What metrics were used to measure the GPT-3 Direct Finetuned model in the Human Parity on CommonsenseQA: Augmenting Self-Attention with External Attention paper on the CommonsenseQA dataset?
Accuracy