prompts (string, 81-413 chars) | metrics_response (string, 0-371 chars) |
|---|---|
What metrics were used to measure the STaR (on GPT-J) model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the RoBERTa Liu et al. (2019) model in the RoBERTa: A Robustly Optimized BERT Pretraining Approach paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the STaR without Rationalization (on GPT-J) model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the CAGE-reasoning model in the Explain Yourself! Leveraging Language Models for Commonsense Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the BERT_CSlarge model in the Align, Mask and Select: A Simple Method for Incorporating Commonsense Knowledge into Language Representation Models paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the GPT-J Direct Finetuned model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the KagNet model in the KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the BERT-LARGE model in the CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the Few-shot CoT LaMDA 137B model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the Few-shot CoT GPT-J model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the Few-shot Direct GPT-J model in the STaR: Bootstrapping Reasoning With Reasoning paper on the CommonsenseQA dataset? | Accuracy |
What metrics were used to measure the GPT-4 (few-shot, k=5) model in the GPT-4 Technical Report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the GPT-3.5 (few-shot, k=5) model in the GPT-4 Technical Report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM 62B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM-cont 62B (zero-shot) model in the PaLM: Scaling Language Modeling with Pathways paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the Chinchilla 70B (zero-shot) model in the Training Compute-Optimal Large Language Models paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the phi-1.5-web 1.3B (zero-shot) model in the Textbooks Are All You Need II: phi-1.5 technical report paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (zero-shot) model in the Language Models are Few-Shot Learners paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the Gopher 280B (zero-shot) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the WinoGrande dataset? | Accuracy |
What metrics were used to measure the PaLM-540B (few-shot, k=5) model in the PaLM: Scaling Language Modeling with Pathways paper on the BIG-bench (Winowhy) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Winowhy) dataset? | Accuracy |
What metrics were used to measure the PaLM-62B (few-shot, k=5) model in the PaLM: Scaling Language Modeling with Pathways paper on the BIG-bench (Winowhy) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Winowhy) dataset? | Accuracy |
What metrics were used to measure the ST-MoE-32B model in the ST-MoE: Designing Stable and Transferable Sparse Expert Models paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the PaLM 2-L (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the PaLM 2-M (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the PaLM 2-S (one-shot) model in the PaLM 2 Technical Report paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA-65B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GAL 120B (zero-shot) model in the Galactica: A Large Language Model for Science paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA-30B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA 33B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA-13B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA 65B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the phi-1.5-web 1.3B (zero-shot) model in the Textbooks Are All You Need II: phi-1.5 technical report paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GLaM (64B/64E) (5-shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA 13B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA 7B (zero-shot) model in the LLaMA: Open and Efficient Foundation Language Models paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (1 shot) model in the Language Models are Few-Shot Learners paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the OPT-175B model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (one-shot) model in the BloombergGPT: A Large Language Model for Finance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GPT-3 175B (0 shot) model in the Language Models are Few-Shot Learners paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GPT-3 (zero-shot) model in the Galactica: A Large Language Model for Science paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 4:8 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the GLaM 64B/64E (0-shot) model in the GLaM: Efficient Scaling of Language Models with Mixture-of-Experts paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the SparseGPT (175B, 2:4 Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the LLaMA-7B+CFG (zero-shot) model in the Stay on topic with Classifier-Free Guidance paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the BLOOM (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the OPT (few-shot, k=5) model in the Galactica: A Large Language Model for Science paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the OPT-175B (50% Sparsity) model in the SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot paper on the ARC (Easy) dataset? | Accuracy |
What metrics were used to measure the FLAN 137B zero-shot model in the Finetuned Language Models Are Zero-Shot Learners paper on the WSC273 dataset? | Accuracy |
What metrics were used to measure the NMN [kottur2018visual] model in the Visual Coreference Resolution in Visual Dialog using Neural Module Networks paper on the Visual Dialog v0.9 dataset? | 1 in 10 R@5 |
What metrics were used to measure the Human Benchmark model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the PARus dataset? | Accuracy |
What metrics were used to measure the Golden Transformer model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the YaLM 1.0B few-shot model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuGPT3XL few-shot model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the ruT5-large-finetune model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuGPT3Medium model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuGPT3Large model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuBERT plain model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuGPT3Small model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the ruT5-base-finetune model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the Multilingual Bert model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the ruRoberta-large finetune model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the RuBERT conversational model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the MT5 Large model in the mT5: A massively multilingual pre-trained text-to-text transformer paper on the PARus dataset? | Accuracy |
What metrics were used to measure the SBERT_Large_mt_ru_finetuning model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the SBERT_Large model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the majority_class model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the PARus dataset? | Accuracy |
What metrics were used to measure the ruBert-large finetune model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the Baseline TF-IDF1.1 model in the RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark paper on the PARus dataset? | Accuracy |
What metrics were used to measure the Random weighted model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the PARus dataset? | Accuracy |
What metrics were used to measure the heuristic majority model in the Unreasonable Effectiveness of Rule-Based Heuristics in Solving Russian SuperGLUE Tasks paper on the PARus dataset? | Accuracy |
What metrics were used to measure the ruBert-base finetune model in the paper on the PARus dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot,k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Date Understanding) dataset? | Accuracy |
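Below is a minimal sketch of how a prompts/metrics table like the one above could be loaded and summarized with the Hugging Face `datasets` library. The dataset ID `user/prompts-metrics` is a placeholder assumption, not the actual repository name; the column names `prompts` and `metrics_response` are taken from the table header.

```python
# Sketch: load and inspect a prompts/metrics dataset with the `datasets` library.
# The dataset ID below is a hypothetical placeholder, not the real repository name.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/prompts-metrics", split="train")  # hypothetical ID

# Each row pairs a natural-language question ("prompts") with the metric
# reported for that model/paper/dataset triple ("metrics_response").
for row in ds.select(range(3)):
    print(row["prompts"], "->", row["metrics_response"])

# Count how often each metric value appears across the split
# (in the excerpt above, "Accuracy" dominates).
print(Counter(ds["metrics_response"]).most_common(5))
```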