prompts | metrics_response |
|---|---|
What metrics were used to measure the FeatMatch model in the FeatMatch: Feature-Based Augmentation for Semi-Supervised Learning paper on the Mini-ImageNet, 4000 Labels dataset? | Accuracy |
What metrics were used to measure the SemCo (μ=3) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 4000 Labels dataset? | Accuracy |
What metrics were used to measure the SemCo (μ=7) model in the All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training paper on the Mini-ImageNet, 4000 Labels dataset? | Accuracy |
What metrics were used to measure the DPT model in the Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels paper on the ImageNet - 5 labeled data per class dataset? | Top 1 Accuracy |
What metrics were used to measure the UL-Hopfield (ULH) model in the Unsupervised Learning using Pretrained CNN and Associative Memory Bank paper on the Caltech-256 dataset? | Accuracy |
What metrics were used to measure the LiDAM model in the LiDAM: Semi-Supervised Learning with Localized Domain Adaptation and Iterative Matching paper on the CIFAR-100, 5000 Labels dataset? | Accuracy (%) |
What metrics were used to measure the UPS (CNN-13) model in the In Defense of Pseudo-Labeling: An Uncertainty-Aware Pseudo-label Selection Framework for Semi-Supervised Learning paper on the CIFAR-100, 4000 Labels dataset? | Accuracy |
What metrics were used to measure the SimCLR-k-medoids-PAWS model in the Cold PAWS: Unsupervised class discovery and addressing the cold-start problem for semi-supervised learning paper on the CIFAR-10, 30 Labels dataset? | Percentage error |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics dataset? | Accuracy |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics dataset? | Accuracy |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics 2 dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics 2 dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics 2 dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Ethics 2 dataset? | Accuracy |
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the CLIMATE-FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the monoT5-3B model in the No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval paper on the CLIMATE-FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the BM25+CE model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the CLIMATE-FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the SGPT-CE-6.1B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the CLIMATE-FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the monoT5-3B model in the No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval paper on the FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the BM25+CE model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the SGPT-CE-6.1B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the FEVER (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the monoT5-3B model in the No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval paper on the SciFact (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the SGPT-BE-5.8B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the SciFact (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the BM25+CE model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the SciFact (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the SGPT-CE-6.1B model in the SGPT: GPT Sentence Embeddings for Semantic Search paper on the SciFact (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the ColBERT model in the BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models paper on the SciFact (BEIR) dataset? | nDCG@10 |
What metrics were used to measure the MA-CIN model in the Self-Supervised Claim Identification for Automated Fact Checking paper on the CDCD dataset? | Precision, Recall |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Formal Fallacies Syllogisms Negation) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Logic Grid Puzzle) dataset? | Accuracy |
What metrics were used to measure the PaLM-540B (few-shot, k=5) model in the PaLM 2 Technical Report paper on the BIG-bench (Logic Grid Puzzle) dataset? | Accuracy |
What metrics were used to measure the PaLM-62B (few-shot, k=5) model in the PaLM 2 Technical Report paper on the BIG-bench (Logic Grid Puzzle) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Logic Grid Puzzle) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Logical Fallacy Detection) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Logical Fallacy Detection) dataset? | Accuracy |
What metrics were used to measure the PaLM-540B (few-shot, k=5) model in the PaLM: Scaling Language Modeling with Pathways paper on the BIG-bench (StrategyQA) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (StrategyQA) dataset? | Accuracy |
What metrics were used to measure the PaLM-62B (few-shot, k=5) model in the PaLM: Scaling Language Modeling with Pathways paper on the BIG-bench (StrategyQA) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (StrategyQA) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Temporal Sequences) dataset? | Accuracy |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the RuWorldTree dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the RuWorldTree dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the RuWorldTree dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the RuWorldTree dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Reasoning About Colored Objects) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, CoT) model in the PaLM 2 Technical Report paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the PaLM 2 (few-shot, k=3, Direct) model in the PaLM 2 Technical Report paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the Chinchilla-70B (few-shot, k=5) model in the Training Compute-Optimal Large Language Models paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the PaLM 540B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the Gopher-280B (few-shot, k=5) model in the Scaling Language Models: Methods, Analysis & Insights from Training Gopher paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the BLOOM 176B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the Bloomberg GPT (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the GPT-NeoX (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the OPT 66B (few-shot, k=3) model in the BloombergGPT: A Large Language Model for Finance paper on the BIG-bench (Penguins In A Table) dataset? | Accuracy |
What metrics were used to measure the Human benchmark model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Winograd Automatic dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Small model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Winograd Automatic dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Medium model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Winograd Automatic dataset? | Accuracy |
What metrics were used to measure the RuGPT-3 Large model in the TAPE: Assessing Few-shot Russian Language Understanding paper on the Winograd Automatic dataset? | Accuracy |
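The rows above pair a templated question about a (model, paper, dataset) triple with the evaluation metric reported for it. Below is a minimal sketch of loading and querying such a table with the Hugging Face `datasets` library; the repository ID `user/paper-metrics-qa` and the `train` split name are hypothetical placeholders, not taken from this page.

```python
# Minimal sketch: load the prompts/metrics_response pairs and inspect a few.
# NOTE: "user/paper-metrics-qa" is a hypothetical repository ID -- replace it
# with the actual Hub ID of this dataset; the "train" split is also assumed.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/paper-metrics-qa", split="train")

# Each record has two string columns: "prompts" (the question) and
# "metrics_response" (the metric name, e.g. "Accuracy" or "nDCG@10").
for row in ds.select(range(3)):
    print(row["prompts"])
    print("->", row["metrics_response"])

# Count how often each metric appears across the dataset.
print(Counter(ds["metrics_response"]).most_common(5))
```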